Nanite is definitely one of the biggest advantages Unreal has over Unity. The ability to throw billions of polygons at the engine and have it reduce the mesh complexity, producing a small set of triangles that maintains an acceptable level of detail from the full mesh, is quite amazing.
My question is: why don't you people just build your own Nanite tech? I'm sure you can do it; after all, Unreal's source code is visible, and I'm sure you've checked how it works. I don't mean this in a bad way. It's normal that they look at your code and you look at theirs. If they wanted the code to stay hidden, they should have kept the engine closed source.
Because there's more than one way to skin a cat, you could find a way to make it work similarly without stealing what they did.
So what keeps you from doing this? They are already a few years ahead.
Well, that's weird. I remember Unity staff saying the opposite, but there you go.
The cynical side of me wants to think Unity China just straight up reverse engineered Unreal’s nanite in a way that wouldn’t fly legally anywhere else but China.
The reason why Unity in general and Unity China are different entities that can't necessarily share code should be pretty obvious if you have been keeping up with the geopolitical situation at all. China doesn't like US- or EU-owned companies operating in China, so having a standalone entity that licenses the Unity engine from Unity for sale in China makes a lot of legal problems go poof.
The Nanite-like thing Unity China is doing has been rumored to be something they licensed from a third party, which means that western Unity would have to get a separate license, if the developers are even interested in doing that, which isn't a given.
Maybe there is code included that cannot be distributed under current laws.
Also, something like Nanite integration is very tricky, as it requires a lot of baking and extra data to represent the world. Having it on by default could be a massive risk imo.
I’m not sure how accurate this guy is but he seems to think Nanite is overhyped and actually not that good.
The TLDW is that a scene that’s been well optimized by a tech artist will beat out nanite performance with comparable visual quality.
Of course the problem is you need an artist who knows what they’re doing, nanite lets you dump in raw scans. Yet do we really want people shipping games that are 100GB+ because they just chucked in a few megascans?
I guess it would be cool to have, but I'd rather they just focus more energy on the unified render pipeline. Ray tracing and real-time lighting are overhyped! Let's see some more baked stuff.
I'm glad this has been posted here, because sometimes when the topic of Nanite comes up I feel like I'm the crazy one. People keep saying that Nanite makes Fortnite look good and perform really well on bad PCs, but every single time I try Fortnite out, it's a stuttery mess. If not even Epic can figure out the Unreal stuttering, what hope do other companies have? On top of that, Unity currently has a decade of neglect to catch up on; they have better things to worry about than an overhyped, possibly unnecessary feature.
A (very good) mesh LOD system will produce basically the same visual effect as Nanite.
I remember in the making-of material for Half-Life: Alyx, they mentioned they used an extra LOD level that they wouldn't normally use in a traditional game. If LOD0 would typically be 6k polys for the 1 m to 5 m range, they would load a LOD-1 at 20k polys for the 0 m to 1 m range.
Total overkill for most games, but very smart in a VR game.
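The distance-based LOD scheme described above can be sketched in a few lines. Every threshold, label, and polygon count here is a made-up illustration value (loosely matching the Alyx anecdote), not anything shipped by Valve:

```python
# Sketch of distance-based LOD selection with an extra close-range tier.
# All thresholds and polygon counts are hypothetical illustration values.

# (max_distance_m, label, approx_polys), sorted from nearest to farthest.
LOD_TABLE = [
    (1.0, "LOD-1", 20_000),       # extra close-up tier, useful in VR
    (5.0, "LOD0", 6_000),
    (20.0, "LOD1", 1_500),
    (float("inf"), "LOD2", 400),  # everything beyond 20 m
]

def select_lod(distance_m: float) -> str:
    """Return the LOD label for a given camera-to-object distance."""
    for max_dist, label, _polys in LOD_TABLE:
        if distance_m < max_dist:
            return label
    return LOD_TABLE[-1][1]

print(select_lod(0.5))   # within 1 m -> "LOD-1"
print(select_lod(3.0))   # within 5 m -> "LOD0"
print(select_lod(50.0))  # far away  -> "LOD2"
```

The point of the extra LOD-1 row is exactly the Alyx trick: a denser mesh that only ever loads when the player's head is closer than a normal game camera would get.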
Nanite has three main parts: meshlet geometry, meshlet LODs, and a software rasterizer.
Meshlets are a form of compact representation: they reduce the amount of data needed to represent vertices and indices, and they reduce the amount of computation needed for culling.
But they have to be "decompressed" and culled before the fragment stage using custom access patterns,
so a standard vertex shader can't fetch them in a useful way. Because of that, you need to consume them with fairly new mesh shaders, or with compute shaders if you have a software rasterizer. Neither path is viable on mobile; mesh shaders, for example, aren't supported there at all.
That alone is probably reason enough to keep Unity away from this, since their main income is mobile ads.
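To make the meshlet idea concrete, here is a minimal sketch of splitting a triangle index buffer into fixed-size clusters with small local indices. The limits (64 vertices / 126 triangles per meshlet) follow common GPU guidance; the greedy packing and the dictionary-based format are simplified illustrations, not Unreal's or Unity's actual data layout:

```python
# Naive meshlet builder: greedily packs triangles into fixed-size clusters.
# Real builders also optimize for spatial locality and cache behavior; this
# only demonstrates the compact local-index representation.
MAX_VERTS = 64   # typical per-meshlet vertex limit (assumption)
MAX_TRIS = 126   # typical per-meshlet triangle limit (assumption)

def build_meshlets(indices):
    """Split a flat triangle index list into meshlets with local indices."""
    meshlets = []
    verts, vert_map, tris = [], {}, []

    def flush():
        nonlocal verts, vert_map, tris
        if tris:
            meshlets.append({"vertices": verts, "triangles": tris})
        verts, vert_map, tris = [], {}, []

    for t in range(0, len(indices), 3):
        tri = indices[t:t + 3]
        new = {v for v in tri if v not in vert_map}
        if len(verts) + len(new) > MAX_VERTS or len(tris) >= MAX_TRIS:
            flush()  # current meshlet is full; start a new one
        for v in tri:
            if v not in vert_map:
                vert_map[v] = len(verts)  # global index -> local index
                verts.append(v)
        # Local indices stay < 64, so they fit in a single byte each.
        tris.append(tuple(vert_map[v] for v in tri))
    flush()
    return meshlets
```

Because every local index fits in 8 bits, the per-triangle index data shrinks versus 32-bit global indices, and each cluster can be culled as a unit before any vertex work happens, which is the win the post above describes.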
The other part is meshlet LODs with streaming, which is what makes Nanite unique; it requires very clever geometry tricks to avoid visible artifacts when building the lower-poly LODs.
And then there is the streaming side of this. Unity can't even handle texture streaming well, so let's not get our hopes up here at all.
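The selection side of a cluster-LOD scheme can be sketched as picking the coarsest version of each cluster whose simplification error, projected to the screen, stays under about a pixel. The formula below is a standard perspective-projection approximation and the thresholds are illustrative; it is not Nanite's actual error metric:

```python
import math

def projected_error_px(world_error, distance, fov_y_rad, screen_height_px):
    """Approximate on-screen size (pixels) of a world-space error at a distance."""
    # Perspective scale: world-space size / distance, converted to pixels.
    return (world_error / max(distance, 1e-6)) * (
        screen_height_px / (2.0 * math.tan(fov_y_rad / 2.0)))

def pick_lod(lod_errors, distance, fov_y_rad=math.radians(60),
             screen_h=1080, threshold_px=1.0):
    """lod_errors: world-space simplification error per LOD, finest first.

    Returns the index of the coarsest LOD whose projected error is at most
    threshold_px; falls back to the finest LOD (index 0) up close."""
    for i in range(len(lod_errors) - 1, -1, -1):  # try coarsest first
        err = projected_error_px(lod_errors[i], distance, fov_y_rad, screen_h)
        if err <= threshold_px:
            return i
    return 0
```

With errors of [0.001, 0.01, 0.1, 1.0] meters per LOD, an object 100 m away can use the 0.1 m-error version while staying under one pixel of visible error. The genuinely hard part Nanite solves on top of this is making adjacent clusters at different LODs meet without cracks, which this sketch says nothing about.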
And the software rasterizer is useful when you have very small triangles on screen, when you want meshlets without mesh shaders, or when you want more control over rasterization than the fixed-function hardware allows. But it has downsides too.
In my opinion, just adopting the first part (meshlets) would give a substantial performance win on desktop and console,
but before they can even try to use mesh shaders, they need to make DX12/Vulkan the default backend and update their shader compiler to something that isn't 10 years old, so it can target these new shader model versions 😐
There is a very good presentation about mesh shaders:
From what I’ve seen virtual geometry seems to work quite well in the Chinese version. I’m not sure we’ll ever get a straight answer why the western version can’t have these Chinese developed features.
Nanotech by Chris Kahler looks good but who knows if it will ever be released.
Worth noting that the latest Chinese phones with the Snapdragon 8 Elite are being advertised as able to support Nanite; I think it's something Unity should have an answer to.
Indeed, to support denser/more detailed worlds with higher visual fidelity, we have been prioritizing solutions that improve performance across a wider range of devices and content than Virtual Geometry allows, given its graphics API and hardware requirements:
GPU Resident Drawer (U6)
GPU Occlusion Culling (U6)
STP (U6)
MeshLODs (planned for U6.1)
We are in contact with our Chinese peers and we learn from each other's experiences, but since the code bases are diverging, sharing code is not an option at the moment, and we have different priorities for our two products and teams.
Virtual Geometry is still considered, but not a top priority (at the moment behind render pipeline unification for example).