I have no idea about the tech behind it, but it looks very similar to Unreal's Nanite.
Any thoughts?
Also, the author (Chris Kahler) replies to one of the commenters:
Q: Would this be on the asset store?
A: Maybe next year, it also depends on how many Unity users want it. I was more thinking about a Patreon campaign with GitHub access.
Yes, this does look incredibly similar to Nanite. No idea what the differences are under the hood, but the overall approach seems the same.
Which raises the question: when does Unity's internal version of the same thing come out? If one guy is making this, Unity must at least be looking into it after all the buzz Nanite has created, right?
Nanite might be UE-specific, but the overall approach will become a standard across all engines, and I would be excited to hear or see something about what that may look like in Unity.
No, real-time and cinematic render-farm-based rendering are two different beasts altogether. The culture around assets is different: in render-farm culture, you are not looking to optimize performance (money does that), you are looking to optimize visuals and workflow speed. Which is why the paradigm shift to real-time cinematography is happening.
Apparently you are not taking into account that they have a real-time renderer to preview with before putting things through their cinematic render pipeline. You are trying to educate someone who has been studying or building SFX since the manual days of the '80s and has been involved with game engine tech since 2009. I made my first stop-motion 16mm film in 1971. To believe that what Unity purchased from WETA for 1.6B USD will never make it into the real-time pipeline is naive, to say the least.
For the answer to this question, just remember when SEGI was all the rage and everyone was wondering when Unity's internal version of fully dynamic GI would come out.
I'm not trying to educate you, I'm pointing at something you should have known given your experience, since you also didn't consider that real time in movies is different from real time in games.
That is a good point! I think this goes beyond the level of hype SEGI generated though, as this time it's specifically their competitor's tech (and currently commercially available to the masses only there).
I am hoping that gives Unity a good kick up the behind to get into gear on this issue, but yes, I suppose it's best not to hold my breath.
Interesting, those Nanite examples are amazing. It would be pretty cool in VR, where normal maps don't work well. I think if I were going to use something similar in Unity, it would really need to be officially created by Unity.
I feel like creating something that looks similar wouldn't be too difficult: break a high-resolution mesh into LOD clusters with instanced materials, then maybe add my own LOD logic to swap these objects so the cluster size changes.
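Something like this minimal sketch is what I have in mind: a per-cluster component that swaps between pre-baked meshes based on projected screen size. All the names here are mine, and the screen-size metric is a crude approximation, nothing like Nanite's actual cluster selection.

```csharp
using UnityEngine;

// Hypothetical sketch: each cluster of a pre-split high-poly mesh carries this
// component and swaps between pre-baked LOD meshes by projected screen height.
public class ClusterLod : MonoBehaviour
{
    [Tooltip("LOD meshes for this cluster, highest detail first.")]
    public Mesh[] lodMeshes;

    [Tooltip("Screen-height fractions below which the next lower LOD kicks in.")]
    public float[] screenHeightThresholds = { 0.5f, 0.25f, 0.1f };

    MeshFilter meshFilter;
    float boundingRadius;

    void Start()
    {
        meshFilter = GetComponent<MeshFilter>();
        boundingRadius = lodMeshes[0].bounds.extents.magnitude;
    }

    void Update()
    {
        Camera cam = Camera.main;
        if (cam == null) return;

        // Rough fraction of the screen height this cluster covers.
        float distance = Vector3.Distance(cam.transform.position, transform.position);
        float projectedHeight =
            boundingRadius / (distance * Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad));

        // Walk the thresholds to pick the LOD index for this frame.
        int lod = 0;
        while (lod < screenHeightThresholds.Length && projectedHeight < screenHeightThresholds[lod])
            lod++;

        lod = Mathf.Min(lod, lodMeshes.Length - 1);
        if (meshFilter.sharedMesh != lodMeshes[lod])
            meshFilter.sharedMesh = lodMeshes[lod];
    }
}
```

In practice you'd batch this instead of running an Update per cluster, but it shows the basic swap.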
Obviously getting something that performs at the level of Nanite is a different matter. For example, Nanite only loads the data required to render the scene. It also compresses the data, e.g. 1 million triangles compressed to 14 MB. In Unity a similar mesh would probably be around 75 MB, and that's before creating clusters and LODs. I'm sure there are also a bunch of details I'm missing, like how it prevents gaps when transitioning between a high- and low-resolution cluster. I wonder if the source code for Nanite will be included in UE5.
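To sanity-check those numbers, here's a rough back-of-envelope estimate of raw mesh memory under Unity's usual vertex layout. The one-vertex-per-triangle count is my assumption for scanned meshes where UV seams and hard edges split most vertices; a cleaner mesh would be closer to half that.

```csharp
using System;

// Back-of-envelope estimate of raw mesh memory in Unity's default layout:
// position float3 + normal float3 + tangent float4 + one UV set float2.
class MeshSizeEstimate
{
    static void Main()
    {
        const long triangles = 1_000_000;
        const long vertices = 1_000_000; // assumption: ~1 vertex per triangle after splits

        const int bytesPerVertex = 12 + 12 + 16 + 8; // 48 bytes per vertex

        long vertexBytes = vertices * bytesPerVertex;  // 48 MB
        long indexBytes = triangles * 3 * sizeof(int); // 12 MB with 32-bit indices

        Console.WriteLine($"vertices: {vertexBytes / 1e6:F0} MB");
        Console.WriteLine($"indices:  {indexBytes / 1e6:F0} MB");
        // ~60 MB total before any cluster/LOD duplicates; extra UV sets or color
        // channels push it toward the ~75 MB ballpark, versus Nanite's quoted ~14 MB.
    }
}
```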
Personally I don't think Unity will implement this. Unity seems to focus more on mobile games and fast prototyping. Although I guess this might change since Unity purchased Weta, like ippdev mentioned.
It's the toolchain, not the technique, that's the problem. You want to be able to generate these really quickly, and I'm fairly sure there are a ton of edge cases to deal with and research needed there. The brute-force way would take too long.
Thanks for the video GimmyDev. Looks like it was way more complex than I thought. If that Nano Tech is really using a similar technique then I'm really impressed. Generating the required data sounds really involved, like hippocoder mentioned. I wonder if the person that made the Nano Tech demo used the UE5 toolchain, exported the data and then imported it into Unity. Then they'd only need to implement the rendering techniques.
There is always the option of simplifying the problem by enforcing modeling guidelines, to make the problem domain easier. Nanite is a kind of "optimized" brute force that decouples concerns, moving them from the artist to the tech, which leads to an over-engineered solution that is very generic. Over-engineered, over-generic solutions are what big companies do, because at their scale of resources it's the most competitive thing to do: it means less training for artists, and it also helps with sources like photogrammetry or film-grade meshes (i.e. big polygon-soup messes) by essentially automating the conversion workflow. It's also clever because it can be seen as a form of potential lossy compression (you could simply cull the small leaves), and its byte size is favorable to streaming.
But the same ideas in a less generic version, say one that requires strict quad-mesh modeling, could be an option too. The consistency at modeling time would simplify the algorithm (clear boundaries) and make it more predictable.
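As a toy illustration (purely hypothetical, not from any engine): if a patch is a strict quad grid of (2^k + 1) x (2^k + 1) vertices, every coarser LOD is just the same vertex grid sampled at a doubled stride, so shared edges between clusters stay aligned and crack-free by construction.

```csharp
using System.Collections.Generic;

// Sketch: LOD index buffers for a strict quad-grid patch. Coarser LODs reuse
// the same vertices at a larger stride, so patch boundaries always match up.
static class QuadGridLod
{
    // gridSize = vertices per side, e.g. 17 for a 16x16-quad patch.
    public static List<int> BuildIndices(int gridSize, int lod)
    {
        int stride = 1 << lod; // 1, 2, 4, ... grid steps per quad
        var indices = new List<int>();
        for (int y = 0; y + stride < gridSize; y += stride)
        {
            for (int x = 0; x + stride < gridSize; x += stride)
            {
                int i0 = y * gridSize + x;
                int i1 = y * gridSize + x + stride;
                int i2 = (y + stride) * gridSize + x + stride;
                int i3 = (y + stride) * gridSize + x;
                indices.AddRange(new[] { i0, i1, i2, i3 }); // one quad
            }
        }
        return indices;
    }
}
```

With Unity's MeshTopology.Quads you could feed each LOD's list straight into Mesh.SetIndices and never touch the vertex buffer.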
But this is a world where Nanite already exists and the method is documented… A less generic implementation that pushes issues down the workflow isn't competitive, unless we are talking about scrappy, specific small projects that want to take a risk tailored to the nature of their work.
I can see Unity evolving this into their own Nanite solution:
which kinda looks like the old ROAM algorithm,
and this is similar to the less generic solution I was talking about, even though it's about decimation:
The thing is, the fidelity being targeted here is not within typical reach of AAA. When you listen to what The Coalition said about assets, you know they're already forced to reduce polys in order to sustain dev times.
So given that you need to reduce polys, and yet still need enough of them to justify resolution-independent tech, the real problem is your own budget for authoring enough high-quality source art to make it worthwhile.
Right now, every "pushing the envelope" tech requires more, not less, work to make the most of it, and we can't even come close to saturating this tech because we can't source the assets for it.
And if we did source the assets for it, we would still need to source everything else that sustains this level of detail. I can't help but think this is an interim polygon-chasing fancy, and that for indies at least, some form of deep-trained image enhancement is more effective. And eventually for AAA too, once the quality is sufficient.
All this polygon hunting (which is essentially what it really is) isn't the future.
New tech always means more work. It increases productivity, and then the boss expects more productivity.
Like in the army: all the modern gear is way lighter than anything that came before, but soldiers now carry more weight than at any point in history. That's just the stupidity of humankind. We keep creating more problems. All the actual problems were solved ages ago; now every problem there is, is of our own making.
Anyway, I don't care too much about new tech coming out, but I'll probably start my next project in another 6-12 months, and if UE5 is production-ready around then, I might be able to save some real time using Nanite.
I want to make a game that takes place in a city, because there is a ton of art for generic cities already available. I don't plan on hiring an environment artist to help, so the big task of creating LODs for every model would be a real headache. It looks like Nanite will pretty much negate that mountain of work: I can more or less just drop models in, and LODs are completely automated.
I haven't actually looked into it beyond watching that one promo video, but it looks like this may be a case where new tech actually saves me work. We'll see, of course.