Let’s address the pressing matter at hand: Unity’s strategy and future plans in response to UE5’s Nanite and Lumen. These cutting-edge technologies have generated extensive discussion on these forums, prompting the community to develop its own solutions like NanoTech, H-Trace, and MF-SSGI.
In addition to community efforts, Nvidia has made significant strides by introducing their own implementations known as Micro-Mesh and Kickstart RT. You can find more information about them at the following links:
To the best of my knowledge, Nvidia has open-sourced these implementations, which could allow Unity to leverage their advancements. Furthermore, noteworthy industry players such as Adobe and Simplygon (Microsoft) are incorporating these technologies into their respective tools. You can find details about the integration on the Simplygon website: Simplygon Blog
I’m not claiming these technologies are easy to integrate into Unity, as I don’t know the implementation details. My intention was simply to raise awareness that others are catching up. I’d like to know Unity’s official stance on this and your plans for the future.
Games with user-generated content (UGC) benefit from real-time lighting solutions like Lumen. I understand that UGC and the Metaverse are strategic goals for Unity. With this in mind, I am very interested in learning more about Unity’s plans.
In my years of Unity experience, it’s lighting/GI and models that bring environments to life, yet lighting is the most complex and painful system to work with, even in the standard pipeline. I’d upvote lighting & performance features.
The Unity team should focus on fixing bugs and improving what they already have, e.g. Shader Graph, HDRP, and URP!
We are already developing solutions for Unity that are very similar to Nanite and Lumen, and they will be available later this year!
More info can be found on our Discord server (see my signature) or via the links below:
(Btw, I’m also planning to support Nvidia’s Micro-Mesh in NanoTech as an alternative mesh simplifier and have already started writing a plugin for Unity.)
@saskenergy:
I know your question is whether Unity is going in that direction or not, but from my perspective it doesn’t make sense for them to invest time and money there: we (Mike and I) have basically done the job for them, so they should help or support us rather than reinvent the wheel.
Regarding Dynamic Global Illumination, we intend to sequence delivery in a way that helps ensure that our solutions scale across a wide range of devices.
First, we are building Dynamic GI on Adaptive Probe Volumes (APV), which we expect can also be performant on a range of mobile devices. However, it may come with some trade-offs in visual quality, limitations on procedural/user-generated content creation, and additional workflow steps we must ask creators to adopt to achieve the desired results. The supported range of mobile devices will depend on upcoming internal performance benchmarks and the optimizations we can include in time for release.
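(For readers unfamiliar with the probe approach: APV is sampled per pixel in shaders and has no public scripting API for this, but the classic LightProbes API illustrates the same interpolation idea. A minimal sketch, purely for illustration:)

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// A minimal sketch of the probe-interpolation idea that APV builds on.
// This uses the classic LightProbes API, not APV itself, which is only
// sampled inside shaders.
public class ProbeLightingSample : MonoBehaviour
{
    void Update()
    {
        // Interpolate the baked SH probes surrounding this object.
        LightProbes.GetInterpolatedProbe(
            transform.position, GetComponent<Renderer>(),
            out SphericalHarmonicsL2 sh);

        // Evaluate the indirect irradiance arriving from world-up.
        var directions = new[] { Vector3.up };
        var results = new Color[1];
        sh.Evaluate(directions, results);
        Debug.Log($"Indirect light from above: {results[0]}");
    }
}
```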
Next, we are building a Dynamic GI solution tailored to higher-end devices that can provide higher-quality visuals, better supports user-generated content, and reduces the authoring workflow steps. We aren’t able to reveal our technical approach yet, but we are excited about the direction technology is going in this space.
For each, we aren’t yet able to discuss target release dates.
Hi, @Nexusmaster ! I’m actually in your Discord and have been keeping a close eye on your development since last year ;).
Congrats on the 1.0.0 release of Erebus, by the way!
Thanks for responding, StevenK! Glad to know Unity is still working on another dynamic GI solution aside from APVs. As you’ve pointed out, there are limitations in the APV system for truly procedurally generated worlds. I know you guys can do it and I believe in you!
Speaking of this, are there any plans for an out-of-the-box day/night cycle system that is highly optimized for mobile platforms, especially Android? From what I know, this is usually achieved by sampling light probes.
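(The approach I’ve seen described is roughly this: bake ambient probes for two times of day and blend the SH coefficients at runtime, which is very cheap on mobile. A minimal sketch, where dayProbe/nightProbe are hypothetical data you’d bake or capture yourself:)

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// A minimal sketch of one common probe-based day/night approach:
// blend between two pre-baked ambient SH probe sets at runtime.
public class DayNightAmbientBlend : MonoBehaviour
{
    // Hypothetical data, assumed to be baked or captured elsewhere.
    public SphericalHarmonicsL2 dayProbe;   // e.g. captured at noon
    public SphericalHarmonicsL2 nightProbe; // e.g. captured at midnight

    [Range(0f, 1f)] public float timeOfDay; // 0 = day, 1 = night

    void Update()
    {
        // SphericalHarmonicsL2 supports + and * operators, so a linear
        // blend of the two coefficient sets is a one-liner.
        RenderSettings.ambientProbe =
            dayProbe * (1f - timeOfDay) + nightProbe * timeOfDay;
    }
}
```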
Regarding solutions to enable dense world rendering, I just wanted to chime in and say we are excited about the direction the industry is moving in and the use cases it enables. We are not ignoring it. And while we have been experimenting with developments in the field, we don’t have anything concrete to share publicly at this time.
Will it be possible to re-bake APVs (‘Dynamic GI on APV’) at runtime, or has this idea been completely rejected, meaning projects that rely on procedural/dynamic environments will need to wait for the new ‘Dynamic GI solution’?
What about adding/removing probes or changing probe density at runtime? Any chance of that, or is it out of scope for this system?
I’m thinking of a scenario where the APV API allows editing/adding probes in a generated scene at runtime and then baking/re-baking them.
We support streaming of probes, so the APV structure is updated at runtime, but we do not support changing the density (as the distribution is not uniform but adaptive, it would be costlier to get right at runtime).
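(For what it’s worth, the classic LightProbes system, not APV, does already allow overwriting baked SH coefficients at runtime, though probe positions stay fixed. A minimal sketch, where newCoefficients is hypothetical data you compute or load yourself:)

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// A minimal sketch (classic LightProbes API, not APV): baked SH
// coefficients can be replaced at runtime, but probe positions
// come from the bake and cannot be moved this way.
public static class RuntimeProbeUpdate
{
    public static void ApplyCoefficients(SphericalHarmonicsL2[] newCoefficients)
    {
        var probes = LightmapSettings.lightProbes;
        if (probes == null || probes.bakedProbes.Length != newCoefficients.Length)
            return; // the count must match the baked probe set

        probes.bakedProbes = newCoefficients;

        // Only needed when probe *positions* change, e.g. after
        // additively loading scenes that contain their own probes:
        LightProbes.TetrahedralizeAsync();
    }
}
```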
No, I’m personally done with relying on third-party plugins for Unity that may die off at any time when the respective companies decide they can’t continue for whatever reason. Using Unity features is already a gamble. I definitely want native support for features as important as lighting and LODs; in fact, this is exactly what I expect a game engine to handle. And on top of this, I don’t want to pay hundreds of dollars for something that should be built in. I can just switch to Unreal Engine if Unity decides to give up.