Nvidia just released a blog + paper about rendering millions of lights in real time. What are the chances that we are going to see this used in game engines soon?
To be honest, their video presentation of the tech is really bad.
It doesn't show what the tech can do.
Most of the presented scenes have uniform, static lights. Light baking can achieve similar effects with less GPU strain.
There are only a few animated ones, besides a carousel, but the lighting is too uniform to see any particular effect.
With a different presentation I would be more impressed. Even Metro doesn't show this WOW effect, in my opinion.
I think filmmakers may actually be the target audience for this.
For actual games? Very unlikely. To quote their publication, "We implemented our approach on the GPU, rendering complex scenes containing up to 3.4 million dynamic, emissive triangles in under 50ms per frame while tracing at most 8 rays per pixel."
A 50ms frame means 20 FPS. I had to go digging to determine what they kept referring to when they said "GPU", and it's a 2080 Ti. The 30 series is quite a bit faster, but we're still likely talking about console-level performance here. A 4000 or 5000 series card would help, but then games would be more demanding too, so who knows.
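To put that 50ms in perspective against a typical frame budget (the 60 FPS target below is my own assumption for comparison, not something from the paper):

```python
# Frame-time math behind the 20 FPS figure.
lighting_ms = 50.0                            # reported ReSTIR cost per frame on a 2080 Ti
fps_if_lighting_only = 1000.0 / lighting_ms   # = 20 FPS, before anything else renders
budget_60fps_ms = 1000.0 / 60.0               # ~16.7 ms for the *entire* frame at 60 FPS
print(fps_if_lighting_only, budget_60fps_ms)  # -> 20.0  16.67 (roughly)
```

So the lighting pass alone would eat roughly three whole frames' worth of a 60 FPS budget.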
Based on past experience of NVIDIA tech demos vs. actual widespread (albeit limited/partial) use of the tech in a published AAA title, I would say about 5 years, with limited titles supporting the tech for enthusiast gamers within 2-3 years (usually one hardware generation after the one it was presented for; in this case the NVIDIA 5000 series, or sometime in 2025).
Oops… now I dropped my trusted old looking glass. Damn!
Not good, because this stuff depends on RTX cards, and GPU shortage says hi.
That luckily has become way less of a problem in the last couple of months.
Admittedly, many are waiting for the next generation that should come out in a couple of months, and those will probably be sold out quickly again, but at least that all means, absolute worst case, you gotta wait one generation to get a GPU easily, and RTX is now
The tech does look interesting. The question is: how does it scale?
If millions of lights lead to 20 FPS, would 10,000 lights achieve at least 200 FPS? Or is there a massive base cost, so that the number of lights hardly matters?
I assume the latter too, because when they can present something for games, we can usually be quite sure that NVIDIA will do so explicitly.
What I believe their selling point for this might be is ease of use. You could recreate what they show in the video quite well with post-processing and smart placement of fewer lights, emissive textures and so on, but that requires a lot of thinking and adjustment.
Being able to just place lights like you would in reality makes the process of designing a scene easier.
Overall, the technology appears to be a massive improvement in performance vs. quality over the previous state of the art for this specific case, which is cool. However, it's still too expensive for gaming for at least several GPU generations.
Still, it's another step towards a fully realistic lighting & rendering solution that in the future would let game development drop all the various tricks and shortcuts we currently use.
I appreciate it for its contribution, but I think the focus should be performant GI. We can't even solve one light well yet (the sun and its bounces). I get it's just a play on words, but it seems more of an inspiration thing than a practical thing.
This could be useful for light particles affecting GI. Every particle would contribute to indirect lighting.
Are there approaches to GI that do not require baking?
Because that would be a strong point of this technique here.
Or you do it in your art/3D-modeling program.
By "millions of lights" they refer to each polygon being a light source, not that they placed millions by hand.
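To make that concrete, here's a rough sketch of what "every emissive triangle is a light" looks like on the data side (the array layout and names are illustrative assumptions, not how NVIDIA's code is structured):

```python
import numpy as np

def build_light_list(vertices, triangles, emission):
    """Collect every emissive triangle as its own light source.

    vertices:  (V, 3) array of positions
    triangles: (T, 3) array of vertex indices
    emission:  (T, 3) array of per-triangle emitted radiance (zero if not a light)
    Returns a list of (triangle_index, area, flux) entries used for light sampling.
    """
    lights = []
    for t, (i0, i1, i2) in enumerate(triangles):
        if not np.any(emission[t] > 0.0):
            continue  # not an emitter, skip
        e0 = vertices[i1] - vertices[i0]
        e1 = vertices[i2] - vertices[i0]
        area = 0.5 * np.linalg.norm(np.cross(e0, e1))
        # For a diffuse emitter, flux is proportional to radiance * area * pi;
        # it is a handy importance value when choosing which lights to sample.
        flux = float(np.mean(emission[t])) * area * np.pi
        lights.append((t, area, flux))
    return lights
```

The list itself is trivial to build; the hard part is deciding, per pixel and per frame, which of those millions of entries are worth a shadow ray, which is what the resampling is for.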
Certainly impressive, but the flickering/fringing of moving shadow edges due to rasterization is quite obvious…
Well, 50ms isn't bad if you do real-time caching into some lighting structure like a cubemap array, SH light probes or a lightmap (as with DDGI), while also obviously limiting the speed at which lights move and change.
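For illustration, here's a minimal sketch of the SH-probe flavour of that caching idea, assuming you already have traced radiance samples for a probe (DDGI proper stores octahedrally mapped irradiance textures rather than SH, so treat this as the general idea rather than that exact technique):

```python
import numpy as np

def sh_basis_l1(d):
    """First four real spherical-harmonics basis functions (bands 0 and 1)."""
    x, y, z = d
    return np.array([
        0.282095,        # Y_0^0
        0.488603 * y,    # Y_1^-1
        0.488603 * z,    # Y_1^0
        0.488603 * x,    # Y_1^1
    ])

def project_probe(sample_dirs, sample_radiance):
    """Monte Carlo projection of incoming radiance into L1 SH coefficients.

    sample_dirs:     (N, 3) unit directions, uniformly sampled over the sphere
    sample_radiance: (N, 3) RGB radiance traced along each direction
    Returns a (4, 3) array of SH coefficients per colour channel.
    """
    coeffs = np.zeros((4, 3))
    for d, radiance in zip(sample_dirs, sample_radiance):
        coeffs += np.outer(sh_basis_l1(d), radiance)
    # Average the samples and multiply by the measure of the sphere (4*pi).
    return coeffs * (4.0 * np.pi / len(sample_dirs))
```

Amortised over several frames per probe, a 50ms "all lights" solve becomes a lot more palatable, at the cost of latency in how quickly lighting changes propagate.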
This was released more than two years ago (May 2020), but it is still a popular subject. The technique has been evolving rapidly, allowing it to be used for indirect lighting, specular, caustics and in actual games (RTXDI).
From the original paper, to production:
ReSTIR: Rearchitecting Spatiotemporal Resampling for Production (2021)
From screen space reservoirs, to world space grid based reservoirs:
ReGIR: Rendering of Many Lights with Grid-Based Reservoirs (2021)
From direct lighting only, to multi bounce indirect lighting:
ReSTIR GI: Path Resampling for Real-Time Path Tracing (2021)
From diffuse only, to arbitrary paths (caustics, reflections):
GRIS: Generalized Resampled Importance Sampling (2022)
If you think it looks bad and there is no "WOW" effect, you are looking at it wrong. This is not a regular graphics showcase to advertise photorealistic graphics to the general public, it is a showcase for real-time direct lighting only. If you compare what kind of direct lighting a regular path tracer can offer at 1 sample per pixel and then look at the video again, you will be amazed.
In Rearchitecting Spatiotemporal Resampling for Production, Chris Wyman and Alexey Panteleev tackle the issues you mentioned and rearchitect the original 2020 ReSTIR implementation to get it ready for production, using fewer shadow rays, better cache flow and better shading.
This table gives a good overview of everything you'd want to know. It shows the total cost divided into getting the initial candidates (selected lights) ready, tracing rays through the scene, and re-using/shading, while also mentioning the speedup over the original paper. Scene complexity increases from left to right.
As you can see, most of the cost is spent on tracing the actual rays, which is a good thing, because this has nothing to do with ReSTIR, and for gaming we can solve this with LODs.
If you would like to see ReSTIR in action, ZyanideLab has created an open source branch of Quake II RTX that makes use of ReSTIR, allowing it to run in full, half and quarter resolution, giving both a performance and quality increase over default.
This is exactly what it does: it brings more performance to GI, allowing us to go from thousands of samples per pixel to 2 samples per pixel. This means moving from offline rendering to real-time rendering at interactive framerates. Not only that, but the quality is increased too. We were already able to solve lighting for a single light in real time very well using NEE (Next Event Estimation), but this makes it scale to millions of lights.
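For anyone unfamiliar with NEE, here's a stripped-down one-sample version for a single shading point; the sample_on_light and visible callbacks are placeholders made up for the sketch, not part of any real API:

```python
import math
import random
import numpy as np

def sample_direct_nee(shading_point, normal, brdf, lights, sample_on_light, visible):
    """One-sample Next Event Estimation for direct lighting at one shading point.

    lights:          list of emitters (e.g. emissive triangles)
    sample_on_light: returns (position, light_normal, area_pdf, emitted_radiance)
    visible:         shadow-ray test between two points
    """
    # 1. Pick one light. Uniform picking is fine for a handful of lights,
    #    but with millions of emissive triangles this choice is exactly
    #    where ReSTIR spends its effort.
    light = random.choice(lights)
    pick_pdf = 1.0 / len(lights)

    # 2. Sample a point on that light.
    x, n_l, area_pdf, Le = sample_on_light(light)
    to_light = x - shading_point
    dist2 = float(np.dot(to_light, to_light))
    wi = to_light / math.sqrt(dist2)
    cos_surf = max(float(np.dot(normal, wi)), 0.0)
    cos_light = max(float(np.dot(n_l, -wi)), 0.0)
    if cos_surf == 0.0 or cos_light == 0.0:
        return np.zeros(3)

    # 3. Shadow ray: the expensive part, and the reason real-time engines
    #    still lean on unshadowed point lights.
    if not visible(shading_point, x):
        return np.zeros(3)

    geometry = cos_surf * cos_light / dist2
    return brdf(wi) * Le * geometry / (area_pdf * pick_pdf)
```

The expensive parts are the shadow ray and, once you have huge numbers of emitters, choosing which light to sample in the first place; that second part is what the resampling attacks.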
Even thousands of lights might sound like overkill to a lot of people, but thousands of lights in the case of path tracing means thousands of emissive triangles. A single light bulb (or monitor, TV screen, neon sign, etc.) can already consist of many triangles, and to be able to represent its influence in a realistic way, you need to take into account every triangle of its surface.
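Per triangle, the sampling itself is cheap; a standard uniform-area sample on one triangle looks like this (just to show why "one light bulb" really means "many small area lights"):

```python
import numpy as np

def sample_triangle_uniform(v0, v1, v2, u1, u2):
    """Uniformly sample a point on a triangle from two random numbers in [0, 1).

    Uses the usual sqrt warp so samples are uniform over the area.
    Returns the sampled position and the area pdf (1 / triangle area).
    """
    su1 = np.sqrt(u1)
    b0 = 1.0 - su1
    b1 = u2 * su1
    position = b0 * v0 + b1 * v1 + (1.0 - b0 - b1) * v2
    area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))
    return position, 1.0 / area
```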
ReSTIR is basically a signal amplifier for the initial candidates: you can feed it any desired target distribution, and it will keep resampling itself toward that target. The better the input signal, the better the output, which means we can use light clusters and/or reflection probe knowledge to improve the input signal.
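A rough single-pixel sketch of that resampling core (streaming RIS with a weighted reservoir), leaving out the spatial and temporal reuse passes and using made-up helper names, looks something like this:

```python
import random

class Reservoir:
    """Weighted reservoir that keeps exactly one candidate out of a stream,
    each with probability proportional to its resampling weight."""
    def __init__(self):
        self.sample = None   # the surviving candidate
        self.w_sum = 0.0     # running sum of candidate weights
        self.m = 0           # number of candidates seen

    def update(self, candidate, weight):
        self.w_sum += weight
        self.m += 1
        if self.w_sum > 0.0 and random.random() < weight / self.w_sum:
            self.sample = candidate

def resample_one_light(lights, target_p_hat, num_candidates=32):
    """Streaming Resampled Importance Sampling (RIS) over a set of lights.

    Initial candidates are drawn uniformly (source pdf = 1/len(lights));
    target_p_hat is the distribution we wish we could sample, e.g. the
    unshadowed light contribution at this pixel.
    Returns (chosen_light, W), where W replaces 1/pdf when shading the chosen light.
    """
    source_pdf = 1.0 / len(lights)
    reservoir = Reservoir()
    for _ in range(num_candidates):
        light = random.choice(lights)                   # uniform source distribution
        weight = target_p_hat(light) / source_pdf       # re-weight toward the target
        reservoir.update(light, weight)
    if reservoir.sample is None or target_p_hat(reservoir.sample) == 0.0:
        return None, 0.0
    W = reservoir.w_sum / (reservoir.m * target_p_hat(reservoir.sample))
    return reservoir.sample, W
```

The spatiotemporal part of ReSTIR then merges a pixel's reservoir with reservoirs from neighbouring pixels and from the previous frame, which is where the amplifier effect comes from: effective sample counts grow far beyond what you actually paid for per frame.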
You guys are missing the most important part: it's not just millions of lights but millions of SHADOW-CASTING lights.
While everyone is focused on GI, the reality is that direct illumination in real-time graphics is trash: we're still stuck with mostly unshadowed point lights. This tech allows every single light source to cast realistic lighting and shadows, even for emissive and volumetric materials. That makes a massive difference in rendering quality, especially for realistic scenes.
Just so OP knows: that blog post and paper were released in 2020, and it became RTXDI, which of course is part of NVIDIA's licensing suite. I wish it didn't require RTX cards, but the actual result is beautiful.
Agreed, the tech is definitely unrivaled, millions of shadow-casting AREA lights at that, which look much better than regular point lights.
WOW, AAA should totally learn this one new trick then. There are probably decent reasons why not, and I would like to know them, but I lack the time.
I mean, I can say right now that, unless the paper was written while a game was in production with this tech, 2 years is some extremely sharp turnaround time from paper to production to release. Like, I'd say that's the biggest one. Also, dynamic updates at speed would likely run into issues, because on a 2080 Ti they were averaging 50ms update times. When you account for all the rest of the rendering stack on the GPU, that's gonna be a pretty big problem.
On top of that, most RTX-compatible games have some pretty meagre offerings as far as improvements are concerned, largely because it's only recently that a decent number of people even have compatible cards. There's also the console market to consider, and how performant similar operations would be on those platforms. And then, if these things are pursued, they'd have to be implemented and tweaked for performance.
So basically the decent reasons are every reason you can think of to not implement bleeding edge tech.