So what's the future of RT-GI?

I'm curious if anyone has heard, read, or seen any updates on what our next realtime GI solution will be, since the one we have now is deprecated (at least in HDRP); I'm not sure whether it's also deprecated in the built-in renderer or URP.

So I'm just wondering if there's any word on potential solutions Unity plans to either design themselves or bring in from a third party, that anyone has heard about as of yet.

I remember reading a while back about a really cool realtime GI solution that didn't even need to bake or anything. I think it was @hippocoder who posted about it, but I'm not sure. (Sorry Hippo if it wasn't you, mate)…

Thanks everyone. It's been a while since I've been in the forums and actually talking; I find myself just reading more than posting lately, as the forums suck so much time away, hahaha.

There are actually two big implementations of RTGI that aren't pure raytracing like RTX: voxel-based, like SEGI, and DDGI. What hippocoder shared was a variant of the SEGI method; Unity is working on a DDGI variant.

There is another option, but it's not proven yet; I'm working to see if it has any future: Exploration of custom diffuse RTGI "approximation" for open gl es 2.0 and weak machine - #3 by neoshaman

NOW, if someone can tell me how to pass a light structure to a purely texture-based shader, lol.

Oh wow, that's pretty interesting, mate… Did you have to create a new lighting system to make that work, or have you not gotten that far yet? Perhaps I don't understand how GI is actually made in general; maybe there's no lighting actually involved, I don't know.

GI is only lighting.
The basic out-of-the-box lighting you get in Unity doesn't bounce back.
GI is that bounce added back: any surface that receives light emits some of it back, which means any lit pixel is potentially a weak light. The main issue is finding, for every pixel in the scene, which pixels get lit by another pixel (the bounce) without being hidden by occlusion, so it's like computing shadows multiplied by every pixel that receives light. It's recursive, so the light emitted by other pixels gets emitted back too; you keep going until there is no more energy left to bounce.
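To make that loop concrete, here's a tiny CPU-side sketch of the idea (purely illustrative: toy patch counts and visibility weights, nothing like real engine code):

```python
# Illustrative radiosity-style bounce loop (hypothetical toy example, not engine code).
# Each patch has an emitted value (direct light) and an albedo (how much it re-emits).
# visibility[i][j] says how much patch i "sees" patch j (0 = occluded).

emitted = [1.0, 0.0, 0.0, 0.0]          # only patch 0 is directly lit
albedo  = [0.5, 0.8, 0.8, 0.3]
visibility = [
    [0.0, 0.3, 0.1, 0.0],
    [0.3, 0.0, 0.2, 0.1],
    [0.1, 0.2, 0.0, 0.4],
    [0.0, 0.1, 0.4, 0.0],
]

radiance = emitted[:]                    # start with direct light only
for bounce in range(8):
    gathered = []
    for i in range(len(radiance)):
        # gather light from every patch this one can see, scaled by its albedo
        incoming = sum(visibility[i][j] * radiance[j] for j in range(len(radiance)))
        gathered.append(emitted[i] + albedo[i] * incoming)
    if max(abs(a - b) for a, b in zip(gathered, radiance)) < 1e-4:
        break                            # no meaningful energy left to bounce
    radiance = gathered

print(radiance)                          # direct light plus all the bounced contributions
```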

What I do is bake geometry data to a texture, then bake visibility to a texture too (to know which group of pixels can light a given one), then apply basic lighting, and then gather the result as the bounce. It gets stored to a texture that registers the light every pixel receives, which is then used to emit back when the next pass gathers the result again. The great thing is that because it's texture-based, objects just need to sample the texture, and when you stop the process, well, it's a texture, so it's like baking: you can basically update it only when you want, and most of the time it's baked. This allows execution to be cheap enough, at the price of many approximations and sacrifices.
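A rough sketch of how that pipeline fits together, with plain arrays standing in for the textures (the names and numbers below are my own illustration, not the actual implementation): visibility is baked once, a direct pass writes lighting into one "texture", and each gather pass reads the previous texture and writes the bounce into the next one; stop whenever you want and the last texture is your baked result.

```python
# Toy sketch of the texture-based approach (arrays stand in for textures;
# names are illustrative, not the real implementation).

# Baked once, offline: for each texel, the list of (visible_texel, weight)
# pairs it is allowed to gather light from.
baked_visibility = [
    [(1, 0.3), (2, 0.1)],
    [(0, 0.3), (3, 0.2)],
    [(0, 0.1), (3, 0.4)],
    [(1, 0.2), (2, 0.4)],
]
albedo = [0.5, 0.8, 0.8, 0.3]

def direct_pass(num_texels):
    """Write the basic (non-bounced) lighting into a fresh lighting texture."""
    direct = [0.0] * num_texels
    direct[0] = 1.0                      # pretend only texel 0 is directly lit
    return direct

def gather_pass(prev_texture, direct):
    """Read the previous lighting texture, gather the bounce, write a new one."""
    out = []
    for i, links in enumerate(baked_visibility):
        incoming = sum(weight * prev_texture[j] for j, weight in links)
        out.append(direct[i] + albedo[i] * incoming)
    return out

direct = direct_pass(4)
lighting = direct[:]                     # texture A
for _ in range(4):                       # run as many passes as you can afford...
    lighting = gather_pass(lighting, direct)   # ...ping-ponging A -> B -> A ...

# ...then stop: `lighting` is just a texture, so objects simply sample it,
# and you only re-run the passes when something meaningful changes.
print(lighting)
```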

I don't know yet if I will have to create a new lighting system; I'm just doing a proof of concept, so it's not ready for prime time. Right now I'm trying to see how much I can leverage Unity's lighting in the basic lighting pass, before writing the bounce pass.

I would expect better results from what Unity will do, though, or from SEGI. They operate at a finer scale than I do, and with much less approximation.

Pretty interesting man, I really hope you can get it.

Now, when you say baking to textures, does that mean dynamic moving objects can still reflect light onto surfaces while moving? The reason I was asking all this about GI is that today I was drinking a soda and something caught my eye. I was like, what's this? Light reflected off the can onto the bushes about 3 feet in front of me, so when I got home I just had to ask about all the GI stuff, lol.

Generally, realtime solutions don't allow dynamic objects to contribute, because that's expensive, so we use the fact that GI looks a bit blurry to spread the computation and the changes over many frames, with GI basically being used as a complex ambient light. SEGI does allow it, but it's also expensive; it means voxelizing every frame. DDGI basically bakes a series of light probes in real time, so it's like baked light probes: dynamic objects only receive.
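A toy sketch of that "spread it over many frames" scheduling, just to show the round-robin idea (the numbers and names are made up, not how SEGI or DDGI actually schedule their work):

```python
# Toy sketch of amortizing GI updates over frames (numbers are made up).
NUM_PROBES = 64
PROBES_PER_FRAME = 8                     # refresh only a slice each frame

probe_values = [0.0] * NUM_PROBES
cursor = 0

def expensive_gi_update(probe_index, frame):
    # stand-in for the real work (voxel cone trace, ray query, gather pass...)
    return (probe_index + frame) * 0.001

def tick(frame):
    """Called once per frame: refresh the next slice of probes, round-robin."""
    global cursor
    for _ in range(PROBES_PER_FRAME):
        probe_values[cursor] = expensive_gi_update(cursor, frame)
        cursor = (cursor + 1) % NUM_PROBES

for frame in range(16):                  # every probe gets refreshed every 8 frames
    tick(frame)
```

Because GI is low frequency and blurry, nobody notices that any given probe is a few frames stale.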

There is a way to do it with my system, but you still have to pay the cost (another texture to update every time a dynamic object moves), and it would only work with the worst approximation (the one I'm starting with), in which light doesn't travel in a straight line but "bends" (see my link for an image). That's why I need to do a proof of concept, to see if the artefacts are good enough (for a coarse ambient-like light, for open worlds and PCG). Also, I don't really handle transparency or fancy stuff, only opaques.

Even with RTX, dynamic objects are expensive (skinned meshes at least), but it's probably the best quality you can get.
