HXGI Realtime Dynamic GI

I’ve been working on a realtime GI solution for Unity. I can’t use Unity’s precomputed GI system, since my game is procedurally generated; additionally, I wasn’t happy with the speed, quality, and limitations of the other methods available, so I had to roll my own system.


(Newest screenshot, using the new sparse radiance volume. Everything below this is older pictures using LPVs to generate the data; the quality is now on par with Unity’s progressive lightmapper.)

Features

  • Fully dynamic
  • Dynamic shadow-casting directional light
  • Emissive surfaces (supports single-sided surfaces)
  • Infinite bounces
  • Works well with normal maps
  • Little to no light bleeding
  • Sparse data structure for larger view distance and less VRAM
  • Skybox lighting

Roadmap

  • Filter pass to remove noise
  • Half-res rendering for faster performance
  • Dynamic shadow/non-shadow-casting point and spot lights
  • Realtime off-screen reflections
  • Forward/transparency support
  • Integration with HXVolumetricLighting
  • VR support

Performance depends on a lot of factors, so it’s hard to nail down the cost of the effect. Scene complexity, the view distance of the GI, and how fast you want light to respond to scene changes are the major factors. The number of emissive surfaces has little to no impact on performance, so a lot of expensive per-pixel lights can be replaced with emissive meshes.

I made this post to catalog my progress and have a centralized place that people can follow along, rather than through my Twitter. I’m not sure yet if I will release this to the public, since I don’t know if I have the time to support another asset or if it’s even financially worth my time, but I’d like to gauge interest in it.


I love the idea of emissive meshes actually - makes certain kinds of scenes so much easier to do. Strip lights, architectural lighting, etc…

This is undoubtedly sick. I’ve been following it on Twitter for the past few weeks, so I’ve been watching the progress and have been hoping you’d decide to put it on the store!

I guess the main question I have is how this compares with SEGI. I suppose it’s a fair bit faster, since you say it would work with VR? If you’d be willing to support this solution, I think the community would certainly appreciate it, as it seems SE is having a hard time devoting time to SEGI.

How does it behave in a vast mountain environment with small-scale detail in the foreground?


Does it rely on realtime voxelization? Can we bake/stream voxels? Can we cache this for PCG levels? Can we pre-bake objects that snap to a grid and assemble their voxel modules to recreate the whole voxel volume for lighting by inserting them?

SEGI currently has three big advantages over my system.

  • Generally looks a lot more accurate on the first bounce.
  • Resolves changes in the scene a lot faster.
  • Supports skybox lighting.

But it suffers from:

  • A lot of light bleeding, due to the nature of cone-traced GI.
  • Ghosting when the camera or objects move, because the GI is calculated in screen space with temporal re-projection sampling.
  • Only really handling two light bounces and faking anything after that, which tends to look bad in low-light situations.
  • Being pretty expensive to render, especially at 1080p+, which generally forces half-res rendering and gives you an unacceptable aliasing effect.
  • Being limited to deferred.

A lot of these issues might be fixed with optimizations, but the light leaking probably can’t be.

If you need lighting that responds almost instantly, then SEGI is better suited.

My solution runs a lot faster, should be able to support forward rendering since the pixel shader is really lightweight, and handles infinite light bounces. Lighting is calculated independently of the view, so there isn’t any ghosting from temporal re-projection, and light leaking is limited to one voxel. It will never leak through a wall, but you might see a small amount of light on the ground at the base of a really thin wall, and light can also creep around corners a little. Currently I have no way of calculating skybox occlusion, though I have a few ideas.

The biggest downside is that it can take 0–0.5 seconds to fully resolve the lighting when something contributing to GI moves; it looks like a ripple of light. You can see the effect in this video (old). There are settings to mitigate this at the cost of performance, and it isn’t nearly as noticeable in a lit scene; the directional light in the above video is doing it, and I bet you didn’t notice. Adding cascades will also mostly mitigate the effect. Things like players and fast-moving objects are best left out of the GI contribution; they will still be lit correctly. Mitigating the effect does take up more VRAM.
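As a rough back-of-the-envelope illustration (not the actual implementation, which the post doesn’t spell out): if light advances a fixed number of voxels per frame, the resolve time is just distance over propagation rate. The function and parameter names here are made up for the example.

```python
def resolve_time(distance_voxels, steps_per_frame, fps=60):
    # Light advances `steps_per_frame` voxels each rendered frame, so a
    # change must ripple across `distance_voxels` cells before it settles.
    frames = -(-distance_voxels // steps_per_frame)  # ceil division
    return frames / fps
```

Under those assumptions, a change 30 voxels away at one step per frame and 60 fps takes half a second to settle, which lines up with the 0–0.5 s figure; more steps per frame buys responsiveness at the cost of performance.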

You set the resolution and scale of the voxel grid. Only GI near the camera is calculated; that distance depends on the kind of quality you want. Adding cascades will open it up to larger view distances, but for now it’s probably limited to 50–100 units depending on how good you want the effect to look.
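For a sense of scale: the covered distance is just resolution times voxel size, and cascades extend it. A hypothetical sketch, assuming each cascade level doubles the voxel size (that doubling scheme is my assumption, not something the post states):

```python
def cascade_extents(resolution, voxel_size, cascades):
    # World-space width covered by each cascade level, assuming each
    # level doubles the voxel size (and therefore the covered distance).
    return [resolution * voxel_size * 2 ** level for level in range(cascades)]
```

For example, a 64-wide grid at 1 unit per voxel would cover 64 units, with the next two cascades reaching 128 and 256 units at coarser detail.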

It voxelizes the scene over X frames, and it’s currently a lot faster than SEGI’s voxelization. You can slow down the voxelization, or turn it off completely for a speed improvement; as long as no GI-contributing object moves, everything will work fine. Dynamic non-GI objects still get lit correctly within the bounds. You can even stop the light propagation if you don’t need to move the directional light, and then the effect is almost free. So you could warm up the effect, then pause the voxelization and light propagation for really fast baked-style lighting that can still light dynamic objects, though only within the bounds of the effect. You can do a similar thing in SEGI, but it still needs to do the cone-tracing step. Honestly, just turning the voxelization and light propagation speed down to something like 1/16 is pretty fast if you don’t need moving GI objects.
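Spreading voxelization over X frames amounts to re-voxelizing a different slab of the volume each frame. A minimal sketch of that scheduling idea, with hypothetical names (this is not the plugin’s code):

```python
def slices_for_frame(frame, depth, frames_per_pass):
    # Return the Z-slice indices to re-voxelize this frame, so that a
    # full pass over `depth` slices completes every `frames_per_pass`
    # frames.
    per_frame = -(-depth // frames_per_pass)  # ceil division
    start = (frame % frames_per_pass) * per_frame
    return list(range(start, min(start + per_frame, depth)))
```

If something like `frames_per_pass` corresponds to the "1/16" speed mentioned above, a full re-voxelization would complete every 16 frames; lowering it resolves scene changes faster at a higher per-frame cost.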


Is that picture lit entirely by the emissive surfaces? It looks as good or better than normal real-time lights in that screenshot.

Super interested in this, Lexie, but you knew that already. :stuck_out_tongue: Hoping you’ll find it worthwhile as a sellable asset to the community, which is why I wanted to post here, to add my name officially to the list of potential customers. :slight_smile:

Yes. It does a specular approximation from all the lighting; even bounce lighting contributes to specular. But it falls a little short on really smooth surfaces: you can kind of notice the colors shifting blue towards the end of the specular highlight. I’ll probably have to ray trace the voxel grid for fast reflections to make really smooth surfaces look good. The other option is enabling screen-space reflections, but those are SLOOOOOW and don’t handle off-screen reflections.
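Ray tracing the voxel grid for reflections can be as simple as marching the reflection ray through the grid until it hits an occupied cell. A toy fixed-step march to show the idea (names are illustrative; a real implementation would likely use a 3D-DDA traversal, and on the GPU):

```python
import math

def march_voxels(occupied, origin, direction, step=0.25, max_dist=32.0):
    # Step along a unit-length ray through a unit voxel grid and return
    # the first occupied integer cell hit, or None if nothing is hit.
    t = 0.0
    while t < max_dist:
        cell = tuple(math.floor(o + d * t) for o, d in zip(origin, direction))
        if cell in occupied:
            return cell
        t += step
    return None
```

Fixed-step marching can skip thin features at shallow angles; a DDA-style traversal visits every cell the ray actually crosses, which is why it is the usual choice for voxel reflections.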

What I meant is: can we pre-voxelize the scene and save/load that as a separate asset, so it only runs the lighting and not the voxelization? Or would that be possible in the future, and is there a real advantage to doing it? (Assuming no moving objects, of course.) And can objects be inserted in realtime without having to voxelize the whole geometry, using per-object pre-voxelized data? Basically I’m trying to avoid runtime voxelization and use that data as “tiles” to insert into the “voxel map”.

Yeah, screen-space reflections are generally problematic; the lack of off-screen reflections usually prevents me from using them.

It’s very impressive. It seems like it could solve a lot of the problems I’ve encountered lighting up big procedural worlds in Unity.

Screen-space reflection is usually for contact and inter-object dynamic reflections, and it’s generally used with a graceful fallback to light probes. Look at GDC Vault talks, like the Mirror’s Edge Catalyst one; they explain the idea well. SSR should never be used alone.


My issue with it is that I can never get reflection probes set up in any way where the transition is at all graceful.

It would be possible, and you would see a speed improvement. The downside is that it can take up a lot of space to have those all saved out. My last system did that; I had a pre-calculated volume for everything, and it started taking up a lot of space.
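For what the tile idea could look like: with a sparse volume keyed by grid coordinates, inserting a pre-voxelized module is just a translate-and-merge. A hypothetical sketch (this is not HXGI’s actual data structure):

```python
def insert_tile(volume, tile, offset):
    # `volume` and `tile` are sparse dicts mapping (x, y, z) grid cells
    # to voxel data. Grid-snapped tiles only need an integer translation
    # before merging, so no runtime re-voxelization is required.
    ox, oy, oz = offset
    for (x, y, z), voxel in tile.items():
        volume[(x + ox, y + oy, z + oz)] = voxel
    return volume
```

The storage-cost trade-off mentioned above shows up here too: every distinct tile’s voxel data has to be saved out with the asset.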


Looks awesome

Does this work with a volume of spherical harmonics, iterating and spreading them 1 (or more?) voxels every iteration? If so, I’ve been thinking about such a system and wondering two things:

  1. How do you iterate over them without causing artifacts? Simply going pixel 1, 2, 3, 4, 5 would make light spread faster to one side; would iterating like 1, 3, 5, 2, 4 (or with more steps in between) fix it enough?
  2. Are specular highlights done by having a second volume do the same thing with tighter spreading?

I may be entirely wrong about the concept though, in which case this is off-topic.
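On question 1: the usual way to avoid order-dependent spreading is to gather from the previous iteration’s buffer into a fresh one (double buffering), so the visit order doesn’t matter at all. A minimal 1D sketch of that idea (illustrative only; scalar intensity instead of spherical harmonics, and not the author’s code):

```python
def propagate(grid, steps):
    # Each step reads only the previous buffer, so updating cell 1
    # before cell 5 cannot bias the direction light spreads.
    for _ in range(steps):
        prev = grid
        grid = [
            max(prev[i],
                prev[i - 1] * 0.5 if i > 0 else 0.0,
                prev[i + 1] * 0.5 if i + 1 < len(prev) else 0.0)
            for i in range(len(prev))
        ]
    return grid
```

An in-place update in index order would smear light toward increasing indices; with the separate read buffer, a centered impulse spreads symmetrically one cell per step.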

Do you have a timeframe for when you’ll publish it? Early-access style would be nice :slight_smile:

While it looks weird and well, “wrong”, I also think it looks really cool? Feels like I’m in some sort of liquid light and I’m throwing waves of light around?

I almost want to make a game around it.

EDIT: Also, since the original video has a bit of banding, which I’m assuming is because of YouTube’s compression, can you post a couple of stills so I can see what sort of banding (if any) your solution generates?


I have to say I’m extremely impressed. I think the lack of light bleeding more than makes up for the “slow” light propagation, and I’ll admit I didn’t even notice the effect in the video you posted. One last question: does it work on a Mac? I think Macs can support compute shaders now, so I assume it wouldn’t be an issue?
