SEGI (Fully Dynamic Global Illumination)


SEGI Webpage
SEGI GitHub Page

SEGI has moved to GitHub!
SEGI has moved to GitHub and is free to download and open-source! If you purchased SEGI on the Asset Store on August 1st, 2017 or later, you will be granted a refund upon request via the contact page on sonicether.com.

SEGI is a voxel-based Global Illumination effect that aspires to provide 100% dynamic Global Illumination to Unity games and applications. Since it requires no precomputation, SEGI can bring GI to certain situations where precomputed solutions like Enlighten cannot!

SEGI provides indirect lighting and glossy reflections from a single directional light, the sky, and any emissive materials in the scene. In the future, SEGI will support indirect lighting from point and spot lights. SEGI calculates indirect light visibility for soft indirect shadows, and also calculates soft sky light shadows. It can render either a single bounce or infinite bounces of indirect light.
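The infinite-bounce mode can be pictured as a feedback loop: each frame's indirect light is computed using the previous frame's result, so repeated frames converge toward the full multi-bounce solution. Here is a toy scalar sketch of that idea (illustrative only, not SEGI's actual code; `direct` and `albedo` are made-up values):

```python
# Hypothetical sketch of "infinite bounces" via frame-to-frame feedback.
# Toy scalar model: the light at a point is its direct light plus a fraction
# (the surface albedo) of last frame's total light, reinjected as bounce light.

def bounce_iterate(direct, albedo, frames):
    """Total light at a point after `frames` feedback steps."""
    total = direct
    for _ in range(frames):
        total = direct + albedo * total  # feed last result back in
    return total

direct, albedo = 1.0, 0.5
single = bounce_iterate(direct, albedo, 1)   # one bounce: 1.5
many = bounce_iterate(direct, albedo, 50)    # converges to 1/(1-0.5) = 2.0
print(single, round(many, 6))
```

The geometric series converges because each bounce loses energy (albedo < 1), which is why the feedback scheme is stable.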

Screenshots and Demos
The only additional effects in these screenshots and demos are bloom, tonemapping, and anti-aliasing. No reflection probes, point/spot lights, or SSAO are used. The screenshots were taken with a render resolution of 1920x1080.













Demos (Windows)
Sponza Demo

Low Poly Demo

Labyrinth Demo

Compatibility
As this is the first beta version of SEGI, compatibility is limited. SEGI has only been thoroughly tested on a Windows PC; behavior on other platforms is unknown. Improving compatibility is a high priority. SEGI requires DX11. SEGI is not compatible with mobile devices, and probably never will be.

Current Limitations and Known Issues

Known Issues
- Infinite bounces can be slow in some complex scenes
- Incompatible with forward-rendered objects (deferred only)
- Light cookies on directional lights do not affect GI
- Point and spot lights do not contribute to GI
- Possible stuttering while playing in the editor with the camera inspector visible
- Undefined behavior with VR
- Undefined behavior with multiple instances of SEGI
- Does not work with an orthographic camera (since deferred rendering is disabled for orthographic cameras)
- Some voxels are black/too dark when voxel AA and infinite bounces are both enabled
- Slight changes in the positions of some objects in the volume can cause large lighting differences

Current Limitations

Light Leaking
More Information…

There are two causes of light leaking under the hood of SEGI. The first is an inherent limitation of cone tracing. The premise of cone tracing is that for each traced “ray”, instead of simply sampling a point of information, you get to sample an area of pre-combined information. This keeps rays that are traced in different directions more coherent and allows for tracing far fewer rays to get a smooth result. Simply put, cone tracing samples blurred data.


This diagram compares naive ray tracing (left) with cone tracing (right)

Now, what happens when we have a one-voxel-wide occluder in front of that light data?


As you can see, with naive “brute force” sampling, each ray encounters fully opaque black before it reaches the illuminated voxels, so no light gets through. With cone tracing, however, the data is blurred, which smears the occluder and the light information so that rays never encounter fully opaque black before reaching the illuminated voxels.
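The leak can be reproduced in a tiny 1D sketch (illustrative only, not SEGI's code): march a ray through a row of voxel opacities, once against the raw data and once against a blurred copy standing in for the pre-filtered data a cone trace samples. The blur smears the one-voxel occluder below full opacity, so some transmittance survives and light leaks through.

```python
# Hypothetical 1D demonstration of light leaking through a thin occluder
# when tracing against pre-filtered (blurred) voxel data.

def blur(voxels, radius):
    """Box-blur a row of values, mimicking the pre-combined data a cone samples."""
    n = len(voxels)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(voxels[lo:hi]) / (hi - lo))
    return out

def march(occlusion, emission):
    """Front-to-back accumulation: each step's light is weighted by the
    transmittance, i.e. how much of the ray survived the occluders so far."""
    transmittance, light = 1.0, 0.0
    for occ, emit in zip(occlusion, emission):
        light += transmittance * emit
        transmittance *= (1.0 - occ)
    return light

# A fully opaque one-voxel occluder sits in front of a lit voxel.
occlusion = [0.0, 0.0, 1.0, 0.0, 0.0]
emission  = [0.0, 0.0, 0.0, 0.0, 1.0]

naive = march(occlusion, emission)           # ray fully blocked: no light
cone  = march(blur(occlusion, 1), emission)  # smeared occluder: light leaks
print(naive, cone)
```

With the raw data the ray's transmittance hits zero at the occluder; with the blurred data the occluder's opacity is only 1/3 per step, so a fraction of the light behind it gets through.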

The other cause of light leaking is a lack of directional light information in the voxel data. This means that, instead of light bouncing forward off an illuminated surface, it bounces in all directions. This problem is helped by the Inner Occlusion Layers property and GI Blockers (see Section 4.2 of the User Guide), but is not solved completely. In the future, several already-explored options for storing and reading directional voxel data will be considered to resolve this issue.
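One common flavor of directional voxel data (an assumption here, not necessarily the option SEGI will pick) is anisotropic voxels: store light separately for the six axis directions and, when tracing, blend the three faces the cone direction looks at. A minimal sketch:

```python
# Hypothetical sketch of anisotropic (six-direction) voxel storage.
# A surface lit from the front stores its bounce light only in the
# forward-facing slot, so rays approaching from behind see no light.

def sample_anisotropic(voxel, direction):
    """voxel: dict mapping '+x','-x','+y','-y','+z','-z' to stored light.
    direction: normalized (x, y, z) cone direction."""
    x, y, z = direction
    faces = [('+x' if x >= 0 else '-x', abs(x)),
             ('+y' if y >= 0 else '-y', abs(y)),
             ('+z' if z >= 0 else '-z', abs(z))]
    # Weight each face by how directly the cone looks at it.
    return sum(voxel[f] * w for f, w in faces)

# Light bounced forward off a wall, stored only in the +x slot:
voxel = {'+x': 1.0, '-x': 0.0, '+y': 0.0, '-y': 0.0, '+z': 0.0, '-z': 0.0}
print(sample_anisotropic(voxel, (1.0, 0.0, 0.0)))   # looking along +x: lit
print(sample_anisotropic(voxel, (-1.0, 0.0, 0.0)))  # from behind: dark
```

With isotropic storage both samples would return the same value, which is exactly the all-directions bleed described above.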

Indoor/Outdoor Hybrid Scenes
Mostly because of the light leaking discussed above, SEGI can struggle with indoor/outdoor hybrid scenes, especially with interiors that have thin bright walls. The use of GI Blockers (discussed in the User Guide) can help.

Limited Scene Scale
Currently, SEGI only uses a single voxel volume to store GI data. This means that large scenes, or scenes with greatly varying object scales, are not handled well. In the near future, voxel volume cascades (akin to shadow cascades) will be used to extend the GI distance and influence while keeping high-density data where it counts: near the camera.
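The cascade idea can be sketched as a few nested cubes centered on the camera, each one doubling the half-size of the last; a point is shaded from the smallest (highest-density) cascade that contains it. This is a speculative sketch of how such a scheme might work, with made-up sizes, not SEGI's planned implementation:

```python
# Hypothetical voxel-volume cascade selection: nested axis-aligned cubes
# around the camera, each cascade twice as large (and half as dense) as
# the previous one.

def cascade_for_point(point, camera, base_half_size=8.0, num_cascades=4):
    """Index of the smallest cascade containing `point`, or None if the
    point lies outside every cascade (no GI data there)."""
    # Chebyshev distance, since the volumes are axis-aligned cubes.
    dist = max(abs(p - c) for p, c in zip(point, camera))
    half = base_half_size
    for i in range(num_cascades):
        if dist <= half:
            return i
        half *= 2.0
    return None

camera = (0.0, 0.0, 0.0)
print(cascade_for_point((3.0, 0.0, 0.0), camera))    # near: finest cascade
print(cascade_for_point((50.0, 0.0, 0.0), camera))   # far: coarse cascade
print(cascade_for_point((500.0, 0.0, 0.0), camera))  # outside all cascades
```

With four cascades the covered radius grows from 8 to 64 units while the total voxel memory only quadruples, which is the appeal over one giant volume.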

Redundant Voxelization
SEGI completely revoxelizes everything inside the GI voxel volume every frame. This obviously is not ideal, especially with mostly static scenes, because a lot of redundant work is being done. This issue will be investigated soon in order to find a way to reduce redundant voxelization and improve performance.
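One plausible direction (an assumption on my part, not a stated plan) is dirty-region tracking: split the volume into regions and revoxelize only the regions whose contents changed since the last frame. A minimal sketch:

```python
# Hypothetical sketch of avoiding redundant voxelization: per-region dirty
# flags so static regions are voxelized once and then reused.

class VoxelRegion:
    def __init__(self):
        self.dirty = True        # never voxelized yet
        self.voxelize_count = 0  # how often real work was done

    def voxelize(self):
        self.voxelize_count += 1
        self.dirty = False

def update(regions, moved_region_ids):
    """Per frame: mark regions containing moved objects dirty, then
    revoxelize only the dirty ones."""
    for rid in moved_region_ids:
        regions[rid].dirty = True
    for region in regions:
        if region.dirty:
            region.voxelize()

regions = [VoxelRegion() for _ in range(8)]
update(regions, [])   # frame 1: everything voxelized once
update(regions, [2])  # frame 2: only region 2 changed
counts = [r.voxelize_count for r in regions]
print(counts)  # region 2 voxelized twice, the rest once
```

In a mostly static scene this turns the per-frame cost from "whole volume" into "moving objects only", at the price of tracking which region each object occupies.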

53 Likes

Holy s**t, you have just made Unity look like CryEngine :slight_smile:
Great job. I’m following this thread, and once you work on the limitations this will be one of those “must-own assets”.

7 Likes

Dude yes looks so damn sweet (yea I sound like a broken record but man I’m not even joking)…

…instabuy!

2 Likes

You said it can’t be used for big scenes? I have a big terrain with lots of trees; can I use this?

For large, outdoor scenes, objects and terrain in the distance merely need to appear to be affected by skylighting to look coherent. You wouldn’t even really be able to tell there wasn’t any bounce lighting or occlusion (bounce could be faked to a degree). Sure, large flat surfaces such as buildings could make bounce lighting more apparent, but it isn’t a deal-breaker.

I’d compare it to how we really don’t have high-quality distant shadows; even with cascades, rendering power is limited. GTA 5 is a great example of achieving great fidelity while still making compromises for the vast draw distances.

Once GI cascades arrive, large scenes will obviously be less of an issue.

1 Like

Thanks Sonic for release.

1 Like

Sweet baby jesus. This is definitely going to be one of the most important Unity assets.

2 Likes

Ow yes!

1 Like

Looks fantastic! Congrats.

1 Like

I have a question: would it work for emissive materials outside of the camera view? I am currently using SSRR to simulate realtime emission, with obvious drawbacks. I wonder if your post effect can handle emissive materials off screen?

1 Like

Curious, Sonic, but what occurs in VR that may or may not work correctly?

1 Like

Certainly. The lighting calculations are not bound to screen-space data. Any emissive material inside the voxel volume (a large volume centered on the camera) will contribute to indirect lighting and reflections (if they’re enabled). SEGI can’t do sharp reflections, though.

Well, I don’t have a VR headset to do testing and development (I plan to change that soon), which is the primary reason why functionality with VR is undetermined. Simply having two instances of SEGI (one for each eye) would result in a ton of wasteful calculations. I need to look into how to run only one instance of SEGI and simply do all the on-screen shading twice, sharing the same data between both eyes.

1 Like

Nice, perhaps we will be able to choose “static” or “dynamic” voxelization.
Anyway, good job listing all the limitations and possible future improvements, so we are not fooled about the plugin.

1 Like

It would help even if it were only possible to pre-voxelize to some extent and revoxelize new objects, although I know that would be fairly complicated to code up.

Awesome, looks incredible Sonic! Congratulations :slight_smile:

3 Likes

I notice that mouse movement is lagging terribly even though I get a great refresh rate. Could you release a build with the Standard Package FPSController with bobbing turned off? Is the screen space part of the GI having an effect on the update cycle or something?

This package is absolutely gorgeous though, I can hardly wait for the release of this asset.

Damn!!!

1 Like

Incredible stuff!!! How much will it cost?

1 Like

Perhaps like the Nottorus plugin? Above $200?

$80 as long as it’s in beta. $100 once it’s been improved past the point of being considered beta.

6 Likes