True Volumetric Lights (Now Open Source)

I decided to make this little technology open source. Contributions are welcome, or just fork it and completely rewrite it. Also, be warned: it is a hobby project turned open source. You’ll get what you pay for, as they say.

Be sure to let me know if you use it in your project.
Here it is.

Original post follows
Hey,

I have been experimenting with command buffers in my spare time recently, and I decided to extend Unity’s lights to make them volumetric. And since people really seem to like the results, I decided to show you guys as well. YouTube and Vimeo links. Please ignore the poor image quality; compression wasn’t very kind to it. The Vimeo link is probably slightly better.

Enjoy

https://vimeo.com/158254747

53 Likes

Like it very much!

Performance?
Webplayer?
AssetStore?

Thanks!
Performance: Not sure, I haven’t profiled it yet. But it is more or less the same technique as the one used in Killzone Shadow Fall, so I would say it is OK for next-gen consoles, mid-range PCs and up.
Webplayer: Theoretically yes, but I have never used the webplayer. There may be some limitations I’m not aware of.
AssetStore: Very unlikely. I couldn’t be bothered to maintain it; I already have a day job, and as a game developer I don’t have much free time. I did it just for fun. However, I will consider making it open source.

4 Likes

Very nice! Count me as interested :slight_smile: Sorry for shooting you a PM before, I didn’t see this thread. I’d love to see it open source; you could probably throw a donate button up or something?

In any case, great work, how is it done?

1 Like

No problem with that PM. I wrote a quick explanation here. I’ll come back with more details when I have time. It requires the deferred renderer; it reconstructs position from the G-buffer. That means it doesn’t work with transparent geometry (cutout is OK).

So, after years of silence on the subject, TWO solutions pop up at the same time. Typical :smile:

This is looking great! Would love to see it open source too, and so would my game:

7 Likes

Sorry, I’m kind of new around here :slight_smile: Also, is that a rabbit in that car? It looks amazing.

Edit: Ok, I just noticed your avatar. Coincidence? I don’t think so :slight_smile:

2 Likes

This is interesting. I’ve been playing with raymarching for volumetric light too, but I only managed to get directional lights working with it via command buffer and haven’t continued since. Would love to see your code if you don’t mind.
+1 to making it open source. Putting it on GitHub or something would be cool :slight_smile:

Thank you :slight_smile: Yeah, it’s an evolved rodent from the far far future (driving an under-evolved car from the past :slight_smile: )

2 Likes

Good news. I decided to make it open source. It will happen within a few weeks, hopefully by the end of the month. I’ll come back with more info about it, its requirements and limitations before the release. Stay tuned…

8 Likes

After seeing the volumetric fog at GDC, I was hoping to play with something similar. Thanks :slight_smile:
If you need any testers, let me know!

That’s great news! Looking forward to reading the requirements and limitations. Should you need this tested in a large open-world setting, I’m ready :slight_smile:

Please bear with me. One of the games I’m working on is close to release. My current plan is to give you more info + my volumetric demo for testing at the end of this week. I’ll hopefully release the source code sometime next week. I don’t plan to do any real testing; it’s open source, so it doesn’t have to work for everyone out of the box. We can iron it out as we go…

5 Likes

Extremely cool work Michal, can’t wait to play around with the source!

I have a question: in the other thread you mention using LightEvent.AfterShadowMap to fetch the shadow map and to render the light volume.
I’m assuming you have to change the render target before using DrawMesh to render the light volume; which render target do you set it to?

It would be a new temporary render texture created before the lights start rendering. At least that’s the way I do it.

@Lexie is right. I use a temporary render target. Let me explain in more detail.

First of all, it was a hobby project. You should see it more as a starting point than a production-ready plugin. It could be good enough for some use cases, though.

Technique Overview

  • Create a render target for volumetric light (the volume light buffer)
  • Use CameraEvent.BeforeLighting to clear the volume light buffer
  • For every volumetric light, render the light’s volume geometry (sphere, cone, full-screen quad):
      • Use LightEvent.AfterShadowMap for shadow casting lights
      • Use CameraEvent.BeforeLighting for non-shadow casting lights
      • Perform raymarching in the light’s volume (see the sketch after this list)
      • Dithering is used to offset the ray origin for neighbouring pixels
  • Use CameraEvent.AfterLighting to perform a depth-aware Gaussian blur on the volume light buffer
  • Add the volume light buffer to Unity’s built-in light buffer
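
To make the raymarching and dithering steps more concrete, here is a minimal sketch of the idea. This is not the actual shader from this project; the light evaluation, step count and all parameter names are made up for illustration, shadow map and cookie sampling are omitted, and it assumes the world position of the opaque surface and the ray back towards the camera have already been reconstructed from the depth buffer (as in the reconstruction snippet posted later in this thread).

    // Illustrative sketch only - not the plugin's shader. Marches from the surface
    // towards the camera and accumulates in-scattered light for a single point light.
    // All names are hypothetical.

    #define RAYMARCH_STEPS 16

    float4 _VolLightColor;   // rgb = colour * intensity
    float4 _VolLightPos;     // xyz = world position, w = 1 / (range * range)
    float  _ScatteringCoef;  // how strongly the medium scatters light towards the camera

    // light colour * distance attenuation at a world space position
    float3 EvaluateLight(float3 worldPos)
    {
        float3 toLight = _VolLightPos.xyz - worldPos;
        float atten = saturate(1.0 - dot(toLight, toLight) * _VolLightPos.w);
        return _VolLightColor.rgb * atten;
    }

    // 4x4 ordered (Bayer) dither pattern, returns a value in [0, 1)
    float Dither4x4(float2 pixelCoord)
    {
        static const float pattern[16] = { 0, 8, 2, 10, 12, 4, 14, 6,
                                           3, 11, 1, 9, 15, 7, 13, 5 };
        int2 p = (int2)fmod(pixelCoord, 4.0);
        return pattern[p.x + p.y * 4] / 16.0;
    }

    // pixelCoord = screen space pixel coordinate (e.g. SV_Position.xy)
    float4 RayMarch(float3 surfaceWorldPos, float3 rayDirToCamera, float rayLength, float2 pixelCoord)
    {
        float stepSize = rayLength / RAYMARCH_STEPS;

        // offset the ray origin per pixel; banding from the low step count turns
        // into noise, which the depth-aware blur pass smooths out afterwards
        float3 pos = surfaceWorldPos + rayDirToCamera * stepSize * Dither4x4(pixelCoord);

        float3 inscattered = 0;
        [loop]
        for (int i = 0; i < RAYMARCH_STEPS; i++)
        {
            inscattered += EvaluateLight(pos) * _ScatteringCoef * stepSize;
            pos += rayDirToCamera * stepSize;
        }
        return float4(inscattered, 1);
    }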

Half resolution rendering
A common optimization is to render the volumetric effect at a lower resolution, usually half or quarter resolution. The only real difference is that instead of simply adding the volume light buffer to Unity’s light buffer in the last step, a bilateral upscale is used.
I can’t get it to work properly for some reason. I believe it is a bug in Unity. I’ll try to figure it out before the release, but no promises.
It is worth noting that half resolution + bilateral upscale affects image quality, especially in motion. Some sort of temporal reprojection is needed to reduce the artifacts, but I’m not going to add that.
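
For reference, a depth-aware (bilateral) upscale usually looks something like the sketch below. This is not the project’s code; the texture names are hypothetical and it assumes a downsampled copy of the depth buffer exists alongside the half resolution volume light buffer. The function is evaluated once per full resolution pixel during the final composite: the four nearest half resolution samples are weighted by how close their depth is to the full resolution depth, so light doesn’t bleed across object edges.

    // Illustrative sketch only - all names are hypothetical
    #include "UnityCG.cginc"

    sampler2D _CameraDepthTexture;        // full resolution depth
    sampler2D _LowResDepth;               // downsampled depth buffer
    sampler2D _VolumeLightBufferLow;      // half resolution volumetric light
    float4 _VolumeLightBufferLow_TexelSize;

    float4 BilateralUpsample(float2 uv)
    {
        float fullDepth = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv));

        float2 texel = _VolumeLightBufferLow_TexelSize.xy;
        float2 offsets[4] = { float2(-0.5, -0.5), float2(-0.5, 0.5),
                              float2( 0.5, -0.5), float2( 0.5, 0.5) };

        float4 result = 0;
        float totalWeight = 0;
        for (int i = 0; i < 4; i++)
        {
            float2 tapUV = uv + offsets[i] * texel;
            float lowDepth = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_LowResDepth, tapUV));

            // taps that belong to a different surface than the full resolution pixel
            // get a very small weight, which prevents halos around geometry edges
            float weight = 1.0 / (abs(fullDepth - lowDepth) + 1e-4);

            result += tex2D(_VolumeLightBufferLow, tapUV) * weight;
            totalWeight += weight;
        }
        return result / totalWeight;
    }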

Limitations

  • DirectX 11 only. OpenGL should be easy to add; it should differ only in the projection matrices (different clip space, left/right-handed system, flipped UVs)
  • HDR only. Again, it should be easy to add LDR support.
  • Deferred renderer only.
  • Doesn’t render transparent geometry correctly (cutout is OK). There are ways to handle transparent objects, but they are either difficult or impossible to do with Unity’s public API.

Performance
As I said before, it was a hobby project, so I didn’t care much about performance. The demo runs at 25-30 fps on my notebook with a low-end GeForce 750 (volumetric light computed at full resolution, 1080p), and it runs at several hundred fps on my desktop, so I felt no need to optimize it further. There is definitely room for improvement. That being said, there are currently several ways to trade image quality for performance:

  • Perform raymarching at a lower resolution. You can choose half resolution (a quarter of the pixel count) for big performance savings.
  • Every light source has several parameters that affect performance:
      • Number of raymarching steps
      • Shadows
      • Cookie
      • Volumetric noise (currently a 3D texture)

6 Likes

Update: I just figured out why half resolution rendering didn’t work. For some reason, Unity doesn’t like it when the render target and the depth buffer have different resolutions, even though it is perfectly legal in DirectX, and it runs correctly in the standalone player. Only the Unity editor has problems. Another workaround then :frowning:

That brings me to an interesting question. I simulate the Z test in the pixel shader when I render at a lower resolution, which is not exactly ideal. Did anyone find a way to downsize the existing depth buffer and then use it as the depth buffer for further rendering? It is a simple thing to do in DirectX, at least, but I couldn’t find how to do it in Unity. @Lexie, how do you handle the Z test at lower resolution?

Hi, in this project you can see how to downscale depth (I’m not the author). In that project, the shader that does the depth downscaling is this:

Shader "Custom/DownscaleDepth" {

    CGINCLUDE

#include "UnityCG.cginc"

    struct v2f
    {
        float4 pos : SV_POSITION;
        float2 uv : TEXCOORD0;
    };

    sampler2D _CameraDepthTexture;
    float4 _CameraDepthTexture_TexelSize; // (1.0/width, 1.0/height, width, height)

    v2f vert(appdata_img v)
    {
        v2f o = (v2f)0;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        o.uv = v.texcoord;

        return o;
    }

    float frag(v2f input) : SV_Target
    {
        // sample the four full resolution depth texels that map to this low res texel
        float2 texelSize = 0.5 * _CameraDepthTexture_TexelSize.xy;
        float2 taps[4] = { float2(input.uv + float2(-1,-1)*texelSize),
            float2(input.uv + float2(-1,1)*texelSize),
            float2(input.uv + float2(1,-1)*texelSize),
            float2(input.uv + float2(1,1)*texelSize) };

        float depth1 = tex2D(_CameraDepthTexture, taps[0]);
        float depth2 = tex2D(_CameraDepthTexture, taps[1]);
        float depth3 = tex2D(_CameraDepthTexture, taps[2]);
        float depth4 = tex2D(_CameraDepthTexture, taps[3]);

        // keep the minimum of the 2x2 neighbourhood as the downscaled depth
        float result = min(depth1, min(depth2, min(depth3, depth4)));

        return result;
    }

    ENDCG

    SubShader
    {
        Pass
        {
            ZTest Always Cull Off ZWrite Off

            CGPROGRAM
#pragma vertex vert
#pragma fragment frag
            ENDCG
        }
    }
    Fallback off
}

You just blit the source into a downscaled render texture using the downscale shader’s material:

        //----DEPTH
        RenderTextureFormat formatRF32 = RenderTextureFormat.RFloat;
        int lowresDepthWidth = source.width / 4;
        int lowresDepthHeight = source.height / 4;

        RenderTexture lowresDepthRT = RenderTexture.GetTemporary(lowresDepthWidth, lowresDepthHeight, 0, formatRF32);

        //downscale depth buffer to quarter resolution
        Graphics.Blit(source, lowresDepthRT, DownscaleDepthMaterial);

Edit: Then you pass the downsampled depth to the raymarching shader
e.g.

FogMarchedMaterial.SetTexture("LowResDepth", lowresDepthRT);

and in the raymarching shader:

        // read low res depth and reconstruct world position
        float depth = SAMPLE_DEPTH_TEXTURE(LowResDepth, i.uv);

        // linearise depth
        float lindepth = Linear01Depth(depth);

        //get view and then world positions       
        float4 viewPos = float4(i.cameraRay.xyz * lindepth,1);
        float3 worldPos = mul(InverseViewMatrix, viewPos).xyz;

        //get the ray direction in world space, raymarching is towards the camera
        float3 rayDir = normalize(_WorldSpaceCameraPos.xyz - worldPos);
        float rayDistance = length(_WorldSpaceCameraPos.xyz - worldPos);

.......

I just copied the most relevant parts in case you didn’t feel like checking the whole project :smile:

1 Like

Thanks, but this is exactly what I do. The problem is that the raymarching is executed even for pixels that are behind existing geometry. Imagine a point light behind a wall; it shouldn’t be visible at all. I have to discard those pixels in the pixel shader based on their depth. A better approach would be to downsample the native Z buffer and use it as the native Z buffer for further rendering. That way, pixels behind existing geometry would be discarded by the Z test before the pixel shader and the raymarching run. I just don’t know how to downsample the native Z buffer in Unity…
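
For what it’s worth, the pixel shader workaround described above boils down to something like the following sketch. It is not the project’s code; all names are illustrative, and it assumes the front faces of the light volume are being rasterized, with the screen position and view space depth passed down from the vertex shader.

    // Illustrative sketch only - simulate the Z test manually before raymarching
    #include "UnityCG.cginc"

    sampler2D _CameraDepthTexture;

    struct v2f
    {
        float4 pos : SV_POSITION;
        float4 screenPos : TEXCOORD0;
        float viewDepth : TEXCOORD1;   // linear eye space depth of the volume's front face
    };

    v2f vert(appdata_base v)
    {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        o.screenPos = ComputeScreenPos(o.pos);
        o.viewDepth = -mul(UNITY_MATRIX_MV, v.vertex).z;
        return o;
    }

    float4 frag(v2f i) : SV_Target
    {
        float2 uv = i.screenPos.xy / i.screenPos.w;

        // linear eye space depth of the opaque geometry at this pixel
        float sceneDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv));

        // if the scene surface is in front of the light volume's front face,
        // the whole ray is occluded - discard before any raymarching happens
        clip(sceneDepth - i.viewDepth);

        // ... raymarch through the light volume here ...
        return 0;
    }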

Awesome, thank you for the detailed explanation of your approach!