Custom fragment depth in directional light shadowcaster shader

Has anyone here ever written a custom fragment/vertex shader pair that changes the per-pixel depth value AND supports shadows from directional lights? I could use a minimal example of this kind of shader.

I have a custom shader of this kind, but directional light is giving me trouble - the depth value appears to be invalid (it is stretched), and I am probably missing an adjustment somewhere.

This:

// Computes the depth value to write out from the shadowcaster fragment shader.
float calculateShadowDepth(float3 worldPos){
    // Transform the world position to clip space.
    float4 projPos = mul(UNITY_MATRIX_VP, float4(worldPos, 1));
    // Apply Unity's standard shadow bias (helper from UnityCG.cginc).
    projPos = UnityApplyLinearShadowBias(projPos);
    // Perspective divide to get the final depth.
    return projPos.z/projPos.w;
}

Works correctly for spot lights, but fails when the engine starts rendering directional shadows (apparently it does this using an orthographic camera).

I’m not 100% sure I did exactly this, but I have played with per-pixel depth offsets by modifying the depth output from the pixel shader. (WARNING: This can/will disable early-z culling/optimization!)
I’d suggest, if you can, doing the same in the pass where you’re outputting your mesh as a shadow caster. But I’m not sure if they allow pixel shaders in the shadow passes?
And you’re completely correct that a directional light is handled as an ortho camera; that’s the “idea” of it being so far away that it’s “directional” and not from a point.

Fragment shaders are allowed in shadow passes.

Alright. Here are some details.

This is a cube made out of 12 triangles:
[screenshot 1]
^^^ This is a cube. Not a sphere. The depth calculation is correct for perspective/ortho cameras, I checked.

This is what it looks like when it is lit by spot/point lights.
[screenshot 2]
And this is what it looks like when it is lit by directional light.
[screenshot 3]
As you can see, the shadow depth is stretched, even though the silhouette of the shadow is correct.

In the frame debugger I can see that the shadow for the directional light is rendered in 3 passes by 3 fake spot lights, which appear to have orthographic projections on them. Those 3 shadow buffers are then combined into one using some process I don’t get to see.

It is also quite annoying that I can’t find a method to see the final shader with macro expansion applied to it.

Does anyone have any idea what theoretically could be going wrong here?

I think… I might’ve figured out what’s wrong with it. I’ll post an update later - once I test it.

I’ve done this before. Tricky business, because it’s also different for point lights. (The shadow map is rendered to a cubemap.) The advantage of the point light, though, is that it’s always rendered to a “regular” texture. For directional and spot lights it can also be an actual depth texture, which requires outputting to SV_Depth. In case hardware shadow maps are not supported, the output needs to go to the color target instead.
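Roughly, the two output paths look like this. This is a sketch, assuming Unity’s standard SHADOWS_CUBE keyword and the UnityCG.cginc helpers, not the exact code from my shader; vecToLight and hitWorldPos are placeholder names for values your vertex shader would pass along.

#include "UnityCG.cginc"

#if defined(SHADOWS_CUBE)
// Point light: the shadow "map" is a cubemap of normalized distances,
// written out as a color value.
// vecToLight = fragment world pos - _LightPositionRange.xyz.
float4 frag(float3 vecToLight : TEXCOORD0) : SV_Target
{
    float dist = length(vecToLight) + unity_LightShadowBias.x;
    dist *= _LightPositionRange.w; // w = 1 / light range
    return UnityEncodeCubeShadowDepth(dist);
}
#else
// Directional/spot with a hardware shadow map: write depth to SV_Depth.
// hitWorldPos.w must be 1.
float4 frag(float4 hitWorldPos : TEXCOORD0, out float outDepth : SV_Depth) : SV_Target
{
    float4 projPos = mul(UNITY_MATRIX_VP, hitWorldPos);
    projPos = UnityApplyLinearShadowBias(projPos);
    outDepth = projPos.z / projPos.w;
    return 0;
}
#endif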

I just took the entire shadow pass from the Standard shader and adjusted that. The first directional light might be treated differently, because it’s usually the most important one. (Aka the sun.)

@Zicandar: It will (not can) disable early z-rejection, so always render these surfaces first. (They will be shaded no matter what is in front.)

If the GPU makers are listening: I had a small idea on the early z-rejection issue. What if you added a mode in which the vertex shader output is a best-case z-depth result? As in, the fragment might be visible, but the pixel shader might make it worse. In other words, with typical z-testing, the pixel shader can only push pixels back, never pull them forward compared to the vertex shader output. In that case, which is also neginfinity’s case, early z-rejection could still be applied. (No GPU makers listening? Just my luck…)

I’ve already dealt with this part, actually. Point lights were the easiest, because they have the least insane macros in AutoLight.cginc.

Since the sphere-cube is pretty much rendered by raycasting, I think it is just a case of an incorrect incoming direction vector, which only shows up with the orthographic spot lights that seem to be used for directional light rendering. I should be able to test that tomorrow. I’ve already dealt with this problem for the orthographic camera, but it looks like I overlooked it again in the spotlight-related code.

I figured it out.

The issue turned out to be quite… complicated.

Basically… in most scenarios, when Unity is rendering something, it is possible to determine whether the camera is orthographic using unity_OrthoParams.w: if it is > 0, the camera is orthographic. For a raycast object this determines how the direction to the camera is calculated, which is quite important.
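That is, the ray direction would normally be chosen per vertex like this (a sketch; getRayToCamera is the same perspective-ray helper used in the snippet below):

    if (unity_OrthoParams.w > 0){
        // Orthographic: all rays are parallel to the camera's forward axis.
        o.rayDir = -UNITY_MATRIX_V[2].xyz;
    }
    else{
        // Perspective: ray from the camera towards this vertex.
        o.rayDir = getRayToCamera(worldPos);
    }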

However, this flag is not set correctly when Unity renders the scene to the depth texture (UpdateDepthTexture), and it is not set correctly when rendering the orthographic spot lights from which the directional light shadow is constructed…

So, I had to query the projection matrix values directly to determine whether it is orthographic or not.

        // For a perspective projection the bottom row of P is (0, 0, -1, 0);
        // for an orthographic projection it is (0, 0, 0, 1). So if the first
        // three components of that row are all zero, the projection is ortho.
        if ((UNITY_MATRIX_P[3].x == 0.0) && (UNITY_MATRIX_P[3].y == 0.0) && (UNITY_MATRIX_P[3].z == 0.0)){
            o.rayDir = -UNITY_MATRIX_V[2].xyz; // ortho: rays parallel to camera forward
        }
        else{
            o.rayDir = getRayToCamera(worldPos); // perspective: ray towards the camera
        }

^^^ Which is a messy way to go about it.
Also, I had an incorrect multi-compile pragma in the shadowcaster pass (it needed #pragma multi_compile_shadowcaster).
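For reference, the skeleton of the pass that pragma belongs to looks roughly like this (the standard macro-based version; in my shader the depth computation is replaced with the raycast result):

Pass
{
    Tags { "LightMode" = "ShadowCaster" }

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #pragma multi_compile_shadowcaster
    #include "UnityCG.cginc"

    struct v2f
    {
        V2F_SHADOW_CASTER; // position (plus the point-light ray vector when SHADOWS_CUBE is on)
    };

    v2f vert(appdata_base v)
    {
        v2f o;
        TRANSFER_SHADOW_CASTER_NORMALOFFSET(o)
        return o;
    }

    float4 frag(v2f i) : SV_Target
    {
        SHADOW_CASTER_FRAGMENT(i)
    }
    ENDCG
}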

I’m not sure if I should report this as a bug. It is an incredibly obscure issue, and I’ve never had anyone respond to any of my reports…


Reported. Case 787801.

I thought there was a way to tell it you would only move stuff further away? And that would allow early z-rejection.

ALWAYS report stuff; if it’s not reported, the issue does not exist. :slight_smile:

Reported it already. See the case number above.

Z-rejection is irrelevant for me though; it is a test/prototype, so it can be slow. Also, I should probably use a compute shader for this.

I can’t find anything like that, only references saying that early z-rejection is disabled when outputting depth. The most recent reference is from 2012 though, so things might have changed.
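If I’m reading the D3D11 docs right, conservative depth output is the closest match: declaring the pixel shader’s depth output as SV_DepthGreaterEqual promises the GPU that the shader only ever pushes depth further away, which lets it keep early z-rejection. A minimal sketch, assuming a shader model 5.0 target; computeRaytracedDepth is a hypothetical placeholder for whatever produces the real per-pixel depth:

// SV_DepthGreaterEqual (SM 5.0): output depth is guaranteed to be >= the
// rasterized depth, so the hardware can keep early-z enabled.
float4 frag(float4 screenPos : SV_Position,
            out float outDepth : SV_DepthGreaterEqual) : SV_Target
{
    // max() enforces the "only push back" promise.
    // computeRaytracedDepth is a hypothetical helper, not a Unity built-in.
    outDepth = max(screenPos.z, computeRaytracedDepth(screenPos));
    return float4(1, 1, 1, 1);
}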

neginfinity,

I’m also raycasting spheres from cubes, in hopes of using them for SVOs. So far normals and colors look correct and great. I’ve been struggling with the cryptic system of Unity’s ShadowCaster (at least this is where I think the problem can be solved). Too bad good resources and documentation are nearly nonexistent.

Mind if I ask that you share some source or create a tutorial?

There’s a “blog” linked in my signature with a whopping 5 articles total.

Here’s the code:
http://neginfinity.bitbucket.org/shaders/2016/04/13/raytraced-primitives-in-unity-pt4_1.html

^^^ The earth here is a cube.

To correctly reconstruct depth, take the fragment position in world space, multiply it by the view-projection matrix as a 4-component vector (with w == 1), then divide all components by w. That’ll give you screen coordinates and correct depth regardless of the projection matrix you’re using.
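In shader terms, something like this (a sketch; hitWorldPos stands for whatever your ray intersection returns):

// Works for both perspective and orthographic projections.
// (For ortho, clipPos.w == 1, so the divide is a no-op.)
float computeFragmentDepth(float3 hitWorldPos)
{
    float4 clipPos = mul(UNITY_MATRIX_VP, float4(hitWorldPos, 1.0));
    return clipPos.z / clipPos.w; // write this to SV_Depth
}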

The example can easily be modified to handle distance fields too:

[screenshot: distance fields]


Interesting, so this is basically a raymarched object, right, not a full-screen camera render?
Sorry for going off the main topic.

Yes. It is a raymarched/raytraced object with correct depth, shadows and everything.
The original “cube” serves as a sort of portal.
You can see the boundary outline on the last screenshot. That’s the object’s actual geometry.

The screen with the earth uses a mathematically defined sphere (meaning the ray tracing only takes one step), while the second sample uses distance fields (meaning you could plug anything in there) and actual raymarching. The distance field shape on the last screen is “sphere minus torus minus cube”.
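A distance field like that is just CSG on standard signed distance functions; something along these lines (a sketch, not the exact code from my test, and the sizes are made up):

float sdSphere(float3 p, float r) { return length(p) - r; }

float sdBox(float3 p, float3 b)
{
    float3 d = abs(p) - b;
    return length(max(d, 0.0)) + min(max(d.x, max(d.y, d.z)), 0.0);
}

float sdTorus(float3 p, float2 t)
{
    float2 q = float2(length(p.xz) - t.x, p.y);
    return length(q) - t.y;
}

// CSG subtraction: (A minus B) == max(A, -B).
float sceneSDF(float3 p)
{
    float d = sdSphere(p, 0.5);                      // sphere
    d = max(d, -sdTorus(p, float2(0.5, 0.15)));      // minus torus
    d = max(d, -sdBox(p, float3(0.35, 0.35, 0.35))); // minus cube
    return d;
}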

Hmm, even more interested… :smile:

Well, the articles are online, so feel free to read them.

The main difficulty with Unity lighting is that there are ifdefs involved and multiple projection modes in use.
For example, the directional shadow is screenspace, but in order to construct it Unity creates 2 or 3 temporary spotlight-like shadowmaps… which IIRC may use orthographic projection.

I think I outlined that too. There are 2 or 3 paragraphs of text describing what kind of jumping through hoops is required to get all lighting models to work in a shader that overwrites an object’s depth.

Checking it out, thanks for the article, man. Most articles about raymarching are fullscreen camera renders; it’s really great to find one about raymarching an object.

It ended up being mostly about fighting the Unity lighting system and getting shadows right. The latest test with distance fields isn’t on the “blog”. I’ll probably post it there eventually. Then again, distance fields are surprisingly easy to implement.


Yes, same here. Adjusting the depth in the standard pass was not much of an issue. For the shadows there is a difference between point lights and the other types, and you have to account for the fact that the output can be just a depth buffer, or a color buffer plus a depth buffer. I’m also interested to read your article. I got things to work most of the time, but I’d like to compare it with your solution. The shadows were absolutely the hard part.

See “LightAndShadow.cginc” from here, then check how it is being used.
I also added the code for the distance fields screenshot from earlier.