Has anyone here ever written a custom fragment/vertex shader pair that changed per-pixel depth value AND supported shadows from directional lights? I could use a minimal example of this kind of shader.
I have a custom shader of this kind, but directional light is giving me trouble - depth value appears to be invalid (it is stretched) and I am probably missing an adjustment somewhere.
I’m not 100% sure I’ve done exactly this, but I have played with per-pixel depth offsets by modifying the depth output from the pixel shader. (WARNING: this can/will disable early-z culling/optimization!)
I’d suggest, if you can, doing the same in the pass where you’re outputting your mesh as a shadow caster. But I’m not sure if pixel shaders are allowed in the shadow passes?
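For reference, here is a minimal sketch of a fragment shader that overrides per-pixel depth in the built-in pipeline. `raycastSurface` is a hypothetical placeholder for whatever intersection routine produces the world-space hit point; the rest uses standard UnityCG names:

```hlsl
// Fragment output struct with a depth override.
// Writing SV_Depth is what disables early-z.
struct fout
{
    fixed4 color : SV_Target;
    float  depth : SV_Depth;
};

fout frag(v2f i)
{
    fout o;
    float3 worldPos = raycastSurface(i);  // hypothetical: your ray hit in world space
    float4 clipPos  = mul(UNITY_MATRIX_VP, float4(worldPos, 1.0));
    o.depth = clipPos.z / clipPos.w;      // clip-space z over w = depth-buffer value
    o.color = fixed4(1, 1, 1, 1);
    return o;
}
```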
And you’re completely correct that a directional light is handled as an ortho camera; that’s the “idea” of it being so far away that it’s “directional” and not from a point.
^^^ This is a cube. Not a sphere. The depth calculation is correct for perspective/ortho cameras, I checked.
This is what it looks like when it is lit by spot/point lights.
And this is what it looks like when it is lit by directional light.
As you can see, the shadow depth is stretched, even though the silhouette of the shadow is correct.
In frame debugger I can see that the shadow for directional light is rendered in 3 passes by 3 fake spot lights which appear to have orthographic projection on them. Then those 3 shadow buffers are combined into one using some process I don’t get to see.
It is also quite annoying that I can’t find a method to see the final shader with macro expansion applied to it.
Does anyone have any idea what theoretically could be going wrong here?
I’ve done this before. Tricky business, because it’s also different for point lights. (Shadow map is rendered to a cubemap.) The advantage of the point light though is that it’s always rendered to a “regular” texture. For directional and spots it can also be an actual depth texture, which requires outputting to sv_depth. In case hardware shadow maps are not supported, the output needs to go to sv_color.
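A hedged sketch of what that shadow caster pass can look like, built on the standard UnityCG helpers (`UnityApplyLinearShadowBias`, `UnityEncodeCubeShadowDepth`, `_LightPositionRange`); `raycastSurface` is again a hypothetical stand-in for your intersection code:

```hlsl
struct fout
{
    fixed4 color : SV_Target;
    float  depth : SV_Depth;
};

fout fragShadow(v2f i)
{
    fout o;
    float3 worldPos = raycastSurface(i);  // hypothetical: your ray hit in world space
    float4 clipPos  = mul(UNITY_MATRIX_VP, float4(worldPos, 1.0));
#if defined(SHADOWS_CUBE)
    // Point light: distance to the light is encoded into the color channels.
    float dist = length(worldPos - _LightPositionRange.xyz) * _LightPositionRange.w;
    o.color = UnityEncodeCubeShadowDepth(dist);
    o.depth = clipPos.z / clipPos.w;      // still needed for z-testing in the cubemap face
#else
    // Directional/spot: biased depth goes to SV_Depth (and mirrored to color
    // for the case where hardware shadow maps are unavailable).
    clipPos = UnityApplyLinearShadowBias(clipPos);
    o.depth = clipPos.z / clipPos.w;
    o.color = o.depth;
#endif
    return o;
}
```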
I just took the entire shadow pass from the standard shader and adjusted that. The first directional light might be treated differently, because it’s usually the most important. (Aka the sun.)
@Zicandar : It will (not can) disable early z-rejection, so always render these surfaces first. (They will be shaded no matter what is in front.)
If the GPU makers are listening: I had a small idea on the early z-rejection issue. What if you added a mode in which the vertex shader output is a best-case depth result? As in, the pixel might be visible, but the pixel shader might make it worse. Or, with typical z-testing: the pixel shader can only push pixels back, not move them forward compared to the vertex shader output. In that case, which is also the case of neginifinity, early z-rejection could still be applied. (No GPU makers listening? Just my luck…)
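For what it’s worth, something close to this does exist as D3D11 / Shader Model 5.0 “conservative depth”: by declaring the output with `SV_DepthGreaterEqual`, the shader promises it will only push fragments further away, so coarse early-z can stay enabled. A minimal sketch (requires `#pragma target 5.0`; with a reversed-z depth buffer the direction flips to `SV_DepthLessEqual`):

```hlsl
// SM 5.0 conservative depth: the shader promises depth will only be
// greater than or equal to the rasterizer's interpolated depth,
// which lets the GPU keep coarse early-z rejection alive.
struct fout
{
    fixed4 color : SV_Target;
    float  depth : SV_DepthGreaterEqual;
};
```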
I’ve already dealt with this part, actually. Point lights were the easiest, because they have the least insane macros in AutoLight.cginc.
Since the sphere-cube is pretty much rendered by raycasting, I think it is just a case of an incorrect incoming direction vector, which only shows up on the orthographic spot lights that seem to be used for directional light rendering. I should be able to test that tomorrow. I’ve already dealt with this problem for the orthographic camera, but it looks like I overlooked it again in the spotlight-related code.
Basically… in most scenarios when Unity is rendering something, it is possible to determine whether the camera is orthographic using unity_OrthoParams.w. If it is > 0, the camera is orthographic. For a raycast object this determines how the direction to the camera is calculated, which is quite important.
However, this flag is not set correctly when Unity renders the scene to the depth texture (UpdateDepthTexture), and it is not set correctly when rendering the orthographic spot lights from which the directional light shadow is constructed…
So, I had to directly query projection matrix values to determine whether it is orthographic or not.
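The check described above can be sketched like this (helper names are illustrative, not Unity API). The fourth row of an orthographic projection matrix is (0, 0, 0, 1), whereas for a perspective projection its w component is 0, so inspecting the matrix works even when unity_OrthoParams.w is stale:

```hlsl
// Illustrative helper: detect an orthographic projection even in passes
// where unity_OrthoParams.w is not set correctly.
bool isOrthoProjection()
{
    return unity_OrthoParams.w > 0 || UNITY_MATRIX_P[3].w == 1;
}

// The camera ray for raycasting then differs per projection mode.
float3 rayDirection(float3 worldPos)
{
    if (isOrthoProjection())
        return -normalize(UNITY_MATRIX_V[2].xyz);  // view forward: same for all pixels
    return normalize(worldPos - _WorldSpaceCameraPos);
}
```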
I can’t find anything like that. Only references that early z-rejection is disabled when outputting depth. The most recent reference is from 2012 though, so things might have changed.
I’m also raycasting spheres from cubes, in hopes of using them for SVOs. So far normals and colors look correct and great. I’ve been struggling with the cryptic system of Unity’s ShadowCaster (at least this is where I think I can solve the problem). Too bad good resources and documentation are nearly nonexistent.
Mind if I ask that you share some source or create a tutorial?
To correctly reconstruct depth, take the fragment position in worldspace as a 4-component vector (with w == 1), multiply it by the view-projection matrix, then divide all components by w. That’ll give you screen coordinates and correct depth regardless of the projection matrix you’re using.
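As a sketch, assuming the built-in pipeline’s UNITY_MATRIX_VP:

```hlsl
// Reconstruct the depth-buffer value for an arbitrary world-space point.
// Works for both perspective and orthographic projections: with an
// orthographic matrix w comes out as 1, so the divide is a no-op.
float computeDepth(float3 worldPos)
{
    float4 clipPos = mul(UNITY_MATRIX_VP, float4(worldPos, 1.0));
    return clipPos.z / clipPos.w;
}
```

One caveat: on OpenGL-style platforms clip-space z ranges from −1 to 1, so the result may need a remap (z * 0.5 + 0.5) before being written to SV_Depth.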
The example can easily be modified to handle distance fields too:
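A minimal sketch of the distance-field variant, with a placeholder SDF; the step count and thresholds are arbitrary illustrative values:

```hlsl
// Placeholder scene: a sphere of radius 0.5. Plug any distance field in here.
float sceneSDF(float3 p)
{
    return length(p) - 0.5;
}

// Sphere tracing: step along the ray by the distance-field value.
bool raymarch(float3 origin, float3 dir, out float3 hitPos)
{
    float t = 0;
    for (int i = 0; i < 64; i++)        // fixed iteration budget
    {
        hitPos = origin + dir * t;
        float d = sceneSDF(hitPos);
        if (d < 0.001) return true;     // close enough: count it as a hit
        t += d;                         // safe step: nothing is closer than d
        if (t > 10) break;              // left the bounding volume
    }
    return false;
}
```

The resulting hit position is then fed into the same depth-write path as the analytic sphere.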
Yes. It is a raymarched/raytraced object with correct depth, shadows and everything.
The original “Cube” serves as sort of portal.
You can see the boundary outline on the last screenshot. That’s the object’s actual geometry.
The screen with the earth uses a mathematically defined sphere (meaning the ray tracing only has one step), the second sample uses distance fields (meaning you could plug anything in there) and actual raymarching. The distance field shape on the last screen is “sphere minus torus minus cube”.
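That kind of CSG shape is just min/max composition of the standard distance functions; a sketch with illustrative sizes (these primitives follow the commonly used SDF formulas, not anything Unity-specific):

```hlsl
float sdSphere(float3 p, float r) { return length(p) - r; }

float sdTorus(float3 p, float2 t)
{
    return length(float2(length(p.xz) - t.x, p.y)) - t.y;
}

float sdBox(float3 p, float3 b)
{
    float3 q = abs(p) - b;
    return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
}

// "Sphere minus torus minus cube": subtraction is max(a, -b).
float sceneSDF(float3 p)
{
    float d = sdSphere(p, 0.5);
    d = max(d, -sdTorus(p, float2(0.4, 0.15)));
    d = max(d, -sdBox(p, float3(0.3, 0.3, 0.3)));
    return d;
}
```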
Well, the articles are online, so feel free to read them.
The main difficulty with Unity lighting is that there are ifdefs involved and multiple projection modes in use.
For example, the directional shadow is screenspace, but in order to construct it Unity creates 2 or 3 temporary spotlight-like shadowmaps… which IIRC may use orthographic projection.
I think I outlined that too. I think there are 2 or 3 paragraphs of text outlining what kind of jumping through the hoops is required to get all lighting models to work in a shader that overwrites depth of object.
Checking it out, thanks for the article, man. Most articles about raymarching cover fullscreen camera renders, so it’s really great to find one about raymarched objects.
It ended up being mostly about fighting the Unity lighting system and getting shadows right. The latest test with distance fields isn’t on the “blog”. Will probably post it there eventually. Then again, distance fields are surprisingly easy to implement.
Yes, same here. Adjusting the depth in the standard pass was not much of an issue. For the shadows there is a difference between point lights and the other types. And you have to account for whether the output is just a depth buffer or a color buffer with a depth buffer. I’m also interested to read your article. I got things to work most of the time, but I’d like to compare it with your solution. The shadows were absolutely the hard part.