I’m having some issues converting a fragment shader to a surface one. What the fragment one does, successfully, is check the difference between the depth buffer and the pixel’s depth and write another value. However, when I try to do this same thing in a surface shader the output is always 0.
The depth values just aren’t right, so I suspect I am using them wrong. When I visualise the screenPosition it’s also a little different; the fragment version is much brighter.
Now it just kind of flickers. I’m not sure how to get the clip space position in a surface shader; normally I’d pass it through in a vert function, but I don’t think you can with surface shaders? It’s not listed as a possible input either.
Ah, no, you can define your own input and output. appdata_full is just a common predefined struct. I can’t really find a good example, but you can change it like this:
struct my_struct {
    float4 some_data : TEXCOORD6; // the member needs an explicit type; float4 is just an example
};

my_struct vert(appdata_full input) {
    my_struct output;
    // Initialize all values in the struct here
    return output;
}
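For a surface shader specifically, the usual route is to add your custom field to the surface shader’s Input struct and fill it in a vert function declared with vertex:vert. Here’s a minimal sketch of that (the shader name, the Lambert lighting model, and the eyeDepth field are placeholder choices, not anything from the shader above):

Shader "Custom/CustomVertData" {
    SubShader {
        Tags { "RenderType"="Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert vertex:vert

        struct Input {
            float4 screenPos;   // built-in name, Unity fills this in for you
            float eyeDepth;     // custom value, filled in vert below
        };

        void vert (inout appdata_full v, out Input o) {
            UNITY_INITIALIZE_OUTPUT(Input, o);   // zero out anything we don't set
            COMPUTE_EYEDEPTH(o.eyeDepth);        // view space depth of this vertex
        }

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = saturate(IN.eyeDepth * 0.05).xxx;  // visualise the custom value
        }
        ENDCG
    }
    FallBack "Diffuse"
}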
Thank you so much, and a great example of how to use vert properly as well. The syntax is still a bit daunting; I’m not sure what we can and cannot get away with.
One more question: in what case wouldn’t we have a depth buffer texture?
The depth texture only exists in specific cases. One of the following must be true:
Your camera is rendering using the deferred rendering path, either enabled on the camera or from project settings.
Your camera has Camera.depthTextureMode with DepthTextureMode.Depth enabled; DepthNormals (or 5.4’s MotionVectors) don’t create a _CameraDepthTexture. Usually this gets enabled by a post-process effect on the camera, but it can also be done via script. On a project I’m working on I force it on with a simple editor-only script.
You have SoftParticles enabled in quality settings. I believe this will enable the depth texture for all cameras, though it might only be cameras with a particle system visible.
You have a realtime or mixed directional light in your scene with shadows enabled. The brightest shadowing directional light in the scene has its shadows rendered with a full-screen pass using only the camera depth texture. Important note: this is only true on non-mobile platforms! It’s also possible for this not to be true if you have cascades disabled on PC or consoles, as Unity may choose not to use these screen space shadows. I believe there’s also a hidden setting in 5.4 to disable this behavior, and a future version of Unity will likely disable this entirely.
Only one of those needs to be true for a camera to render a depth texture. If none of them applies (for example the camera has a culling mask so that no directional lights are visible, it doesn’t have a post-process effect on it, and it is forced to use forward rendering), it will not have a depth texture.
I also modified the originally posted shader because I realized that while testing _TexelSize seems like it should work to check for the existence of a depth texture, it does not, since Unity doesn’t update _TexelSize to reflect null textures. This version instead tests whether the sampled depth is exactly zero, which should pretty much never happen in real-world situations.
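For reference, that zero test looks roughly like this inside surf (a sketch only: it assumes _CameraDepthTexture is declared in the shader, and that screenPos comes from the surface shader’s Input struct):

float rawZ = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos));
bool hasDepthTexture = rawZ > 0.0;   // a raw depth of exactly 0.0 effectively means no depth texture is bound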
Eye depth is exactly that: the world space depth of that fragment (the pixel being drawn by a particular model) from the camera. The “raw Z” is the value stored in the camera depth texture. The camera depth texture is either generated by a separate pass over the scene geometry in forward rendering, or it is the depth that was rendered during the deferred pass. The depth texture stores the clip space depth, which is a non-linear 0.0 to 1.0 range; research Z depth and depth buffers elsewhere if you want to understand that. LinearEyeDepth() converts that non-linear 0.0 to 1.0 value into a linear depth, i.e. the world space depth from the camera.
For opaque objects there’s not really a reason to do this, as the linearized value from the depth texture should match the eye depth calculated in the vertex shader. However, for transparent objects (which aren’t rendered into the depth texture) it lets you get the distance from that fragment to the closest scene geometry behind it. That’s what sceneZ - partZ does: it subtracts the fragment’s depth from the scene depth, so the resulting value is the distance to the scene.
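In shader terms that comparison is just a subtraction of two linear depths. A minimal sketch, reusing rawZ from the sample above and assuming eyeDepth was written in the vert function with COMPUTE_EYEDEPTH (_InvFade is a hypothetical material property controlling how quickly the result fades):

float sceneZ = LinearEyeDepth(rawZ);            // linear depth of the opaque scene behind this pixel
float partZ  = IN.eyeDepth;                     // linear depth of the fragment being drawn
float distToScene = sceneZ - partZ;             // world space distance from this fragment to the scene
float fade = saturate(distToScene * _InvFade);  // e.g. a soft-particle-style fade based on that distance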
I see this topic is a bit old, but I am stuck on it.
When I copy your code exactly ( @bgolus ) I get the following warning:
At line 103 I have:
float rawZ = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos));
Oh … okay, it somehow does not show up every time …
One last question: do you know where to find documentation for functions like SAMPLE_DEPTH_TEXTURE_PROJ? I can’t find anything about this method.
This should just be used as reference, not copied into your project btw.
Really it’s just a macro that calls tex2Dproj and returns the red channel. tex2Dproj is a function that takes a float4 uv and is equivalent to calling tex2D(texture, uv.xy / uv.w).
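As a rough illustration (the exact macro expansion varies a little between platforms and Unity versions, and IN.screenPos here is just the interpolated screen position from the Input struct), these three lines all fetch the same value:

float rawZ1 = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos));
float rawZ2 = tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos)).r;
float rawZ3 = tex2D(_CameraDepthTexture, IN.screenPos.xy / IN.screenPos.w).r;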
Looking into it further, I think this isn’t possible without command buffers because the _DepthTexture isn’t written to until after the opaque objects are drawn. I would need to draw my object after depth but before deferred lighting.