Is it possible to get the thickness of an object at a certain position in a CG shader?
Use two passes in the shader. The first has front-face culling on and writes to the depth buffer; in the second pass, you use back-face culling, get the depth of the current fragment, and compare it to the depth value in the _DepthTexture. You can determine thickness from that.
Hey Invertex!
I’ve seen this approach mentioned multiple places, but I can’t find any syntax examples to accomplish it and I’m really new to writing shaders.
I’m to the point where I’m getting a calculated depth to camera per vertex, and getting it passed to the fragment shader… I think. I’ve tried to track down the specifics of each helper function / define, but it’s a bit of a labyrinth.
How do you write to the “depth buffer”? Is this the rendertexture that Unity makes when cameras have DepthTextureMode set to Depth, and if so how do I write to it? I can’t find any specifics about _DepthTexture, but I’m assuming it’s the name of the automatically created rendertexture.
In the second pass, how do I read the texture? And then how do I combine them?
Any advice is helpful. This is the code that I have so far, which results in this kind of image:
Shader "Custom/Thickness Shader" {
    Properties {
    }
    SubShader {
        Tags { "Queue" = "Transparent" }

        Pass {
            Name "Front Face Depth Pass"
            Blend One One
            ZWrite On
            Cull Back

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 depth_tex : TEXCOORD0;
            };

            v2f vert(appdata_base v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                float dist = length(WorldSpaceViewDir(v.vertex));
                o.depth_tex.xyz = dist - 1;
                o.depth_tex.w = 1;
                return o;
            }

            float4 frag(v2f i) : SV_Target {
                return i.depth_tex;
            }
            ENDCG
        }

        Pass {
            Name "Back Face Depth Pass"
            Blend One One
            ZWrite On
            // ZTest Always // This pass only renders if I do this, but there's got to be another way-- I don't want it to be in front of everything.
            Cull Front

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 depth_tex : TEXCOORD0;
            };

            v2f vert(appdata_base v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                float dist = length(WorldSpaceViewDir(v.vertex));
                o.depth_tex.xyz = dist - 1;
                o.depth_tex.w = 1;
                return o;
            }

            float4 frag(v2f i) : SV_Target {
                float4 col;
                col.r = 1;
                col.a = 1;
                return col; // Test to make sure both passes are happening. Not sure how to get the backface depth pass here.
            }
            ENDCG
        }
    }
}
The short version is that what @Invertex suggested isn’t entirely wrong in its basic description of what you need to do (draw the object twice), but it is completely wrong about how to go about it in Unity.
You cannot write the back-face depth to the depth buffer using a ZWrite On Cull Front pass and then read that depth from the _DepthTexture. For one, there is no texture named _DepthTexture; it’s called _CameraDepthTexture. But the real problem is that that texture is generated in a separate full-screen pass of the scene and only includes opaque objects that have shadow caster passes. It is not the current depth buffer.
The “correct” way to do this is to render your object’s back-face depth into your own float render texture manually using a command buffer, then sample that texture to get the depth when you render your object later. The other possible option is to abuse the current buffer’s alpha channel: write the depth into it, then use a GrabPass to read it back. That will only work if you’re using an HDR camera and your quality settings use at least an ARGBHalf format.
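A minimal C# sketch of that command buffer setup might look like the following. Everything here is an assumption for illustration: the component name, the `_BackFaceDepthTex` global name, and the idea that `backDepthMaterial` is a separate Cull Front material whose fragment shader outputs eye depth into the red channel.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical sketch: render this object's back-face depth into a float
// render texture before the camera draws opaques, so the main thickness
// shader can sample it later.
public class BackFaceDepthCapture : MonoBehaviour
{
    public Material backDepthMaterial; // assumed: Cull Front, writes eye depth to .r
    RenderTexture depthRT;
    CommandBuffer cb;

    void OnEnable()
    {
        depthRT = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.RFloat);
        cb = new CommandBuffer { name = "Back Face Depth" };
        cb.SetRenderTarget(depthRT);
        cb.ClearRenderTarget(true, true, Color.clear);
        cb.DrawRenderer(GetComponent<Renderer>(), backDepthMaterial, 0, 0);
        // Expose the result under an assumed global texture name.
        cb.SetGlobalTexture("_BackFaceDepthTex", depthRT);
        Camera.main.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, cb);
    }

    void OnDisable()
    {
        if (Camera.main != null)
            Camera.main.RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, cb);
        cb.Release();
        depthRT.Release();
    }
}
```

This doesn’t handle resizing the render texture when the screen size changes, or multiple cameras; it’s just the shape of the approach.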
Also, you would need to draw the back faces first, otherwise the front faces will occlude the back faces.
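For the shader side, a rough sketch of the thickness pass (front faces only) could look like this, assuming the back-face eye depth has already been rendered into a globally bound texture — `_BackFaceDepthTex` is an assumed name, not a built-in:

```hlsl
// Sketch: compute view-space thickness as back-face depth minus front-face depth.
sampler2D _BackFaceDepthTex; // assumed: filled by a command buffer beforehand

struct v2f {
    float4 pos : SV_POSITION;
    float4 screenPos : TEXCOORD0;
    float eyeDepth : TEXCOORD1;
};

v2f vert(appdata_base v) {
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.screenPos = ComputeScreenPos(o.pos);
    o.eyeDepth = -UnityObjectToViewPos(v.vertex).z; // front-face eye depth
    return o;
}

fixed4 frag(v2f i) : SV_Target {
    float2 uv = i.screenPos.xy / i.screenPos.w;   // screen-space UV for the lookup
    float backDepth = tex2D(_BackFaceDepthTex, uv).r;
    float thickness = backDepth - i.eyeDepth;     // distance through the object
    return fixed4(thickness.xxx, 1);
}
```

This visualizes thickness as grayscale; in practice you’d scale it or feed it into whatever effect (subsurface, absorption) you’re after.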
