So one of the things you will encounter when rendering complex meshes using standard transparent rendering is that they depth test against the scene, but not against themselves. This is a limitation of how depth-tested rendering works, and there are some ways to work around it. I’m going to talk about one of them, then show you how to do it in Unity.
So let’s start by rendering a scene that has some complex geometry (in this context I am using ‘complex’ to refer to geometry with overlaps and folds):
You will notice that in this image there are areas of darkness where the object has been rendered twice. This occurs because the transparent object does not write to the depth buffer, so overlapping polygons cause some pixels to be rendered more than once. Taking a step back, the solution is to have each of these pixels rendered only once, and only for the frontmost surface. How can this be set up?
Looking into the default surface shader for transparent objects, you will see that it looks like this:
Shader "Transparent/Diffuse" {
Properties {
_Color ("Main Color", Color) = (1,1,1,1)
_MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
}
SubShader {
Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
LOD 200
CGPROGRAM
#pragma surface surf Lambert alpha
sampler2D _MainTex;
fixed4 _Color;
struct Input {
float2 uv_MainTex;
};
void surf (Input IN, inout SurfaceOutput o) {
fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
o.Albedo = c.rgb;
o.Alpha = c.a;
}
ENDCG
}
Fallback "Transparent/VertexLit"
}
It’s a simple shader that just sets up and configures the blend modes; this is done implicitly via the ‘alpha’ argument to the surface pragma. You can mimic this setup yourself and remove the ‘alpha’ argument if you want… but it’s not necessary.
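For reference, here is a rough sketch of the render state the ‘alpha’ argument sets up behind the scenes. This is an approximation of the generated state, not the exact code Unity emits (you can inspect the real generated code with #pragma debug):

SubShader {
Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
// Standard alpha blending: src * srcAlpha + dst * (1 - srcAlpha)
Blend SrcAlpha OneMinusSrcAlpha
// Transparent objects normally don't write depth
ZWrite Off
CGPROGRAM
#pragma surface surf Lambert
// ... same surf function as above ...
ENDCG
}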
So what is the root issue? When this shader executes, there is nothing in the z-buffer to indicate which pixel is on top. What we need to do is prime the z-buffer with real depth values. This needs to happen JUST before the transparent object is rendered so that ordering and stacking of transparent objects still works properly.
The easiest way to do this is to add an extra pass to the surface shader. A little secret when it comes to surface shaders is that you can insert your own passes, either before or after the main surface shader block; they will simply be executed as part of the rendering pipeline.
So let’s add a depth-only pass to the start of the surface shader. You’ll notice that the rest of the shader remains the same:
Shader "Custom/Ghost" {
Properties {
_Color ("Main Color", Color) = (1,1,1,1)
_MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
}
SubShader {
Tags {"RenderType"="Transparent" "Queue"="Transparent" "IgnoreProjector"="True"}
LOD 200
Pass {
ZWrite On
ColorMask 0
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f {
float4 pos : SV_POSITION;
};
v2f vert (appdata_base v)
{
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
return o;
}
half4 frag (v2f i) : COLOR
{
return half4(0, 0, 0, 0);
}
ENDCG
}
CGPROGRAM
#pragma surface surf Lambert alpha
sampler2D _MainTex;
fixed4 _Color;
struct Input {
float2 uv_MainTex;
};
void surf (Input IN, inout SurfaceOutput o) {
fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
o.Albedo = c.rgb;
o.Alpha = c.a;
}
ENDCG
}
Fallback "Transparent/Diffuse"
}
This pass simply turns color writes off (using ColorMask 0) and depth writes on. Plugging this shader into our character now looks like this:
The biggest issue with this method is that overlapping objects need to be sorted correctly; Unity does a good job of this for the most part. It also costs an extra draw call per object. If you are worried about performance, remember to profile.
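If you do need to nudge the sort order between specific transparent objects yourself, one option is to offset the render queue in the SubShader tags. This is a general ShaderLab feature rather than part of the depth-priming trick, so treat it as a sketch:

Tags {"Queue"="Transparent+1" "IgnoreProjector"="True" "RenderType"="Transparent"}

Objects in "Transparent+1" render after everything in the plain "Transparent" queue, which can help when Unity’s distance-based sorting picks the wrong order for two overlapping meshes.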
This method of inserting extra passes into surface shaders can be used for a lot of cool things; experiment and see what else you can do.