Shader for illustrating meshes (including edges / backfaces) in Scene view

As the title says, I’m trying to figure out how to construct a shader that will help me manipulate objects in the scene view.

I will give some preamble, as this may lead to a useful development component.

I’m dealing with game objects that get constructed dynamically at run-time. In order to position and manipulate them at design time in the Scene view, I’m representing them with basic primitives: spheres, cubes, cylinders, …

Specifically, a coin-shaped cylinder with 12 spheres at the clock positions, forming a ring.

So I’m trying to make a shader that shows this setup as clearly as possible in the Scene view. Just using a simple transparent shader looks really bad on the coin; it completely fails to bring out the edges.

I have made a cG shader that illuminates each vertex according to the angle between the normal and the eye-ray. This is very useful for Scene view as it doesn’t require positioning of lights. It also allows me to distinguish front from back faces by colour.

Shader "Custom/NewShaderXX" 
{
	SubShader 
	{
		Tags { 
			"RenderType"="Transparent"
			"Queue"="Transparent"
		}
		
		LOD 200
		
		// draw the pixel regardless of depth
		ZTest Always
		
		// don't write to the depth buffer
		ZWrite Off
		
		// front and back faces
		Cull Off
		
		// additive blending
		Blend One One
		
		Pass {
			
			CGPROGRAM
		
			#pragma vertex myVertexShader
			#pragma fragment myFragmentShader
			
			struct VertexShaderInput 
			{
				float4 vertex : POSITION;
				float3 normal; //  : NORMAL;
			};
			
			struct VertexShaderOutput {
				float4 pos : POSITION;
				
				float4 color : COLOR;
			};
			
			VertexShaderOutput myVertexShader( VertexShaderInput vertexIn )
			{
				
				//convert Object Space vertex and normal to View Space
				float3 viewSpaceVertexPosition = mul(UNITY_MATRIX_MV, vertexIn.vertex).xyz;
				float3 viewSpaceNormal = mul(UNITY_MATRIX_IT_MV, float4(vertexIn.normal, 0)).xyz;
			
				// eye is on origin in viewspace (eye-space)
				float3 vertToEye = - viewSpaceVertexPosition;
				
				// cos_theta = +1 if angle = 0, 0 if angle = 90deg, and -1 if 180deg
				float cos_theta = dot( 
									normalize( vertToEye ), 
									normalize( viewSpaceNormal ) 
									);
				
				// if theta > 0, blue
				float blueComponent = ( cos_theta >= 0 ) ? cos_theta : 0;
				float redComponent = ( cos_theta < 0 ) ? -cos_theta / 4 : 0;
				
				float4 colorOut = float4( redComponent, 0, blueComponent, 0.1 );
				
				// - - - 
				
				VertexShaderOutput vsOut;
				
				vsOut.pos = mul( UNITY_MATRIX_MVP, vertexIn.vertex );
				vsOut.color = colorOut; // _MyColor;
				
				return vsOut;
			}
			
			half4 myFragmentShader( VertexShaderOutput v2f_data ) : COLOR
			{
				return half4( v2f_data.color );
			}
		
			ENDCG
		
		} // Pass
	
		
	} // Subshader
}

I’ve set it to blend additively (One One), and set ‘Cull Off’ in order to render back faces also.

Here is the result:

It’s almost there. It’s very easy to see the hidden edges of the coin. However, I can’t figure out how to bring out the ‘coin meets sphere’ edge.

It seems as though the sphere is getting laminated on top.

I’ve tried quite a few things, but with no success as yet.

How can I achieve the desired effect?

π

With ZTest Always or ZWrite Off you won’t get any occlusions and therefore the edge where the cylinder intersects the sphere won’t be visible.

BTW: the name of the programming language is “Cg” and you can use the NORMAL semantics for vertex shader input.
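For example, something like this (the default depth state plus the built-in semantic):

		// default depth state: depth is written and tested, so intersections show
		ZWrite On
		ZTest LEqual
		
		// vertex input using the built-in NORMAL semantic
		struct VertexShaderInput 
		{
			float4 vertex : POSITION;
			float3 normal : NORMAL;
		};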

I did actually realise that these settings were going to present a problem, but my attempts to change them gave dodgy rendering. So in the end I stuck with settings that at least generated a pretty picture.

Here is the result with ZTest LEqual and ZWrite On (i.e. both on default settings):

(NOTE: I halved the blue component for both of these images, so: ‘float blueComponent = ( cos_theta >= 0 ) ? cos_theta / 2 : 0;’ just in case anyone is tempted to fire the shader up)

This has really bad artefacts, although it does trace out the Coin-Sphere line of intersection nicely.

However, if I rotate the camera just slightly, …

and continuing the rotation it will flicker between variants of each.

I can hazard a guess that the drawing order of the primitives or even the triangles within a primitive may not be guaranteed, and this flickering could be some byproduct of the order changing. But really I am floundering here…

PS Thanks for the tips Martin, always greatly appreciated.

In order to get the best of both worlds, you should probably have two materials: one opaque material with normal ZTest and without blending to make the intersection visible without artifacts, and a second transparent material without ZTest and without writing to the depth buffer to see the backfaces.

Ok this makes sense – to do it in two passes.

Shader "Custom/NewShaderXX" 
{
	SubShader 
	{
		Tags { 
			"RenderType"="Transparent"
			"Queue"="Transparent"
		}
		LOD 200
	
		// store depth buffer
		ZWrite On
		ZTest LEqual
		Cull Back
		Blend One Zero
		
		Pass {
			CGPROGRAM
			:
			VertexShaderOutput myVertexShader( VertexShaderInput vertexIn )
			{
				:
				// if theta > 0, it's a front face
				float lightLevel = ( cos_theta >= 0 ) ? cos_theta / 2 : 0;
				float4 colorOut = float4( 0, 0, lightLevel, 0.3 );
				:
			}
			:
			ENDCG
		}
		
		
		// Now render FRONT+BACK faces
		Cull Off
		ZWrite Off
		ZTest Always
		Blend One One
		
		Pass {
			
			CGPROGRAM
			:
			VertexShaderOutput myVertexShader( VertexShaderInput vertexIn )
			{
				:
				// if theta < 0, it's a back face
				float lightLevel = ( cos_theta < 0 ) ? -cos_theta / 4 : 0;
				float4 colorOut = float4( lightLevel, 0, 0, 0.15 );
				:
			}
			:
			ENDCG
		}
	} // Subshader
}

This certainly looks better, although when zooming in on the object in the Scene view I again get flickering between (at least) three states:

How can I understand and prevent this behaviour?

Martin, I notice you mentioned using separate ‘Materials’ rather than simply shader passes. Was this a deliberate distinction? Should I be setting different Queue tags? Can I even assign multiple materials to a mesh in Unity?

I’ve made an X-ray shader that’s free and available on my blog, if it helps. You can have a look at what I have done to solve similar issues: Link

Cheers
Brn

Yes, that was a deliberate distinction indeed. And, yes, the reason is that the two passes need different Queue tags. And, yes, you can apply multiple materials to a mesh in the mesh renderer component. :slight_smile:
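A minimal sketch of that split (the shader names and the exact queue choices here are just one reasonable option):

	// Material 1: opaque part, draws with the normal scene geometry
	Shader "Custom/XrayOpaquePart" {
		SubShader {
			Tags { "Queue"="Geometry" }
			Pass {
				ZWrite On
				ZTest LEqual
				Cull Back
				// ... front-face Cg pass here ...
			}
		}
	}
	
	// Material 2: transparent part, draws after all opaque geometry
	Shader "Custom/XrayTransparentPart" {
		SubShader {
			Tags { "Queue"="Transparent" }
			Pass {
				ZWrite Off
				ZTest Always
				Cull Off
				Blend One One
				// ... back-face Cg pass here ...
			}
		}
	}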

Brn, thanks a million for that! That is exactly what I was trying to do. And I would never have got there in a month of Sundays.

I am trying to understand your shader code by annotating it; it is really clever, and I am having a really hard time trying to unravel it.

First we have the cunning use of blend mode:

		// { generated color } * { color already on the screen }
		// + 
		// { color already on the screen } * 0
		//
		// So for example if we had white on the screen, and we are generating shades
		// of green, then we would get shades of green.
		// But if we had grey on the screen, our generated colour is now twice as dark.
		// If we had red on the screen, or anything without a green component for that
		// matter, it's going to go black.
		// If we had black on the screen, then it's still going to be black.
		Blend DstColor Zero
		
		// Darken the colour already on the screen
		myColor = (1,1,1,1);
		DO_CG_PASS( myColor )
		
		
		// { generated color } * { (1,1,1,1) - color already on the screen }
		// + 
		// { color already on the screen } * 1
		//
		// So for example, if the colour on-screen is white, or close to white, we are
		// going to add practically nothing; but if it's close to a black background,
		// then almost all of this contribution gets added.
		Blend OneMinusDstColor One
		
		myColor = (0,0,0,0);
		DO_CG_PASS( myColor )

Even pulling it apart at this fine-grained level, I can’t quite grok what’s happening here. Maybe because I can’t get a handle on the actual contributions themselves. If I could get some intuitive idea of what colour each pass is contributing, then I could understand how this blending works.

Second is some cunning maths applied to the unit normal vector in view space:

			v2f_surf vert_surf( appdata_base v ) 
			{
				v2f_surf o;
				
				o.pos = mul( UNITY_MATRIX_MVP, v.vertex );
				
				half3 unitNormal_ViewSpace = normalize( 
							mul( (float3x3)UNITY_MATRIX_IT_MV, v.normal )
							);
				
				half nZ = unitNormal_ViewSpace.z;
				
				/*
				  in view space, we are looking along the positive Z-axis,
				  so if nZ = -1 (1-nZ = 2), that means the surface normal is pointing straight at us;
				  if nZ = 0 (1-nZ = 1), it means we are catching an edge;
				  if positive (1-nZ = 1 to 0), it means we are looking at a backface.
				  
				  ... now my head is starting to spin.
				  I was expecting something like a dot product between the normal and the eye-ray.
				  OK, I'm going to post and keep looking at it...
				*/
				
				o.finalColor = lerp(
									half4(1,1,1,1),
									_Color,
									saturate(
										max( 1 - pow( nZ, _Rim ),  _Inside )
										)
									);
				return o;
			}

Even with a cleanly written and perfectly functioning shader in front of me, I am struggling to get my head round it :expressionless:

Any verbal explanation of what is going on would be much appreciated.

If I manage to figure it out, I will amend this post.

PS Brn, I had a look at your website. Your work is really stunning. I didn’t even realise techniques existed for real-time reflection and refraction to the extent that you are implementing them. I’ve never seen anything like this!

There are various ways of enhancing silhouettes, a physically motivated one is described here: Cg Programming/Unity/Silhouette Enhancement - Wikibooks, open books for an open world
Brn is using something that is closer to the form of the Fresnel factor: Cg Programming/Unity/Specular Highlights at Silhouettes - Wikibooks, open books for an open world

zOMG there is a whole wikibook there!!! That will give me something to do while I rest the old RSI :smile:

That’s a fantastic link. Thanks!

Hi Pi_3.14,

I’m glad you like the shader. Martin is spot on in noticing I’m using a method similar to a Fresnel term to get the geometry facing the camera to fade out.

There are some misleading variable names in the shader which I apologise for, because it started as a “MatCap” or hemispherical environment map shader.

The main one being: half3 uv = mul( (float3x3)UNITY_MATRIX_IT_MV, v.normal );
In this case it’s not being used as UVs at all. I’m using the vertex normal’s view-space Z value to determine the angle of the vertex to the camera for the falloff value. This works because the Z value of the normal will be greatest when the normal points at the camera. As the normal rotates away, the Z value will decrease.

o.finalColor = lerp(half4(1,1,1,1),_Color,saturate(max(1- pow (uv.z,_Rim),_Inside)));
This line looks complicated, but it’s only because of the way it’s formatted. The last part of the lerp function is the “Fresnel” calculation.
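Unpacked with some temporary names (mine, just for explanation), it’s:

			half facing  = pow( uv.z, _Rim );                       // ~1 facing the camera, ~0 at the silhouette
			half falloff = saturate( max( 1 - facing, _Inside ) );  // _Inside sets a minimum falloff
			o.finalColor = lerp( half4(1,1,1,1), _Color, falloff );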

The reason I’m using a lerp to do this is because of the blend mode. The first pass is a multiply blend, “Blend DstColor Zero”. By lerping between white and the desired color based on the Fresnel value, anything on screen will be darkened towards that color. Also, because a multiply is commutative, it doesn’t matter in which order the faces are drawn; the final color will always be the same.

Generally though, the final color after the first pass (although consistent) will be too dark, which is why the second pass is done. The second pass is a soft additive, or “Blend OneMinusDstColor One”. Additions are also commutative. Even though the soft additive blend mode is not quite that straightforward, it’s close enough. For the additive pass, the lerp is now done between black and the desired color.
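To make the two contributions concrete, here’s a quick worked example with made-up numbers:

		// Made-up numbers: background dst = (0.5, 0.5, 0.5), _Color = (0, 1, 0),
		// and the Fresnel/falloff value = 1 (i.e. a silhouette edge).
		
		// Pass 1, "Blend DstColor Zero" (multiply):
		//   src = lerp( white, _Color, 1 ) = (0, 1, 0)
		//   dst = src * dst = (0, 0.5, 0)                 // darkened towards green
		
		// Pass 2, "Blend OneMinusDstColor One" (soft additive):
		//   src = lerp( black, _Color, 1 ) = (0, 1, 0)
		//   dst = src * (1 - dst) + dst
		//       = (0, 1, 0) * (1, 0.5, 1) + (0, 0.5, 0) = (0, 1, 0)
		
		// The silhouette ends up at _Color whatever was behind it (try dst = black or
		// white: both give (0, 1, 0)). Where the surface faces the camera the falloff
		// is 0, so src is white in the multiply pass and black in the additive pass,
		// and dst is left untouched.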

The end result is what you see in the video: because of the commutative nature of the maths, the draw order is no longer an issue. In truth it’s not quite perfect, and a better version of the effect would be done using replacement shaders, where all the darkening would be done first for all the X-ray surfaces, and then a replacement-shader pass would do the lightening. (At least that’s how I visualize it working out to be better.)

Anyway, I hope that helps.
Cheers
Brn

Brn, That’s awesome, thanks so much for taking the time to explain.

I was a bit thrown by the fact that you are not using anything that looks like a dot product to get a measure of how much the normal faces the eye. But it looks like you are using an optimisation; I even wonder whether the object space → view space matrix transformation may have done exactly the same operation as taking the dot product in world space.
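Writing it out for myself (the names below are mine, not from the shader), taking the Z component is a dot product in disguise:

			half3 n = normalize( mul( (float3x3)UNITY_MATRIX_IT_MV, v.normal ) );
			
			// the Z component is just the dot product with the view axis:
			half facing = dot( n, half3(0, 0, 1) );   // identical to n.z
			
			// whereas my first shader used the true per-vertex eye ray:
			// half facing2 = dot( n, normalize( -viewSpaceVertexPosition ) );

So it is the same measure, with the per-vertex eye ray approximated by the camera’s view axis.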

I’m glad I did ask for clarification; I could tell that there was interplay between the blend mode and the lerped colour, but with a hazy understanding of each I would have stared at the shader code all day without figuring it out.

One thing I notice is that your shader duplicates the entire code block. Is there really no way to avoid this? I envisaged there might be a solution if it were possible to pass a variable from ShaderLab into Cg, maybe by setting some flag in ShaderLab that Cg could read, but I can’t see any indication that this is possible. I’ve posted it as a separate question here: http://forum.unity3d.com/threads/152833-Passing-variables-(not-parameters)-from-ShaderLab-into-CGPROGRAM-amp-re-using-passes

I have revised brn’s shader to eliminate the duplication of Cg code on this thread.
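The gist (a sketch from memory; the helper names are mine) is to hoist the shared Cg into a CGINCLUDE block, which Unity prepends to every CGPROGRAM in the same file, leaving only thin per-pass wrappers:

	CGINCLUDE
	// shared by both passes
	struct VSIn  { float4 vertex : POSITION; float3 normal : NORMAL; };
	struct VSOut { float4 pos : POSITION; float4 color : COLOR; };
	
	VSOut sharedVert( VSIn v, float4 faceColor, float frontWeight, float backWeight )
	{
		float3 viewPos    = mul( UNITY_MATRIX_MV, v.vertex ).xyz;
		float3 viewNormal = mul( (float3x3)UNITY_MATRIX_IT_MV, v.normal );
		float  cosTheta   = dot( normalize( -viewPos ), normalize( viewNormal ) );
		
		VSOut o;
		o.pos   = mul( UNITY_MATRIX_MVP, v.vertex );
		o.color = faceColor * ( ( cosTheta >= 0 ) ?  cosTheta * frontWeight
		                                          : -cosTheta * backWeight );
		return o;
	}
	
	half4 sharedFrag( VSOut i ) : COLOR { return i.color; }
	ENDCG
	
	Pass {	// front faces (depth / cull / blend state as in the two-pass shader above)
		CGPROGRAM
		#pragma vertex vertFront
		#pragma fragment sharedFrag
		VSOut vertFront( VSIn v ) { return sharedVert( v, float4(0,0,1,1), 0.5, 0 ); }
		ENDCG
	}
	
	Pass {	// back faces
		CGPROGRAM
		#pragma vertex vertBack
		#pragma fragment sharedFrag
		VSOut vertBack( VSIn v ) { return sharedVert( v, float4(1,0,0,1), 0, 0.25 ); }
		ENDCG
	}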

EDIT 12.10.10
New revision here