Intersection Shader help needed for a newb

Hello all. I’m trying to write a shader that renders a plane white wherever it intersects with other geometry, otherwise it’s a solid black. Refer to the image below:

I’m completely new to Unity, and started using it for my college research project a couple of months ago. The point of this shader is to fake the black-and-white image of an ultrasound (a very barebones look is enough). I’ll parent a camera to the plane with the shader, have that camera render only the plane and nothing else, and have its output displayed in a secondary display area on the screen. I figured an intersection shader would be the simplest approach, but I’ve actually not found a single example online.

I’ve seen cross-section ones, where the geometry is actually cut away and anything below the plane is filled with a solid color, but that’s beyond my needs. I’ve also found a Winston Barrier example from the YouTuber “Making Stuff Look Good” that sort of does what I want, but not quite. I’ve also found this example:

Shader "Unlit/Intersection Glow"
{
    Properties
    {
        _Color ("Color", Color) = (1,0,0,1)
    }
    SubShader
    {
        Tags { "RenderType"="Transparent" "Queue"="Transparent" }
        LOD 100
        Blend One One // additive blending for a simple "glow" effect
        Cull Off // render backfaces as well
        ZWrite Off // don't write into the Z-buffer, this effect shouldn't block objects
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
            };

            struct v2f
            {
                float4 screenPos : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _CameraDepthTexture; // set up by Unity when the camera's depthTextureMode includes Depth. Contains the scene's depth buffer
            fixed4 _Color;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.screenPos = ComputeScreenPos(o.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                //Get the distance to the camera from the depth buffer for this point
                float sceneZ = LinearEyeDepth (tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)).r);
                //Actual distance to the camera (ComputeScreenPos keeps the clip-space w, which is the eye depth)
                float fragZ = i.screenPos.w;

                //If the two are similar, then there is an object intersecting with our object
                float factor = 1-step( 0.1, abs(fragZ-sceneZ) );

                return factor * _Color;
            }
            ENDCG
        }
    }
}

But it doesn’t seem to do anything other than make my geometry invisible. I read that I wasn’t supposed to have “ZWrite Off”, so I commented it out, but it still doesn’t do anything other than not render the Unity wireframe lines inside the object with the shader. I’ve got a script that sends out the depth buffer (from the Winston example) attached to my camera, but it doesn’t make the shader work any better.

Any help as to how to best approach the shader would be most appreciated. Thanks. :slight_smile:

A quick description of what those two techniques do and what you need.

The cross section shaders work by calculating a mathematical plane’s position within the shader code and clipping the pixels that are on one side. There are versions that clip with multiple planes, or a sphere, or a box. The key thing is those shapes don’t “exist”, they’re just being manually provided to the shader via a vector (or 4) used to describe the mathematical shape.
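As a rough illustration, the heart of those shaders is usually nothing more than a signed distance test in the fragment or surface function (the property names here are just placeholders):

    // Illustrative snippet, not a complete shader.
    // _PlanePosition and _PlaneNormal are material properties describing the cut plane.
    float signedDist = dot(IN.worldPos - _PlanePosition.xyz, _PlaneNormal.xyz);
    clip(-signedDist); // discards any pixel on the positive side of the plane

Everything else in those shaders is lighting and coloring; the “cut” itself is that one dot product and clip.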

The “intersection glow” or other depth based fades work by reading values from the depth buffer, or more specifically the camera depth texture, which is a “copy” of the depth buffer rendered in a separate pass prior to rendering the scene. The depth buffer, or z buffer, is a real time rendering thing used for sorting opaque geometry and handling depth intersections of transparent objects with opaque geometry. The main thing to understand is that the depth buffer, and by extension the camera depth texture, stores the depth of the closest opaque surface at each pixel, and only that one surface. The shader reads that depth value, compares it against the depth of the currently rendering geometry, and either fades the opacity out or makes it glow, or sometimes both. The other key here is the object using that shader cannot itself be opaque, as then it too would render to the depth buffer and that depth comparison would always be a distance of 0.0.

So, for what you need, the cross section shader is closer to what you’re looking for, but it is obviously somewhat limited to simple shapes, and rendering multiple of these shapes gets difficult if you’re using it on your plane. An alternative would be to use a second camera rendering to a render texture and rendering your objects using a cross section shader, or just using that camera’s near plane. That will only work if your geometry isn’t intersecting and is watertight. Honestly, there are multiple threads on many forums and several blog posts over the years on the best way to get the cross section of arbitrary geometry. This is a much more complex topic than I think you’re expecting.

Thanks for the feedback. I wasn’t aware that writing a shader would be so difficult. The thing is, I already have a working cross-section shader, but I didn’t think it would work because it removes everything above the plane. video detailing what I want it to do:

If there is a way for me to make the cross-section shader not shave off the geometry above the cut-plane, I’d use it. Or any other method. I want to do this in real-time, though, not as a video. Here’s the other half of my project, which has mostly been done:

Except shaving off the geometry above the cut-plane is exactly what you want and need to have happen.

You need a second camera that renders the view for the quad, and only use the cross section shader for that camera’s view. That can be done with replacement shaders, or maybe more simply by having a second set of objects set up and using layers to hide them from the main camera.

Ok. Maybe I’m misunderstanding some fundamentals of how a shader works with a camera. I had no idea I could set a shader for a specific camera. Can I set the camera to ONLY render out the cross-section area, without rendering anything under the cut-plane that isn’t being removed? As shown in the video, I need both the normal 3D view and the cross-section view to render at the same time. I get that I need 2 cameras, but how do I render the empty areas black? Can I just attach a solid black plane of appropriate size slightly below the cut-plane, one that only the cross-section camera sees?

Also, there’s another problem that I think I’ll have if I use the cut-plane. Like in the video, I need a needle to be able to approach from the opposite side of the cut-plane and still have an intersection drawn and rendered to the camera. The shader I have now does not cut both ways, nor does it render only the cross-section of intersecting geometry. Without that, the simulator is sort of useless.

Unless I can have another camera that only renders the needle and somehow composite the two renders onto one display.

Set the second camera’s clear flags to Solid Color. Set color to black.

Specifically you can tell a camera to render everything it sees with a different shader.

Replacement shaders can take some time to get your head around, and for your use case I’m not sure if it’s the best option. I would recommend using layers and the camera’s culling mask.

Ultimately the way I would use the cross section shader is to not use the cross section plane at all. This might seem weird, but just have the object fully intact and use the second camera’s near clip plane to “slice” into it. The only part of the cross section shader you really need is the fact it renders the interior faces a different color than the external faces.
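To sketch what I mean, the only shader feature that matters is the face coloring. Something along these lines (illustrative only, using legacy fixed-function ShaderLab commands and a made-up shader name, so treat it as a starting point rather than a finished shader):

    Shader "Unlit/InteriorWhite" // hypothetical name
    {
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            // Exterior (front) faces: solid black
            Pass
            {
                Cull Back
                Color (0,0,0,1)
            }
            // Interior (back) faces: solid white
            Pass
            {
                Cull Front
                Color (1,1,1,1)
            }
        }
    }

With the second camera poked up against or inside the mesh, the camera’s near clip plane does the slicing for free; wherever you can see into the object you get the white interior faces.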

Try this.

  • Make a second camera.
  • Set the Clear Flags to Solid Color and set the Background Color to black.
  • Create a new material, set shader to Unlit/Texture.
  • Create a new render texture asset. (Right click in the project view, create new > render texture)
  • Set render texture to the Target Texture of the camera and the texture of the material.
  • Apply the material to your plane you want the intersections to be visible on.

Now you can move that camera around and see what it sees on the plane.

  • Click on Layers > Edit Layers in the top right.
  • Add a new layer called “Cross Section”.
  • Select your main camera and under Culling Mask disable “Cross Section”.
  • Select your second camera and under Culling Mask set it to “Nothing” then enable “Cross Section”.

Now it’ll just be showing black.

  • Make a copy of your vein game objects.
  • Set their material to be the cross section material, make sure the cut plane is way off to the side. Also make sure the exterior is black and interior is white.
  • On those game objects set their Layer to “Cross Section”.

Now move the camera to be right up next to those veins. You could even have the camera be a child of the ultrasound transducer game object. The key is having the camera’s near plane line up with the transducer’s intersection plane.

I will give that a try, thanks. :slight_smile:

Is there a good resource, online or textbook, that lists all the functions that the Cg language uses? I’m trying to understand how the shaders work, but I’m stumbling in the dark. The Unity resource website doesn’t seem to list them.

I found this: http://developer.download.nvidia.com/cg/Cg_3.1/Cg-3.1_April2012_ReferenceManual.pdf

Not sure if it’s still relevant, though, seeing as it’s from 5 years ago.

There are a couple of links in this forum if you search for them. It is important to understand that Unity does not really use Cg anymore. It’s HLSL with a few Cg-isms left over as a product of legacy code (fragment shader vs pixel shader for example) and backwards compatibility (some functions and #defines in the .cginc files, also “cginc” files).

Most of the time HLSL and Cg are identical, but there are a handful of functions that are slightly different in their use, and that only exist in one or the other.

I tried the suggestion you gave earlier, and it’s not working the way I thought it would. The cross section isn’t rendered on the cutting plane, but rather on the model, so making the second camera only see the plane just renders a blank plane with nothing on it. I’m thinking about two solutions, neither of which I know how to do.

  1. I can make the shader slice off everything except the cross section, put this on a duplicate set of models that only the second camera sees, and render that camera using an orthographic view. I haven’t figured out how to make it slice both above and below the plane, though. I’ve tinkered with the shader a lot and have a decent idea of which part of the code is doing what, but I don’t understand it well enough to write a section myself. Below is the shader code that I modified from the “Cross Section” Unity asset from the Asset Store.
Shader "CrossSection/OnePlaneBSP" 
    {
    Properties
    {
        _Color("Color", Color) = (1,1,1,1)
        _CrossColor("Cross Section Color", Color) = (1,1,1,1)
        _MainTex("Albedo (RGB)", 2D) = "white" {}
        _Glossiness("Smoothness", Range(0,1)) = 0.5
        _Metallic("Metallic", Range(0,1)) = 0.0
        _PlaneNormal("PlaneNormal",Vector) = (0,1,0,0)
        _PlanePosition("PlanePosition",Vector) = (0,0,0,1)
        _StencilMask("Stencil Mask", Range(0, 255)) = 255
    }

    SubShader 
    {
        Tags { "RenderType"="Opaque" }

        Stencil
        {
            Ref [_StencilMask]
            CompBack Always
            PassBack Replace

            CompFront Always
            PassFront Zero
        }

        Cull Back

        CGPROGRAM //Renders the sections below the plane.
        #pragma surface surf Standard fullforwardshadows
        #pragma target 3.0

        sampler2D _MainTex;

        struct Input 
        {
            float2 uv_MainTex; // UVs for the albedo texture
            float3 worldPos;
        };

        half _Glossiness;
        half _Metallic;
        fixed4 _Color;
        fixed4 _CrossColor;
        fixed3 _PlaneNormal;
        fixed3 _PlanePosition;

        bool checkVisability(fixed3 worldPos)
        {
            float dotProd1 = dot(worldPos - _PlanePosition, _PlaneNormal);
            return dotProd1 > 0;
        }

        void surf(Input IN, inout SurfaceOutputStandard o) 
        {
            if (checkVisability(IN.worldPos))discard;
            fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = c.rgb;       
            o.Metallic = _Metallic;
            o.Smoothness = _Glossiness;
            o.Alpha = c.a;
        }
        ENDCG
       
        Cull Front

        CGPROGRAM //Renders cross section and cuts off geometry above the plane.
        #pragma surface surf NoLighting noambient

        struct Input 
        {
            half2 uv_MainTex;
            float3 worldPos;
        };

        sampler2D _MainTex;
        fixed4 _Color;
        fixed4 _CrossColor;
        fixed3 _PlaneNormal;
        fixed3 _PlanePosition;

        bool checkVisability(fixed3 worldPos)
        {
            float dotProd1 = dot(worldPos - _PlanePosition, _PlaneNormal);
            return dotProd1 > 0;
        }

        fixed4 LightingNoLighting(SurfaceOutput s, fixed3 lightDir, fixed atten)
        {
            fixed4 c;
            c.rgb = s.Albedo;
            c.a = s.Alpha;
            return c;
        }

        void surf(Input IN, inout SurfaceOutput o)
        {
            if (checkVisability(IN.worldPos)) discard;
            o.Albedo = _CrossColor; //render out the cross section
        }
        ENDCG
       
    }
    //FallBack "Diffuse"
}

As I mentioned, I can’t find a list of functions for HLSL or Cg. I’ve no idea what functions like LightingNoLighting or surf do, or what the parameters mean. Nor do I fully understand the syntax. If you could show how I could get the shader to clip both above and below, that would be great. Or I could use the easier method of just putting a solid black plane slightly below the clipping plane.

A second method I thought about was making a screen shader that takes the normal render of the second camera and only renders out the cross section based on its color. The cross section’s default color is white. If I could write a shader that renders only the white parts, and makes everything else black, I would still get what I want. I don’t know what terminology to google, though. I’ve tried googling “making camera only render white” and found no examples. Any ideas on what terminology I should search for?

Because that’s not anything specific to HLSL (or Cg), that’s Unity’s ShaderLab Surface Shaders.

It’s written in HLSL, but the #pragma surface line tells Unity to use a shader generator to construct the full vertex / fragment shader. Select the shader in Unity and click on the Show Generated Code button in the Inspector to see the full generated shader code. There’s also a lot of functionality abstracted away in various .cginc files. It’s all calling HLSL code in the end.

So are things like “LightingNoLighting” user-defined by the coder, or something that’s predefined by ShaderLab? I’m just looking for a list. If it’s just something the original coder wrote, then I guess I fundamentally misunderstood the shader code.

LightingNoLighting would be something defined by that shader … and also should never ever be used, as it’s a sign of someone not knowing what they’re doing. If you’re using a Surface Shader and don’t want lighting, you should be using a vertex fragment shader instead. Surface Shaders are explicitly for generating the passes Unity needs to interface with its lighting system.
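As an aside, since you asked about clipping both above and below the plane: with the shader you posted, that’s just a matter of testing the absolute distance to the plane instead of the signed one. Something along these lines (illustrative; _SliceThickness is a new float property you’d have to declare yourself):

    // Keep only pixels within half a slab thickness of the cut plane.
    bool checkVisability(fixed3 worldPos)
    {
        float signedDist = dot(worldPos - _PlanePosition, _PlaneNormal);
        // returning true means "discard", matching how the shader above calls this
        return abs(signedDist) > _SliceThickness * 0.5;
    }

Drop that in place of the existing checkVisability in both CGPROGRAM blocks and the shader keeps only a thin slab straddling the plane, so a needle approaching from either side will show up.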

Got this far today. Managed to get the cross section and color-filter shaders working at the most basic level. Still trying to make the cross-section shader clip both sides off so I won’t need a 3rd shader for the needle. But this is good enough for right now.


Just out of curiosity, can the different passes in a shader have different render queues from one another?