Program a shader that only displays the visible edges

Hey everybody,

For a current project I have to program a shader that returns only the visible edges of an object as a wireframe model. My idea was to create a wireframe model of my object and then delete all edges between triangles that share the same normal vector. Since I’m a total beginner in programming, I don’t know whether this is the right approach or how to implement it in C#.

I am grateful for any help

There are a lot of threads on edge finding / wireframe rendering in the forums. The most common solution is to use the camera depth-normals texture to do edge detection as a post process, but that doesn’t work if you want your objects to be transparent, which it sounds like you do.

The next most common solution highlights all triangle edges using barycentric coordinates, either using a geometry shader to extract them or by prebaking the data into vertex colors / UVs, but again that doesn’t really help you.
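To illustrate the prebaking variant: the trick is to “explode” the indexed triangle list so that no vertices are shared, letting each corner of each triangle carry one of the three barycentric corner values (e.g. in vertex colors). This is a language-agnostic sketch in Python; in Unity the same loop would run in C# and the result would go into `mesh.colors`.

```python
# One barycentric corner value per triangle corner.
BARY = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

def bake_barycentrics(vertices, triangles):
    """vertices: list of positions; triangles: flat index list (len % 3 == 0).
    Returns (new_vertices, new_colors, new_triangles) with no shared vertices,
    so the interpolated 'color' in the fragment stage is a barycentric coord."""
    new_vertices, new_colors, new_triangles = [], [], []
    for tri_start in range(0, len(triangles), 3):
        for corner in range(3):
            new_vertices.append(vertices[triangles[tri_start + corner]])
            new_colors.append(BARY[corner])
            new_triangles.append(len(new_vertices) - 1)
    return new_vertices, new_colors, new_triangles
```

The cost is more vertices (three per triangle, no reuse), which is why the geometry-shader version below is often preferred when `target 5.0` hardware is available.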

The solution I came up with for a game I worked on was to generate a texture storing a signed distance field (SDF) to the closest edge, and to use that to render anti-aliased wireframes. This worked because the meshes were already auto-UVed with a constant texel size, split on a low surface-angle threshold. I exported those UV sheets into Photoshop, applied an inner & outer glow layer style, and then used the resulting texture in game. It’d be possible to do something similar inside Unity if you can’t pre-bake the content, but it might be slow.
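The Photoshop glow step above is essentially a distance transform. As a rough, brute-force sketch of what gets baked (in Python for illustration only; a real bake would use a chamfer or jump-flood pass over the UV sheet):

```python
import math

# For every texel, compute the distance to the closest texel marked as an
# edge. Baking this field into a texture lets a shader draw anti-aliased
# lines with a single smoothstep on the sampled distance.
# Brute force O(texels * edges) -- fine for a sketch, slow for real bakes.

def distance_to_edges(width, height, edge_texels):
    field = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            field[y][x] = min(
                math.hypot(x - ex, y - ey) for ex, ey in edge_texels)
    return field
```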

Really, your idea of iterating over the surface geometry in C# is fine. Note that, because of the way Unity’s meshes are stored, a mesh is already broken up into “islands”: any change in normal across faces causes shared vertices to be split. Vertices are also split across UV and vertex-color changes, but as long as those line up with the hard normal edges, it should be fairly easy to find the polygon edges to draw.

Just deleting the interior triangles will leave you with polygon soup, or with no edges to render at all, since GPUs can really only render triangles, not arbitrary-sided polygons. So you’d have to construct some kind of new mesh that contains just the edges, or use something like Vectrosity to render the edges out as lines manually.
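A minimal sketch of that pre-processing step, written in Python purely to show the logic (in Unity this would be C# over `mesh.vertices` / `mesh.triangles`, and you’d first have to weld the split vertices mentioned above so coincident positions share an index): map each undirected edge to the triangles that use it, then keep only edges whose adjacent face normals differ, plus boundary edges with a single face.

```python
import math

def face_normal(a, b, c):
    # Normal of triangle (a, b, c) via the cross product of two edges.
    ux, uy, uz = (b[i] - a[i] for i in range(3))
    vx, vy, vz = (c[i] - a[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    length = math.sqrt(sum(x * x for x in n)) or 1.0
    return tuple(x / length for x in n)

def hard_edges(vertices, triangles, angle_threshold_deg=1.0):
    """Return the index pairs of edges worth drawing: boundary edges and
    edges where the two adjacent face normals differ by more than the
    threshold angle. The pairs could feed a MeshTopology.Lines mesh."""
    cos_limit = math.cos(math.radians(angle_threshold_deg))
    edge_faces = {}
    for t in range(0, len(triangles), 3):
        i0, i1, i2 = triangles[t:t + 3]
        normal = face_normal(vertices[i0], vertices[i1], vertices[i2])
        for a, b in ((i0, i1), (i1, i2), (i2, i0)):
            edge_faces.setdefault((min(a, b), max(a, b)), []).append(normal)
    result = []
    for edge, normals in edge_faces.items():
        if len(normals) == 1:  # boundary edge: only one face, always keep
            result.append(edge)
        else:
            dot = sum(x * y for x, y in zip(normals[0], normals[1]))
            if dot < cos_limit:  # faces bend across this edge: keep it
                result.append(edge)
    return sorted(result)
```

On a flat quad this drops the shared diagonal and keeps the four outer edges, which is exactly the behavior the original poster described.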

A geometry shader is my favorite option:

Shader "Wireframe (Geometry Shader)"
{
    SubShader
    {
        Tags { "RenderType" = "Transparent" "Queue" = "Transparent" }
        Pass
        {
            Blend SrcAlpha OneMinusSrcAlpha
            CGPROGRAM
            #pragma vertex VSMain
            #pragma geometry GSMain
            #pragma fragment PSMain
            #pragma target 5.0

            struct Data
            {
                float4 vertex : SV_Position;
                float2 barycentric : BARYCENTRIC;
            };

            void VSMain(inout float4 vertex:POSITION) { }

            [maxvertexcount(3)]
            void GSMain( triangle float4 patch[3]:SV_Position, inout TriangleStream<Data> stream)
            {
                Data GS;
                for (uint i = 0; i < 3; i++)
                {
                    GS.vertex = UnityObjectToClipPos(patch[i]);
                    GS.barycentric = float2(fmod(i,2.0), step(2.0,i));
                    stream.Append(GS);
                }
                stream.RestartStrip();
            }

            float4 PSMain(Data PS) : SV_Target
            {
                float3 coord = float3(PS.barycentric, 1.0 - PS.barycentric.x - PS.barycentric.y);
                coord = smoothstep(fwidth(coord)*0.1, fwidth(coord)*0.1 + fwidth(coord), coord);
                return float4(0..xxx, 1.0 - min(coord.x, min(coord.y, coord.z)));
            }
            ENDCG
        }
    }
}


First of all, thanks to both of you for the quick response.

Since my application has to run on the Microsoft HoloLens, I would prefer the geometry-shader solution. Also, the edges between the triangles don’t have to be deleted; it is sufficient to color them black.

@Przemyslaw_Zaworski
Thanks for your example, but all triangles are still present here. Is it possible to use this example as a basis to compare the normal vectors of neighboring triangles and, if the difference is below a threshold, color them black?
I have already tried to find a solution for this, but without success.

No. Geometry shaders are explicitly limited to information about a single triangle at a time. You’ll find examples in academic papers of geometry shaders accessing data from neighboring triangles via adjacency information, but that isn’t something you can do in Unity, or in any other real-time game engine I know of. Adjacency information simply isn’t used outside of academia because it’s far too slow to use generically.

So instead you need to pre-process the meshes in C#, or possibly in a compute shader (which still requires processing the mesh in C# to convert it into a form a compute shader can use). And if you’re going to pre-process the mesh, there’s no real reason to use a geometry shader anymore, since any data you’d get from one can be pre-baked into the mesh.

Geometry shader, basic approach: generate triangle normals, blur the image, and use screen-space derivatives to render contours.
Black aliased edges and visualised normals; the code still needs improvement:

Shader "Wireframe with Silhouette Outline (Geometry Shader)"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex VSMain
            #pragma geometry GSMain
            #pragma fragment PSMain
            #pragma target 5.0

            struct Data
            {
                float4 vertex : SV_Position;
                float3 normal : NORMAL;
            };

            float3 GenerateNormal(float3 v1, float3 v2, float3 v3)
            {
                return normalize(cross(v2 - v1, v3 - v1));
            }

            void VSMain(inout float4 vertex:POSITION) { }

            [maxvertexcount(3)]
            void GSMain( triangle float4 patch[3]:SV_Position, inout TriangleStream<Data> stream )
            {
                Data GS;
                float3 trianglenormal = GenerateNormal(patch[0].xyz, patch[1].xyz, patch[2].xyz);               
                for (uint i = 0; i < 3; i++)
                {
                    GS.vertex = UnityObjectToClipPos(patch[i]);
                    GS.normal = trianglenormal;
                    stream.Append(GS);
                }
                stream.RestartStrip();
            }

            float4 PSMain(Data PS) : SV_Target
            {
                return float4(PS.normal, 1.0);
            }
            ENDCG
        }
       
        GrabPass {"_BackgroundTexture"}
       
        Pass
        {
            CGPROGRAM
            #pragma vertex VSMain
            #pragma geometry GSMain
            #pragma fragment PSMain
            #pragma target 5.0

            sampler2D _BackgroundTexture;
            float4 _BackgroundTexture_TexelSize;
           
            struct Data { float4 vertex : SV_Position; };

            void VSMain(inout float4 vertex:POSITION) { }

            [maxvertexcount(3)]
            void GSMain( triangle float4 patch[3]:SV_Position, inout TriangleStream<Data> stream )
            {
                Data GS;
                for (uint i = 0; i < 3; i++)
                {
                    GS.vertex = UnityObjectToClipPos(patch[i]);
                    stream.Append(GS);
                }
                stream.RestartStrip();
            }

            float3 blur(float2 uv, float radius)
            {
                float2x2 m = float2x2(-0.736717, 0.6762, -0.6762, -0.736717);
                float3 total = 0..xxx;
                float2 texel = float2(0.002*_BackgroundTexture_TexelSize.z/_BackgroundTexture_TexelSize.w, 0.002);
                float2 angle = float2(0.0,radius);
                radius = 1.0;
                for (int j=0; j<64; j++)
                {
                    radius += rcp(radius);
                    angle = mul(angle, m);
                    float3 color = tex2D(_BackgroundTexture, uv+texel*(radius-1.0)*angle).rgb;
                    total += color;
                }
                return total/64.0;
            }
           
            float4 PSMain(Data PS) : SV_Target
            {
                float3 color = blur(PS.vertex.xy/_ScreenParams.xy, 0.05);
                float3 value = smoothstep(0.,50., abs(color)/fwidth(color));
                return float4(min(min(value.x, min(value.y, value.z)).xxx , abs(color)), 1.0) ;
            }
            ENDCG
        }
    }
}

HoloLens was mentioned earlier. AFAIK grab pass has problems on HoloLens, potentially with only one eye working properly. It’s also not great for performance.

Technically the above method is the same idea as the post-processing method I mentioned earlier, just more expensive. I’m also not sure why it’s using a geometry shader; it seems like overkill when you’re just outputting the triangle as-is, or doing flat shading (which I don’t think the OP was looking for).

If you’re going to go with the above approach of detecting normal edges, you’re better off manually rendering your object to a render texture and doing edge detection on that, rather than using a grab pass. A grab pass is nice because it requires no scripting, but its performance impact, especially on mobile hardware, is not insignificant.
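The edge-detection pass itself is simple: render world-space normals into the texture, then flag a pixel wherever a neighbouring normal deviates beyond a threshold. In a shader this is a few `tex2D` taps in the fragment stage; here is the same logic in plain Python over a 2-D list of normal tuples, purely for illustration:

```python
def detect_normal_edges(normals, threshold=0.95):
    """normals: 2-D list of unit normals; returns 2-D list of 0/1 edge flags.
    A pixel and its right/down neighbour are both flagged when the dot
    product of their normals falls below the threshold."""
    h, w = len(normals), len(normals[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    dot = sum(a * b for a, b in zip(normals[y][x], normals[ny][nx]))
                    if dot < threshold:  # normals disagree -> edge here
                        edges[y][x] = edges[ny][nx] = 1
    return edges
```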

I’m also working on a HoloLens project, and this thread has been super helpful. I would like to reproduce the geometry shader demonstrated above, but instead of displaying a wireframe, just render my main texture with a transparent alpha channel.

Please excuse my lack of experience with shader languages. This would take place in the fragment shader?

Hi everyone,
I am new to Unity. This thread contains very useful information. I’m trying to extract the texture map of the visible-edge geometry shader. Is there any possible way to achieve this?

Could you let us in the thread know what you have tried so far?

Hi,
I wanted to extract the visible vertices from a given camera view. I think that, using the visible-edge geometry shader, if it’s possible to extract the UV texture map of the visible edges, then I would be able to get the visible vertices from that. Or is there another way to achieve this?

I have an edge shader atm that shows the 3 lines of a triangle. I tried the shader code here, trying to get the quad-outline look, but it didn’t work (no outline was shown).

Edit:
Ended up getting it working, forgot to post here, but posted code on twitter:
https://twitter.com/deusxyz/status/1415961879684423680