Optimal way of rendering lines

I am working on procedural generation. I generate a bunch of points with different heights, and I create a mesh out of them. Then, using a wireframe shader I make the mesh look like this:

My plan is to run this from a Raspberry Pi 3 using WebGL (so performance plays a big role in this). The problem is that WebGL doesn’t seem to support the wireframe shader. What should I do?
Is there a way to make a wireframe shader that can run in WebGL?
Should I render lines instead of making a mesh with a shader? How would I do that?

Thanks in advance.

Hello,

Maybe you can take a look at Unity - Scripting API: MeshTopology.
Calling Mesh.SetIndices(int[] indices, MeshTopology.Lines, 0) renders the mesh as line segments.
It's the fastest way, if I remember correctly.


One option might be to use Graphics.DrawProcedural. There are examples out there how to use it.

EDIT: just noticed I missed that you are going to use Raspberry Pi. Then it’s probably best to stick with something that generates meshes on cpu side (afaik).

The problem with using MeshTopology.Lines to draw lines is that some hardware doesn't support it, or is very, very slow at rendering lines. You may also need to modify the geometry to draw the lines properly, as MeshTopology.Lines doesn't mean it draws every edge of a triangle. Instead, each pair of vertex indices is one line segment, so your usual triangle vertex indices only draw two sides of each triangle. This means you'll have to manually generate your index buffer to ensure you get every edge of the original triangle mesh.
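As a rough C# sketch of that index-buffer conversion (the helper name is mine, not a Unity API; note that edges shared by two triangles will be drawn twice unless you deduplicate them):

```csharp
// Hypothetical helper: expand a triangle index buffer into a
// MeshTopology.Lines index buffer, one pair of indices per edge.
int[] BuildLineIndices(int[] triangles)
{
    int[] lines = new int[triangles.Length * 2]; // 3 edges = 6 indices per triangle
    for (int t = 0; t < triangles.Length; t += 3)
    {
        int i = t * 2;
        lines[i + 0] = triangles[t + 0]; lines[i + 1] = triangles[t + 1]; // edge 0-1
        lines[i + 2] = triangles[t + 1]; lines[i + 3] = triangles[t + 2]; // edge 1-2
        lines[i + 4] = triangles[t + 2]; lines[i + 5] = triangles[t + 0]; // edge 2-0
    }
    return lines;
}

// Usage:
// mesh.SetIndices(BuildLineIndices(mesh.triangles), MeshTopology.Lines, 0);
```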

Expanding the geometry on the CPU works, but is expensive. Especially on a mesh as complex as this, and the device you’re targeting. I highly recommend against it.

The problem with the UCLA line shader is that it uses a geometry shader to write the barycentric coordinates into vertex colors, which the fragment shader then uses to calculate the lines. The actual line rendering code using barycentric coordinates is super cheap and works on just about any hardware that exists, but you'd need to bake the barycentric coordinates into the mesh's vertex data rather than calculating them with a geometry shader. This is easier than it sounds, and really just requires you to set alternating red, blue, and green (or black) vertex colors.
[Image: upload_2019-10-14_12-47-5.png]
That’s really all the geometry shader is doing.
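A minimal C# sketch of that baking step, assuming the mesh uses unshared vertices (three per triangle) so a simple repeating pattern lines up with the triangle order:

```csharp
// Sketch: bake barycentric coordinates as alternating vertex colors.
// Assumes vertices are NOT shared between triangles.
Color32[] colors = new Color32[mesh.vertexCount];
Color32[] cycle =
{
    new Color32(255, 0, 0, 255), // red   -> barycentric (1, 0, 0)
    new Color32(0, 255, 0, 255), // green -> barycentric (0, 1, 0)
    new Color32(0, 0, 255, 255), // blue  -> barycentric (0, 0, 1)
};
for (int i = 0; i < colors.Length; i++)
    colors[i] = cycle[i % 3];
mesh.colors32 = colors;
```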


First, thanks for taking the time to respond.
Could you please elaborate on how the barycentric coordinate system works?
I don’t fully understand what the colors are doing. Is the barycenter calculated based on a point in the gradient?

And where should I write the code to color the vertices?
A C# script, or a shader? I’ve never worked with shaders before.

Thanks

A barycentric coordinate is a point's position inside a triangle, expressed relative to the triangle's vertices.

It’s also the relative distance to each of the triangle’s edges, which is what those wireframe shaders work off of.
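For illustration only (the shader never computes this explicitly; the GPU's vertex-color interpolation produces it for free), here's a C# sketch of how barycentric coordinates could be computed on the CPU:

```csharp
using UnityEngine;

// Illustrative sketch: barycentric coordinates (u, v, w) of point p inside
// triangle (a, b, c). Each component is 1 at its own vertex, falls to 0 at
// the opposite edge, and u + v + w == 1.
static class BarycentricDemo
{
    static float Cross(Vector2 lhs, Vector2 rhs) => lhs.x * rhs.y - lhs.y * rhs.x;

    public static Vector3 Barycentric(Vector2 p, Vector2 a, Vector2 b, Vector2 c)
    {
        float area = Cross(b - a, c - a);     // 2x signed area of the triangle
        float u = Cross(b - p, c - p) / area; // weight toward vertex a
        float v = Cross(c - p, a - p) / area; // weight toward vertex b
        return new Vector3(u, v, 1f - u - v);
    }
}
```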

To color the vertices you need to do it in C# when generating the mesh. The geometry shader method is the generic way to do it from a shader, but that requires OpenGL ES 3.1 or Direct3D 11.

By having the vertex colors alternating, and then doing

float grad = min( i.color.r, min( i.color.g, i.color.b));

you have this:

[Image: Screenshot 2019-10-15 at 11.41.00.png]

And from that, you can use SmoothStep, or step, or fwidth trickery to generate a wireframe.

(just look into the frag part of the UCLA shader)


Thanks for your answers.

I’ve looked into barycentric coordinates and I think I understand them, but I have no idea what any of the formulas mean (I’m still 16, so I don’t know what all the symbols mean).
If I’m understanding this correctly, the UCLA shader has three main parts: vertex, geometry and fragment. We need to eliminate the geometry part and bake the barycentric coords into the mesh’s data (1. how do I do that?).
Then we need to read that baked data somewhere in the shader (2. where, and how?).

Finally we use what @AcidArrow said inside the frag method (which I don’t understand fully):

(3. I don’t really understand what do these functions mean)

I’m sorry if I am asking too many questions or if they are too basic, but I’d like to understand how it works.
Thank you so much.

@MateoPeri in response to your 3:

SmoothStep returns a smoothly interpolated value between 0 and 1 as x moves through a range you define (min and max); outside that range it clamps to 0 or 1.
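For reference, smoothstep can be written out in C# like this (a sketch of the standard Hermite formulation used by the HLSL intrinsic):

```csharp
// Returns 0 for x <= min, 1 for x >= max, and eases smoothly in between.
float SmoothStep(float min, float max, float x)
{
    float t = Mathf.Clamp01((x - min) / (max - min));
    return t * t * (3f - 2f * t); // Hermite curve: zero slope at both ends
}
```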

These probably won’t explain everything, but here are a few links:
HLSL Reference manual:

Book of Shaders explains SmoothStep in its own way. I’m not sure it’s the clearest explanation, but at least it’s visual (it helped me a lot, and you can play with the code there):

You can use fwidth to generate antialiased lines, etc. I found the following discussions useful for understanding what it does:

https://computergraphics.stackexchange.com/questions/61/what-is-fwidth-and-how-does-it-work

Here’s a practical example of one use case of fwidth:

When you’re generating the vertex points used to generate your mesh, you also need to generate a colors32 list. Your mesh is presumably already a nice grid like the example image, in which case all you need to do is figure out the grid position to know what color to use for that position. If it’s a more complicated mesh, well, that gets harder.
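A sketch of that grid-position trick in C#, assuming each quad is split along the (x, y) to (x + 1, y + 1) diagonal; in that case (x + y) % 3 gives every triangle three distinct color indices, while the other diagonal needs (x + 2 * y) % 3 instead. Verify against your own triangulation.

```csharp
// Sketch for a shared-vertex grid mesh (width x height vertices):
// pick each vertex's color from its grid position.
Color32[] colors = new Color32[width * height];
Color32[] cycle =
{
    new Color32(255, 0, 0, 255),
    new Color32(0, 255, 0, 255),
    new Color32(0, 0, 255, 255),
};
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        colors[y * width + x] = cycle[(x + y) % 3];
mesh.colors32 = colors;
```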

Removing the geometry shader stage is as easy as removing the line #pragma geometry geoOrWhatever. Of course the shader probably won’t work anymore after that, but the geometry shader is effectively gone and the code for it will go unused. But obviously more work has to be done to get the shader to work again and to pass the vertex colors from the vertex function to the fragment function.

The vertex colors are easily accessible in the vertex shader as part of the appdata, and can be trivially passed to the fragment shader unmodified. The bigger problem is the UCLA shader is doing some other fancy stuff in the geometry shader to help make the line width consistent. I would look at this Catlike Coding tutorial on wireframe rendering which uses that fwidth function @Olmi mentioned:
https://catlikecoding.com/unity/tutorials/advanced-rendering/flat-and-wireframe-shading/

Really I think the big sticking point here is you’re going to need to learn how to write shaders, at least on a basic level. Catlike coding has a ton of stuff, though I tend to direct people to here first:
https://www.alanzucconi.com/2015/06/10/a-gentle-introduction-to-shaders-in-unity3d/

Alternatively, there are assets on the store that do wireframe shading and include utilities to set up your mesh for you, like this:
https://assetstore.unity.com/packages/vfx/shaders/wireframe-shader-the-amazing-wireframe-shader-18794

Thanks for those links, I read the tutorials and now I have a better understanding of how shaders work. I did the alternating vertex colors bit and wrote a surface shader to check if it was working:

So, I’ve managed to set the vertex colors, but how do I obtain the barycentric coords? In the UCLA shader, they use a distance value:
float val = min(input.dist.x, min(input.dist.y, input.dist.z));
But I don’t have a distance value. I tried @AcidArrow's code and added a smoothstep function:
val = smoothstep(0, 0.1, val);
But that just renders a flat color.
I'm failing to understand how to 'convert' the vertex color data into barycentric coordinates (they call this value 'dist' in the UCLA shader).

Thanks in advance.

What’s the rest of the frag shader?

The interpolated vertex color you’re rendering is the barycentric coordinate. No conversion is necessary.

The dist value that the UCLA shader is outputting isn’t the barycentric coordinate, but rather the barycentric coordinate pre-scaled by the screen space area of the triangle. However, @AcidArrow's example code should work and produce the kind of blurry triangle shapes in the example image above, and you should be able to use the code shown in the Catlike Coding tutorial, which uses fwidth() to scale those barycentric coordinates much like the UCLA shader does, to keep the line widths constant in screen space.

The rest of the frag shader:

float4 UCLAGL_frag(v2g input) : COLOR
{
    float grad = min(input.color.r, min(input.color.g, input.color.b));
    grad = smoothstep(0, 0.1, grad);
    // This is the rest of the UCLA shader, left unchanged. It shouldn't affect the outcome, as it only modifies the color.
    //blend between the lines and the negative space to give illusion of anti aliasing
    float4 targetColor = _Color * tex2D( _MainTex, input.uv);
    float4 transCol = _Color * tex2D( _MainTex, input.uv);
    transCol.a = 0;
    return grad * targetColor + ( 1 - grad ) * transCol;
}

Ok so first, I need to find a way to render the blurry triangles, then follow the Catlike Coding tutorial to scale the coordinate and make the line width consistent. But I can’t get the blurry triangles to show…
Any ideas?

Thanks.

What happens if you add:
return float4(grad, grad, grad, 1.0);
at line 3 before the smoothstep?

Flat white. Does that mean that grad is 0??

No, it means it’s 1. You are probably not getting the vertex colors right; maybe post the whole shader. Are you passing the vertex colors from vert to frag?

Yes, I am passing them from vert to frag, unchanged.
Both files:

Shader Functions

// Upgrade NOTE: replaced 'mul(UNITY_MATRIX_MVP,*)' with 'UnityObjectToClipPos(*)'

//Algorithms and shaders based on code from this journal
//http://cgg-journal.com/2008-2/06/index.html

#ifndef UCLA_GAMELAB_WIREFRAME
#define UCLA_GAMELAB_WIREFRAME

#include "UnityCG.cginc"

// DATA STRUCTURES //
// Vertex to Geometry
struct v2g
{
    float4    pos        : POSITION;        // vertex position
    float2  uv        : TEXCOORD0;    // vertex uv coordinate
    float4  color    : COLOR;        // VERTEX COLOR
};

// PARAMETERS //

//float4 _Texture_ST;        // For the Main Tex UV transform
float _Thickness = 1;        // Thickness of the wireframe line rendering
float4 _Color = {1,1,1,1};    // Color of the line
float4 _MainTex_ST;            // For the Main Tex UV transform
sampler2D _MainTex;            // Texture used for the line

// SHADER PROGRAMS //
// Vertex Shader
v2g UCLAGL_vert(appdata_full v)
{
    v2g output;
    output.pos =  UnityObjectToClipPos(v.vertex);
    output.uv = TRANSFORM_TEX (v.texcoord, _MainTex);//v.texcoord;
    output.color = v.color;

    return output;
}

// Fragment Shader
float4 UCLAGL_frag(v2g input) : COLOR
{
    /* OLD CODE
    * // find the smallest distance
    * float val = min( input.dist.x, min( input.dist.y, input.dist.z));
    * // Calculate power to 2 to thin the line
    * grad = exp2( -1/_Thickness * grad * grad );
    * blend between the lines and the negative space to give illusion of anti aliasing
    */

    float grad = min(input.color.r, min(input.color.g, input.color.b));
    //return float4(grad, grad, grad, 1.0); // returns white
    grad = smoothstep(0, 0.1, grad);

    /* CATLIKE CODING
    float3 barys = float3(input.color.r, input.color.g, input.color.b);
    float3 deltas = fwidth(barys);
    barys = smoothstep(deltas, 2 * deltas, barys);
    float minBary = min(barys.x, min(barys.y, barys.z));
    */
      
    float4 targetColor = _Color * tex2D( _MainTex, input.uv);
    float4 transCol = _Color * tex2D( _MainTex, input.uv);
    transCol.a = 0;
    return grad * targetColor + ( 1 - grad ) * transCol;
}


#endif

Wireframe Shader

Shader "UCLA Game Lab/Wireframe/Single-Sided"
{
    Properties
    {
        _Color ("Line Color", Color) = (1,1,1,1)
        _MainTex ("Main Texture", 2D) = "white" {}
        _Thickness ("Thickness", Float) = 1
    }

    SubShader
    {
        Pass
        {
            Tags { "RenderType"="Transparent" "Queue"="Transparent" }

            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off
            LOD 200
          
            CGPROGRAM
                #pragma target 5.0
                #include "UnityCG.cginc"
                #include "/UCLA Wireframe Functions.cginc"
                #pragma vertex vert
                #pragma fragment frag

                // Vertex Shader
                v2g vert(appdata_full v)
                {
                    return UCLAGL_vert(v);
                }
              
                // Fragment Shader
                float4 frag(v2g input) : COLOR
                {  
                    return UCLAGL_frag(input);
                }
          
            ENDCG
        }
    }
}

I mean, I’m not entirely sure why the modifications aren’t working. And honestly there’s not a ton of value in trying to modify the UCLA shader, since most of the code exists to deal with the geometry shader. I’d say just go back to your previously working surface shader and start from there, even though I’m not a fan of using surface shaders for unlit stuff (assuming you want this to be an unlit shader).

Actually, bah, here’s the entire shader:

Shader "Wireframe using Baked Barycentric Coordinates"
{
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _Width ("Line Width (Pixels)", Range(1, 50)) = 2
    }
    SubShader {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
        Cull Off ZWrite Off

        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_Position;
                float3 coord : TEXCOORD0;
            };

            half4 _Color;
            float _Width;

            void vert (appdata_full v
                , out v2f o
                // , uint vid : SV_VertexID
                )
            {
                o.pos = UnityObjectToClipPos(v.vertex);
                o.coord = v.color.xyz;

                // hack to get barycentric coords on the default plane mesh
                // vid += (uint)round(v.vertex.z + 1000);
                // uint colIndex = vid % 3;
                // o.coord = float3(colIndex == 0, colIndex == 1, colIndex == 2);
            }

            half4 frag (v2f i) : SV_Target
            {
                float3 coordScale = fwidth(i.coord);

                // more accurate alternative to fwidth
                // float3 coordScale = sqrt(pow(ddx(i.coord), 2) + pow(ddy(i.coord), 2));

                float3 scaledCoord = i.coord / coordScale;
                float dist = min(scaledCoord.x, min(scaledCoord.y, scaledCoord.z));
                float halfWidth = _Width * 0.5;
                float wire = smoothstep(halfWidth + 0.5, halfWidth - 0.5, dist);

                return half4(_Color.rgb, _Color.a * wire);
            }
            ENDCG
        }
    }
}

How can I modify the above shader (by @bgolus ) so that the thickness of the Wireframe is constant with reference to the world space please? Posted as a new thread: Perspective Wireframe (using baked Barycentric coordinates)