Flat lighting without separate smoothing groups?

Hi there,

I’m pretty new to ShaderLab, so I might be missing something obvious, but I’m having trouble writing a shader that simulates flat lighting, like so:


Now, the obvious way to do this is to just break the smoothing groups apart when importing the model. The problem with doing that is that most of my animated meshes have a sort of shielding effect layered on top of them (similar to Halo). The shield uses a transparent shader that pushes the mesh’s vertices out along their normals by a specified amount, as in the demo shader on this page. If you set the smoothing groups to flat, the mesh’s triangles are broken apart, and every vertex of a triangle gets that triangle’s face normal, so when the vertices are pushed out the triangles no longer stay connected. My solution would be to write a shader that does flat lighting while keeping the mesh fully smoothed, but that seems tough: when fully smoothed, each vertex’s normal appears to be the average of the normals of the surrounding triangles (?). Is it possible to access the triangle’s face normal, either directly or by deriving it?
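
For reference, the push-out in that shield shader boils down to a single line in the vertex function, something like this (a sketch; _PushAmount is just a placeholder property name):

// Each vertex moves along its own normal. With flat (split) normals,
// vertices sharing an edge get different push directions, so the
// triangles drift apart.
v.vertex.xyz += v.normal * _PushAmount;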

Thanks,
Erik


You could derive it with the cross product of the partial derivatives of the position. E.g., I think something like:
float3 posddx = ddx(viewSpacePos.xyz);
float3 posddy = ddy(viewSpacePos.xyz);
float3 derivedNormal = cross( normalize(posddx), normalize(posddy) );

I’ve used this in a few view space situations, but it should work the same for world space normals too (based off the world space position).
Note: because GPUs process fragments in 2×2 quads, the partial derivative instructions are implemented a bit differently on some hardware, but in general they should be stable… just test to be certain! If you’re using DX11, test with the _fine and _coarse versions too!
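
For reference, the Shader Model 5 variants are drop-in replacements, e.g. (same viewSpacePos as above):

// DX11 / SM5 explicit derivative variants:
float3 posddxFine = ddx_fine(viewSpacePos.xyz); // per-pixel accurate
float3 posddxCoarse = ddx_coarse(viewSpacePos.xyz); // may share one value per 2x2 quad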

PS: alternatively, you could bake the smooth normal data into the vertex colours, so you have two sets of vertex inputs representing normals.
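
If you go that route, the shield’s vertex shader could recover the smooth normal like this (a sketch, assuming the normals were baked into the vertex colours as normal * 0.5 + 0.5; _PushAmount is a placeholder):

// Decode the smooth normal baked into the vertex colour, then push
// along it so flat-split triangles still stay joined:
float3 smoothNormal = normalize(v.color.rgb * 2.0 - 1.0);
v.vertex.xyz += smoothNormal * _PushAmount;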

Okay, thanks for the help! It took me a bit of work, but I’ve made a start.

And here’s the shader.

Shader "Example" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200
       
        Pass {
            CGPROGRAM
                #pragma target 3.0
                #pragma glsl
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"
           
                struct v2f {
                    float4 pos : SV_POSITION;
                    float2 uv_MainTex : TEXCOORD0;
                    float4 worldPos : TEXCOORD1;
                };
           
                float4 _MainTex_ST;
           
                v2f vert(appdata_base v) {
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.uv_MainTex = TRANSFORM_TEX(v.texcoord, _MainTex);
                   
                    // worldPosition(?)
                    o.worldPos = v.vertex;

                    return o;
                }
           
                sampler2D _MainTex;
           
                float4 frag(v2f IN) : COLOR {
                    float3 posddx = ddx(IN.worldPos.xyz);
                    float3 posddy = ddy(IN.worldPos.xyz);
                    float3 derivedNormal = cross( normalize(posddx), normalize(posddy));
                   
                    // Add in lighting, somehow
                   
                    half4 c = half4(0.0, 0.0, 0.0, 0.3);
                    c.rgb += derivedNormal;
                    return c;
                }
            ENDCG
        }
    }
}

So I’m not quite sure how to retrieve the world (or view) position, or if I’m doing it right. v.vertex passes in the vertex position in world space (?), so I should just be able to pass it straight through to the fragment shader. Provided that’s done correctly, my next step is to somehow access the lighting data and use it to define the colour of each face. Unity suggests that shaders that interact with lighting be written as Surface Shaders, but since I’m not really interacting with lighting in a standardized way, I don’t know if that’s the way to go. Can fragment shaders access lighting data, like colour and intensity?

Thanks again for the help, made this much easier!

Progress report!


Shiny. And the code:

Shader "Example" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200
       
        Pass {
            Tags { "LightMode" = "ForwardBase" } // LightMode is a per-Pass tag
            CGPROGRAM
                #pragma target 3.0
                #pragma glsl
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"
               
                uniform float4 _LightColor0; 
           
                struct v2f {
                    float4 pos : SV_POSITION;
                    float2 uv_MainTex : TEXCOORD0;
                    float4 worldPos : TEXCOORD1;
                    float3 vertexLighting : TEXCOORD2;
                };
           
                float4 _MainTex_ST;
           
                v2f vert(appdata_base v) {
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.uv_MainTex = TRANSFORM_TEX(v.texcoord, _MainTex);
                   
                    // worldPosition(?)
                    o.worldPos = v.vertex;
                   
                   
                    o.vertexLighting = float3(0.0, 0.0, 0.0);
                   
                    float3 normalDir = normalize(mul(float4(v.normal, 0.0), _World2Object).xyz);
                   
                    for (int index = 0; index < 4; index++)
                    {    
                        float4 lightPosition = float4(unity_4LightPosX0[index], unity_4LightPosY0[index], unity_4LightPosZ0[index], 1.0);
                       
                        float3 vertexToLightSource = lightPosition.xyz - mul(_Object2World, v.vertex).xyz;
                        float3 lightDirection = normalize(vertexToLightSource);
                        float squaredDistance = dot(vertexToLightSource, vertexToLightSource);
                        float3 diffuseReflection = unity_LightColor[index].rgb * squaredDistance * max(0.0, dot(normalDir, lightDirection));        

                        o.vertexLighting = o.vertexLighting + diffuseReflection;
                    }
                   
                    return o;
                }
           
                sampler2D _MainTex;
           
                float4 frag(v2f IN) : COLOR {
                    float3 lightDirection = normalize(_WorldSpaceLightPos0.xyz);
               
                    float3 posddx = ddx(IN.worldPos.xyz);
                    float3 posddy = ddy(IN.worldPos.xyz);
                    float3 derivedNormal = cross( normalize(posddx), normalize(posddy));
                   
                    // Add in lighting, somehow
                   
                    half4 c = half4(0.0, 0.0, 0.0, 0.0);
                    c.rgb += derivedNormal + IN.vertexLighting * _LightColor0.rgb * max(0.0, dot(derivedNormal, lightDirection));
                    return c;
                }
            ENDCG
        }
    }
}

I’m mostly butchering the code on this page to try to figure out how to write shaders. As it stands, the lighting is a complete and total mess.
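
One thing I suspect (untested) is the vertex-light attenuation: the loop above multiplies by squaredDistance where it should divide by it, something like:

// Attenuate vertex lights with distance instead of scaling them up:
float attenuation = 1.0 / (1.0 + unity_4LightAtten0[index] * squaredDistance);
float3 diffuseReflection = attenuation * unity_LightColor[index].rgb
    * max(0.0, dot(normalDir, lightDirection));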

I found your task interesting, so I solved it using the “normals to vertex color” approach and Shader Forge, which works quite well I think (sorry for the crazy gif):



(Attached gif: SF_NormalsFromVertexColorForSmoothScale.gif)

Maybe this is of help to you.


Got it to work, mostly! I ended up just pulling in that entire shader I posted above and changing the normal to the derived one. Code and gif:

Shader "Custom/Default" {
   Properties {
      _Color ("Diffuse Material Color", Color) = (1,1,1,1) 
      _SpecColor ("Specular Material Color", Color) = (1,1,1,1) 
      _Shininess ("Shininess", Float) = 10
   }
   SubShader {
      Pass {      
         Tags { "LightMode" = "ForwardBase" } // pass for 
            // 4 vertex lights, ambient light & first pixel light

         CGPROGRAM
        #pragma target 3.0
         #pragma multi_compile_fwdbase 
         #pragma vertex vert
         #pragma fragment frag

         #include "UnityCG.cginc" 
         uniform float4 _LightColor0; 
            // color of light source (from "Lighting.cginc")

         // User-specified properties
         uniform float4 _Color; 
         uniform float4 _SpecColor; 
         uniform float _Shininess;

         struct vertexInput {
            float4 vertex : POSITION;
            float3 normal : NORMAL;
         };
         struct vertexOutput {
            float4 pos : SV_POSITION;
            float4 posWorld : TEXCOORD0;
            float3 normalDir : TEXCOORD1;
            float3 vertexLighting : TEXCOORD2;
         };

         vertexOutput vert(vertexInput input)
         {          
            vertexOutput output;

            float4x4 modelMatrix = _Object2World;
            float4x4 modelMatrixInverse = _World2Object; 
               // unity_Scale.w is unnecessary here

            output.posWorld = mul(modelMatrix, input.vertex);
            output.normalDir = normalize(
               mul(float4(input.normal, 0.0), modelMatrixInverse).xyz);
            output.pos = mul(UNITY_MATRIX_MVP, input.vertex);

            // Diffuse reflection by four "vertex lights"            
            output.vertexLighting = float3(0.0, 0.0, 0.0);
            #ifdef VERTEXLIGHT_ON
            for (int index = 0; index < 4; index++)
            {    
               float4 lightPosition = float4(unity_4LightPosX0[index], 
                  unity_4LightPosY0[index], 
                  unity_4LightPosZ0[index], 1.0);

               float3 vertexToLightSource = 
                  lightPosition.xyz - output.posWorld.xyz;        
               float3 lightDirection = normalize(vertexToLightSource);
               float squaredDistance = 
                  dot(vertexToLightSource, vertexToLightSource);
               float attenuation = 1.0 / (1.0 + 
                  unity_4LightAtten0[index] * squaredDistance);
               float3 diffuseReflection = attenuation 
                  * unity_LightColor[index].rgb * _Color.rgb 
                  * max(0.0, dot(output.normalDir, lightDirection));        

               output.vertexLighting = 
                  output.vertexLighting + diffuseReflection;
            }
            #endif
            return output;
         }

         float4 frag(vertexOutput input) : COLOR
         {
            float3 posddx = ddx(input.posWorld.xyz);
            float3 posddy = ddy(input.posWorld.xyz);
            float3 derivedNormal = cross( normalize(posddx), normalize(posddy));
         
            // float3 normalDirection = normalize(input.normalDir); 
            float3 normalDirection = normalize(derivedNormal);
            float3 viewDirection = normalize(
               _WorldSpaceCameraPos - input.posWorld.xyz);
            float3 lightDirection;
            float attenuation;

            if (0.0 == _WorldSpaceLightPos0.w) // directional light?
            {
               attenuation = 1.0; // no attenuation
               lightDirection = 
                  normalize(_WorldSpaceLightPos0.xyz);
            } 
            else // point or spot light
            {
               float3 vertexToLightSource = 
                  _WorldSpaceLightPos0.xyz - input.posWorld.xyz;
               float distance = length(vertexToLightSource);
               attenuation = 1.0 / distance; // linear attenuation 
               lightDirection = normalize(vertexToLightSource);
            }

            float3 ambientLighting = 
                UNITY_LIGHTMODEL_AMBIENT.rgb * _Color.rgb;

            float3 diffuseReflection = 
               attenuation * _LightColor0.rgb * _Color.rgb 
               * max(0.0, dot(normalDirection, lightDirection));

            float3 specularReflection;
            if (dot(normalDirection, lightDirection) < 0.0) 
               // light source on the wrong side?
            {
               specularReflection = float3(0.0, 0.0, 0.0); 
                  // no specular reflection
            }
            else // light source on the right side
            {
               specularReflection = attenuation * _LightColor0.rgb 
                  * _SpecColor.rgb * pow(max(0.0, dot(
                  reflect(-lightDirection, normalDirection), 
                  viewDirection)), _Shininess);
            }

            return float4(input.vertexLighting + ambientLighting 
               + diffuseReflection + specularReflection, 1.0);
         }
         ENDCG
      } 
   } 
}

I’ll need to add the ability to handle textures, and point lights don’t currently work, but I’m happy with it for now.

ShaderForge looks really great; I’m impressed at how easy it looks to build that. Good to know that both solutions work! And your gif is way higher quality than mine :frowning:

Thanks for all the help guys,

Erik


Really interesting approach. Thanks for sharing!

Another great post. Thank you!

I’ve been looking for this for so long, can’t believe I finally found it.

True flat lighting is really, really rare in Unity for some reason!

Now I just need to learn shaders well enough to add more than one directional light, plus the other types of light as well (and textures)… :slight_smile:

So, in the last few years I’ve learned shaders much better… and for anyone learning how to do this well, catlikecoding has a great post on it.

I’ve seen that. I couldn’t find any of his examples that worked with point lights either; it’s all a single directional light. Also, I wonder how all of these will work with the coming LW pipeline?

Maybe in a few more years, I’ll stumble on a solution. I have time.

Any ideas as to how this could be accomplished in a Surface Shader? The normal needs to be in tangent space.

Were you able to find the solution? I’m trying to solve something similar…

Try this:

Shader "Custom/FlatSurfaceShader" {
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType"="Opaque" }

        CGPROGRAM
        #pragma surface surf Standard fullforwardshadows vertex:vert
        #pragma target 3.0

        sampler2D _MainTex;

        struct Input {
            float2 uv_MainTex;
            float3 cameraRelativeWorldPos;
            float3 worldNormal;
            INTERNAL_DATA
        };

        half _Glossiness;
        half _Metallic;
        fixed4 _Color;

        // pass camera relative world position from vertex to fragment
        void vert(inout appdata_full v, out Input o)
        {
            UNITY_INITIALIZE_OUTPUT(Input,o);
            o.cameraRelativeWorldPos = mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0)).xyz - _WorldSpaceCameraPos.xyz;
        }

        void surf (Input IN, inout SurfaceOutputStandard o) {

            fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
            o.Albedo = c.rgb * _Color.rgb;

            // flat world normal from position derivatives
            half3 flatWorldNormal = normalize(cross(ddy(IN.cameraRelativeWorldPos.xyz), ddx(IN.cameraRelativeWorldPos.xyz)));

            // construct world to tangent matrix
            half3 worldT =  WorldNormalVector(IN, half3(1,0,0));
            half3 worldB =  WorldNormalVector(IN, half3(0,1,0));
            half3 worldN =  WorldNormalVector(IN, half3(0,0,1));
            half3x3 tbn = half3x3(worldT, worldB, worldN);

            // apply world to tangent transform to flat world normal
            o.Normal = mul(tbn, flatWorldNormal);
        }
        ENDCG
    }
    FallBack "Diffuse"
}

I have a slightly more optimized example of transforming world space normals into tangent space normals in my triplanar normals article, but that technique doesn’t work for this particular situation, since Unity’s Surface Shaders like to aggressively “optimize” away stuff the shader is actually still using, causing all sorts of annoying bugs. The same aggressive optimization is the reason I’m passing in a custom world position even though Unity is already passing that data from the vertex to the fragment. Using a camera-relative world space position also reduces some floating point precision issues.


Thank you very much, it works and it helped me to modify a shader I needed.

Adding some example Shader Graph node graphs for doing flat shading.

Basic form:

Note: because the Position node always uses the interpolated per-vertex world space position in the LWRP, you will start to see noise appear in the surface normal at roughly 1000 units from the world origin, and at roughly 50 units if you need to get very close to the object. The original derivative-based vertex/fragment shader above has the same problem, but the surface shader I posted does not, since it uses camera-relative coordinates.
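
In plain HLSL terms, the camera-relative trick from the surface shader boils down to this (a sketch; o / i are the usual vertex output / fragment input structs):

// Vertex shader: interpolate a camera-relative world position so the
// float values stay small near the camera.
o.camRelPos = mul(unity_ObjectToWorld, v.vertex).xyz - _WorldSpaceCameraPos.xyz;

// Fragment shader: derivatives of the small values give a clean flat
// normal even far from the world origin.
float3 flatWorldNormal = normalize(cross(ddy(i.camRelPos), ddx(i.camRelPos)));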

edit: Since people will likely continue to come across this post: the above node graph has a lot of stuff that is only needed because early versions of Shader Graph were missing an easy way to transform from world space to tangent space. If you’re using a more recent version of Shader Graph (like any version for Unity 2019), the graph looks like this:


Why don’t you add the shader graph file itself as well, so people don’t have to recreate it from the screenshot?


Here’s the shader graph itself. (Only includes the pre-Unity 2019 shader.)

4237447–377122–FlatShading.zip (2.61 KB)


Thank you for these! I’m having an issue with the surface shader, though. I’m using it on a procedurally generated mesh, and I’m noticing that the shading isn’t actually flat: there are always some areas shading in the wrong direction, leading to an ugly result. Why would that be?