Seeking a background fade out for the frustum border.

I have a WebGL build that shows in a page. That page’s background could be white, black, or some other solid color. I’m looking for a shader that I can apply to certain objects (floor/walls) that will fade to the background color at the edge of the frustum. The objects of focus should still go all the way to the edges; without that requirement, I could just use post-processing to generically overwrite the edges. Maybe I can still approach this through post-processing, though.

But ultimately, I want a way to show items in their proper setting while they fade into the page, with the exception of the object of focus. E.g. if a sword is shown, I might have it on a wall over a fireplace. I would want the fireplace to fade to a white page background, but if the sword reaches the edge, it simply gets cut off.

I could do two cameras with different views, and have a UI layer with the fade-out border, then a transparent image over that showing the items of focus. The items live in the same space, but layer filters keep them apart.

I’m wondering if there is a shader that already does similar effects though.

Thanks.

Not entirely sure what you mean by ‘edge of the frustum’. If you mean fade near the edges of the camera frustum, then all you would need to do is fade to the background colour as the NDC-relative position reaches its extremes [-1,1]. For example, the following is a basic shader that supports a main texture and fading parameters:

Shader "Unlit/FrustumFade"
{
    Properties
    {
        _Color ("Background Colour", Color) = (1,1,1,1)
        _MainTex ("Texture", 2D) = "white" {}
        _Fade ("Fade Range", Range (0,1)) = 0.6
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            // make fog work
            #pragma multi_compile_fog

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
                float4 pos : TEXCOORD1; // raw clip-space copy (float4 so w survives to the fragment shader)
                UNITY_FOG_COORDS(2)
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            fixed4 _Color;
            half _Fade;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                o.pos = o.vertex;
                UNITY_TRANSFER_FOG(o,o.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D (_MainTex, i.uv);
                UNITY_APPLY_FOG(i.fogCoord, col);

                // perspective divide, then abs: on-screen xy approaches 1 at the frustum edges
                // (note: the z term's range differs per graphics API; see the replies below)
                i.pos.xyz = abs (i.pos.xyz / i.pos.w);
                half fade = smoothstep (_Fade, 1.0, max (max (i.pos.x, i.pos.y), i.pos.z));

                return lerp (col, _Color, fade);
            }
            ENDCG
        }
    }
}

What is stored in each value of o.vertex?

Don’t divide by w in the vertex shader; you need to do this in the fragment shader, or it’ll lead to significant distortion. Also, Z differs between OpenGL and everything else, so you’ll need to account for that. In OpenGL it’s a -w to w range; in everything else it’s a w to zero range.

Also, either way, after the divide by w the x and y will be in a -1 to 1 range, so you presumably want the abs of that in the fragment shader, otherwise it’ll only fade out correctly in one corner of the view.
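
In Unity’s built-in pipeline you can branch on the UNITY_REVERSED_Z macro for the z difference. A rough sketch of my own (reusing the i.pos value from the shader above) of a platform-correct far-plane term:

float zNDC = i.pos.z / i.pos.w;      // OpenGL: -1 (near) to 1 (far); reversed-Z platforms: 1 (near) to 0 (far)
#if defined(UNITY_REVERSED_Z)
    float zFar01 = 1.0 - zNDC;       // 0 at the near plane, 1 at the far plane
#else
    float zFar01 = zNDC * 0.5 + 0.5; // remap -1..1 to 0..1
#endif

Feed zFar01 into the fade max() instead of abs(z/w) and the behaviour matches across graphics APIs.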

The homogeneous clip space position. It can be thought of as a “frustum space” position, but the xy values aren’t in a -1 to 1 range; they’re in a -w to w range, where w is the view-space depth. The divide by w is known as the “perspective divide”, as this position format corrects for perspective distortion of linearly interpolated values… the distortion you’ll see if you divide by w in the vertex shader.

Passing the “o.vertex” in both the o.vertex and o.pos separately might seem redundant, but the SV_POSITION gets transformed by the GPU and isn’t the same data by the time it gets to the fragment shader.
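
For example, here’s a quick sketch of my own to visualise the difference (assuming it sits in a standard unlit pass with UnityCG.cginc included):

struct v2f
{
    float4 vertex : SV_POSITION; // the GPU rewrites this; it reads back as pixel coordinates
    float4 pos : TEXCOORD0;      // plain interpolator; stays in homogeneous clip space
};

v2f vert (appdata_base v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.pos = o.vertex; // same value going in, different value by the time the fragment shader runs
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // i.vertex.xy is now a pixel position (e.g. ~(640, 360) mid-screen at 1280x720),
    // while i.pos is still clip space and suitable for the perspective divide
    float2 ndc = i.pos.xy / i.pos.w;       // -1 to 1 across the visible screen
    return fixed4 (ndc * 0.5 + 0.5, 0, 1); // visualise as colour
}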


Yeah, that was my bad. I tend to write these answers in a fragmented way while switching in and out of Unity, so I end up changing how I word the answer while accidentally leaving in some of the old wording (hence the ‘clip space’). I also wasn’t really thinking when I went for the vertex-shader optimisation of the fade parameter and just assumed it would interpolate fine (even though clip depth isn’t linear). I wasn’t sure if we needed a fade at the camera’s near plane, so I left out the near check. I’ve updated my original answer.

If xy is from -w to w, and w is the distance from the far plane (correct?), then what is in the z? Seems like xyw accounts for everything we need.

Not correct.

The clip space value of w is not affected by the near or far planes, or even anything about the perspective*. It’s the -z position of the vertex in view space. The GPU view space matrix has the z axis inverted compared to the Unity scene coordinate system, so the w value would be identical to the z position shown when you have a transform as a child of the camera game object.
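
You can see this by splitting UnityObjectToClipPos into its two steps; a small sketch of my own inside a vertex shader, using Unity’s built-in matrices:

float4 viewPos = mul (UNITY_MATRIX_MV, v.vertex); // view space: camera at the origin, looking down -z
float4 clipPos = mul (UNITY_MATRIX_P, viewPos);   // clip space
// for any Unity-generated perspective projection, clipPos.w == -viewPos.z,
// i.e. the view-space depth in front of the camera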

As for what the clip space z is, that was in the first paragraph:

  • Also, Z differs between OpenGL and everything else, so you’ll need to account for that. In OpenGL it’s a -w to w range; in everything else it’s a w to zero range.

*If the projection matrix is orthographic, w is always 1. But for any perspective projection matrix where the focus point is the camera (which is true for any perspective projection matrix generated by Unity itself) the above is true.

For perspective:
xy in clip space is in the -w to w range
w in clip space is the -z vertex position from view space
z in clip space is in the -w to w range (in OpenGL)

Is that correct?

If yes, then what are the limits of the -w…w range? Because if the w in clip space depends on the vertex position in view space, then isn’t w limited by the frustum’s far and near planes?

I’m trying to understand it, but I need to double-check it back and forth so that I am sure I interpreted it correctly and didn’t miss anything.

Technically there are no “limits” to w, outside of floating point precision. The near and far planes control where the GPU clips when rasterizing, but you can have an object 10000 units away from the camera with a far clip of 100, and you’ll still see a w of 10000 in the vertex shader, since nothing gets clipped until after it runs.


How would you normalize a value on an undefined range, then?
You say it is clamped after the vertex shader, so in the fragment shader w can have a maximum and minimum, but what is it in the vertex shader then? It has to have a max and min in order to normalize by it.

w is the range definition. The other components of the vector are normalized relative to the w component, whereas the w component after normalization is simply 1 (a number divided by itself is always 1). You can think of the w component less as a distance value and more as a perspective value: its purpose (in an incredibly simplified and not very correct manner) is to decrease the size of objects on screen as they get further away from the camera during the normalization process. There are many resources out there that explain clip space and its relationship with NDC:

https://learnopengl.com/Getting-started/Coordinate-Systems
https://answers.unity.com/questions/1443941/shaders-what-is-clip-space.html
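
To make the divide itself concrete, a minimal vertex-shader sketch (my example, not from the links above):

float4 clipPos = UnityObjectToClipPos (v.vertex); // on-screen xy lies within -w to w
float3 ndc = clipPos.xyz / clipPos.w;             // on-screen xy now lies within -1 to 1
// w/w is simply 1, which is why w itself never needs normalizing;
// a larger w (further from the camera) shrinks xy, which is the perspective effect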


You don’t normalize w; it is the normalizing term itself. And you don’t ever want to “normalize” it in the vertex shader.

The xyzw values of clip space are all in a range defined by the current w value itself. So you “normalize” the value by dividing by w, resulting in a -1 to 1 range for all on-screen positions. The key is on-screen positions.

The vertex shader isn’t confined by the frustum of the current projection matrix. That is to say, the vertex shader runs on all vertices, not just the ones that end up visible on screen. The vertex shader is actually part of how the GPU determines whether something is visible on screen, by transforming the vertex position into the clip space position. So you’ll have clip space positions in the vertex shader that are far, far outside the frustum bounds, and that’s okay. A vertex that is 300 units above the camera and far out of view may still be part of a triangle that is in view, so it still needs to be calculated.

Most real-time rendering engines use CPU-side frustum culling to skip objects that are fully outside the view frustum, to avoid processing meshes of which no triangle could possibly be seen, but you can’t do that at a per-vertex level. That vertex that’s 300 units out of view may be part of a triangle that’s 1000 units across and which you’re looking at the dead center of. Thus all 3 vertices of that triangle aren’t anywhere near that “-w to w” range, but are still needed.

As for why you don’t want to do the normalization in the vertex shader, it’s because the values are linearly interpolated in screen space. If you pass already-normalized positions they don’t interpolate properly, and correct interpolation is the entire point of passing the full float4 to begin with.

Here’s an easy example. Take a shader that just renders a texture using the normalized xy clip space positions as its UVs. First try dividing by w in the vertex shader. If the object is something like a view facing quad, it’ll look perfectly fine.

But try taking that quad and rotating it so it’s not facing the view and the texture will start to warp wildly.

But if you do the divide in the fragment shader, after the interpolation, everything is correct.

If you look closely at those last two images you’ll notice the UV positions at each vertex are the same spot on the texture, but everything in between is wrong. This is because the interpolated values don’t correctly take the perspective into account when you do the divide in the vertex shader.
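
For reference, this is the standard perspective-correct interpolation the rasterizer performs (not something specific to this shader): for a vertex attribute $a$, the hardware linearly interpolates $a/w$ and $1/w$ in screen space and recombines them per pixel,

$$\hat{a} = \frac{\lambda_0 a_0/w_0 + \lambda_1 a_1/w_1 + \lambda_2 a_2/w_2}{\lambda_0/w_0 + \lambda_1/w_1 + \lambda_2/w_2}$$

where the $\lambda_i$ are the pixel’s screen-space barycentric weights within the triangle. Dividing by w in the vertex shader bakes the perspective in before this interpolation happens, which is exactly the warping described above.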

Perspective Divide Test shader

Shader "Custom/Perspective Divide Test"
{
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        [KeywordEnum(Vertex, Fragment)] _Do_Perspective_Divide_In ("Do Perspective Divide in:", Float) = 0
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"
            #pragma shader_feature _ _DO_PERSPECTIVE_DIVIDE_IN_FRAGMENT
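            // the KeywordEnum property above generates the keywords
            // _DO_PERSPECTIVE_DIVIDE_IN_VERTEX and _DO_PERSPECTIVE_DIVIDE_IN_FRAGMENT;
            // only the Fragment keyword needs its own variant here, the
            // Vertex option falls through to the "_" default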

            struct v2f {
                float4 pos : SV_Position;
                float4 screenPos : TEXCOORD0;
            };

            sampler2D _MainTex;

            v2f vert(appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.screenPos = o.pos;

            #if !defined(_DO_PERSPECTIVE_DIVIDE_IN_FRAGMENT)
                o.screenPos /= o.screenPos.w;
            #endif

                return o;
            }

            half4 frag(v2f i) : SV_Target
            {
            #if defined(_DO_PERSPECTIVE_DIVIDE_IN_FRAGMENT)
                i.screenPos /= i.screenPos.w;
            #endif

                return tex2D(_MainTex, i.screenPos.xy);
            }
            ENDCG
        }
    }
}