Camera effect shader

Hi everyone,

I am quite new to writing camera effects and I am facing some issues. Here is my code.
Camera script

// A part of the camera script
    public Material material; // assigned in the Inspector; uses the shader below

    void Start() {
        Camera.main.depthTextureMode = DepthTextureMode.Depth;
    }

    void OnRenderImage(RenderTexture sourceTexture, RenderTexture destTexture) {
        Graphics.Blit(sourceTexture, destTexture, material);
    }
}

Shader

Shader "Debug/Test" {
    Properties {
        _MainTex        ("Base (RGB)", 2D)                    = "white" {}
    }
    SubShader {
        Pass {
            ZTest Always Cull Off ZWrite Off
            Fog { Mode off }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma fragmentoption ARB_precision_hint_fastest
            #include "UnityCG.cginc"
            uniform sampler2D    _MainTex;
            uniform    float4        _MainTex_TexelSize;
            uniform sampler2D     _CameraDepthTexture;
            struct appdata // or appdata_img
            {
                float4    vertex : POSITION;
                half2    texcoord : TEXCOORD0;
            };
            struct v2f
            {
                float4    pos : SV_POSITION;
                half2    uv[2] : TEXCOORD0;
            };
            v2f vert(appdata v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                half2 uv = MultiplyUV(UNITY_MATRIX_TEXTURE0, v.texcoord);
                o.uv[0] = uv;
                #if UNITY_UV_STARTS_AT_TOP
                    if (_MainTex_TexelSize.y < 0)
                    { uv.y = 1 - uv.y; }
                #endif
                o.uv[1] = uv;
                return o;
            }
            fixed4    frag(v2f i) : COLOR
            {
                fixed4 color = fixed4(1, 0, 0, 1);
              
                // Depth
                float depth = UNITY_SAMPLE_DEPTH( tex2D(_CameraDepthTexture, i.uv[1].xy));
                depth = Linear01Depth(depth);
                if(depth > 0.99999)
                { color = half4(1, 1, 1, 1); }
                else
                { color = EncodeFloatRGBA(depth); }
              
                half3 screen = tex2D(_MainTex, i.uv[0]).rgb;
              
                return color;
            }
            ENDCG
        }
    }
    FallBack Off
}

By using this shader, I get strange output rather than the classic black-and-white depth map. What is the point of using EncodeFloatRGBA? If I don’t use that function, I only get silhouettes of my objects and not a nice black-and-white gradient.
How can I output the classic depth texture? The aim is also to use the output of this shader in another one via a second Graphics.Blit, but that is not a problem.

I would also like to know if it’s possible to render only the backfaces of the meshes using a camera effect, and if so, how?

Thank you.

Any idea ?

Read the UnityCG.cginc include files for the helpful depth-handling functions that suit your situation.
Then check the custom depth replacement shader that Unity provides. It will teach you how to output depth.
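For the original question: EncodeFloatRGBA packs a 0–1 float across all four channels so the value survives storage in an 8-bit RGBA texture; it is meant for storage, not display, which is why the output looks strange. A minimal sketch of a fragment function that outputs the classic grayscale instead (reusing the v2f struct from the shader above):

```cg
// Hypothetical replacement for frag() above: write linear depth to all
// color channels instead of packing it with EncodeFloatRGBA.
fixed4 frag(v2f i) : COLOR
{
    float depth = UNITY_SAMPLE_DEPTH(tex2D(_CameraDepthTexture, i.uv[1].xy));
    depth = Linear01Depth(depth); // 0 at the camera, 1 at the far plane
    return fixed4(depth, depth, depth, 1); // black-to-white gradient
}
```

UnityCG.cginc also provides LinearEyeDepth(raw), which gives depth in view-space units instead of the 0–1 range.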

A camera effect (post-processing) doesn’t render anything; it just gives you the final rendered image so you can work on the image itself.
If you want to render only backfaces, you do it in the actual object’s shader. (Replacement shaders will also help you here.)

@aubergine Thank you for your answer.
Concerning post-processing, it seems there are two ways to do it: first, by using Graphics.Blit in OnRenderImage with a material that has a custom shader, or second, by using RenderWithShader and outputting the result to a RenderTexture.
Am I right?
The two processes are quite different, I think. Graphics.Blit lets you post-process the image or texture, whereas RenderWithShader replaces the shader actually used on your objects with the one specified, so you have access to vertex information etc. But you can’t render to the screen with that camera; you can only output the result to a RenderTexture. Am I still right?
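For concreteness, here is a rough sketch of the RenderWithShader route (the names and setup are illustrative, not from this thread): a second, disabled camera copies the main camera’s settings and renders the scene into a RenderTexture with a replacement shader.

```csharp
// Hypothetical sketch of the RenderWithShader route.
public Camera effectCamera;       // a second, disabled camera
public Shader replacementShader;  // e.g. a backface-only shader
public RenderTexture buffer;      // target for the replacement rendering

void LateUpdate() {
    effectCamera.CopyFrom(Camera.main);
    effectCamera.targetTexture = buffer;
    // "RenderType" matches subshaders by their RenderType tag;
    // pass "" to replace the shader on everything.
    effectCamera.RenderWithShader(replacementShader, "RenderType");
}
```

This camera never draws to the screen; its only output is the RenderTexture.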

So, if I want to render the backfaces of every object in the scene, I should use RenderWithShader; then, if I want to pixelate the result, I should apply my pixelate shader to the RenderTexture in OnRenderImage using Graphics.Blit. Right?

Post-process = post (after) process (rendering): here is your render outcome (the source); do whatever you want with it in your fancy image effect (blit) and copy the result to the final outcome (the destination).

Replacement shader = replaces the shader used by objects during rendering with the specified one(s).

I don’t understand exactly what you require, but here is what I understood:
if you only want to render backfaces and pixelate them, then you will show this pixelated image on screen and the front faces are not necessary? Why don’t you write your original shaders with the Cull Front option and do a regular image effect on the final image?
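The “Cull Front” suggestion could look like this (a minimal, hypothetical fixed-function shader; a real one would keep the object’s original lighting code and just add the Cull state):

```
Shader "Custom/BackfacesOnly" {
    Properties { _Color ("Color", Color) = (1,1,1,1) }
    SubShader {
        Pass {
            Cull Front        // discard front faces, draw only backfaces
            Color [_Color]    // fixed-function: output a flat color
        }
    }
}
```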

@aubergine Thanks again for your answer; it’s getting clearer in my mind.
The example I gave was just a use case. More generally, I want to keep the default rendering of my scene, but I also want to output some kind of buffer from it. To obtain that buffer, I first need to apply transformations to the objects’ shaders (replacement shaders), then retrieve the result in a RenderTexture, apply post-processing (Graphics.Blit), and send the result to another object’s material.
If that’s not clear, just tell me :slight_smile:
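That workflow could be sketched roughly like this (names hypothetical; the buffer would come from a RenderWithShader pass as discussed earlier):

```csharp
// Hypothetical: post-process a replacement-shader buffer and hand the
// result to another object's material.
public RenderTexture buffer;     // filled by RenderWithShader elsewhere
public RenderTexture processed;  // allocated with the same size as buffer
public Material postProcess;     // e.g. a pixelate shader
public Renderer targetRenderer;  // the object that should display the result

void ApplyBuffer() {
    Graphics.Blit(buffer, processed, postProcess);
    targetRenderer.material.SetTexture("_MainTex", processed);
}
```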

If you want a good example, you could check my “Glow Per Object”, “Blur Per Object” or “Pixelate Per Object” assets on the Asset Store; their workflow is what you are trying to learn.

Thank you very much @aubergine ! I’ll take a look :wink: