Multiple post processing shaders on camera outputting black or overriding previous shaders

Hi! I’m a student who has been learning shaders while building up a small library of cool shaders to reuse across projects. I thought a good starting point would be the pixelation effect some 3D games use (e.g. Sokpop minigames). After finishing the pixelation shader I noticed it was giving me half colours and in-betweens, which I don’t want, so I made a nearest-neighbour scaling shader to fix the colour picking after the texture gets squashed down by the first shader.

My problem now is: whenever I try to use both shaders at the same time, either one of them is overwritten by the other, or the output is a black screen to the main camera.

Here are my shaders:

Shader "Hidden/PixelShader"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always
        Blend SrcAlpha OneMinusSrcAlpha

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                return o;
            }

            sampler2D _MainTex;
            int _PixelDensity;
            float2 _AspectRatioMultiplier;

            fixed4 frag (v2f i) : SV_Target
            {
                float2 pixelScaling = _PixelDensity * _AspectRatioMultiplier;
                i.uv = round(i.uv * pixelScaling) / pixelScaling;

                fixed4 col = tex2D(_MainTex, i.uv);
                return col;
            }
            ENDCG
        }
    }
}
Shader "Hidden/PerfectPixel"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            float4 _MainTex_TexelSize;

            struct appdata
            {
                float4 vertex : POSITION;
                fixed4 col : COLOR;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                fixed4 col : COLOR;
                float4 vertex : SV_POSITION;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv * _MainTex_TexelSize.zw;
                o.col = v.col;

                return o;
            }

            sampler2D _MainTex;
            float texelsPerPixel;

            fixed4 frag (v2f i) : SV_Target
            {
                float2 locationWithinTexel = frac(i.uv);
                float2 interpolationAmount = clamp(locationWithinTexel / texelsPerPixel, 0, .5)
                + clamp((locationWithinTexel - 1) / texelsPerPixel + .5, 0, .5);

                float2 finalTextureCoords = (floor(i.uv) + interpolationAmount) / _MainTex_TexelSize.zw;

                return tex2D(_MainTex, finalTextureCoords) * i.col;
            }
            ENDCG
        }
    }
}

And here is the script I made for the main camera:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[ExecuteInEditMode]
[RequireComponent(typeof(Camera))]
public class PixelEffect : MonoBehaviour
{
    private Material pixelMaterial = null;
    private Material _pixelPefectMat = null;
    private float _texelsPerPixel;
    private Camera _mainCamera;

    [SerializeField] private int pixelDensity = 80;

    void SetMaterial()
    {
        pixelMaterial = new Material(Shader.Find("Hidden/PixelShader"));
        _pixelPefectMat = new Material(Shader.Find("Hidden/PerfectPixel"));

    }

    void OnEnable()
    {
        SetMaterial();
    }

    void OnDisable()
    {
        _pixelPefectMat = null;
        pixelMaterial = null;
    }

    private void Start()
    {
        _mainCamera = GetComponent<Camera>();

        float nativeAspectRatio = _mainCamera.pixelRect.x / _mainCamera.pixelRect.y;
        float aspectRatio = Screen.width / Screen.height;

        if (nativeAspectRatio > aspectRatio)
            _texelsPerPixel = _mainCamera.pixelRect.y / Screen.height;
        else
            _texelsPerPixel = _mainCamera.pixelRect.x / Screen.width;

        _pixelPefectMat.SetFloat("texelsPerPixel", _texelsPerPixel);
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {

        if (pixelMaterial == null)
        {
            Graphics.Blit(source, destination);
            return;
        }

        Vector2 aspectRatioData;
        if (Screen.height > Screen.width)
            aspectRatioData = new Vector2((float)Screen.width / Screen.height, 1);
        else
            aspectRatioData = new Vector2(1, (float)Screen.height / Screen.width);

        pixelMaterial.SetVector("_AspectRatioMultiplier", aspectRatioData);
        pixelMaterial.SetInt("_PixelDensity", pixelDensity);

        Graphics.Blit(source, pixelMaterial);
        RenderTexture renderTemp = RenderTexture.GetTemporary(source.descriptor);

        _pixelPefectMat.SetTexture("_MainTex", renderTemp);
        Graphics.Blit(renderTemp, destination, _pixelPefectMat);
        RenderTexture.ReleaseTemporary(renderTemp);
    }
}

I’m almost sure the problem is in my camera script and not in any of the shaders, but you can never be too sure.
Thanks for the help in advance!

Let’s go over the last few lines of OnRenderImage in the script above:

`Graphics.Blit(source, pixelMaterial);` isn’t setting a destination render target. Normally that renders to the screen buffer, but in the case of OnRenderImage the result may never be seen, since after OnRenderImage returns, Unity copies the function’s destination render texture over the screen buffer, erasing whatever you previously rendered to it.

`RenderTexture.GetTemporary(source.descriptor)` is making a new render texture with the same dimensions and image format as the source. That’s fine. However, note it is not making a copy of the source — it’s an empty black texture with the same resolution and format.

`_pixelPefectMat.SetTexture("_MainTex", renderTemp);` is setting the material’s _MainTex to the renderTemp texture. Note that setting _MainTex is also the only thing Blit does with its source texture parameter, so this is generally unnecessary. There is an occasional bug where Blit doesn’t set it, for reasons no one understands, and setting it manually works around that — just be aware that if you manually set _MainTex to one texture and then Blit with a different source, the Blit may override your assignment. Here it’s fine, just potentially redundant.

But… the renderTemp texture is empty, and reading from an empty black texture is probably not what you want.

Now, if you reorder those two statements so the renderTemp texture is created first, and then change the first Blit to:
Graphics.Blit(source, renderTemp, pixelMaterial);
I suspect everything will work as you were expecting it to.
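Putting that together, the corrected method would look roughly like this — a sketch keeping the names from the script above:

```csharp
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // Fall back to a plain copy if either material failed to load.
    if (pixelMaterial == null || _pixelPefectMat == null)
    {
        Graphics.Blit(source, destination);
        return;
    }

    Vector2 aspectRatioData;
    if (Screen.height > Screen.width)
        aspectRatioData = new Vector2((float)Screen.width / Screen.height, 1);
    else
        aspectRatioData = new Vector2(1, (float)Screen.height / Screen.width);

    pixelMaterial.SetVector("_AspectRatioMultiplier", aspectRatioData);
    pixelMaterial.SetInt("_PixelDensity", pixelDensity);

    // Create the temporary first, then render the pixelation pass INTO it...
    RenderTexture renderTemp = RenderTexture.GetTemporary(source.descriptor);
    Graphics.Blit(source, renderTemp, pixelMaterial);

    // ...and feed its result through the second material to the destination.
    Graphics.Blit(renderTemp, destination, _pixelPefectMat);
    RenderTexture.ReleaseTemporary(renderTemp);
}
```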

All that said, I’m not entirely sure why you have this as two shaders, since you could be doing both in one.
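If you do merge them, a minimal sketch of what the combined fragment function could look like, assuming the same _PixelDensity and _AspectRatioMultiplier properties as the pixelation shader: sampling at each virtual pixel’s centre (floor + 0.5) instead of rounding to its corner keeps bilinear filtering from blending neighbouring colours, which may remove the need for the separate nearest-neighbour pass entirely.

```c
fixed4 frag (v2f i) : SV_Target
{
    float2 pixelScaling = _PixelDensity * _AspectRatioMultiplier;

    // Snap to the low-res grid, but sample at the CENTRE of each
    // virtual pixel so the bilinear filter never straddles two texels
    // and produces the half-colour in-betweens.
    float2 uv = (floor(i.uv * pixelScaling) + 0.5) / pixelScaling;

    return tex2D(_MainTex, uv);
}
```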

Thank you! Very explanatory. I’ll fix it when I have the time. If there are still problems after that, I’ll make it one shader instead of two.

Edit: I made two separate shaders because the nearest-neighbour one works for pixel art as well, not just this case.