How would you efficiently access the last N rendered frames every frame?

Hello!

I’m trying to do some effects in Unity that require looking back in time to previously rendered frames, for example to superimpose delayed versions of the rendered view on top of each other for ghost-like trails.

I first implemented this in a very simple way: in OnRenderImage() I read back the incoming rendered image into a Texture2D, then stuffed that into an array of Texture2Ds used as a circular buffer. This obviously eats up a lot of bandwidth by round-tripping through CPU memory every frame, which I don’t need, and it cuts me down from around 200 fps to 60. This should stay purely in GPU memory.
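Roughly what I’m doing now, simplified (the names are made up, but this is the readback-into-CPU-memory pattern I mean):

using UnityEngine;

public class NaiveFrameHistory : MonoBehaviour
{
    const int _historyLength = 8;
    Texture2D[] _history = new Texture2D[_historyLength];  // circular buffer of CPU-side copies
    int _head;

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (_history[_head] == null)
            _history[_head] = new Texture2D(source.width, source.height, TextureFormat.RGBA32, false);

        // GPU -> CPU readback every frame: this is the expensive part.
        RenderTexture.active = source;
        _history[_head].ReadPixels(new Rect(0, 0, source.width, source.height), 0, 0);
        _history[_head].Apply();   // and back up to the GPU so it can be sampled later
        RenderTexture.active = null;

        _head = (_head + 1) % _historyLength;
        Graphics.Blit(source, destination);
    }
}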

I looked into using Texture2DArray, which looks like what I want, but I’m having trouble getting it to write into a specific slice of the array. I found this line here: “You can also use a geometry shader to render into individual elements.” But it doesn’t describe how. I also found that this question was asked before by @gsourima here, but that solution seems very hacky, albeit it looks like it worked.

Anyone have any leads on how I can solve this problem? Is @gsourima’s solution the only one atm?

Thanks a lot for any and all help!

Unless you already use it, you should have a look at Graphics.CopyTexture, which allows you to copy from any texture to any texture, including into specific Texture2DArray slices. To my knowledge Graphics.CopyTexture does not pull the data back from the graphics card, which should give you a performance boost if you’ve been using Texture2D.ReadPixels so far.
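Roughly like this (an untested sketch — the class and field names are mine, and CopyTexture requires the source and destination to have matching sizes and compatible formats):

using UnityEngine;

public class FrameHistory : MonoBehaviour
{
    const int _historyLength = 8;   // number of past frames to keep
    Texture2DArray _frames;         // GPU-side ring buffer
    int _write;                     // next slice to overwrite

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (_frames == null)
            _frames = new Texture2DArray(source.width, source.height,
                                         _historyLength, TextureFormat.RGBA32, false);

        // GPU-to-GPU copy of the current frame into one slice of the array:
        // (src, srcElement, srcMip, dst, dstElement, dstMip)
        Graphics.CopyTexture(source, 0, 0, _frames, _write, 0);
        _write = (_write + 1) % _historyLength;

        Graphics.Blit(source, destination);
    }
}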

Hi!

I can confirm that I haven’t changed my code since I wrote the answer you refer to, and I still use it (now in 2017.1).

In the OnRenderImage(RenderTexture source, RenderTexture destination) method, I first store the current frame (_sliceNum being the index of the slice I want to write into):

// Bind slice _sliceNum of the texture array as the current render target
// (mip 0, CubemapFace.Unknown since this is a 2D array, not a cubemap).
Graphics.SetRenderTarget(_texArray, 0, CubemapFace.Unknown, _sliceNum);

GL.PushMatrix();
GL.LoadOrtho();   // 0..1 orthographic projection so the quad fills the target

// The material just samples _CamTex, so drawing a full-screen quad
// copies the incoming frame into the bound slice.
_matStore.SetTexture("_CamTex", source);
_matStore.SetPass(0);

// Full-screen quad with matching UVs.
GL.Begin(GL.QUADS);
GL.TexCoord2(0, 0);
GL.Vertex3(0, 0, 0);
GL.TexCoord2(1, 0);
GL.Vertex3(1, 0, 0);
GL.TexCoord2(1, 1);
GL.Vertex3(1, 1, 0);
GL.TexCoord2(0, 1);
GL.Vertex3(0, 1, 0);
GL.End();

GL.PopMatrix();
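For context, _texArray has to be a render target whose slices can be bound individually, i.e. a RenderTexture created with the Tex2DArray dimension. A minimal setup sketch (not my exact code, names reused from above) could look like this:

using UnityEngine;
using UnityEngine.Rendering;

public class FrameHistorySetup : MonoBehaviour
{
    const int _historyLength = 8;
    RenderTexture _texArray;   // 2D-array render target holding the last N frames
    int _sliceNum;             // ring-buffer write index

    void CreateHistory(int width, int height)
    {
        _texArray = new RenderTexture(width, height, 0, RenderTextureFormat.ARGB32);
        _texArray.dimension = TextureDimension.Tex2DArray; // a Texture2DArray on the GPU
        _texArray.volumeDepth = _historyLength;            // number of slices
        _texArray.Create();
    }

    // Call once per stored frame to advance the ring buffer.
    void AdvanceSlice()
    {
        _sliceNum = (_sliceNum + 1) % _historyLength;
    }
}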

The _matStore member is the simplest Material ever. Its shader just passes the input texture through to the current render target (here the proper tex array slice, but the shader does not need to know anything about the destination buffer).

Shader "Store Frame"
{
    Properties
    {
        _CamTex ( "CamTex", 2D ) = "white" {}
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always
        Pass
        {
            CGPROGRAM
            #pragma vertex   vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct vertInput
            {
                float4 vertex : POSITION;
                float2 uv     : TEXCOORD0;
            };

            struct vertOutput
            {
                float4 screenPos : SV_POSITION;
                float2 uvs       : TEXCOORD0;
            };

            sampler2D _CamTex;

            vertOutput vert ( vertInput v )
            {
                vertOutput o;
                o.screenPos = UnityObjectToClipPos( v.vertex );
                o.uvs = v.uv;
                return o;
            }

            fixed4 frag ( vertOutput i ) : SV_Target
            {
                fixed4 col = tex2D( _CamTex, i.uvs );
                return col;
            }

            ENDCG
        }
    }
}

Coming back to the OnRenderImage() call, you can now use the whole _texArray in a new shader, as presented in my initial message on the page you referenced.
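The composite side can then be as simple as binding the array and the current index to a trail material and blitting with it. A rough sketch (not my exact code; the shader property names are placeholders):

using UnityEngine;

public class TrailComposite : MonoBehaviour
{
    public Material _matTrail;          // a shader that samples the Texture2DArray
    RenderTexture _texArray;            // the per-slice history filled as shown above
    int _sliceNum;                      // index of the most recently written slice
    const int _historyLength = 8;

    void Composite(RenderTexture source, RenderTexture destination)
    {
        _matTrail.SetTexture("_FrameArray", _texArray);  // placeholder property names
        _matTrail.SetInt("_NewestSlice", _sliceNum);
        _matTrail.SetInt("_SliceCount", _historyLength);
        Graphics.Blit(source, destination, _matTrail);   // the trail shader blends the slices
    }
}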

Hope this helps!