Is it possible to render partial camera depth slice to a Render Texture?

I don’t know if depth is the right term for what I’m asking. I’ve been having a rough time finding information about this. I’d like to take the rendered information from one camera and send slices of it to different render textures so I can control their resolution independently.

I’ve seen several topics that seem close, but nothing concrete yet.

Here’s an illustration of what I’m trying to do:

Any chance this is possible? If not this exactly, are there other options using a single camera?

Thanks in advance!

4 years later, if some of you want to do this, you need to:

use a geometry shader, and set the uint renderIx : SV_RenderTargetArrayIndex variable in the struct it outputs.
This determines the index of the slice your fragment shader will write to.

You can generate several quads inside the geometry shader; just watch out for the [maxvertexcount()] attribute above your geom() function. Assign a different renderIx value to each quad you generate inside geom(): 0, 1, 2, 3, and so on. A sketch of the shader side follows below.
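
For reference, here is a minimal sketch of what that shader stage could look like. It is not the poster's actual shader: to keep it short it simply duplicates each incoming triangle into slices 0 and 1 instead of building quads, but the SV_RenderTargetArrayIndex mechanism is the same, and all struct and function names are placeholders.

// Lives inside a CGPROGRAM block with:
//   #pragma target 4.0   (geometry shaders need shader model 4.0+)
//   #pragma vertex vert
//   #pragma geometry geom
//   #pragma fragment frag
//   #include "UnityCG.cginc"

struct appdata {
    float4 vertex : POSITION;
};

struct v2g {
    float4 pos      : SV_POSITION;
    float3 worldPos : TEXCOORD0; // kept around so geom() can pick a slice by distance later
};

struct g2f {
    float4 pos      : SV_POSITION;
    uint   renderIx : SV_RenderTargetArrayIndex; // destination slice of the texture array
};

v2g vert(appdata v) {
    v2g o;
    o.pos      = UnityObjectToClipPos(v.vertex);
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    return o;
}

[maxvertexcount(6)] // 2 slices * 3 vertices; must cover everything you Append()
void geom(triangle v2g input[3], inout TriangleStream<g2f> stream) {
    for (uint slice = 0; slice < 2; slice++) {
        g2f o;
        o.renderIx = slice; // every vertex of this copy goes to the same slice
        for (int i = 0; i < 3; i++) {
            o.pos = input[i].pos;
            stream.Append(o);
        }
        stream.RestartStrip(); // finish this triangle before emitting the next copy
    }
}

fixed4 frag(g2f i) : SV_Target {
    return fixed4(1, 1, 1, 1); // placeholder color
}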

Next, you want a RenderTexture that is configured as a texture array. Create it in C# as follows:

// Needs: using UnityEngine; using UnityEngine.Rendering (TextureDimension);
//        using UnityEngine.Experimental.Rendering (GraphicsFormat).
public static RenderTexture CreateTextureArray( Vector2Int widthHeight, GraphicsFormat format,
                                                FilterMode filter, int numSlices, int depthBits=0 ){
    var arr = new RenderTexture(widthHeight.x, widthHeight.y, depth:depthBits, format, mipCount:1);
    arr.dimension = TextureDimension.Tex2DArray; // turn the texture into a 2D texture array
    arr.volumeDepth = numSlices;                 // how many slices the array holds
    arr.useMipMap = false;
    arr.enableRandomWrite = true;
    arr.filterMode = filter;
    arr.Create();
    return arr;
}
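
For example (the resolution, format and slice count here are just illustrative values), the myTextureArray used below could be created like this:

var myTextureArray = CreateTextureArray(
    new Vector2Int(1024, 1024),       // every slice shares this resolution
    GraphicsFormat.R8G8B8A8_UNorm,    // color format of the array
    FilterMode.Bilinear,
    numSlices: 4 );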

Then you need to render into it from your camera.
One would expect to do camera.targetTexture = myTextureArray; but it won’t work.
If you try that and invoke camera.Render(), it will always render to slice 0, disregarding the renderIx manipulations in the shader.

People saw this in the past, but that was for URP, and it can lead you down a rabbit hole of switching to URP and then setting up render features etc., which is kinda cumbersome.

So instead, my variant was to do this kinda thing:

// Needs: using System.Linq; using UnityEngine; using UnityEngine.Rendering.
CommandBuffer cmd = new CommandBuffer();
cmd.name = "Render to TextureArray";

// Use this camera's view and projection matrices for the draws below:
cmd.SetViewProjectionMatrices(myCamera.worldToCameraMatrix,
                              GL.GetGPUProjectionMatrix(myCamera.projectionMatrix, true));

// Notice the -1: it binds all slices at once; the geometry shader decides the slice per primitive.
cmd.SetRenderTarget(new RenderTargetIdentifier(myTextureArray, 0, CubemapFace.Unknown, -1));
cmd.ClearRenderTarget(true, true, clearingColor);

//Draw all renderers:

var selectedMeshes = MyModelsHandler_3D.instance.mySelectedMeshes; // my own manager that holds a list of objects.
var renderers = selectedMeshes.Select(m => m.meshRenderer).ToArray();

foreach (var renderer in renderers){
     cmd.DrawRenderer(renderer, mat); // mat = the material whose shader contains the geom() function from above
}

Graphics.ExecuteCommandBuffer(cmd); //submit to the gpu!
cmd.Clear();

cmd.Release();
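
Note that Graphics.ExecuteCommandBuffer runs the buffer immediately, so if your scene changes you would rebuild and execute it again each frame; alternatively, the same buffer could be attached to the camera with Camera.AddCommandBuffer at a suitable CameraEvent instead of being executed manually.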

So you have:

  1. the ability to set the destination slice directly inside the shader.
  2. rendering dispatched from C#, which actually lets you write into your desired slice.

As you can see, DrawRenderer renders using a material. That material must use a shader with your geom() function, which sets the renderIx value.

Now, you can determine the slice in the geom() function, using the distance from your vertex to the camera, before you generate the quad for that slice.
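
For instance, building on the sketch above (the distance thresholds are made-up values), the slice selection inside geom() could look like this:

// Inside geom(): instead of the fixed 0..1 loop, pick one slice per primitive
// from its distance to the camera. 10 and 30 are arbitrary example thresholds.
float dist = distance(input[0].worldPos, _WorldSpaceCameraPos);

uint renderIx = 0;             // near slice
if (dist > 10.0) renderIx = 1; // middle slice
if (dist > 30.0) renderIx = 2; // far slice

o.renderIx = renderIx;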

The solution you shared seems completely unrelated to the question. It doesn’t allow you to render to multiple render textures of different resolutions, since all entries in a texture array have the same size. I think you’re confusing “slice” as in an individual entry in a texture array with what OP calls a slice, which is a depth range / slice of the frustum.

For OP’s question, the simplest approach is to adjust the camera’s near/far clip planes for each depth slice and render each slice to a different RenderTexture.
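
A minimal sketch of that idea (the component name, depth ranges and resolutions are made up for illustration):

using UnityEngine;

public class DepthSliceRenderer : MonoBehaviour
{
    // Typically a camera you keep disabled and drive manually.
    public Camera sliceCamera;

    // One (near, far, target) entry per frustum slice; closer slices get higher resolution here.
    (float near, float far, RenderTexture target)[] slices;

    void Start()
    {
        slices = new (float, float, RenderTexture)[]
        {
            (0.1f,  10f, new RenderTexture(2048, 2048, 24)),
            (10f,   50f, new RenderTexture(1024, 1024, 24)),
            (50f,  500f, new RenderTexture(512,  512,  24)),
        };
    }

    void LateUpdate()
    {
        float origNear = sliceCamera.nearClipPlane;
        float origFar  = sliceCamera.farClipPlane;

        foreach (var s in slices)
        {
            sliceCamera.nearClipPlane = s.near;
            sliceCamera.farClipPlane  = s.far;
            sliceCamera.targetTexture = s.target;
            sliceCamera.Render(); // renders only the geometry inside this depth range
        }

        // Restore the camera afterwards.
        sliceCamera.nearClipPlane = origNear;
        sliceCamera.farClipPlane  = origFar;
        sliceCamera.targetTexture = null;
    }
}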