I have a setup where many cameras render into a single 2D render texture array, each into its own slice, using command buffers. Every fixed update, a single command buffer is constructed that loops over all cameras and their draw calls; SetRenderTarget is called once per camera with the same render texture array but with that camera’s slice index.
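Stripped down, the construction looks roughly like this (untested sketch; `rtArray`, `depthMaterial` and the draw helper are my own names):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// One command buffer, rebuilt every fixed update. Each camera gets its own
// SetRenderTarget call that binds its slice of the 2D texture array.
var cmd = new CommandBuffer { name = "RenderAllCameras" };
for (int i = 0; i < cameras.Length; i++)
{
    // Bind slice i of the array; this per-camera rebind is what shows up
    // as RenderTexture.SetActive in the profiler.
    cmd.SetRenderTarget(rtArray, 0, CubemapFace.Unknown, i);
    cmd.ClearRenderTarget(true, true, Color.clear);
    cmd.SetViewProjectionMatrices(cameras[i].worldToCameraMatrix,
                                  cameras[i].projectionMatrix);
    RecordDrawCalls(cmd, cameras[i]);   // placeholder for the per-camera draws
}
```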
I’m using a render texture array in the hopes of removing the overhead of render target switches. Unfortunately, I see in the profiler that RenderTexture.SetActive is still causing severe overhead with this setup. Am I not using render texture arrays correctly? Is there a way that I can change the render target to the array only once, and then only change the target index for subsequent cameras?
Yes, currently we do set the render target to a specific slice (which involves a call that sets that slice as the active render target).
Your alternative here is to use SV_RenderTargetArrayIndex in the shaders to output to a specific array slice. This is not something we can do automatically, though.
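The shader side would look roughly like this (untested sketch; `_SliceIndex` is just an example uniform name):

```hlsl
// Passthrough geometry shader that routes each triangle to a texture array
// slice via SV_RenderTargetArrayIndex. Requires shader model 4.0+.
int _SliceIndex;

struct v2g { float4 pos : SV_POSITION; };
struct g2f
{
    float4 pos   : SV_POSITION;
    uint   slice : SV_RenderTargetArrayIndex;
};

[maxvertexcount(3)]
void geom(triangle v2g input[3], inout TriangleStream<g2f> stream)
{
    [unroll]
    for (int i = 0; i < 3; i++)
    {
        g2f o;
        o.pos   = input[i].pos;
        o.slice = (uint)_SliceIndex;
        stream.Append(o);
    }
}
```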
Aha, so I was making incorrect assumptions about binding arrays!
I changed the setup to use SV_RenderTargetArrayIndex, set in the geometry shader through a uniform passed in via MaterialPropertyBlock (sketch at the end of this post). However, it does not work for multiple cameras. The documentation states the following:
Does this mean that when binding a render texture array, Unity will always forcefully bind a single slice only? RenderTargetIdentifier also assumes slice = 0; I can’t see an option to bind the array as a whole and drive slice selection through SV_RenderTargetArrayIndex.
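For reference, this is roughly how I feed the slice index per draw (sketch; `_SliceIndex` matches the uniform the geometry shader reads, and the mesh-gathering helper is mine):

```csharp
// CommandBuffer.DrawMesh copies the property block when the command is
// recorded, so a single block can be reused while changing the slice index.
var props = new MaterialPropertyBlock();
for (int i = 0; i < cameras.Length; i++)
{
    props.SetInt("_SliceIndex", i);
    foreach (var (mesh, matrix) in MeshesFor(cameras[i]))
        cmd.DrawMesh(mesh, matrix, depthMaterial, 0, 0, props);
}
```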
While working on a different part of the project, I happened across this article about rendering a cubemap in a single pass, and it mentions that passing -1 as the slice index binds the entire texture array. That’s very handy to know, and it’s definitely missing from the docs.
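For anyone else searching for this later, the call is simply:

```csharp
// depthSlice = -1 binds the entire array; the shader then selects the slice
// through SV_RenderTargetArrayIndex.
cmd.SetRenderTarget(rtArray, 0, CubemapFace.Unknown, -1);
```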
Writing to the different slices using SV_RenderTargetArrayIndex works, and it does indeed alleviate the performance bottleneck, because I only have to change the render target once. However, I ran into another issue with using the texture array this way. Take a look at the attached screenshot. In the scene view on the left, you can see projectors painting the flat surface. Each camera renders its depth into the texture array through the command buffer and a custom shader, not through the built-in Unity camera depth. The projections should be occluded by the capsules, but only the rightmost projector is being occluded (it’s also the last one in the texture array).
The quads underneath the projections show a debug output of the texture array, sampled the same way as in the shader that does the painting, using UNITY_SAMPLE_TEX2DARRAY(). As you can see, only the last slice has any information stored in it, and I see the same thing when inspecting the texture array in RenderDoc. However, when I CopyTexture() the slices of the texture array into separate textures, the slices are filled in! The three inspector windows show the three copied textures. The copying is done in the same command buffer as the actual rendering of the slices (see below).
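The copy step, for reference (same command buffer, recorded right after the draws; `debugTextures` are ordinary 2D render textures I created with matching size and format):

```csharp
// Copy each array slice into its own 2D texture for inspection.
for (int i = 0; i < cameras.Length; i++)
    cmd.CopyTexture(rtArray, i, debugTextures[i], 0);
```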
Any explanation for this inconsistent behaviour, @joelv? Am I still missing something? Is there perhaps something wrong with the SetTexture call when passing an array?
Great that it worked out in the end. Yeah, that missing -1 should really be in the docs.
I don’t really know this code and its expected behaviour too well, but what you describe sounds like a bug. Please report it and we’ll look into it.
It was my bad: I had a ClearRenderTarget call between every slice instead of a single clear for all slices. It’s working now! That’s an 18 ms difference on the GPU, going from 30 FPS to 60. Thanks again for helping out!
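In case anyone hits the same thing: with the whole array bound via depthSlice = -1, ClearRenderTarget clears every slice, so clearing inside the per-camera loop wipes the slices that were already drawn. Corrected order, roughly:

```csharp
// Bind the whole array once, clear once, then record all per-camera draws.
cmd.SetRenderTarget(rtArray, 0, CubemapFace.Unknown, -1);
cmd.ClearRenderTarget(true, true, Color.clear);

var props = new MaterialPropertyBlock();
for (int i = 0; i < cameras.Length; i++)
{
    props.SetInt("_SliceIndex", i);
    // ... record this camera's draws; the shader routes them to slice i ...
}
```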