You need to use a RenderTexture with its dimension parameter set to Tex2DArray, and in your compute shader declare it as an RWTexture2DArray.
Then read the data back to the CPU with AsyncGPUReadback. You can then copy that data into the separate slices of a Texture2DArray (or individual Texture2Ds) with SetPixels/GetPixels.
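A minimal sketch of that workflow, assuming a compute shader whose kernel `CSMain` is declared with `[numthreads(8,8,1)]` and writes to `RWTexture2DArray<float4> _Result` (the kernel, texture size, and slice count here are illustrative, not from the thread):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class ArrayReadbackSketch : MonoBehaviour
{
    public ComputeShader shader; // assumed to contain kernel "CSMain" writing RWTexture2DArray<float4> _Result

    void Start()
    {
        // RenderTexture configured as a 2D array with random-write enabled.
        var rt = new RenderTexture(256, 256, 0, RenderTextureFormat.ARGB32)
        {
            dimension = TextureDimension.Tex2DArray,
            volumeDepth = 4,          // number of slices
            enableRandomWrite = true
        };
        rt.Create();

        int kernel = shader.FindKernel("CSMain");
        shader.SetTexture(kernel, "_Result", rt);
        // Thread-group counts assume [numthreads(8,8,1)] and one z-group per slice.
        shader.Dispatch(kernel, 256 / 8, 256 / 8, rt.volumeDepth);

        // Asynchronous readback; the callback fires some frames later.
        AsyncGPUReadback.Request(rt, 0, request =>
        {
            if (request.hasError) return;
            for (int slice = 0; slice < rt.volumeDepth; slice++)
            {
                var data = request.GetData<Color32>(slice);
                // e.g. copy into a Texture2D slice via SetPixels32/Apply here.
            }
        });
    }
}
```

Note the readback is inherently asynchronous: the data is only valid once the request completes (you can force completion with `request.WaitForCompletion()`, at the cost of a GPU stall).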
Check this good thread for all the critical information you need:
I don’t know if it’s possible to read that array back without AsyncGPUReadback; Texture2DArray doesn’t have ReadPixels, as you probably noticed, and I don’t know of any other way.
Yeah, I’ve seen that thread. I wanted something I could read synchronously, since it’s part of code that does other things too. I’m already using a Texture2DArray as input without needing a RenderTexture with the dimension parameter set (that works fine).
Currently I’m using a ComputeBuffer for this scenario, but I’d like to switch to RWTexture2DArray.