To access the pixels of a RenderTexture, you can create a Texture2D with a matching pixel format, make your RenderTexture the active render target, call ReadPixels to copy the data, and then call GetRawTextureData to access the pixels.
However, ReadPixels only works with a limited set of RenderTextureFormats.
This function works on RGBA32, ARGB32 and RGB24 texture formats, when the render target is of a similar format too (e.g. a usual 32- or 16-bit render texture). Reading from an HDR render target (ARGBFloat or ARGBHalf render texture formats) into HDR texture formats (RGBAFloat or RGBAHalf) is supported too.
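For a supported format, the readback pattern described above looks roughly like this (a minimal sketch; `rt` is an assumed, already-rendered RenderTexture in a compatible format):

```csharp
// Minimal readback sketch for a supported format (e.g. ARGB32 -> RGBA32).
// `rt` is assumed to be an existing RenderTexture that has been rendered to.
Texture2D tex = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);

RenderTexture previous = RenderTexture.active;
RenderTexture.active = rt;                                  // make rt the active render target
tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);  // GPU -> CPU copy
tex.Apply();
RenderTexture.active = previous;

byte[] raw = tex.GetRawTextureData();                       // 4 bytes per pixel for RGBA32
```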
If my RenderTexture isn’t one of these formats (e.g. RenderTextureFormat.RFloat), how can I access the raw pixel data on the CPU?
@burtles I’m afraid not. I ended up doing something incredibly inefficient (using an ARGBFloat render texture and copying the value to all four channels), as my use case was more about it working than performance or memory.
-
If performance / memory are an issue, there are some options I thought about:
-
In some cases you can work around the problem by having each shader invocation process 4 pixels instead of 1, packing your RFloat values into an ARGBFloat texture a quarter of the width. Whether this is efficient depends on the complexity of your shader, but there are scenarios where it would be almost as fast as doing it with RFloat.
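On the readback side, the four channels of each ARGBFloat pixel are simply four consecutive RFloat values, so unpacking could look like this (a sketch; `packedRT` is an assumed quarter-width RGBAFloat RenderTexture filled by the packing shader):

```csharp
// Sketch: read back a quarter-width RGBAFloat texture whose 4 channels each
// hold one of the original RFloat values.
Texture2D tex = new Texture2D(packedRT.width, packedRT.height, TextureFormat.RGBAFloat, false);
RenderTexture.active = packedRT;
tex.ReadPixels(new Rect(0, 0, packedRT.width, packedRT.height), 0, 0);
RenderTexture.active = null;

// Viewed as floats, the raw data is already the flat RFloat image at the
// original (4x wider) resolution: 4 floats per packed pixel.
NativeArray<float> values = tex.GetRawTextureData<float>();
```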
-
Instead of RFloat, use an integer format like ARGB32. Then, when writing it out, encode your float into the four colour components. It might be tricky to make it completely accurate, but it should be possible.
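One way to make that encoding exact (an assumption, not the only possible scheme) is to have the shader write the float’s raw IEEE-754 bits into the four bytes (e.g. via asuint and bit shifts) and reassemble them on the CPU:

```csharp
// Sketch: reassemble a float from the 4 bytes of an ARGB32 pixel, assuming
// the shader wrote the float's IEEE-754 bits into the channels, e.g.:
//   uint bits = asuint(v);
//   r = (bits >> 0) & 0xFF; g = (bits >> 8) & 0xFF; ... (each divided by 255.0)
float DecodePixel(Color32 c)
{
    uint bits = (uint)c.r | ((uint)c.g << 8) | ((uint)c.b << 16) | ((uint)c.a << 24);
    return BitConverter.ToSingle(BitConverter.GetBytes(bits), 0);
}
```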
-
Perhaps the reason Unity doesn’t support this is limitations on some platforms; if you need to solve it on a specific platform, it may be possible to use a native plug-in.
-
In my case I needed to access the pixels to save them to disk. Sometimes you can rework a design to avoid bringing the data back to the CPU at all; in many cases, even when you can do it, that copy is pretty inefficient.
-
The slightly frustrating thing is that Unity clearly can do it internally, as you can get a preview of an RFloat texture in the editor.
You can use a shader to write the texture into a ComputeBuffer and then call GetData on that buffer. Double device memory consumption is implied, of course. Note that under DX11 you can only do this trick with the following UAV formats:
R32_FLOAT
R32_UINT
R32_SINT
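The buffer readback could be sketched like this (the compute shader asset `copyShader`, its `CSMain` kernel, and the 8x8 thread-group size are all assumptions; the HLSL side would do something like `Result[id.y * Width + id.x] = Source[id.xy];`):

```csharp
// Sketch: copy an RFloat RenderTexture into a ComputeBuffer via an assumed
// compute shader, then read the buffer back on the CPU with GetData.
ComputeBuffer buffer = new ComputeBuffer(rt.width * rt.height, sizeof(float));
int kernel = copyShader.FindKernel("CSMain");        // `copyShader` is an assumed ComputeShader asset
copyShader.SetTexture(kernel, "Source", rt);
copyShader.SetBuffer(kernel, "Result", buffer);
copyShader.SetInt("Width", rt.width);
copyShader.Dispatch(kernel, rt.width / 8, rt.height / 8, 1); // assumes [numthreads(8,8,1)]

float[] pixels = new float[rt.width * rt.height];
buffer.GetData(pixels);                               // GPU -> CPU copy
buffer.Release();
```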
You can get the raw pixels of an RFloat Texture3D on the CPU, then convert the Texture3D to a RenderTexture, for example:
const int maxGridSideLength = 50;
float[] data = new float[maxGridSideLength * maxGridSideLength * maxGridSideLength]; // this is your raw data
//set texture3D
Texture3D texture = new Texture3D(maxGridSideLength, maxGridSideLength, maxGridSideLength, TextureFormat.RFloat, false);
texture.filterMode = FilterMode.Bilinear;
texture.SetPixelData(data, 0);
//convert texture3D to RenderTexture
RenderTexture renderTex = new RenderTexture(maxGridSideLength, maxGridSideLength, 0, RenderTextureFormat.RFloat);
renderTex.enableRandomWrite = true;
renderTex.dimension = UnityEngine.Rendering.TextureDimension.Tex3D;
renderTex.volumeDepth = maxGridSideLength;
renderTex.filterMode = FilterMode.Bilinear;
renderTex.wrapMode = TextureWrapMode.Repeat;
renderTex.useMipMap = false;
renderTex.Create();
Graphics.CopyTexture(texture, renderTex);
There may be a small problem, though: when multiple RenderTextures exist, only the last RenderTexture conversion succeeds. I don’t know why that is.