Hello,
for a scientific application I would like to use 16-bit (per pixel) signed integer depth data as a displacement texture in a DX11 shader. The shaders I have work fine with standard 8-bit/channel textures, but as I understand it, Unity has no support for textures with higher bit depths per channel.
Unity’s TextureFormat does not offer formats any higher than 8 bits/channel.
RenderTextureFormat offers various formats with higher bit depths and floating point. I think it is somehow possible to write to RenderTextures using DX11, a ComputeShader and a ComputeBuffer, but I have not gone down that road yet because a) my knowledge of those types is very limited, b) they are scarcely documented, and c) although fast, I would like to avoid any extra data processing. One ComputeShader-based approach is this one: Encode/Decode floating point textures in Unity.
I am searching for a solution that is as fast as possible because I would like to update the displacement texture in real time, i.e. on every frame.
So what I tried is to encode the 16-bit signed int data (which is actually a signed short, -32,768 to 32,767) in an RGBA32 texture using this piece of code:
short val; // something between -32,768 and 32,767
int r = (val >> 8) + 128; // MSB: (val >> 8) is -128..127, so add 128 to map it to 0..255
int g = val & 0x00ff; // LSB: already in the range 0..255
Color col = new Color(r / 255f, g / 255f, 0, 0);
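
For completeness, here is a stripped-down sketch of the upload side, i.e. how the encoded values end up in the RGBA32 texture every frame (class, texture size and variable names are simplified placeholders, not my actual code):

using UnityEngine;

public class DepthUploader : MonoBehaviour
{
    public Material dispMaterial;   // material that uses the displacement shader
    Texture2D dispTex;

    void Start()
    {
        // RGBA32 = 8 bits per channel; R holds the biased MSB, G holds the LSB
        dispTex = new Texture2D(512, 512, TextureFormat.RGBA32, false);
        dispTex.filterMode = FilterMode.Point; // so the sampler does not blend encoded bytes
        dispMaterial.SetTexture("_DispTex", dispTex);
    }

    // called once per frame with the new 16-bit depth data
    public void Upload(short[] depth, int width, int height)
    {
        Color[] pixels = new Color[width * height];
        for (int i = 0; i < depth.Length; i++)
        {
            short val = depth[i];
            int r = (val >> 8) + 128; // biased MSB, 0..255
            int g = val & 0x00ff;     // LSB, 0..255
            pixels[i] = new Color(r / 255f, g / 255f, 0, 0);
        }
        dispTex.SetPixels(pixels);
        dispTex.Apply(false);
    }
}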
and I decode it in the shader using:
float4 col = tex2Dlod(_DispTex, coords);
float r = trunc(col.r * 255) - 128;  // recover the signed MSB
float g = col.g * 255;               // recover the LSB
float highResDisplacement = (r * 256 + g) / 32768.0;  // back to roughly [-1, 1)
v.vertex.xyz += v.normal * ((highResDisplacement - _DisplacementOffset) * _Displacement);
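
To convince myself that the integer math of the scheme is sound, I checked the round trip on the CPU with a sample value (illustration only):

// CPU-side sanity check of the encode/decode math
short val = -12345;                // 0xCFC7 as a 16-bit two's complement value
int r = (val >> 8) + 128;          // -49 + 128 = 79
int g = val & 0x00ff;              // 0xC7 = 199
int decoded = (r - 128) * 256 + g; // -49 * 256 + 199 = -12345
Debug.Log(decoded == val);         // true, so the integer math itself is fine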
This basically works, and it would even be possible to encode two pixels of source data into one pixel of the RGBA32 texture, but I get some errors in the result: when decoding, the MSB part (encoded in the R channel) sometimes comes out wrong. These might be rounding errors, but I cannot pin down the exact cause. It might be just a mathematical issue, or a difference between Mono's and DX11's float precision.
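
Just to illustrate the kind of rounding effect I suspect (hypothetical numbers, not actual GPU output): if col.r * 255 comes back even a hair below the exact integer, trunc() drops a whole step and the decoded MSB is off by one.

float ideal = 207.0f;           // what col.r * 255 should be for an encoded MSB byte of 207
float slightlyLow = 206.99998f; // what a tiny float precision error could turn it into
Debug.Log(Mathf.Floor(ideal) - 128);       // 79, the correct MSB
Debug.Log(Mathf.Floor(slightlyLow) - 128); // 78, MSB off by one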
Any help, any hints or comments on this issue are greatly appreciated.
Basically, my questions are:
What is the easiest and/or fastest way to use 16 or 32 bit/channel data in a DX11 shader?
Does anyone know how to reliably encode 16 bit signed int data in a texture format provided by Unity?
Kind regards,
claus