I’ve been working through a series of issues trying to render a mesh to an integer RT through a shader. I’ve been getting closer to the target result, and I’m now left with something that should work fine, apart from a format issue.
I need to draw a mesh to an R32G32UINT RT and then read that RT in another shader. Something I initially missed when debugging is that it’s not actually rendering to this format: it’s being rendered to an R16G16B16A16_TYPELESS target and then blitted to an R32_G32_INT by Unity in Camera::ImageEffects. I need the RT to receive the exact shader output, since it’s a packed series of bits used in later rendering and the data has to be bit-exact.
_mask = new RenderTexture(_screenWidth, _screenHeight, 0, RenderTextureFormat.RGInt);
The RT creation is straightforward; just an RGInt with no depth buffer. Is there any obvious reason this would not natively render to the target format? (Rendering is done by manually rendering a camera with this RT as its targetTexture.)
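For completeness, the manual render itself is nothing special; roughly this (just a sketch, and _maskCamera is a placeholder name, not my actual field):

_mask.Create();                      // the RGInt RT created above
_maskCamera.targetTexture = _mask;   // this camera renders straight into the integer RT
_maskCamera.Render();                // manual render each frame; no image effects are intended on this camera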
Oh, and to clarify: yes, it’s a supported format on my system; I’m only targeting SM4+ GPUs.
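By supported I mean in the SystemInfo sense; a check along these lines (sketch only) reports RGInt as available on my hardware:

if (!SystemInfo.SupportsRenderTextureFormat(RenderTextureFormat.RGInt))
    Debug.LogError("RGInt render textures are not supported on this device."); // not hit here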
The frag, just for reference:
int4 frag (v2f i) : COLOR // 5 depth levels max, each a 12-bit packet: an 8-bit alpha value followed by a 4-bit ID
{
    if ((i.metaX & 1) == 0)
        discard;

    int depth = ((i.metaX & 14) >> 1) * 12; // depth level from 0b00001110, 12 bits per level
    float a = tex2D(_MainTex, i.uv0).a * i.colour.a;

    // 8-bit alpha in bits 0-7, ID (0b11110000) shifted up to 0b00001111_00000000
    uint seq = uint(clamp(a * 255, 0, 255)) | ((i.metaX & 240) << 4);

    // Place the packet at bit offset 'depth' across the 64-bit r/g pair.
    // HLSL masks shift counts to the low five bits, so shifts of 32 or more need an explicit guard.
    uint r = depth < 32 ? (seq << depth) : 0;
    uint g = depth >= 32 ? (seq << (depth - 32))
                         : (depth > 20 ? (seq >> (32 - depth)) : 0); // packets straddling bit 32 spill their high bits into g

    r = 255; // testing: full alpha at ID 0, depth 0
    return int4(asint(r), asint(g), 0, 2147483647);
}
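And for context on the read side mentioned above, the consuming shader loads it as an integer texture rather than sampling it; something along these lines (a sketch only; _Mask and the level-0 unpack are illustrative, not my actual code):

Texture2D<int2> _Mask; // the RGInt RT bound for reading

uint2 LoadMask(uint2 pixel)
{
    return asuint(_Mask.Load(int3(pixel, 0))); // integer load, no filtering
}

void UnpackLevel0(uint2 mask, out uint alpha8, out uint id4)
{
    alpha8 = mask.x & 0xFF;      // 8-bit alpha at bits 0-7
    id4 = (mask.x >> 8) & 0xF;   // 4-bit ID at bits 8-11
}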