Integer RenderTexture Format Issue

I’ve had multiple issues trying to render a mesh to an integer RT through a shader, but I’ve been getting closer to the target result and I’m now left with something which should work fine, other than a format issue.

I need to draw a mesh to an R32G32_UINT RT, and then read that RT in another shader. Something I initially missed when debugging was the fact that it's not actually rendering to this format; it's being rendered to an R16G16B16A16_TYPELESS and then blitted to an R32_G32_INT by Unity in Camera::ImageEffects. I need the RT to receive the exact shader output, as this is a packed series of bits to be used in later rendering and the data has to be bit-exact.

_mask = new RenderTexture(_screenWidth, _screenHeight, 0, RenderTextureFormat.RGInt);

The RT creation is straightforward; just an RGInt with no depth buffer. Is there any obvious reason that this would not natively render to the target format (rendering is done by manually rendering a camera with this RT as its targetTexture)?

Oh, and to clarify: yes, it's a supported format on my system; I'm only targeting SM4+ GPUs.

The frag, just for reference:

int4 frag (v2f i) : COLOR //5 depth levels max, of an 8-bit alpha value followed by a 4-bit ID
{
    if((i.metaX & 1) == 0)
        discard;
    int depth = ((i.metaX & 14) >> 1) * 12; //0b00001110
    float a = tex2D(_MainTex, i.uv0).a * i.colour.a;
    uint seq = uint(clamp(a * 255, 0, 255)) | ((i.metaX & 240) << 4); //ID @ 0b11110000 shift to 0b00001111_00000000
    uint r = seq << depth;
    depth -= 32;
    int shiftTest = step(0, depth); //1 if the packed bits start in the upper 32-bit word
    uint g = (seq << (depth * shiftTest)) | (seq >> (abs(depth) * (1 - shiftTest))); //either shift seq into g directly, or carry the bits that overflow out of r
    r = 255; //testing full alpha at ID, depth 0, 0
    return int4(asint(r), asint(g), 0, 2147483647);
}

So in a floating-point format, it can be rendered natively. There is still a blit phase, which I hadn't noticed, but it does at least render directly into the target format. For now I'll try reinterpreting the data's bit patterns as floats and go from there; in theory this should be fine, but surely this isn't intended behaviour (again, this is a supported render target on the current platform).
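
For reference, this is roughly what I mean; a minimal sketch with placeholder names and values, not my actual shaders. The caveat is that filtering, sRGB conversion and blending all have to be off, and bit patterns that happen to alias NaN or denormal floats may still get altered on the way through.

struct v2f_min
{
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
};

sampler2D _PackedTex; //the float RT written by the first pass, point filtered

//construction pass: reinterpret the packed uints as floats so a 32-bit float RT stores the raw bits
float4 fragPack (v2f_min i) : COLOR
{
    uint r = 0x000000FF; //packed bits (placeholder)
    uint g = 0x0000F000; //packed bits (placeholder)
    return float4(asfloat(r), asfloat(g), 0, 0);
}

//read pass: pull the same bit patterns back out as uints
float4 fragRead (v2f_min i) : COLOR
{
    float4 smpl = tex2D(_PackedTex, i.uv);
    uint r = asuint(smpl.r);
    uint g = asuint(smpl.g);
    //...decode r and g here...
    return float4(0, 0, 0, 1);
}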

It seems there is some issue with outputting the full 32 bits; even with explicitly unclamped formats the data was being stomped on output. I've got a working solution implemented now; just putting this here in case anyone runs into a similar issue:

So I ended up binding to an RGBAUShort (R16G16B16A16_UINT), and outputting a uint4 from the construction shader. This writes the data as expected. To read it back out: if you don't need sampling (you can't sample integer textures anyway), you can load the pixel data directly through a Texture2D::Load call, or, as I'm doing to make sure the sample takes the correct width, read it with:

half4 smpl = tex2D(_tex, coord);
uint r = asuint(smpl.r);

To reinterpret the 16 bits as a uint. This way I'm able to do precise bitwise encoding in one shader and read it back out in another, with up to 64 bits per pixel.
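
For completeness, here's a rough outline of both sides of that setup; the names are placeholders and the actual packing is elided, so treat it as a sketch rather than my exact shaders. The 64 packed bits get split into four 16-bit halves for the uint4 output, and the Load route mentioned above stitches the words back together:

struct v2f_read
{
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
};

//construction pass: write raw uints straight into the RGBAUShort (R16G16B16A16_UINT) target
uint4 fragConstruct (v2f_read i) : SV_Target
{
    uint lo = 0x000000FF; //low 32 packed bits (placeholder)
    uint hi = 0x0000F000; //high 32 packed bits (placeholder)
    return uint4(lo & 0xFFFF, lo >> 16, hi & 0xFFFF, hi >> 16);
}

Texture2D<uint4> _MaskTex; //the integer RT, bound for the reading pass

//read pass: Load avoids filtering entirely, since integer textures can't be sampled
float4 fragDecode (v2f_read i) : SV_Target
{
    uint4 px = _MaskTex.Load(int3(i.pos.xy, 0));
    uint lo = px.x | (px.y << 16);
    uint hi = px.z | (px.w << 16);
    //...unpack lo and hi here...
    return float4(0, 0, 0, 1);
}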


Have you been able to transfer this over to system memory?