I am writing a mass rendering system in Unity's built-in render pipeline that uses deferred texturing (research source: the Horizon Zero Dawn graphics presentation). To do this I pack multiple normalized floating-point values into uints with bitwise operations - tangent space, UVs, group shadow data, etc. - and write them to a RenderTexture, so I can perform all the PBR work and texture sampling in a screen-space laydown pass after all the depth/alpha testing has been done.
The packed values show some quirks when sampled back in the laydown shader. To avoid getting into my encoding methods, I've created these minimal cases to demonstrate the problem:
This is the return value of my fragment shader when writing to the visibility RenderTexture. For the y channel I've put in a hex constant whose rightmost (least significant) bit is 1.
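Simplified to its essentials, it looks like this (the hex constant and names are placeholders, and I'm showing a signed-integer render target here):

```hlsl
// Placeholder reconstruction - the real encoding differs, but the shape is the same.
// Assumes the visibility RT uses a 32-bit signed-integer format.
int4 frag (v2f i) : SV_Target
{
    uint encoded = 0x2ED51C01u; // arbitrary test value; the rightmost bit is 1
    return int4(0, asint(encoded), 0, 0);
}
```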
This is some code from my laydown shader, which reads pixels from the visibility texture and writes to the same fragments on the camera target. encodedTangentSpace here is a direct 2D sample of the visibility texture's y channel.
I then output flipBit directly to the red channel, so objects should render red if the rightmost bit of the encoded data is 1 and black if it is 0.
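Simplified, the laydown side looks like this (again with placeholder names, and assuming the visibility texture is bound as an integer texture and fetched with Load):

```hlsl
Texture2D<int4> _VisibilityTex; // placeholder binding for the visibility RT

float4 frag (v2f i) : SV_Target
{
    // Fetch the packed texel for this fragment and reinterpret the
    // y channel back into raw uint bits.
    int4 packedTexel = _VisibilityTex.Load(int3(i.vertex.xy, 0));
    uint encodedTangentSpace = asuint(packedTexel.y);

    // Isolate the rightmost (least significant) bit.
    uint flipBit = encodedTangentSpace & 1u;

    // Red if that bit is 1, black if it is 0.
    return float4(flipBit, 0.0, 0.0, 1.0);
}
```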
The output for this case is black, which is incorrect, since the rightmost bit of the value I wrote is 1.
After some experimenting I found that the output is only correct when the leftmost (most significant) byte of the encoded value is all zeros.
Obviously I've verified that my bitwise logic is correct by testing all of these cases with hard-coded values in the laydown shader, without any input from the texture. The texture also definitely has a consistent byte width (4 bytes), since I can read distinct values from all four bytes.
Here is the code for declaring my visibility texture:
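Boiled down, it's equivalent to this (the sizes and format enum are stand-ins for this post; the key point is a 32-bit-per-channel integer format with no filtering):

```csharp
// Placeholder declaration - dimensions and format stand in for my real ones.
var visibilityRT = new RenderTexture(Screen.width, Screen.height, 0,
                                     RenderTextureFormat.ARGBInt)
{
    filterMode = FilterMode.Point, // integer data must never be filtered
    useMipMap  = false
};
visibilityRT.Create();
```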
My outputs are converted to and from signed values with asint()/asuint(), to try and prevent HLSL from doing any numeric casting or truncating.
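In other words, the round trip is pure bit reinterpretation (values illustrative):

```hlsl
uint bits     = 0x2ED51C01u;    // packed value (illustrative)
int  stored   = asint(bits);    // reinterpret for the signed render target
uint readBack = asuint(stored); // reinterpret after reading back
// readBack == bits for every possible bit pattern - no numeric conversion.
```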
Have I made a mistake somewhere? Do Unity's RenderTextures do some sort of normalizing or flipping behind the scenes? Or do I need to configure my sampler differently? Thanks.