Sampling compressed render textures

I’m working with compute shaders and have a 3D render texture in RGBAFloat format, into which I’m currently bit-packing my data as eight 16-bit halfs. The texture is read into the shader as uint4s, and the required values are unpacked from the bits as needed.
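For reference, the unpacking step looks roughly like this (a sketch only; the texture name is made up, and I’m assuming the RGBAFloat texel arrives as a uint4 with two halfs packed into each 32-bit channel):

```hlsl
// Assumed: the 3D texture is bound so each texel is read as a uint4,
// with two 16-bit halfs packed into each 32-bit channel.
Texture3D<uint4> _PackedData;

void UnpackTexel(uint3 coord, out float4 lo, out float4 hi)
{
    uint4 raw = _PackedData.Load(int4(coord, 0));
    // f16tof32 converts the low 16 bits of a uint to a 32-bit float (SM5+)
    lo = float4(f16tof32(raw.x), f16tof32(raw.y),
                f16tof32(raw.z), f16tof32(raw.w));
    hi = float4(f16tof32(raw.x >> 16), f16tof32(raw.y >> 16),
                f16tof32(raw.z >> 16), f16tof32(raw.w >> 16));
}
```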

However, I would ideally still like to take advantage of the texture sampling unit to efficiently interpolate between values in the texture, as I suspect this would be faster than writing my own trilinear interpolation code. Any clues on this matter?
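For context, here is roughly what the manual fallback would involve: eight loads plus seven lerps per sample. This is just a sketch under my packing assumptions; `_PackedData` and the unpack helper are hypothetical names, and only the low half of each channel is decoded:

```hlsl
Texture3D<uint4> _PackedData;  // hypothetical packed texture

// Decode the low 16 bits of each channel into 32-bit floats (SM5+ intrinsic)
float4 UnpackLow(uint4 raw)
{
    return float4(f16tof32(raw.x), f16tof32(raw.y),
                  f16tof32(raw.z), f16tof32(raw.w));
}

// Manual trilinear filtering; p is in texel space
float4 TrilinearManual(float3 p)
{
    float3 q  = p - 0.5;
    int3   i0 = (int3)floor(q);
    float3 f  = frac(q);
    float4 c000 = UnpackLow(_PackedData.Load(int4(i0 + int3(0, 0, 0), 0)));
    float4 c100 = UnpackLow(_PackedData.Load(int4(i0 + int3(1, 0, 0), 0)));
    float4 c010 = UnpackLow(_PackedData.Load(int4(i0 + int3(0, 1, 0), 0)));
    float4 c110 = UnpackLow(_PackedData.Load(int4(i0 + int3(1, 1, 0), 0)));
    float4 c001 = UnpackLow(_PackedData.Load(int4(i0 + int3(0, 0, 1), 0)));
    float4 c101 = UnpackLow(_PackedData.Load(int4(i0 + int3(1, 0, 1), 0)));
    float4 c011 = UnpackLow(_PackedData.Load(int4(i0 + int3(0, 1, 1), 0)));
    float4 c111 = UnpackLow(_PackedData.Load(int4(i0 + int3(1, 1, 1), 0)));
    // Interpolate along x, then y, then z
    return lerp(lerp(lerp(c000, c100, f.x), lerp(c010, c110, f.x), f.y),
                lerp(lerp(c001, c101, f.x), lerp(c011, c111, f.x), f.y), f.z);
}
```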

Maybe I’m missing a way to simply read the data into the shader as 16-bit halfs?

Well, since you are working with 16-bit halfs, it seems to me the simple solution is to just use two 3D render textures in RGBAHalf format.
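To sketch what I mean (texture and sampler names are placeholders; note that in a compute shader you need SampleLevel with an explicit mip, since plain Sample requires derivatives):

```hlsl
// Two RGBAHalf 3D render textures; the hardware filters the 16-bit
// data and hands the shader 32-bit floats.
Texture3D<float4> _DataA;
Texture3D<float4> _DataB;
SamplerState linearClampSampler;

void SampleBoth(float3 uvw, out float4 a, out float4 b)
{
    a = _DataA.SampleLevel(linearClampSampler, uvw, 0);
    b = _DataB.SampleLevel(linearClampSampler, uvw, 0);
}
```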

A good suggestion, but I believe HLSL supports 16-bit types only as a compatibility tool. I may be wrong, but I think the values in memory would still be 32-bit, negating any compression (I have quite a lot of data). I’m happy to be proved wrong on this.

I am also keen to keep it in one texture. I could say this is to avoid overhead, but I think the overhead of having it in multiple textures is probably quite small. I think I’m just being neat at this point.

I think you are partially right on this. Inside a shader a half variable is actually 32 bits, but in a render texture this is not the case. So in terms of memory use you would be fine.

Interesting, I did just find some documentation on 16-bit formats which suggests 16-bit floats can be used as a valid format…

I could try something like this, but I don’t think the sampler would work with structs:

struct DataStruct
{
    min16float4 foo;
    min16float3 bar;
    min16float  blah;
};

// This won't work: texture template types must be scalars or vectors
// of up to four components, and the sampler can't filter a struct.
Texture3D<DataStruct> _Data;

I may well end up splitting my data across two textures.

Most desktop GPUs out there today do not support 16-bit floats and will always use 32-bit floating-point math internally for all in-shader floating-point numbers. This is true for texture samples as well, but the actual texture data is stored internally in whatever format the texture declares. You get the benefits of data compression for transferring and sampling the texture, but no benefit to computation.

Now, some modern GPUs do support 16-bit math internally. Vega and Turing both have support, and certain operations run at twice the rate of the equivalent 32-bit operations. There is also a cost to convert between 32-bit and 16-bit floats, and texture samples still always return 32-bit float values regardless of the texture format.