I want to store data in a texture to use in a custom shader. Specifically, I want eight 8-bit values per pixel. My plan is to use the RGBAHalf texture format (4 channels, 16 bits/channel) and pack two 8-bit values into each channel.
However, while debugging my implementation (editor window, C# only, no shaders/blits involved), I found that the green channel of the resulting texture always shows up as completely black in the Inspector’s texture preview for the asset I create, even when writing (what I understand as) 2^16 - 1 to that channel.
Relevant parts of the code:
using System.Collections.Generic;
using UnityEditor;
using UnityEngine;

Texture2D packedTexture = new Texture2D(width, height, TextureFormat.RGBAHalf, mipChain: false);

// Four ushort channel values per pixel, so reserve width * height * 4 entries.
List<ushort> packedData = new List<ushort>(width * height * 4);
for (int px = 0; px < width * height; px++)
{
    // customDataR/G/B/A are ushort, packed using
    // (byte lo, byte hi) => (ushort)(lo | (hi << 8))
    packedData.Add(customDataR);
    packedData.Add(ushort.MaxValue); // customDataG, forced to 2^16 - 1 for testing
    packedData.Add(customDataB);
    packedData.Add(customDataA);
}

packedTexture.SetPixelData(packedData.ToArray(), mipLevel: 0);
packedTexture.Apply();
AssetDatabase.CreateAsset(packedTexture, "Assets/TestTexture.asset");
After a bit more investigation, I now understand more. I figured out that 31744 is the largest value that still shows up, and that my texture uses the R16G16B16A16_SFloat GraphicsFormat. The leftmost bit is the sign bit, which (together with the assumption that negative values are clamped to 0) explains why values larger than 2^15 - 1 (32767) turn out black. The values from 31745 up to 32767 have the next 5 bits after the sign bit (the exponent) all set to 1 with a non-zero mantissa, which is the encoding for NaN; 31744 itself, with a zero mantissa, encodes +infinity, which is why it still shows up. So now I know why those values produce 0/black.
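To double-check that, here is a quick sketch decoding a few raw bit patterns with Mathf.HalfToFloat (assuming the usual half layout of 1 sign bit, 5 exponent bits, 10 mantissa bits):

// Decoding raw ushort bit patterns as 16-bit (half) floats:
Debug.Log(Mathf.HalfToFloat(0x3C00)); // 1.0
Debug.Log(Mathf.HalfToFloat(0x7BFF)); // 65504, the largest finite half (31743)
Debug.Log(Mathf.HalfToFloat(0x7C00)); // Infinity: 31744, exponent all ones, mantissa 0
Debug.Log(Mathf.HalfToFloat(0x7C01)); // NaN: exponent all ones, mantissa non-zero
Debug.Log(Mathf.HalfToFloat(0xFFFF)); // NaN with the sign bit set, i.e. my 2^16 - 1 test value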
Now, to see what other format I could get working: I had tried R16G16B16A16_UInt before, but that throws an error stating it’s not supported by the platform.
Yes, you can’t reinterpret a short as a 16-bit float and expect it to work.
Use Mathf.FloatToHalf to convert a 32-bit float to a 16-bit half.
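For example, a minimal sketch of writing a real [0, 1] value into a half channel instead of a raw bit pattern:

// Mathf.FloatToHalf returns the 16-bit half encoding as a ushort, which is
// what an RGBAHalf texture expects from SetPixelData.
ushort halfBits = Mathf.FloatToHalf(1.0f); // 0x3C00, shows up as full intensity
packedData.Add(halfBits);                  // instead of ushort.MaxValue, which is NaN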
In general, I find what you are trying to do a bit strange. If you want to store ints, use an integer texture format, not a floating-point format. You could try R32G32_UInt if R16G16B16A16_UInt is not supported, or alternatively two R8G8B8A8_UInt textures if your target platform doesn’t support 64-bit texture formats. Also, look into texture compression.
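If you’re unsure what a platform supports, you can also query that up front instead of waiting for the error; a rough sketch using SystemInfo.IsFormatSupported:

using UnityEngine;
using UnityEngine.Experimental.Rendering;

// Check candidate formats for sampling support on the current platform.
GraphicsFormat[] candidates =
{
    GraphicsFormat.R16G16B16A16_UInt,
    GraphicsFormat.R32G32_UInt,
    GraphicsFormat.R8G8B8A8_UInt,
};
foreach (GraphicsFormat format in candidates)
{
    bool ok = SystemInfo.IsFormatSupported(format, FormatUsage.Sample);
    Debug.Log(format + ": " + (ok ? "supported" : "not supported"));
}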
Thanks!
The current data types are a product of me manually storing data into 2 RGBA images in Photoshop, as a quick proof of concept to see whether what I’m trying to do with the data in a shader works at all. Hence, on the C# side I’m currently getting two sets of four bytes with [0, 255] color values from my two images and forcing them into a single texture. I picked RGBAHalf because of its wide platform support.
I wanted to look into packing more data into fewer textures because, aside from the 8 values per pixel I definitely need in my shader, there are a few more values that aren’t uniform across all fragments which I’ll also need to get into the shader (I’m not sure yet whether 4 more, i.e. one additional non-packed RGBA texture, will suffice). So I’m trying to cut down on the number of texture reads, as there might be more textures down the line. On top of that, the shader will be used as a second material on an object, so I’m a bit concerned about performance.
The values I’m trying to store are actually floats in the [0, 1] range and should work well with half the original precision.
So, to put it as generally as possible: what I’m after is passing 8+ per-fragment [0, 1] float values that don’t need high precision into my shader, while minimizing the performance hit and supporting as many platforms as possible (RG32 seems to not be supported on WebGL).
The answer to all this might well be to just use 2+ “normal” 32-bit RGBA textures, but I’m interested in packing texture data in general.
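For concreteness, a minimal sketch of that fallback (assuming width, height, and a hypothetical values array holding width * height * 8 floats in [0, 1]):

// Spread eight [0, 1] floats per pixel across two plain RGBA32 textures.
Texture2D texA = new Texture2D(width, height, TextureFormat.RGBA32, mipChain: false);
Texture2D texB = new Texture2D(width, height, TextureFormat.RGBA32, mipChain: false);
Color32[] pixelsA = new Color32[width * height];
Color32[] pixelsB = new Color32[width * height];
byte ToByte(float v) => (byte)Mathf.Round(Mathf.Clamp01(v) * 255f);
for (int px = 0; px < width * height; px++)
{
    int i = px * 8; // eight consecutive floats per pixel
    pixelsA[px] = new Color32(ToByte(values[i]), ToByte(values[i + 1]), ToByte(values[i + 2]), ToByte(values[i + 3]));
    pixelsB[px] = new Color32(ToByte(values[i + 4]), ToByte(values[i + 5]), ToByte(values[i + 6]), ToByte(values[i + 7]));
}
texA.SetPixels32(pixelsA); texA.Apply();
texB.SetPixels32(pixelsB); texB.Apply();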
I didn’t look into texture compression yet because I assumed it could “destroy” my data (and supporting multiple platforms with just one compression method also seems hard).
My main problem whenever I try to do packing like this is understanding (and finding information on) the different data types and how they behave in C# and HLSL. Are there any good resources for this out there?
For example: when I have a _UNorm texture, what data types should I use in HLSL, and how do their values translate to and from binary?
Thanks! I’ve come across that page before, and after experimenting some more I think I got it working. Though I still feel I’m missing something. Specifically, I guess this assumption of mine is wrong: that the actual bit values don’t change when switching data types, only their interpretation (by that assumption, it wouldn’t matter which data type my texture has, as long as it’s 16 bits/channel and I only use the texel value after unpacking, never directly).
For testing, I just wrote the number 0b1111000001111100 = 61564 (= 240 (0b11110000) and 124 (0b01111100) packed together) into one channel of my texture, which now uses the R16G16B16A16_UNorm format.
I’ve had to change my texture sampling to float4 data = tex2D(...); instead of uint4, and change the way I unpack values from it in the shader to:
float2 UnpackValues(float input)
{
    // Reinterpret the raw bits of the sampled float, then split off two bytes
    uint lo = asuint(input) & 0xff;
    uint hi = (asuint(input) >> 8) & 0xff;
    // Scale both bytes back to [0, 1]
    return float2(lo / 255.0, hi / 255.0);
}
…where before I had that input parameter as uint, and no asuint()s.
I don’t really understand why I had to make these changes. Does tex2D() somehow re-encode the sampled data based on the data type it is assigned to?
I’m still interested in the texture compression you mentioned @c0d3_m0nk3y , does that work in general with “data” textures? Though I think I would then have to do different compression formats for different platforms.
I’m not entirely sure I understand what you are asking, but I’ll give it a shot:
The data that you pass to SetPixelData isn’t re-encoded (except maybe with texture compression, but that is explicit); instead, the texel format specifies how the data is interpreted when you sample the texture. For example, when you use a UNorm format, the unsigned short 0 becomes 0.0f and 65535 becomes 1.0f. Or when you use an SRGB format, a non-linear transformation is applied to the value that is read from VRAM. There are also formats that just pass the value through, like int, uint, float and half. But for those you still have to use the same data type in both C# and HLSL (otherwise you’ll have to do a manual reinterpretation, with asuint for example).
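To illustrate the UNorm case with your test value (a sketch of the mapping in C# terms, not an actual API):

// A 16-bit UNorm channel maps its raw value linearly to [0, 1] when sampled.
ushort raw = 61564;                        // 240 and 124 packed together
float sampled = raw / 65535.0f;            // roughly what the shader receives from tex2D
// Inverting the mapping recovers the raw bits, and from there the two bytes:
ushort recovered = (ushort)Mathf.Round(sampled * 65535.0f);
byte lo = (byte)(recovered & 0xFF);        // 124
byte hi = (byte)((recovered >> 8) & 0xFF); // 240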
In D3D there are also shader resource views (SRVs), which specify how the value in VRAM is viewed/interpreted in the shader. So you could use an SRV with a UNorm format in one draw call and another SRV with an SNorm format in another. The data in VRAM wouldn’t change, but the values you get from tex2D would.
You might be able to change the interpretation in the shader itself by declaring the texture with an explicit template type (the Texture2D<uint4> syntax), but I’m not 100% sure that works in all cases.
It depends on the kind of data texture. Normal textures, for example, are no problem. But it is not lossless, and I think all compressed formats are SNorm/UNorm formats. So if you needed integers, or floats outside the 0-1 range, texture compression wouldn’t work.
You didn’t really say what kind of data you are trying to store, so it’s difficult to give advice.
Edit: correction, there is BC6H, which is not normalized. It was designed for HDR colors, but it might work for other half textures as well, as long as you are fine with the loss.
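If you want to experiment with it, something like this should compress an existing texture in the editor (untested sketch; BC6H is block-based and lossy, so exact bit patterns won’t survive):

// Editor-only: compress a half-float texture to BC6H in place.
EditorUtility.CompressTexture(packedTexture, TextureFormat.BC6H, TextureCompressionQuality.Normal);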
Thanks again! So that means when “manually reinterpreting” the data as in my UnpackValues() from my last post, it indeed wouldn’t matter if I was using e.g. an SFloat instead of a UNorm texture.
The data I want to store is all floats in [0, 1] so that works for compression (sorry for the initial confusion, that was because I tried it with data already encoded as colors).
I think I’ll go with uncompressed textures for the moment and focus on getting all the parts to work (creating the packed textures, making sure the unpacked data in the shader is correct) with actual, not-manually-created test inputs. I should even be able to lower the texture resolution; maybe then there won’t be much need for compression anymore. But it’s good to learn a bit more about it, and maybe I’ll revisit this.
Thanks for all your help!