Compressing texture from 32 bit to 8 bit results in less than 8 bit precision

In our game we want to store a bunch of prerendered static shadow maps. We render them at full 32 bit precision, convert the result to a regular RFloat texture, and compress it to 8 bit to save space with EditorUtility.CompressTexture(tex, TextureFormat.R8, 100);
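For reference, the conversion step looks roughly like this (simplified; rt and the readback details are stand-ins for our actual pipeline):

// rt is the 32 bit shadow map RenderTexture (illustrative, not our exact code).
var tex = new Texture2D(rt.width, rt.height, TextureFormat.RFloat, false, true); // linear
RenderTexture.active = rt;
tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
tex.Apply();
// Quantize down to a single 8 bit channel to save space.
EditorUtility.CompressTexture(tex, TextureFormat.R8, 100);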

This is where the problems arise: the resulting textures have clearly noticeable banding, and when inspected in Photoshop the bands are 3-4 values apart from each other. This is a Photoshop screenshot of the resulting texture at a 1:1 size ratio (no zoom).

[Attached image: upload_2021-6-3_2-43-5.png]

As you can see, there are visible horizontal bands 3-4 pixels in height. Using the eyedropper tool we can see their RGB values. This is the second-to-last band:

[Attached image: upload_2021-6-3_2-45-57.png]

And this is the band above it:

[Attached image: upload_2021-6-3_2-46-34.png]

As I said, there is a difference of 3-4 values between each band. Am I missing something? Is it supposed to be like this? A bug? This is Unity 2020.3.9. I would appreciate any help. Thanks.

I wonder if there are some sRGB conversions happening when you use CompressTexture() that shouldn't be. When you create the original RFloat Texture2D, are you making sure to set it to linear space? The sRGB setting doesn't do anything to floating point textures, but I wonder if calling CompressTexture() on it is causing something weird to happen.
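For what it's worth, this is the constructor flag I mean (just a sketch; width and height are placeholders):

// The fifth constructor argument is the linear flag.
// false = no mip chain, true = linear (no sRGB interpretation).
var tex = new Texture2D(width, height, TextureFormat.RFloat, false, true);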

You might also try saving out the RFloat texture to an EXR (it might need to be an RGBAFloat first) to make sure the banding isn't showing up in that already.
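Something like this (an untested sketch; the output path and copy step are just examples):

// EncodeToEXR wants an RGBA float/half format, so copy the R channel over first.
var dump = new Texture2D(tex.width, tex.height, TextureFormat.RGBAFloat, false, true);
dump.SetPixels(tex.GetPixels());
dump.Apply();
System.IO.File.WriteAllBytes("Assets/shadow_debug.exr",
    dump.EncodeToEXR(Texture2D.EXRFlags.OutputAsFloat));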

Thanks for the reply!

Yes, all the textures I create are set to linear. I tried switching to sRGB, but it didn't change anything at all. So yeah, they are most likely already forced to be linear.

Tried that as well. Outputting the original 32 bit texture or compressing to 16 bit doesn't show any banding.

Also tried avoiding CompressTexture altogether by creating a separate R8 texture and calling GetPixels/SetPixels to copy the pixels from full precision to R8; got the same banding results. That attempt looked roughly like this (from memory, not the exact code):
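var r8 = new Texture2D(tex.width, tex.height, TextureFormat.R8, false, true);
r8.SetPixels(tex.GetPixels()); // Unity quantizes the floats to 8 bit here
r8.Apply();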

I use GetRawTextureData() / LoadRawTextureData() and a byte array to manually convert the texture data.


Yup, that worked. Huge thanks!

For anyone wondering, here's the main code (tex is the full precision RFloat Texture2D):

// Needs using UnityEngine; and using Unity.Collections;
// R8 target, no mips, linear = true so no sRGB conversion is applied.
var finalTex = new Texture2D(tex.width, tex.height, TextureFormat.R8, false, true);
// View into the RFloat texture's memory; don't Dispose this, Unity owns it.
var fromData = tex.GetRawTextureData<float>();
var toData = new NativeArray<byte>(tex.width * tex.height, Allocator.Temp);
for (int y = 0; y < tex.height; y++)
{
    for (int x = 0; x < tex.width; x++)
    {
        int index = y * tex.width + x;
        // Straight linear quantization to 8 bit, rounded to the nearest step.
        toData[index] = (byte)(Mathf.Clamp01(fromData[index]) * 255f + 0.5f);
    }
}
finalTex.LoadRawTextureData(toData);
finalTex.Apply();
toData.Dispose();
// Destroy the source only after we're done reading fromData, which it backs.
DestroyImmediate(tex);

Interestingly enough, the preview of the texture in the Inspector still shows banding, but the shadows are now applied correctly using the full 8 bit precision, whereas previously it was causing massive self-shadowing and shadow acne artifacts.

[Attached image: upload_2021-6-3_18-22-50.png]

That's definitely being caused by an 8 bit precision linear → sRGB conversion in the preview. The smallest non-zero RGB value shown in that preview of the R8 texture is [13, 0, 0], which is what you get if you convert 0.00392 (1/255) from linear to sRGB.
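If you want to check that number yourself, here's the standard linear → sRGB transfer function (a quick sketch, not Unity's internal code):

static float LinearToSrgb(float x)
{
    // IEC 61966-2-1: linear segment below 0.0031308, power curve above it.
    return x <= 0.0031308f
        ? 12.92f * x
        : 1.055f * Mathf.Pow(x, 1f / 2.4f) - 0.055f;
}

// LinearToSrgb(1f / 255f) ≈ 0.0498, and 0.0498 * 255 ≈ 12.7, which rounds to 13.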

It makes some sense for the preview, since gamma corrected linear values tend to "look better". But it still doesn't make a ton of sense for CompressTexture() to do it, which makes me think it's a bug.
