Clean RGB channel split for compressed texture

Hi everyone,
I am trying to split the RGB channels out of a single texture which contains RGB data. It works just fine as long as the texture is not compressed, but once compression is used I cannot figure out a way to split them cleanly. The texture has no alpha.

My guess is that this “color bleeding” is caused by the compression used. Maybe someone can confirm my suspicion? I have a solution that adds alpha, but it feels a bit clumsy (all graphs shown below). I’d like to reserve alpha for transparency instead of using it to fix the color bleeding issue.

Unity 2019.4.21f1, ShaderGraph 7.5.3, URP

My first attempt with a compressed texture (see the artefacts?).

Now here is the result with an uncompressed texture (this is what I want but I want the texture to be compressed).

My solution with alpha (maybe there is a better way without alpha?). Texture is PNG, matte: black. Update: this does not work either (see below).

Update: just realised, the alpha approach does not work either :-/
I might add that I am truly annoyed by the fact that changing texture import settings does not refresh the shader graph. That tricked me.

Thank you.

That’s how GPU texture compression works. In most formats the image is encoded in blocks, where each block stores only two color values and each pixel in the block uses a value interpolated between the two. There are many variations, but that’s the overall way they work.

Thanks, I guess I’ll have to use separate textures then.

The thing that puzzled me is that green values exist within a block even when there are no green pixels nearby. It even spits out green pixels when there is not a single green pixel in the whole image. I assume it’s due to the implementation of the compression algorithm used.

Understand that for any lossy compression texture format there is no “one perfect method” to compress any image. Lossy compression is throwing away some of the data, and it’s up to the program that creates the compressed texture to decide what data to throw away.

As a very simple example, many of the GPU-specific compressed texture formats use a color palette with a fixed number of colors per 4x4 block of pixels. If every pixel in a 4x4 block is a rainbow of different colors, something like DXT1 has to basically “randomly” pick just two of those colors and say the rest of the colors are one of those two. DXT1 actually uses a palette of 4 colors, but you can only specify two of them; the other two are 1/3 & 2/3 blends of the first two. So if the first two colors are red and green, the second two are different shades of mustard yellow.

Now it’s not really random, not usually. The program will have some code that tries to pick the dominant colors based on brightness or saturation, and may try to make the middle colors as useful as possible at the cost of the accuracy of the main two colors. And that’s where things get a little weird. A common way of doing it is to convert the color values from basic RGB to some other color space, like HSV, LAB, or several others, and to pick values based on which ones will be the most apparent to the viewer. High-saturation and high-brightness colors like I mentioned are often the most obvious. For the rest of the color values it’s more important that they’re close to the correct perceived brightness than to the correct color.

The other wrench in all of this is that those two primary colors are only stored at 16-bit precision, with 5 bits each for red and blue, and 6 for green. So to perfectly match some colors you may have to set two primary colors that don’t appear in the block at all, so that one of the blended color values is a match, or at least closer.
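
To make the palette idea concrete, here is a rough HLSL-style sketch of how a BC1/DXT1 block’s four palette colors come from its two stored RGB565 endpoints (illustrative only; this all happens in hardware, the opaque 4-color mode is assumed, and the function names are made up):

// Illustrative sketch of BC1/DXT1 palette reconstruction (opaque 4-color mode).
float3 DecodeRGB565(uint c)
{
    // 5 bits red, 6 bits green, 5 bits blue
    float r = ((c >> 11) & 31) / 31.0;
    float g = ((c >> 5)  & 63) / 63.0;
    float b = ( c        & 31) / 31.0;
    return float3(r, g, b);
}

void BuildBC1Palette(uint endpoint0, uint endpoint1, out float3 palette[4])
{
    float3 c0 = DecodeRGB565(endpoint0);
    float3 c1 = DecodeRGB565(endpoint1);
    palette[0] = c0;
    palette[1] = c1;
    palette[2] = lerp(c0, c1, 1.0 / 3.0); // 2/3 of c0 + 1/3 of c1
    palette[3] = lerp(c0, c1, 2.0 / 3.0); // 1/3 of c0 + 2/3 of c1
    // Each of the 16 pixels in the 4x4 block stores a 2-bit index into this palette.
}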

What happens when you’re dealing with very dark color values is that the saturation gets very, very low. Perception-wise humans are bad at seeing color in dark areas, so the color-picking code for the compressor Unity is using is probably putting more importance on matching the perceived brightness than the actual color, so the color value drifts away from perfectly saturated RGB values to some mix of all 3. And thus you get the weird fringing you see.

A hacky workaround would be to add a small bias: multiply your color values by 1.05, subtract 0.05, and clamp between 0.0 and 1.0 before doing the ceiling, for example.
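
Roughly, as code (a sketch of the hack above using the example values; the function name is made up):

// Small compression-bleed values get pushed down to 0 so a following
// ceil() no longer turns them into a full 1.0.
float3 BiasedMask(float3 c)
{
    return saturate(c * 1.05 - 0.05); // multiply, subtract, clamp to 0..1
}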


Thank you very much for taking the time to write such a detailed explanation. Compressing in another space than RGB and taking gamma into account seems obvious (now that you have spelled it out for me). I am familiar with GIF palettes and JPEG compression, so fuzziness in the resulting image is not what surprised me.

I think my mistake was to assume that RGB channel splitting is such a common thing to do (in shaders) that compression algorithms would surely retain that property (no color bleed across the RGB channels). Which, as you have shown, they don’t, and for good reasons.

Thank you :)

I’ve seen some compression tools have options for this kind of stuff. Normal maps for example don’t care about color perception, and there are sometimes normal-map-specific compression paths, or code that tries to detect the input being a normal map. Though most don’t seem to bother anymore. Generally, if you’re storing clean data, you probably don’t want a DXT1 texture anyway.


That gave me an idea, which in the end did not really work out. I’ll describe my approach so others don’t make the same mistake.

Fail #1
My very first try was to set the texture import settings to “normal map”, hoping Unity would treat it differently in terms of the chosen compression algorithm. Maybe it does, but the bleeding still persisted.

Fail #2 (apparent solution)
Here is the apparent solution using Type “Normal” in the “Sample Texture 2D” node:
[Image: NormalMap.png]

I tried to find out how the behaviour of the “Normal” type in “Sample Texture 2D” is defined.
The docs describe it as equivalent to this code:

float4 _SampleTexture2D_RGBA = SAMPLE_TEXTURE2D(Texture, Sampler, UV);
_SampleTexture2D_RGBA.rgb = UnpackNormalRGorAG(_SampleTexture2D_RGBA);
float _SampleTexture2D_R = _SampleTexture2D_RGBA.r;
float _SampleTexture2D_G = _SampleTexture2D_RGBA.g;
float _SampleTexture2D_B = _SampleTexture2D_RGBA.b;
float _SampleTexture2D_A = _SampleTexture2D_RGBA.a;

Source: https://docs.unity3d.com/Packages/com.unity.shadergraph@12.0/manual/Sample-Texture-2D-Node.html

The key part being “UnpackNormalRGorAG”, which I searched for in the built-in shaders (https://unity3d.com/get-unity/download/archive) and the Shader Graph package sources (no results). But I found “UnpackNormalmapRGorAG” (note the extra “map”) in UnityCG.cginc.

// Unpack normal as DXT5nm (1, y, 1, x) or BC5 (x, y, 0, 1)
// Note neutral texture like "bump" is (0, 0, 1, 1) to work with both plain RGB normal and DXT5nm/BC5
fixed3 UnpackNormalmapRGorAG(fixed4 packednormal)
{
    // This do the trick
   packednormal.x *= packednormal.w;

    fixed3 normal;
    normal.xy = packednormal.xy * 2 - 1; // this is the key part: transform from 0..1 to -1..1
    normal.z = sqrt(1 - saturate(dot(normal.xy, normal.xy)));
    return normal;
}
inline fixed3 UnpackNormal(fixed4 packednormal)
{
#if defined(UNITY_NO_DXT5nm)
    return packednormal.xyz * 2 - 1; // transform from 0..1 to -1..1
#else
    return UnpackNormalmapRGorAG(packednormal);
#endif
}

Now I understand why it seems to eliminate the issue. Surprise: it does not. In my shader I simply interpret the normal map channel data as only being positive, but the sampling actually remaps it from 0…1 to -1…1 (return packednormal.xyz * 2 - 1;). Therefore the low-luminosity parts get removed, because anything below 0.5 (normalized color) is thrown away. Quite a brute-force application of bgolus’ suggested “add a small bias” hack.
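
Spelled out as code (a sketch only, using the plain-RGB unpack path; the function and variable names are made up):

// Why sampling the mask as "Normal" only *looked* like a fix.
// rawSample is the 0..1 value stored in the texture.
float3 MaskAfterNormalUnpack(float3 rawSample)
{
    float3 unpacked = rawSample * 2.0 - 1.0; // the unpack remaps 0..1 to -1..1
    return saturate(unpacked);               // clamped back to 0..1: everything that
                                             // was below 0.5 in the texture is now 0
}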

Here is the visual proof of this assumption:
[Image: NormalMapRemap.png]

From what I understand, Unity chooses the compression algorithm for me based on the target device. Thus even if it works now on my test PC (and mobile device), it is not guaranteed to work everywhere. Also, throwing away half of my already compressed data is … not ideal.

I would be willing to sacrifice some (low-luminosity) parts of my texture data, but I am just too unsure whether or not this fix is reliable across many devices. If anyone knows, please comment below; I would be interested. I think it comes down to how big the maximum bleed error is (0.01, 0.1, 0.3, …). Is there a common limit to this error across all algorithms? Say, if I ignore the 0.0 to 0.1 range, will I be safe no matter the algorithm?

Here is what I ended up building: a graph which allows for a default 10% color bleed (configurable via a BleedingThreshold value).

[Image: Threshold.png]
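
In code, the idea boils down to something like this (a sketch only; BleedingThreshold is the property from my graph, default 0.1, and the exact node arrangement may differ):

// Anything at or below the threshold is treated as compression bleed and zeroed,
// so a following CEIL no longer picks it up.
float3 ApplyBleedThreshold(float3 mask, float bleedingThreshold)
{
    return saturate(mask - bleedingThreshold);
}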

I hope someone also wondering about these things will find it useful.

Edit: updated my text after thinking about it a bit.

Shader Graph isn’t part of the built-in shader code and explicitly avoids using any of the code from it. Though both the SRPs and the built-in pipeline use the same math, if not exactly the same code, for unpacking normal maps … as do nearly all real-time game engines at this point, as that’s the industry-standard technique.

Also that part of the documentation is just wrong, as they renamed the function being used and didn’t update it.

And while I said some compressors have special modes for compressing normal maps, Unity does not. It does do special packing for certain platforms and texture formats, but it assumes the data held in the texture is a normalized vector with a positive Z value, which is what a tangent space normal map always has. If your texture is not a normal map, you should not set it to be one.

Really, I think the correct solution for you is to use an uncompressed texture at half the resolution. That “compresses” it nearly as much as using DXT1 in terms of space savings, but avoids the color weirdness. Some other game engines (Unreal, CryEngine) have in the past used that as a built-in option for compressing normal maps, because it can look better than DXT1- or DXT5-compressed normals. But today’s BC5 and BC7 formats make that mostly moot.

Which might be the other option for you. BC5 is a normal-map-specific format, but BC7 might work better for your use case. I’ve not experimented with BC7 enough to know if it has the same hue creep as DXT1 & DXT5 do.


Thank you again for your answers and the detailed info (great stuff).
I really appreciate the effort and your patience :slight_smile:

I will look into those formats, though it is something I wanted to avoid because I am not very experienced with them. That’s why I have been trusting Unity to pick reasonable defaults for me.

Ah yes. I just saw some CG includes and assumed it had to come from somewhere. In the process I completely forgot that I am on URP.

I think it might be useful to explain what I am trying to achieve. The reason I am a little reluctant to use separate low-res textures is that I am actually aiming for spatial resolution rather than color depth.

I want to use this to apply some drawn patterns onto existing textures and be able to colorize those in many ways. Naturally I want the patterns to be as clear as possible (spatial resolution). I am also thinking about interpreting each channel as an SDF and then using and colorizing that (though I am okay with my current solution). To be honest, I would be okay with throwing away most of the color depth in exchange for more spatial resolution.

Here is what I am currently doing:

I have two textures: one low-res texture for shading, and one high-res texture for colors and patterns (the channel-mixing stuff). With the current approach I can handle 4 colors within one texture (background, R, G, B) and apply some “shading” with the second low-res texture (and use alpha too).

[Image: Recoloring.png]
Maybe there is a tried and true method of doing this which I am not aware of?

Hope that was understandable.
I did not include the whole shader because it’s a big “messy” graph.

I think I’m missing the reason for the ceiling at all.

Take each color channel, multiply it by a color, add them together, done?
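
I.e. something like this sketch (illustrative names):

// Straight weighted sum: each mask channel simply weights its assigned color,
// no ceiling involved.
float3 Recolor(float3 mask, float3 colorR, float3 colorG, float3 colorB)
{
    return mask.r * colorR + mask.g * colorG + mask.b * colorB;
}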

I am trying to simulate that the colors have been drawn (as if being sprayed on top of each other) in this order:

  1. The background color
  2. Red channel color
  3. Green channel color
  4. Blue channel color

I know that some information is already lost because my source is just one texture. Therefore I wanted my shader to be “smart”. My problem was that with simple channel replacement the gradients always had the background color shining through. I don’t like that. It’s especially noticeable if there is high contrast between the background and the colors.

Here is an image to illustrate the issue:

Here is the full graph (file also attached).

The reason for the CEIL is that I use it to take any area with any red in it at all and draw that in the full color (I am masking it based on brightness). The CEIL in my first examples was just there to show the effect, as I feared it would be overlooked by potential readers. I might be overthinking it and maybe no one would notice, but it bugged me enough to try a “solution”.
I hope that was more understandable.
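
In code, the CEIL step is essentially this (illustrative; it is also why any compression bleed becomes so visible):

// Any nonzero value, even a tiny compression artifact, becomes a full-strength mask.
float RedMask(float3 maskColor)
{
    return ceil(maskColor.r); // 0 stays 0, anything > 0 becomes 1
}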

[Attachment: SimpleLitTriColor.shadergraph.txt (216 KB)]


With the way you have your mask texture set up, this is what you should be doing to layer stuff.


If you want the explicit layering you described, you’d need to modify how you’re doing your textures and use a lerp like this:


Notice the texture has been modified here.
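
In code, the lerp layering amounts to something like this sketch (illustrative names; it assumes the modified mask texture has its channels pre-added, so each channel also covers the layers painted on top of it):

// Layer the colors in draw order: background, then R, then G, then B.
float3 LayerColors(float3 mask, float3 backgroundColor,
                   float3 colorR, float3 colorG, float3 colorB)
{
    float3 layered = backgroundColor;         // 1. background
    layered = lerp(layered, colorR, mask.r);  // 2. red channel color
    layered = lerp(layered, colorG, mask.g);  // 3. green channel color
    layered = lerp(layered, colorB, mask.b);  // 4. blue channel color
    return layered;
}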


Of course, pre-adding the channels to retain the information. Why did I not think of that? facepalm

And my initial concerns about compression are now irrelevant too since there is no more ceil involved. I’d call this CASE CLOSED!

Thank you so much.
If I had an award or something to give it would be yours for sure!

Here is the (hopefully) final graph (shadergraph file attached):
[Image: FullFinalGraph.jpg]
[Attachment: SimpleLitTriColor.shadergraph.txt (105 KB)]
[Image: TriColorShaderTestMask.png]
[Image: TriColorShaderTestShading.png]
