Using texture pixel colors as an array index

Since I’m new to shader programming, I want to describe something I plan to try and get some feedback before I spend a bunch of time writing code only to discover it won’t work – or to learn if there is a better approach.

The shader will be used to color a 2D map (and later, to lerp when the color of an area changes, which is one reason I want to use a shader instead of bitwise operations or the like). Imagine a map showing a handful of countries, and inside each country are individual states or regions. A “Base Map” texture will be the starting point. Another texture called “Region Mask” is used to identify and apply colors to the individual states or regions; I’ll explain more about that momentarily. Finally a third texture called “Country Mask” will be overlaid to highlight the borders of the countries; that is a straight color-combining operation – non-black areas will be tinted on top of the Base Map and the changes already applied by the Region Mask processing.

The Region Mask is the focus of my question. The game needs to alter the colors of individual regions as gameplay progresses. I have some map editor utilities and my plan is to generate this Region Mask bitmap with the pixels for each region set to an index number… #000000 is the first region, #000001 is the second, and so on. Then using Unity 5.4’s support for passing array data to shaders (see this), I plan to build an array of color values, and the Region Mask pixel color would be used as the index to look up the color in the array to colorize each pixel.
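
Roughly what I have in mind for the shader side, as an untested sketch (the property names and the fixed array size of 256 are placeholders; the real version would also blend the result with the base map):

        CGPROGRAM

            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _TxRegions;       // Region Mask, point sampled
            float4 _RegionColors[256];  // filled from script with Material.SetColorArray; fixed maximum size

            float4 frag(v2f_img i) : COLOR {
                float4 mask = tex2D(_TxRegions, i.uv);

                // decode the region index from the mask's red channel (low byte);
                // green and blue would extend this beyond 256 regions
                int index = (int)round(mask.r * 255.0);

                return _RegionColors[index];
            }

        ENDCG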

Is there any reason this won’t work? Is there a reason it’s a Really Bad Idea? Is there a better way?

Totally reasonable, though I would consider encoding the colors into another texture rather than using an array. Just a very long 1-dimensional texture (1 pixel high, n pixels wide). A couple of things to be wary of: you’ll want to use point-sampled textures for both, and the masks should be imported as linear (non-sRGB) textures, otherwise 127/255 won’t come out as ~0.5 in the shader, it’ll be ~0.2 because of the sRGB conversion.
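
For reference, the ~0.2 comes from the standard sRGB-to-linear conversion the hardware applies when a texture is marked as sRGB; roughly this (the helper below is just for illustration, it isn’t part of any shader in this thread):

        // standard sRGB-to-linear decode applied by the hardware for sRGB textures
        float srgbToLinear(float c) {
            return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
        }

        // srgbToLinear(127.0 / 255.0) is ~0.212, so an index stored as 127 would decode as ~54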

Thank you.

I originally expected to use a 1D texture, but once I found out arrays were an option coming in 5.4, I thought an array would be easier to work with. How would using the texture be beneficial? I suppose it could be an arbitrary size (well, power of 2) whereas the array would have to be fixed-size. That’s not important to me but it’s the only benefit I can think of.

Perhaps worse, wouldn’t replacing a texture on a material be a lot slower than setting an array property? (In fact, yesterday I read there are also global array setters coming; I’d bet those are even faster since I think they operate as uniforms.)

Oh yeah, and thanks for the tip about how to import the textures. I knew that was going to be an issue, I was just sitting down to read about the options and try to figure that out.

Setting an array or a texture involves about the same amount of data, and it costs about the same as long as you’re not trying to compress or mip map it each time (which you shouldn’t be doing for a 1D lookup texture anyway). If you’re only going to be changing a few colors at a time you can use a render texture and a custom blit shader to set individual colors or groups of colors. The main thing is that indexing into an array in a shader can be slow, whereas reading a texture is not.

Thanks. I almost have it. For some reason it doesn’t seem to be getting the index number correctly from the Regions mask. Sample project attached with some lovely garish test images :slight_smile:

        CGPROGRAM

            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _TxMap;
            sampler2D _TxBorders;
            sampler2D _TxRegions;
            float _RegionAlpha;
            sampler2D _TxColorMap;
            float _BorderAlpha;

            float4 frag(v2f_img i) : COLOR {
                float4 mapColor = tex2D(_TxMap, i.uv);
                float4 borderColor = tex2D(_TxBorders, i.uv);

                float4 regionColor = tex2D(_TxRegions, i.uv);

                // something wrong here, I think -- looks like it always uses index position 0
                float regionIndex =
                    round(regionColor.r * 255.0)
                    + (round(regionColor.g * 255.0) * 256.0)
                    + (round(regionColor.b * 255.0) * 256.0 * 256.0);

                float2 colorMapXY = float2(regionIndex, 0.0);

                // use base map when region alpha = 0, apply region color map when alpha = 1
                float4 mapOut = lerp(mapColor.rgba, tex2D(_TxColorMap, colorMapXY), regionColor.a);

                // overlay the border mask
                mapOut = mapOut.rgba * (1.0 - (borderColor.a * _BorderAlpha));
                float4 borderOut = borderColor.rgba * borderColor.a * _BorderAlpha;
                float4 outColor = mapOut + borderOut;

                return outColor;
            }

        ENDCG

2627136–184625–MapShader.zip (1.12 MB)

Hmm, looks like Unity is mangling my texture – but I think I have all the import settings correct. I usually have the Inspector preview window squashed down to nothing, but I changed the shader to just return regionColor and got this blocky mess, which is the same I see in the Inspector preview. The third one is what it should look like…

2627211--184633--1.jpg 2627211--184634--2.jpg

Ah, setting Alpha is Transparency fixed that… but the indexing problem is still there.

Dumped this screen shot into Photoshop and confirmed the colors are still right after importing – the “1” is color rgb(1,0,0), the “2” is rgb(2,0,0), etc…

2627213--184636--4.jpg

A different test just to confirm tex2D was really returning different shades of red… and it does (of course). Yes, this is the “running out of ideas” segment of the debug session. lol

                float4 regionColor = tex2D(_TxRegions, i.uv);
                if(regionColor.a == 0)
                {
                    return float4(0.0, 0.0, 0.0, 1.0);
                }
                else
                {
                    regionColor.r *= 25.0;
                    return regionColor;
                }

2627276--184639--1.jpg

Got it. Finally realized that my “index” also has to be expressed as a 0-1 float against the 1D color map texture.
Completed working 5.3 test project attached in case it might help someone some day.

        CGPROGRAM

            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _TxMap;
            sampler2D _TxBorders;
            sampler2D _TxRegions;
            float _RegionAlpha;
            sampler2D _TxColorMap;
            float _BorderAlpha;

            float4 _TxColorMap_TexelSize;

            float4 frag(v2f_img i) : COLOR {
                float4 mapColor = tex2D(_TxMap, i.uv);
                float4 borderColor = tex2D(_TxBorders, i.uv);

                float4 regionColor = tex2D(_TxRegions, i.uv);

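                // decode the 24-bit region index from the mask's RGB channels (red is the low byte)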
                float regionIndex = 
                    round(regionColor.r * 255.0) 
                    + (round(regionColor.g * 255.0) * 256.0) 
                    + (round(regionColor.b * 255.0) * 256.0 * 256.0);

                // coordinates must be 0-1, so scale to color map width
                float4 colorMapSize = _TxColorMap_TexelSize;
                regionIndex = regionIndex / colorMapSize.z;
                float2 colorMapXY = float2(regionIndex, 0.0);

                // use base map when region alpha = 0, apply region color map when alpha = 1
                float4 mapOut = lerp(mapColor, tex2D(_TxColorMap, colorMapXY), regionColor.a);

                // todo - apply _RegionAlpha

                // overlay the border mask
                mapOut = mapOut.rgba * (1.0 - (borderColor.a * _BorderAlpha));
                float4 borderOut = borderColor.rgba * borderColor.a * _BorderAlpha;
                float4 outColor = mapOut + borderOut;

                return outColor;
            }

        ENDCG

2627313–184644–MapShader.zip (1.12 MB)

A few notes:
A texture position of 0 is actually going to be right on the edge of the texture. This is important because it’s between two pixel centers, and depending on float math you might get one pixel’s color or the next. If you weren’t using point sampling you’d always get a 50/50 blend of two pixel colors. You want to sample the center of a lookup texel, even with point sampling enabled.

The solution for this you’ve already kind of found: _TexelSize. You want to add a half-pixel offset to your lookup UV. The texel size x and y values are 1.0 / TextureDimension, whereas the z and w are the TextureDimension values, so your color map UV should be something like float2(_TexelSize.x * 0.5 + scaledIndex, _TexelSize.y * 0.5).

Similarly, you’re doing regionIndex / _TexelSize.z when you could do regionIndex * _TexelSize.x, because divides are slower in shaders than multiplies.
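
Putting both of those together against the shader you posted, the lookup might end up looking something like this (untested, just to illustrate the offset and the multiply):

                // scale the integer index into 0-1 UV space (multiply by 1/width rather than divide by width)
                // and add a half-texel offset so we sample the center of the lookup pixel
                float2 colorMapXY = float2(
                    regionIndex * _TxColorMap_TexelSize.x + _TxColorMap_TexelSize.x * 0.5,
                    _TxColorMap_TexelSize.y * 0.5);

                float4 regionTint = tex2D(_TxColorMap, colorMapXY);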

Very cool, I appreciate the tips. I knew divides were slower but I thought I read somewhere that the compiler would automagically convert divides to a multiply. Better safe than sorry though, I guess.

I am also wondering about the round(color.r * 255.0) approach and whether it will really work for all 256 values. I experimented with the longer process shown at the page linked below (that code is commented out in the shader source in the most recent zip above), but the simpler round() version I’m using worked in this test (only 9 values), so I opted to stick with the simpler code unless I start having problems. However, it sounds like 127 (which would ideally be represented as 0.5) is a commonly cited example of the problem: apparently the float comes out to something like 0.495, so you get 126.22, which round() will return as 126.

https://forum.beyond3d.com/threads/pixel-shaders-float-integer-colors.11988/page-2

I was trying to think about how to test this and I believe alternating colors in my map should expose the problem – if there is a rounding issue I should get two of the same colors side-by-side, I think.

2627439–184646–MapShader.zip (1.09 MB)

The compiler will automatically convert divides into multiplies, but only in very specific situations. If the divisor is a constant value (the number is explicitly written in the shader itself) it can compute the reciprocal (1/x) at compile time and multiply by that. If it’s coming from something set by the material, or otherwise isn’t defined in the shader, the compiler can’t do this since it doesn’t know what number to use.
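
For example (the _MainTex and _Scale names here are made up for illustration):

                // _Scale is a material property (float _Scale), so its value isn't known at compile time
                float4 col = tex2D(_MainTex, i.uv);
                col.rgb /= 8.0;      // constant divisor: the compiler can rewrite this as col.rgb * 0.125
                col.rgb /= _Scale;   // material-driven divisor: stays a real divide in the compiled code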

Makes sense. Thanks again.

Also, on the topic of color accuracy: 127/255 isn’t 0.5, so you shouldn’t expect it to be. This is working as intended. The actual value of 127/255 is 0.498; 127.5/255 is 0.5. The “fixed” type you might have seen is guaranteed to be accurate to 1/255 over the entire range of -1.0 to 1.0, with usually much better precision closer to 0.0, and that’s an 11 or 12 bit float. The “float” type is a 32 bit float, which is going to have significantly better precision, and an uncompressed texture is going to get converted from 8 bits per channel to a 32 bit float in the 0-1 range with pretty high accuracy.

That thread is from 2004, when a lot of internal math was being done with much less precision for speed reasons. Today’s desktop GPUs are doing everything at 32 or 64 bit even if you ask them to use 16 or 12 bit floats. Mobile phone GPUs might still be affected by this, but even then you should be fine with the round you’re doing now.

As an aside, that 0.498 is why “flat” normal maps always look just a little bit different from something that isn’t normal mapped, which causes all sorts of headaches, since some tools use 127,127,255 for flat, which isn’t quite 0.5,0.5,1.0. Some tools use 128,128,255 instead, which still isn’t 0.5,0.5,1.0, and if you flip the green channel between programs you might end up with something that’s 127,128,255.

Shouldn’t 127 be 0.5 because we’re really talking about “128/256”?

I did wonder if improved GPU float precision might be the reason my simpler code works, good to know.

255/255 = 1.0; there’s no 256/256. It’s a common mental stumble: 8 bits does give 256 values, but the range is 0-255, not 0-256, since zero is one of those 256 values. There’s nothing magic about the 8 bit to float conversion (at least when it’s a linear conversion and not a gamma conversion).

127/255 = 0.498
128/255 = 0.50196

There’s no 0.5, and that’s okay as long as you’re doing the conversion back to int by * 255 properly, which you are.

If for some future thing you need exactly 0.5 from a texture then sure you can artificially limit yourself to 0-254 so 127/254 = 0.5 by taking the value the shader gives you from a texture and doing saturate(x * 255/254). Or you can use a float texture type instead of an 8 bit per channel texture, but you’d have to construct it in Unity yourself since it doesn’t support importing float formats unmodified.

Fair enough, given that 256 would really be RGB (0,1,0)…

Two questions about that… if I choose “Bypass sRGB sampling” does that fix the problem? And “In Linear Space” is only available if “Generate Mip Maps” is checked – I don’t need mip maps, so if I uncheck mip maps, does that also bypass the problem?

Bypass sRGB should absolutely be on for this. In Linear Space should be on if you have mip maps.

Thanks. I assumed so and it’s working, but the docs for this stuff are pretty bare bones. I appreciate the help.