I have source images where each pixel is one of exactly 4 colors:
- red = (1,0,0)
- green = (0,1,0)
- blue = (0,0,1)
- black = (0,0,0)
I’m trying to modify Unity’s default sprite shader to use those colors (all except black) as keys for swapping in other colors. For example, any pixel that’s red should become _RedColor. My first attempt was:
fixed4 _RedColor;
fixed4 _BlueColor;
fixed4 _GreenColor;

fixed4 frag(v2f IN) : SV_Target {
    fixed4 c = SampleSpriteTexture(IN.texcoord) * IN.color;
    fixed4 modColor;
    modColor.rgb = _RedColor.rgb   * step(1, c.r);
    modColor.rgb = _BlueColor.rgb  * step(1, c.b);
    modColor.rgb = _GreenColor.rgb * step(1, c.g);
    modColor.rgb *= c.a;
    return modColor;
}
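For reference, here's the behavior of Cg's step() recreated on the CPU in Python; the 0.6 sample value is a made-up example of what filtering or compression might produce at an edge texel:

```python
def step(edge, x):
    # Cg/HLSL step(edge, x): 1.0 when x >= edge, else 0.0
    return 1.0 if x >= edge else 0.0

# A "pure" red texel passes the red key test:
print(step(1.0, 1.0))   # 1.0

# But a texel softened at an edge (say 60% red) fails every
# step(1, ...) test, so it keys to no replacement color at all:
print(step(1.0, 0.6))   # 0.0
```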
This worked, but the result was very jagged. Somewhere between exporting the image and Unity compressing it on import, I think some pixels ended up with more than one color component, and the shader would simply set the output to the last matching color.

So I tried to modify it so that a later match wouldn't reset the color, but would instead blend between whichever colors were present in the pixel, weighted by their contribution. Here was my second attempt:
fixed4 frag(v2f IN) : SV_Target {
    fixed4 c = SampleSpriteTexture(IN.texcoord) * IN.color;
    float total = c.r + c.b + c.g;
    // bump total to 1 when it's 0 so we don't divide by zero on black pixels
    total += step(total, 0);
    c.rgb = _RedColor.rgb * c.r + _BlueColor.rgb * c.b + _GreenColor.rgb * c.g;
    c.rgb /= total;
    c.rgb *= c.a;
    return c;
}
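To see what the normalization does near black, here's the same math simulated on the CPU in Python. The _RedColor value and the texel values are hypothetical; the texel is one that filtering has softened halfway toward black:

```python
def frag_v2(c, red_color, blue_color, green_color):
    # c = (r, g, b, a) sampled texel; *_color are replacement RGB triples
    r, g, b, a = c
    total = r + g + b
    total += 1.0 if total <= 0.0 else 0.0   # step(total, 0): avoid divide by zero
    rgb = [rc * r + bc * b + gc * g
           for rc, bc, gc in zip(red_color, blue_color, green_color)]
    rgb = [ch / total for ch in rgb]
    return [ch * a for ch in rgb]

red = (1.0, 0.5, 0.0)    # hypothetical _RedColor
zero = (0.0, 0.0, 0.0)

# A texel halfway between red and black: total == 0.5, and dividing
# by it cancels the fade, snapping the pixel back to full _RedColor:
print(frag_v2((0.5, 0.0, 0.0, 1.0), red, zero, zero))   # [1.0, 0.5, 0.0]
```

That snap back to full brightness right up against black is consistent with the jagged border staying only at black/non-black edges.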
This also worked, and the jaggedness went away everywhere except at the borders between black and non-black colors. So then I tried getting rid of the total, and it was perfect:
fixed4 frag(v2f IN) : SV_Target {
    fixed4 c = SampleSpriteTexture(IN.texcoord) * IN.color;
    c.rgb = _RedColor.rgb * c.r + _BlueColor.rgb * c.b + _GreenColor.rgb * c.g;
    c.rgb *= c.a;
    return c;
}
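Simulating this final version on the CPU (again with hypothetical color and texel values) shows both a smooth fade toward black and a blend between two keyed colors:

```python
def frag_v3(c, red_color, blue_color, green_color):
    # c = (r, g, b, a) sampled texel; *_color are replacement RGB triples
    r, g, b, a = c
    rgb = [rc * r + bc * b + gc * g
           for rc, bc, gc in zip(red_color, blue_color, green_color)]
    return [ch * a for ch in rgb]

red = (1.0, 0.5, 0.0)    # hypothetical _RedColor
blue = (0.0, 0.0, 1.0)   # hypothetical _BlueColor
zero = (0.0, 0.0, 0.0)

# A texel halfway between red and black fades to half-brightness red:
print(frag_v3((0.5, 0.0, 0.0, 1.0), red, blue, zero))

# A texel halfway between red and blue gets half of each replacement color:
print(frag_v3((0.5, 0.0, 0.5, 1.0), red, blue, zero))
```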
It blends perfectly between any of the colors, but I don’t get why. I was dividing by the total so that the color values wouldn’t go above 1 and it would blend between them. For example, if a pixel had both red and blue present, it would add 50% of _RedColor’s rgb value and 50% of _BlueColor’s. But when I didn’t divide by the total, nothing appeared to change except the blending between black and the other colors.

This implies to me that I never had any pixels with more than one nonzero color component (or else I’d be getting white or other weird colors at those pixels). But if that’s the case, why did switching from the first step-function approach to a blend approach change anything at all?

This is my first foray into shaders, so it’s entirely possible I’m doing something terrible; I just want to wrap my head around what’s going on and why. I’d definitely appreciate any help or ideas.