I’m having trouble setting up a shader to blend a texture with a color based on a gradient ramp. I understand what I need to do with the RGB mathematically, but I am not sure how to translate that to the shader language.
I have a mesh in Unity that has a color and a texture. My texture is greyscale with some alpha. I do not use any lighting.
What I want to do is create a shader that will blend the mesh color as an overlay onto the texture at some % based on how close the texture pixel is to white.
Let's say the pixel color value of my texture is called tColor and the pixel color value of my mesh color overlay is mColor.
So the formula is:
tColor.R = tColor.R + (1 - (tColor.R / 255)) * mColor.R
tColor.G = tColor.G + (1 - (tColor.G / 255)) * mColor.G
tColor.B = tColor.B + (1 - (tColor.B / 255)) * mColor.B
Let's say, for example, we started with the following texture pixel and overlay color:
tColor = 160,160,160
mColor = 255,0,0
I want the result to be:
255,160,160
Let's say we had another case where these were the values:
tColor = 160,160,160
mColor = 255,100,0
Using the same formula, I would expect the result to be roughly:
255,197,160
Any fixed bit depth is awkward to work with unless you talk in hex; shaders work with color channels as 0-1 floating-point values, just like Unity’s Color class. GPUs operate using SIMD, too, so there is no need to spell out the same instruction for each channel. Your result is simple enough that it can be done with ShaderLab alone; there is no single “the shader language”, there are a few choices. “Greyscale with some alpha” is too vague for us to be able to write this for you, though.
Let's say the texture is a 128x128 rect that is all one color, with RGBA: 160,160,160,128.
I assumed I would need some sort of CGPROGRAM to get the Mesh.colors for the overlay; then, after I have the colors, combine them with the texture in the way I described in my original post.
You could use Cg but you don’t need to. You didn’t address why you have an alpha channel; is it for this material, or only there for another use of the texture?
You did not absorb what I said about not talking in base 10 unless you’re living on the right side of the decimal point.
I am using the alpha channel because I am working with sprites and I want parts of the image to be transparent. Each texture fits within a 128x128 rect but only part of that region should be visible, so the rest has a 0 alpha.
So, talking in terms of Unity floating-point colors, let's say my texture is 0.6f, 0.6f, 0.6f, 1f, but some of it is 1f, 1f, 1f, 0f.
Also, if it helps, I am strictly targeting iOS, so I would like the shader to perform well in that environment.
This will run on all iOS hardware. The only thing you’d have to worry about in terms of performance is how much useless blending is happening: if the alpha is zero, you should be modeling so the invisible area is minimized instead of just using quads. If the alpha is 1 over a large area, you should be using an opaque shader instead for that area.
Do you have any grey in your alpha channel? If not, you should use PVRTexTool to compress, instead of Unity, because you can use much higher-quality compression for RGBA images. The caveat is that anything but white or black in alpha will result in artifacts. It may not matter, and I can’t really give any other advice on quality without knowing what your graphics are.
Thanks Jessy, I appreciate you taking the time to help with this.
I tried that shader and it renders the texture well! I had to make one minor modification: I added “Cull Off”; without it, only the back face of the texture would render.
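For anyone else reading, that just means one extra line inside the pass (everything else in the pass stays the same):

SubShader {
    Pass {
        Cull Off   // render back faces too; by default they get culled
        // ...the rest of the pass is unchanged
    }
}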
We do use a bit of alpha between 0 and 1 since we want our texture outlines to have smooth edges. They gently fade from 1 alpha to 0 alpha within a few pixels at the edge.
I spent some time learning more about shaders and came up with a Cg-based solution; it seems to accomplish the same thing. I was wondering if there are any benefits/tradeoffs between our approaches.
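The gist of it is something like the sketch below; the shader name, property name, and struct layout are just placeholders I picked, not anything required:

Shader "Custom/ColorOverTexture" {
    Properties {
        _MainTex ("Texture (RGB greyscale, A transparency)", 2D) = "white" {}
    }
    SubShader {
        Tags { "Queue" = "Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
        Cull Off
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST;

            struct v2f {
                float4 pos   : POSITION;
                float2 uv    : TEXCOORD0;
                float4 color : COLOR;
            };

            v2f vert (appdata_full v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
                o.color = v.color;   // mesh (vertex) color is the overlay color
                return o;
            }

            float4 frag (v2f i) : COLOR {
                float4 t = tex2D(_MainTex, i.uv);
                // per-channel version of tColor + (1 - tColor) * mColor
                t.r = t.r + (1.0 - t.r) * i.color.r;
                t.g = t.g + (1.0 - t.g) * i.color.g;
                t.b = t.b + (1.0 - t.b) * i.color.b;
                return t;   // alpha comes straight from the texture
            }
            ENDCG
        }
    }
}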
I can’t deal with the poor quality of RGBA PVRTC in those cases, and use solutions where I pluck something from RGB as a mask. You’ll have to decide what could be practical for you. If what you have looks good, or your graphics are uncompressed, not doing what I do is better.
I don’t know Cg well so I can’t give the best critique. What I can tell you is that PVRUniSCo reports that your shader is 11 instructions, while mine is 2. As I suggested above, you definitely need to learn to think in SIMD. GPUs are vector processors; the SGX in particular can operate on lowp variables as 3- or 4-vectors. There is no point in using floats for colors. Use fixed, which translates to lowp.
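Roughly, and my Cg may not be exact, your whole per-channel block collapses to one vector operation on lowp values (reusing your v2f and _MainTex):

fixed4 frag (v2f i) : COLOR {
    fixed4 t = tex2D(_MainTex, i.uv);
    // tColor + (1 - tColor) * mColor is the same as lerp(tColor, 1, mColor),
    // evaluated on all three channels at once
    t.rgb = lerp(t.rgb, fixed3(1,1,1), i.color.rgb);
    return t;
}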