We have a very simple Forward pixel shader that renders every object in the scene in a solid color, applied via camera.SetReplacementShader. We set the R, G, B values manually to embed an identification number into the color. But by the time the pixel makes it on screen, its color has changed very slightly. For example, Blue goes from 0.3098039 to 0.3085938, and Green goes from 0.003921569 to 0.00402832. The shader has "Lighting Off", so it shouldn't be affected by lighting at all, and there are no post-process shaders. HDR is disabled both on that camera and in the Graphics settings.
Any ideas what could be changing our colors? This is on Windows/Standalone, BTW, DirectX 11.
Potentially your issue: floats only have a certain level of accuracy, and you're nearing that limit. The usual line is that a 32-bit float gives you about 6-7 significant decimal digits before you start losing accuracy in the lower ones. Depending on the output format (RGBA32, for example), you also only get a certain bit depth per pixel (in that case, 8 bits per channel, so 256 possible values per channel). So, depending on how you're assigning your color and how it is represented throughout the pipeline, it may be losing information.
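To make the quantization part concrete, here's a small round-trip illustration (Color32 is Unity's 8-bit-per-channel representation; the conversion snaps every channel to a multiple of 1/255, so anything finer than that is lost):

Color c = new Color(0.3098039f, 0.003921569f, 0.5f);
Color32 stored = c;          // quantized to 8 bits per channel
Color roundTrip = stored;    // back to float: every channel is now some k / 255
Debug.Log(roundTrip.g);      // 0.003921569, i.e. exactly 1/255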
// Split the 16-bit instance ID into two 8-bit integers, one per color channel
int bitsPerChannel = 8;
int label = 12;
int green = lastInstanceID / 255; // most significant byte
int blue = lastInstanceID % 255;  // least significant byte
return makeColor(bitsPerChannel, label, green, blue);

public static Color makeColor(int bits, int r, int g = 0, int b = 0, int a = -1)
{
    float max = Mathf.Pow(2.0f, bits) - 1.0f; // 255 for 8 bits per channel
    if (a == -1) a = (int)max;                // default alpha to fully opaque
    return new Color(r / max, g / max, b / max, a / max);
}
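For reference, plugging in illustrative inputs (label 12, a green byte of 1, and a blue byte of 79 are just example values) reproduces the green and blue figures from the original post:

Color idColor = makeColor(8, 12, 1, 79);
// idColor.g = 1 / 255f  ≈ 0.003921569
// idColor.b = 79 / 255f ≈ 0.3098039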
Then we do a Material.SetColor to store the generated color for the pixel shader to use; the shader just outputs it directly, creating a world of flat-shaded objects. The problem is that when that camera's OnRenderImage runs, the colors in the Source RenderTexture no longer match the output of makeColor.
Maybe it is a rounding error, where numbers get truncated instead of rounded correctly.
If that is the case, simply adding half a quantization step to the pixel value (0.5 in 0-255 terms, roughly 1/512 in the 0-1 range) is an easy and robust workaround.
Try to imagine it like this for a second:
Every integer/byte increment is 1, so every increment in the 0-1 range (float) would be 1/256f.
=> 0.00390625
That means if you want to represent “128”, you’d save 0.00390625 * 128 → 0.5
But now even with a very slight variation you can already get the wrong value.
If you add a tiny amount (1/512) you won’t get to the next value (129), because you only traveled half the distance needed.
But if you subtract even an extremely tiny amount, you'll have crossed the threshold to the lower number already (you're at 127.9999…, which when truncated becomes just 127, so off by one!).
If you add +1/512 you are at the mid-point, where a much larger fluctuation would be needed to have any effect.
Maybe I’m wrong, but it is worth a try because it just takes 5 seconds to add +1/512 to every number.
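In C# terms, the suggestion amounts to adding half a step before truncating, which is the same thing as rounding to the nearest value; a minimal sketch of the decode side (the actual decode code isn't shown in the thread, so the variable names here are just placeholders):

// Instead of truncating:  int greenByte = (int)(pixel.g * 255f);
int greenByte = (int)(pixel.g * 255f + 0.5f);    // add half a step = round to nearest
int blueByte = Mathf.RoundToInt(pixel.b * 255f); // or use the built-in rounding directly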
It’s 1/255, not 1/256. The value range is 0 - 255.
1/255 == 0.0039215686274509803921568627451
As a float, 1/255 is stored as the nearest representable value, which is that 0.003921569 figure (internally 0.0039215688593685626983642578125) @Dreamback mentioned as getting converted into 0.00402832 in the shader. That's a fairly significant difference in terms of float accuracy, but far less than a full byte rounding step. I'm curious how you're measuring that final value, though: are you doing a GetPixels() to read the pixel back for debugging, or using DX11 shader debugging?
It all makes me think it's the product of an sRGB conversion somewhere not being a perfectly lossless round trip (either because an approximation is used, or just good old floating point error). You might get away with compensating for it in the shader by applying the inverse conversion to the output color.
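For a rough feel of the magnitude involved, the linear → sRGB → 8-bit → linear round trip can be simulated on the C# side with Unity's Mathf helpers (this is only an illustration of the hypothesis above; the exact numbers the GPU produces can differ slightly):

float encoded = 1f / 255f;                            // 0.003921569, as written by makeColor
float srgb = Mathf.LinearToGammaSpace(encoded);       // what an sRGB render target would store
float quantized = Mathf.Round(srgb * 255f) / 255f;    // snapped to 8 bits in that target
float readBack = Mathf.GammaToLinearSpace(quantized); // ≈ 0.004, no longer exactly 1/255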
What I'm doing to get the actual resulting value: in OnRenderImage for that camera, I pass the Source texture Unity provides into a Compute Shader (using ComputeShader.SetTexture), which reads the pixel colors back and decodes the object IDs from them.
I'm exporting both objectID and colorValue, so I can see that colorValue isn't the same as when it was created. The value is usually close enough that the decode works, but sometimes the resulting objectID is off by one, and the amount of drift is different every time, so it's hard to compensate for.
Found what was causing the problem: Linear Space. If we switch Unity to Gamma Color Space, the values I’m getting are correct. So the Linear rendering system is changing our colors to make them more “accurate” I guess. Hmm…too bad you can’t set that on a per-camera basis.
It's not a per-camera thing, it's a per-target thing. When you create your render target, make sure it's set to be linear. You may also need to pass the data via SetVector rather than SetColor, to prevent conversions.
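A minimal sketch of both suggestions (the target size, format, and the "_IDColor" property name are assumptions for illustration):

// Create the target explicitly as linear so no sRGB read/write conversions are applied to it.
var rt = new RenderTexture(Screen.width, Screen.height, 24,
                           RenderTextureFormat.ARGB32,
                           RenderTextureReadWrite.Linear);

// Pass the ID as a raw vector; SetVector is never treated as a color,
// so Unity won't apply any color space conversion to it.
Color idColor = makeColor(8, 12, 1, 79);
material.SetVector("_IDColor", (Vector4)idColor);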
Nope, there are multiple points where conversions can occur.
When setting a color on a shader / material, Unity may "know" it's a color and convert it from sRGB space to linear space.
When reading the render texture, whether it's set to sRGB or linear determines whether yet another conversion occurs.
When the fragment shader is writing out, no conversion happens.
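The second point can be checked at runtime: RenderTexture.sRGB reports whether a given target gets sRGB read/write conversions. A small sketch, assuming it's placed in the script on the camera that already implements OnRenderImage:

void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // True means the hardware converts on read/write; false means values pass through untouched.
    Debug.Log("Source target is sRGB: " + source.sRGB);
    Graphics.Blit(source, destination);
}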