I’m trying to get a firm understanding of working in linear color space - not just the theory and definitions, but the underlying values as they go from an sRGB or linear texture in memory, through shaders and the frame buffer, to the screen and final pixel intensities that enter our eyes.
Here are some resources I’ve found while researching:
- Unity Documentation - Gamma Textures with Linear Rendering
- GPU Gems 3 - The Importance of Being Linear
- Learning Modern 3D Graphics Programming - Gamma and Textures
- Catlike Coding - Rendering Tutorial Part 3
- Unity forums - Confusion about Gamma vs. Linear
Example #1
Let’s begin with a simple test, to determine if my understanding is correct:
A gradient texture rendered with a custom shader under Gamma (top) and Linear (bottom) color spaces, and imported as an sRGB (left) and linear (right) texture.
Above is a gradient texture drawn with a custom shader under various settings. The shader truncates the color value to one decimal place (e.g. 0.1, 0.2, 0.3, …) in the bottom half, and draws a red line to mark the center of the texture.
The texture was generated by script, is 256x256 pixels, and simply contains a linear gradient (the RGB values increase left to right from 0 to 255), exported to the project’s assets folder via EncodeToPNG(). Since the original asset is now a PNG, does it have sRGB gamma correction built in? What does that mean exactly for the stored color values - do they still increase linearly from 0 to 255? Does it make sense to tell Unity this is a linear texture (i.e. uncheck “sRGB (Color Texture)”), or is it an sRGB texture whether I like it or not by virtue of being stored in PNG format?
I’m also unsure what visual result to expect in the above test. I understand the idea that our eyes perceive light non-linearly (e.g. a light that appears half as bright as another is actually only about 22% as intense), but does that mean the above example of a linear texture in linear space (bottom-right) is accurately increasing in pixel intensity linearly (i.e. the middle is half as intense as the right edge)? Or am I overestimating the effect, and that gradient has simply been gamma-corrected twice, making it too bright?
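To check the numbers I keep citing, here is a small Python sketch of the sRGB transfer functions (per the sRGB standard, IEC 61966-2-1). The assumption is that an 8-bit PNG value v stores the sRGB-encoded (perceptual) intensity v/255, and decoding it gives the linear light intensity:

```python
def srgb_to_linear(s):
    """sRGB-encoded value in [0,1] -> linear light intensity in [0,1]."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    """Linear light intensity in [0,1] -> sRGB-encoded value in [0,1]."""
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# "Middle gray" byte 127 decodes to roughly 21% linear intensity:
print(srgb_to_linear(127 / 255))  # ~0.212

# Round-trip sanity check: encode then decode returns the original value.
print(linear_to_srgb(srgb_to_linear(0.5)))  # ~0.5
```

So bytes that increase linearly from 0 to 255 describe sRGB-encoded values, not linear light - which is exactly the ambiguity my gradient test is probing.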
Example #2
While following the Catlike Coding tutorial linked at the top, I ran into another source of confusion: a marble texture is multiplied by a grayscale “details” texture, which brightens/darkens the marble when viewed up close. After switching the project to linear color space, the marble goes dark. The details texture is still set to sRGB, so the “middle gray” that was previously sampled as 0.5 in the shader is now sampled as about 0.22. If I understand correctly, this is because the “middle gray” in the texture is only “middle” as perceived by our eyes, but is in fact only about 22% as intense as white in linear light - so when Unity samples the texture and finds 127, the sampler treats this as a gamma-corrected value and converts it to a linear light intensity, correct?
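Numerically, the darkening looks like this (a sketch, assuming the detail texture stores byte 127 at “middle gray” and the shader computes albedo * detail * 2 as in the tutorial):

```python
def srgb_to_linear(s):
    """sRGB-encoded value in [0,1] -> linear light intensity in [0,1]."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

detail_byte = 127

# Gamma pipeline: the sampler returns the raw encoded value.
gamma_sample = detail_byte / 255                    # ~0.498
print(gamma_sample * 2)                             # ~1.0 -> albedo unchanged

# Linear pipeline: the sampler sRGB-decodes the value before the shader sees it.
linear_sample = srgb_to_linear(detail_byte / 255)   # ~0.212
print(linear_sample * 2)                            # ~0.424 -> marble darkens
```

With the multiplier still at 2, middle gray no longer cancels out, and everything the detail texture touches gets darker.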
My main point of confusion here was in finding that the solution is not simply to uncheck “sRGB (Color Texture)” and allow Unity to import the details texture as linear, but instead to modify the calculation in the shader - why do these yield different results?
Incorrect fix: disable sRGB sampling
Correct fix: modify shader math
I expected these results to be the same. That is, I expected that simply disabling sRGB sampling would make Unity import the texture in linear color space, so that when the shader samples it and finds the value 127, it understands this is already linear and returns 0.5. Instead, while the overall brightness appears to be correct, the effect is somewhat muted/blurred.
The correct solution, as shown in the tutorial, is to keep the texture as sRGB and modify the shader instead: the “middle gray” values are read in as ~0.22, but then multiplied by unity_ColorSpaceDouble instead of simply 2 to cancel out the gamma correction. That makes sense to me, but why is it different from removing the gamma correction during import instead of in the shader?
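Working through the math as far as I understand it: in linear space unity_ColorSpaceDouble is about 4.5948, which is 2^2.2 - and if we approximate sRGB as a pure 2.2 gamma curve, decoding the sample and multiplying by 2^2.2 is the same as doubling in gamma space and then decoding. The two fixes only agree at mid gray; elsewhere they diverge, which would explain the muted look. A sketch under that gamma-2.2 assumption:

```python
# Assumption: sRGB ~ pure 2.2 gamma, so unity_ColorSpaceDouble ~ 2**2.2.
COLOR_SPACE_DOUBLE = 2 ** 2.2  # ~4.5948

def approx_srgb_to_linear(s):
    """Approximate sRGB decode as a pure 2.2 gamma curve."""
    return s ** 2.2

for byte in (64, 127, 191):
    s = byte / 255
    # Fix A: uncheck "sRGB (Color Texture)", keep the * 2 in the shader.
    linear_import_fix = s * 2
    # Fix B: keep sRGB sampling, multiply by unity_ColorSpaceDouble.
    # Note (s**2.2) * 2**2.2 == (2*s)**2.2: doubling in gamma space, then decoding.
    shader_math_fix = approx_srgb_to_linear(s) * COLOR_SPACE_DOUBLE
    print(byte, round(linear_import_fix, 3), round(shader_math_fix, 3))
```

Fix A doubles the encoded value along a straight line, while Fix B doubles along the gamma curve, so darks stay darker and brights get brighter under Fix B - Fix A compresses that contrast, which matches the muted result I saw. What I’d like confirmed is whether this is actually what Unity is doing.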
I’ll leave it at that for now, if anyone can provide clarity/insight/corrections, it will be greatly appreciated!