This is one of those “I must be doing something simple and obviously wrong” situations, but after 4 hours I can’t see it. I’m hoping someone else can spot my mistake here.
I’m trying to read a texture containing Vector2 data in my shader, and use those values as UV coords for a lookup into a second texture. It should be really simple and straightforward… Everything works, except… there’s a bizarre pixellated effect happening that I cannot explain.
To show the bug, here’s a video. I made an animation that overlays the output so it’s easy to see the pixellated parts. The input texture for background and foreground is accessed by the same sampler2D; nothing is different except the UV coords. The foreground should be the same resolution as the background (or very close) – at the video’s resolution, the texture I’m reading from is almost 1:1 with pixels on the screen.
Code:
float4 color = 1;
float3 firstSample = tex2D( _IndexTex, uv ).xyz;
float2 direction = float2( firstSample.r * 2.0 - 1.0, firstSample.g * 2.0 - 1.0 );
float dist = distance( float2( 0, 0 ), direction );
// BG: check that tex2D( ... uv ) is showing with no pixellation
float4 bgSample = tex2D( _LeafTex, uv );
color.xyz = lerp( 0, bgSample.xyz, bgSample.a ) * 0.75;
// FG: this is where it goes weird...
if( dist < 0.25 )
{
    // remap direction from [-1,1] back into [0,1] UV space
    float2 subUV = -direction / 2.0 + 0.5;
    float4 secondSample = tex2D( _LeafTex, subUV );
    color.xyz = lerp( color.xyz, secondSample.xyz, secondSample.a );
}
return color;
Things I’ve checked:
Actual Texture format for IndexTex: it’s RGB24 (I’m creating it myself from within Unity and saving it; verified in a paint package that it’s the correct format + colors)
Imported Texture format for IndexTex: it’s Automatic, but switching to RGB24 explicitly has no effect
sampler precision: I tried explicitly forcing it to sampler2D_float, but no effect
IndexTex resolution: I tried downscaling from 2048x2048 to 256x256, saving as a new texture, and importing it separately – apart from slight blurring due to being scaled back up in the shader, no effect. Same pixellation, same size/places on screen.
Check the UVs returned by IndexTex: if I multiply them by e.g. 10, the pixellation effect goes away (but obviously the image now tiles 10x as much).
Check that Unity’s “max texture size” on importer is correct (it’s at 2048, but I tried putting it up to 4096 with no effect).
…so it seems to be something to do with tex2D quantizing the output, acting as though it’s downscaled the image internally?!?
RGB24 isn’t a floating point format; it has 8 bits of precision per channel, and 8 * 3 = 24. That means the R and G channels can each only hold 256 values, 0 to 255, i.e. 0.0 to 1.0 in steps of 1.0/255.0. Really you want an RGHalf or RGFloat format. And make sure your index texture isn’t set to be point filtered.
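To make that precision limit concrete, here’s a quick Python sketch (not Unity code; the function names are just for illustration) of what an 8-bit channel does to the decoded direction values:

```python
# Simulate storing a direction component in an 8-bit channel (RGB24),
# then decoding it back to [-1, 1] the way the shader does.

def encode_8bit(value):
    """Map [-1, 1] -> [0, 1] -> nearest of 256 levels (0..255)."""
    return round((value * 0.5 + 0.5) * 255)

def decode_8bit(byte):
    """Map a stored byte back to [-1, 1], like `x * 2.0 - 1.0` in the shader."""
    return (byte / 255.0) * 2.0 - 1.0

# The smallest representable step in the decoded direction is 2/255:
step = decode_8bit(129) - decode_8bit(128)
print(step)  # ~0.00784

# Many distinct source values collapse onto the same stored byte,
# so neighbouring screen pixels end up sampling the same subUV:
for v in (0.100, 0.101, 0.102, 0.103):
    print(v, "->", encode_8bit(v))
```

Every source value within a ~0.004 window lands on the same byte, which is exactly the kind of blockiness an indirection texture makes visible.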
Is that where the quantization is coming from? I can see how a 1k texture would end up having 4 pixels rounding to the same distance-values…
…except: Visually, the e.g. 1024x1024 indexing texture does NOT have quantization. It smoothly transitions across its surface.
I also tried using only a single channel from it and rendering it as greyscale, e.g. .x – what comes out appears to be a visually smooth white-to-black gradient with almost no discernible quantization.
However, your “0.0 to 1.0 in steps of 1.0/255.0” does sound like it would cause exactly the effect I’m seeing overall. So maybe it’s a trick of the eye, and the gradient rendering is in fact heavily quantized and just much harder to discern.
I’ll go generate some RGHalfs and see what happens…
PS: I wanted to use RGHalf initially, so that’s the long-term plan anyway; I might as well go for it right now. But then I ran into the Unity API’s lack of support for writing to them from C#. As a quick test, I figured I’d use the highest-precision texture format supported by their Color-based API, then switch to RGHalf later and find a better authoring workflow.
Update using RGHalf: Unity’s APIs refuse to write negative numbers via the SetPixel command. This is what I expected from the docs. But… I found an old forum thread where you demonstrated successfully writing negative numbers via SetPixel! Line by line, the code seems functionally identical to mine, so I’m trying to figure out why Unity refuses to do it…
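For what it’s worth, the half-float format itself has no problem with negative values, so whatever is refusing them must be in the API layer rather than the texture format. A quick Python check (the `'e'` struct format is IEEE 754 half precision):

```python
import struct

# Pack -0.5 into a 16-bit IEEE 754 half float and read it back.
# -0.5 is exactly representable in half precision.
packed = struct.pack('<e', -0.5)
value = struct.unpack('<e', packed)[0]
print(len(packed), value)  # 2 bytes, -0.5
```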
Update2: It seems that Unity 2019.2.x’s importer is broken for RGHalf / RGFloat images. I am getting consistent corruption of incoming images, as though Unity is “guessing” the ranges of colors and then pre-filtering (it seems to be applying a linear-gamma conversion, judging by the way it’s corrupting them).
In particular, contrary to the manual:
sRGB tickbox in the importer has no effect
Depending on the range of numbers found in the file, Unity selectively recolors the image during the import process
So many bugs in Unity :(. All the following are broken in 2019.2 (some tested still broken in 2019.7 but I got bored):
Unity projects in Gamma space break Unity’s APIs (and the exporters silently convert/corrupt outgoing data)
Unity’s importer is broken on detecting input texture format
Unity’s importer is broken on importing/exporting EXR files
Unity’s importer is broken on disabling the evil linear/gamma space “silent conversion” filter
Unity’s Texture2D API is missing core documentation (the new Constructor isn’t documented!)
Unity’s importer won’t allow you to override its buggy format-detector except on a platform-by-platform basis
Unity’s importer is missing most of the import formats
However. There’s also a bunch of hidden, or completely undocumented, features, that you need:
There’s a replacement for TextureFormat which – unlike TextureFormat – has all the missing formats, is correct, and works. It’s called GraphicsFormat (!), but it’s in the Experimental.Rendering API and has been there for 2+ years (!!)
There’s a new constructor for Texture2D that uses GraphicsFormat … but no docs. Use your autocompleting IDE and hope for the best.
You can force Unity to convert textures to linear space as follows:
Create a Color (e.g. “Color value;”) that’s your real color (NB: no clamping will happen at this stage, you’re safe)
Replace the Color with “value = value.linear” before writing it to the texture with SetPixel(s)
You can detect whether a force-to-linear is required via PlayerSettings.colorSpace – if it’s already linear, don’t do the above trick (the docs imply it would double-linearise the texture, although at this point I stopped caring and didn’t test it)
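For reference, the value.linear trick works because it applies the standard sRGB → linear transfer function per channel. Here’s a minimal Python sketch of that math (this is the standard sRGB EOTF; I’m assuming Unity’s Color.linear matches it to within rounding):

```python
def srgb_to_linear(c):
    """Standard sRGB -> linear transfer function, per channel.

    Below the 0.04045 threshold the curve is a straight line;
    above it, it's a scaled 2.4 power curve.
    """
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# sRGB mid-grey is much darker in linear light:
print(srgb_to_linear(0.5))  # ~0.214
```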
And there’s some things you have to do by hand:
Unity’s texture importer won’t let you tell it the texture format; you are required to use its extremely bad “guess”, which it will consistently get wrong, 100 times in 100, for some formats/contents. But if you select the “platform overrides” you get the REAL texture settings, and you can select a texture format that – while still incorrect! – tricks the crap importer into importing without corruption
(my main example: R16G16_SFloat textures are always imported as BC6H compressed for no apparent reason, giving them minor corruption)
Well, you didn’t say anything about importing those textures.
But yeah. If you’re generating floating point texture assets from within the editor, it’s best to save them as .asset files rather than exporting them to .exr, so they stay exactly as you created them. Or go crazy like @jbooth_1 and create your own asset importer. The hack of pre-transforming the color values (color = color.linear) is a nice workaround for continuing to use .exr and getting around the forced gamma conversion on read. Does that work with negative values (when not using a project set to gamma color space)?
Wouldn’t it simply be easier if Unity would just fix the EXR import? It’s been, what, 4 years since we all started sending in repros of this bug, and they started closing them as “Won’t fix” because “someone might be relying on the brokenness of our texture saving and loading code”…
Yes… it’s a wild take, considering they could keep the current behavior as the “default” that you could override. Or, you know, just fix it and make sure that some version of the settings replicated the default behavior. However, considering they broke transparent psd handling for everyone and told everyone to “just deal with it”, they could also just fix it.
“(Case 1198127) TextureImporter broken, sRGB checkbox ignored, EXR/HDR images corrupted based on contents”
Expecting a WONTFIX in 3 … 2 … 1 …
(but I’m assuming that if people keep reporting it, and the dupes keep rising, then it’ll get acted upon eventually)
NB: when I submitted the bug report, with a demo scene, I didn’t fully understand the number of things that were broken with the importer + exporter. Depending on the response I get, I’ll update the bug info now that I better know what’s (not) happening.