Texture2D.LoadImage doesn't seem to preserve exact pixel values

I’m currently trying to load a texture from a PNG file that I stored locally. I mostly want to use this texture as a crude level editor, with, for example (in RGBA32), [0,0,0,255] being stone, [1,0,0,255] being wood, [2,0,0,255] being iron, etc. Therefore the values in the texture in memory need to be 1:1 the same as in the PNG on disk. (I also don’t want to predefine the levels in Unity, so creating texture resources is out.)
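For context, here is a minimal sketch of the lookup I have in mind, assuming the red channel encodes the tile type (TileType and TileFromPixel are just illustrative names, not anything from Unity):

using UnityEngine;

public enum TileType { Stone, Wood, Iron }

public static class LevelPixels
{
    // Convert the float channel (0..1) back to its 0..255 byte value and map it to a tile.
    public static TileType TileFromPixel(Texture2D level, int x, int y)
    {
        int red = Mathf.RoundToInt(level.GetPixel(x, y).r * 255f);
        switch (red)
        {
            case 0: return TileType.Stone;
            case 1: return TileType.Wood;
            case 2: return TileType.Iron;
            default: return TileType.Stone; // unknown values fall back to stone for now
        }
    }
}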

If I do this via the Unity Editor it works. However, if I try to load the texture programmatically it doesn’t, as the pixel values are not perfectly preserved. For example, [1,0,0,255] and [2,0,0,255] are both mapped to [1,0,0,255]; another example would be [245,0,0,255] being mapped to [244,0,0,255].

To test this I created a 32x32 32-bit PNG with the upper-left 10 pixels going horizontally from [0,0,0,255] to [10,0,0,255]. Then I executed the code below, which should load the image, save it under a different name and output the pixel colors in memory.

var tmpimg = new Texture2D(32, 32, TextureFormat.RGBA32, false); // Has the same effect as var tmpimg = new Texture2D(32, 32);
tmpimg.LoadImage(File.ReadAllBytes("Assets/orig.png"));
Debug.Log(tmpimg.GetPixel(0, 31).ToString()
    + "; " + tmpimg.GetPixel(1, 31).ToString()
    + "; " + tmpimg.GetPixel(2, 31).ToString()
    + "; " + tmpimg.GetPixel(3, 31).ToString());
File.WriteAllBytes("Assets/saved.png", tmpimg.EncodeToPNG());

Executing this I get a debug output of

RGBA(0.000, 0.000, 0.000, 1.000); RGBA(0.004, 0.000, 0.000, 1.000); RGBA(0.004, 0.000, 0.000, 1.000); RGBA(0.008, 0.000, 0.000, 1.000)

which shows me that both [1,0,0,255] and [2,0,0,255] are mapped to the same RGBA value and that [3,0,0,255] is mapped to [2,0,0,255] (in float representation, obviously, but that shouldn’t matter, I think) already in memory. I also get the saved.png, which shows more mismatches when I look at it in Paint.NET.

So now my question is, am I doing something wrong? Or is LoadImage not guaranteed to preserve color values? And what can I do about it?

(If someone wants to test it for themselves I attached the whole unity project as a zip.)

3138049–237978–TextureLoadTest.zip (21.2 KB)

Didn’t load the project, but make sure you turn off compression on the imported texture: compression will modify your texture a lot. Also, turn off filtering (set the filter mode to Point) so that you don’t get a blend from pixel to pixel.

You could also check whether your card supports RGBA32; I normally use ARGB32. It sounds like it could be defaulting to a 16-bit format, which is quantising your colour values.
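Something along these lines should tell you. SystemInfo.SupportsTextureFormat is the standard Unity call for this; the snippet is just meant to be dropped into any existing script:

// Quick runtime check of texture format support.
Debug.Log("RGBA32 supported: " + SystemInfo.SupportsTextureFormat(TextureFormat.RGBA32));
Debug.Log("ARGB32 supported: " + SystemInfo.SupportsTextureFormat(TextureFormat.ARGB32));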

I can’t change the compression, as I don’t actually import a texture via the Unity Editor but only load it as shown in the posted code (and Texture2D doesn’t have any members regarding compression, only a Compress() function, but that seems to be there to compress a texture, not to change its compression settings).

Setting the filterMode (as in “tmpimg.filterMode = FilterMode.Point;”) before or after calling LoadImage doesn’t change the result.

I have actually tried a few of the texture formats (including ARGB32), but none of them work. Furthermore, if I manually import the texture via the Unity Editor (which works correctly) and then look at its type, it shows RGBA32 as its TextureFormat.

While I’m not sure what exactly is causing it not to work, it’s easy to imagine small color changes happening in one of a hundred different places in the process. Every piece of software treats images as images (rather than as level maps), which means exact pixel values can easily get lost if any one piece of software in your stack is trying to be clever. It’s possible this could vary depending on the platform you’re on, and I’ve even seen some webservers re-encode images to save on their own bandwidth. I would absolutely not rely on images for loading levels this way unless you’re prepared to account for small changes in colors.

A much more robust solution would be to use text files, which you can edit as plaintext (any code editor with a fixed-width font will do), and which won’t be subject to any compression or color space issues like this. You can read the files in line by line using C#'s standard file I/O (see the sketch after the example below). If you want to make a sort of diamond-shaped landmass with something in the middle, you could use this, for example:

000001100
000111110
011122111
001111100
000110000
000000000
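A minimal sketch of reading such a map, assuming a file called Assets/level1.txt laid out like the example above (the file name and class name are just placeholders):

using System.IO;
using UnityEngine;

public class TextLevelLoader : MonoBehaviour
{
    void Start()
    {
        // Read every row of the map file; each character is one tile id.
        string[] rows = File.ReadAllLines("Assets/level1.txt");

        int[,] tiles = new int[rows.Length, rows[0].Length];
        for (int y = 0; y < rows.Length; y++)
            for (int x = 0; x < rows[y].Length; x++)
                tiles[y, x] = rows[y][x] - '0'; // '0' -> 0, '1' -> 1, '2' -> 2

        Debug.Log("Tile at row 2, column 4: " + tiles[2, 4]);
    }
}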

I tried your project and it acted as you described. I loaded the orig.png into Photoshop and saved it out again, and now it all works fine in Unity (with the correct values). It could be a strange edge-case issue between Unity and that particular texture/art program. I’ve attached the new orig.png for you to look at.

3139218–238117–orig.zip (2.91 KB)


Actually, I had a look at the PNG binary and it seems your orig.png was accidentally saved in sRGB space (in Paint.NET - maybe you used ‘save for web’?). That will almost certainly cause the problems you were seeing. If you save it in standard RGB, it’ll work fine.
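If you want to check a file yourself, a rough sketch like this (it just walks the chunk list per the PNG spec, nothing Unity-specific) will show whether colour-space chunks such as sRGB, gAMA or iCCP are present:

using System;
using System.IO;
using System.Text;

static class PngChunkLister
{
    public static void ListChunks(string path)
    {
        byte[] bytes = File.ReadAllBytes(path);
        int pos = 8; // skip the 8-byte PNG signature
        while (pos + 8 <= bytes.Length)
        {
            // Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
            int length = (bytes[pos] << 24) | (bytes[pos + 1] << 16) | (bytes[pos + 2] << 8) | bytes[pos + 3];
            string type = Encoding.ASCII.GetString(bytes, pos + 4, 4);
            Console.WriteLine(type + " (" + length + " bytes)");
            if (type == "IEND") break;
            pos += 12 + length;
        }
    }
}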


True enough - this was only really supposed to be a quick hack to get something easily editable working (the actual images I load are 1024x1024 and therefore a bit harder to edit by hand :wink:), and I might implement some proprietary format later on.

Well that makes sense!

I’m a bit confused with regard to Paint.NET, as I don’t see any way to save in another color space (a completely different format, or 32- vs. 24- vs. 8-bit PNG, sure, but nothing about ‘save for web’ or anything similar). In any case, I guess I’ll simply use a different program for now.

Thanks a lot for your help! :smile:

@raedeo I have used bitmaps in this way before (to define levels), but I use a smaller portion of the color space so that the differences are easy to see visually.

Depending on how many different colors you really need, you can basically treat the three channels (r,g,b) as having only a few bits of information each.

The simplest example is to treat each channel as one bit, either on or off, which gives you eight (8) colors. The possible colors would be, in bitwise order: black, blue, green, cyan, red, magenta, yellow, white.

For checking each pixel, I made a custom pixel-read function that returns a string: “000”, “001”, “002”, etc.

If you need more than that, you can go to two bits apiece, or even just three separate levels per channel. I used tri-level colors, so a color channel was either completely dark, half-bright, or full-bright, and the strings from my custom pixel-read function were “000”, “001”, “002”, “010”, “011”, etc., up to “222”. This gave me 3^3 = 27 possible values, which was plenty, and I could still easily tell apart, for example, “012” vs. “021” visually in the texture editor.
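A rough sketch of that kind of pixel-read function, assuming tri-level channels (the ReadTileCode name and the 0.25/0.75 thresholds are just my illustration):

using UnityEngine;

public static class TileCodes
{
    // Map one channel (0..1) to '0' (dark), '1' (half-bright) or '2' (full-bright).
    static char Level(float channel)
    {
        if (channel < 0.25f) return '0';
        if (channel < 0.75f) return '1';
        return '2';
    }

    // Returns strings like "000", "012" or "222" for the pixel at (x, y).
    public static string ReadTileCode(Texture2D map, int x, int y)
    {
        Color c = map.GetPixel(x, y);
        return new string(new[] { Level(c.r), Level(c.g), Level(c.b) });
    }
}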

This touches on the other problem with the OP’s original system that bothered me, which I forgot to mention: if your texture-as-map system actually distinguishes between 0 and 1 on a 0-255 scale, how the hell is the person editing the levels supposed to tell the difference between them? Reducing the color space to something smaller (I’d guess you can maybe get to about 8 levels per channel, for 8^3 = 512 distinct map tiles, before you start having major issues with color recognizability) is definitely a workable version of this concept.