I’ve been working at this problem for days and I can’t seem to make headway. I’m looking for performance advice or an algorithm optimization that I’m missing. The problem: I’m loading 55 textures (8192×4096 px) and trying to retrieve the brightness/grayscale values from these images for height maps and alphamap masks.
Currently each one takes roughly 3 seconds to process. The issue isn’t loading just one, it’s loading all 55, which can take up to 3–4 minutes in the editor and roughly 6 minutes in a build. I don’t know why it would be slower in the build, but regardless.
I’ve tried using Texture2D in Unity but opted for the System.Drawing.Bitmap class instead so I can use parallelism to do GetPixel, via a helper util I found. You can find the code below. Thanks.
Not sure how much help you’ll find on non-Unity classes on this forum (I for one don’t know anything about ImageReader) but we can definitely help you optimize Texture2D-based code. Since you are under the impression that you can’t use parallel processing on Texture2D, I’m guessing that your Texture2D code looked a lot like the code you have here: specifically, calling GetPixel in a loop. (Which, indeed, you cannot put into a thread; most large Unity classes, Texture2D included, are locked to the main thread.) If that’s what you were doing, then it’s no wonder you thought it was too slow to ever work.
There are two things to note here:
1) Calling GetPixel() in a loop is absurdly slow. If you use GetPixels() to throw all the pixels of an image into a Color[] array, and then loop through that array, you might run literally 100x faster (depending on the size of each image, etc).
And just as importantly:
2) The array you create would not be locked to the main thread, unlike Texture2D, and could run multiple jobs in parallel.
Combine those, and it’s likely that your big bottleneck will become how fast your computer can load the textures, which there’s not a lot you can do about. (You could definitely get further improvements and offload a lot of this work to the video card if you write a compute shader; this is the sort of thing those are good at, but alas that’s not within my skill set to advise you on.)
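To make that concrete, here’s a minimal sketch of the combination (the method name and the use of Parallel.For are my own choices, not anything from your code): GetPixels() runs once on the main thread, and the resulting plain Color[] array is then processed on worker threads.

using System.Threading.Tasks;
using UnityEngine;

public static class TextureSampling
{
    public static float[] ToGrayscale(Texture2D tex)
    {
        // Main thread: one bulk call instead of millions of GetPixel calls.
        Color[] pixels = tex.GetPixels();

        // Worker threads: a plain managed array is not tied to the main
        // thread, so the per-pixel math can run in parallel.
        var gray = new float[pixels.Length];
        Parallel.For(0, pixels.Length, i =>
        {
            // Color.grayscale applies Unity's standard luminance weights.
            gray[i] = pixels[i].grayscale;
        });
        return gray;
    }
}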
As mentioned I don’t know the ImageReader class, but I’d be surprised if there were not a similar optimization. Going into the main data class with 880 million GetPixel calls (which is what you’re doing with all this) is unlikely to be efficient in any API.
All this is assuming that this processing is needed, which it might not be. What's the goal of this algorithm? Do the images need to be their full size? Do they all need to be processed ahead of time? If you're trying to turn them into grayscale, could you change their import settings instead? My spider-sense is tingling and I have a strong suspicion that this heavy lifting could probably be avoided by approaching the problem from a different angle, but not knowing the problem I can't advise what that other angle might be.
So the ImageReader class is a wrapper around the System.Drawing.Bitmap class that makes it easier to use, letting me lock/unlock the image data and pull it through using parallelism. Its GetPixel is not the same as Bitmap.GetPixel, so it doesn’t incur the usual performance issues. I’ve made methods for GetPixel() and GetPixels():
public System.Drawing.Color GetPixel(int x, int y)
{
    unsafe
    {
        // Iptr points at the locked bitmap data (BitmapData.Scan0).
        byte* ptr = (byte*)Iptr;
        ptr += bitmapData.Stride * y;   // jump to the start of row y
        ptr += Depth * x / 8;           // advance to pixel x (Depth is bits per pixel)

        System.Drawing.Color c = System.Drawing.Color.Empty;
        if (Depth == 32)        // byte order in memory: B, G, R, A
        {
            int a = ptr[3];
            int r = ptr[2];
            int g = ptr[1];
            int b = ptr[0];
            c = System.Drawing.Color.FromArgb(a, r, g, b);
        }
        else if (Depth == 24)   // byte order in memory: B, G, R
        {
            int r = ptr[2];
            int g = ptr[1];
            int b = ptr[0];
            c = System.Drawing.Color.FromArgb(r, g, b);
        }
        else if (Depth == 8)    // single grayscale byte
        {
            int r = ptr[0];
            c = System.Drawing.Color.FromArgb(r, r, r);
        }
        return c;
    }
}
public System.Drawing.Color[,] GetAll()
{
    unsafe
    {
        var colors = new System.Drawing.Color[source.Height, source.Width];
        int bytesPerPixel = Depth / 8;  // assumes Depth == 32 below

        for (int y = 0; y < source.Height; y++)
        {
            // Recompute the row start from Iptr each row; Stride may include
            // padding, so don't just keep walking ptr forward across rows.
            byte* row = (byte*)Iptr + bitmapData.Stride * y;
            for (int x = 0; x < source.Width; x++)
            {
                byte* ptr = row + x * bytesPerPixel;
                int a = ptr[3];
                int r = ptr[2];
                int g = ptr[1];
                int b = ptr[0];
                colors[y, x] = System.Drawing.Color.FromArgb(a, r, g, b);
            }
        }
        return colors;
    }
}
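For reference, the same lock-once-and-read pattern can be written directly against System.Drawing without the wrapper. This is just a rough sketch (the method name is illustrative, and it assumes the bitmap is locked as 32bpp ARGB):

using System.Drawing;
using System.Drawing.Imaging;
using System.Threading.Tasks;

static float[] LuminanceFromBitmap(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly,
                                   PixelFormat.Format32bppArgb);
    var result = new float[bmp.Width * bmp.Height];
    int width = bmp.Width, stride = data.Stride;
    long scan0 = (long)data.Scan0;  // lambdas can't capture pointers, so pass the address as a long

    Parallel.For(0, bmp.Height, y =>
    {
        unsafe
        {
            byte* row = (byte*)scan0 + y * stride;
            for (int x = 0; x < width; x++)
            {
                byte* p = row + x * 4;  // 32bpp ARGB in memory: B, G, R, A
                result[y * width + x] =
                    (0.114f * p[0] + 0.587f * p[1] + 0.299f * p[2]) / 255f;
            }
        }
    });

    bmp.UnlockBits(data);
    return result;
}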
The performance is still not that great with that either, but I may be missing something there. I’ve also tried doing a similar thing with Texture2D, but it’s roughly the same performance.
Yeah, so the point of it is to pull in these massive textures, split them into tiles, and apply alphamaps over them to create the terrain for the game. Ideally I’d just like to stick an image in a folder and have the game load it, but if that’s not possible I have other ways around it; I wanted to see if I could exhaust this approach first.
Actually managed to get it to work, and have gotten the texture loading down to 0.5 to 1 second per texture thanks to GetRawTextureData, so my load time for all the textures went from 7 minutes in a build down to around 1 minute. Thanks thorham3011!
If anyone in the future stumbles onto this thread: in my testing, GetRawTextureData() is way faster than either the System.Drawing.Bitmap classes or Unity’s GetPixels() methods.
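Roughly the pattern, for anyone searching (a minimal sketch, not my exact code; the names are illustrative, and it assumes a readable texture imported as RGBA32):

using System.Threading.Tasks;
using UnityEngine;

public static class HeightMapBuilder
{
    public static float[] BuildHeights(Texture2D tex)
    {
        // Main thread: one copy of the raw bytes (4 bytes per pixel for
        // RGBA32, laid out R, G, B, A). No per-pixel API calls after this.
        byte[] raw = tex.GetRawTextureData();
        int width = tex.width;
        int height = tex.height;
        var heights = new float[width * height];

        // Worker threads: a plain byte[] is safe to read in parallel.
        Parallel.For(0, height, y =>
        {
            int rowStart = y * width * 4;
            for (int x = 0; x < width; x++)
            {
                int i = rowStart + x * 4;
                // Standard Rec. 601 luminance weights for grayscale.
                heights[y * width + x] =
                    (0.299f * raw[i] + 0.587f * raw[i + 1] + 0.114f * raw[i + 2]) / 255f;
            }
        });

        return heights;
    }
}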
You could probably get it faster still by using BMP files instead of PNG or JPG. Those two take a while to decode, while BMP files are just raw pixels.