Texture memory explodes unreasonably

I am currently loading image files from a server, then creating textures from them, and I am somewhat surprised at the amount of memory they end up taking at runtime.

I have 33 images, which on the server total 6.56 MB.

I have a routine to download them, and then I apply them to 400 Sprite Renderers. I purposely create new renderers rather than reusing the first 33, because the plan is to eventually have 400 unique images, so this actually suits my test.
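
For reference, my download-and-apply routine is roughly along these lines (a simplified sketch; the URL list and renderer array are stand-ins for my actual data, and error handling is omitted):

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ImageLoader : MonoBehaviour
{
    public IEnumerator LoadImages(string[] imageUrls, SpriteRenderer[] renderers)
    {
        for (int i = 0; i < renderers.Length; i++)
        {
            // 33 image URLs cycled across the 400 renderers.
            string url = imageUrls[i % imageUrls.Length];
            using (UnityWebRequest request = UnityWebRequest.Get(url))
            {
                yield return request.SendWebRequest();

                // Decode the downloaded jpg/png bytes into a new Texture2D.
                Texture2D tex = new Texture2D(2, 2);
                tex.LoadImage(request.downloadHandler.data);

                renderers[i].sprite = Sprite.Create(
                    tex, new Rect(0, 0, tex.width, tex.height), new Vector2(0.5f, 0.5f));
            }
        }
    }
}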

My runtime texture memory explodes from 482 MB to 3.92 GB after applying all 400. This is not viable for a mobile environment, and my initial tests on a phone with 6 GB of RAM crash before I can load them all (and that’s without the rest of the runtime, which I have not included in this test scene).

I understand that textures are heavy and can be reduced with compression, power-of-2 sizes, turning off mipmapping, turning off read/write, etc. But doesn’t this seem unreasonable? Is something wrong? I don’t see how it’s possible for each image to take 43 times the amount of memory when it is created as a texture. (I arrived at that number like this: 6.56 MB only accounts for the first 33 image files, so 400 of them would be roughly 80 MB; then (3.92 GB − 482 MB) / 80 MB ≈ 43.)

What is the best way to go about cutting this size down, with compression and lowering quality as a last resort? And how do I do it programmatically, since the images are being loaded dynamically? I am somewhat new to the nitty-gritty of texture processing. I am currently exploring TextureImporter, but I’m not sure how to use it with dynamic assets such as these. I am thinking I have to save to disk, then call TextureImporter(pathToNewFile), configure the import, then create the Texture2D by reading from disk? Edit: I see now that TextureImporter is editor-only.

Also, I am trying Texture.Apply(makeNoLongerReadable: true), but strangely it seems to have no impact.

Can anyone point me in the right direction? Thank you in advance.


After looking into the issue further, there are significant gotchas around per-platform texture formats and compression, around Get/SetPixels, and around trying to handle all of this programmatically at runtime.

I am thinking the easy solution here is to just compress the images with C# before they hit the server, and then, if necessary, apply platform compression when re-inflating them at runtime.

Still, I think it’s kind of insane that 80 MB of image data inflates to 3.5 GB when made into textures.

This seems entirely reasonable.

Presumably your textures are stored as jpg or png images. These are variable-compression formats used for the web, meaning the content of the image (and, in the case of jpg, the quality of the compression) determines the size of the file. They are obviously convenient for the web and even everyday usage, as they can greatly reduce the storage size of images compared to their uncompressed equivalents.

Try opening up Photoshop and making a solid-color 4096x4096 texture. Save that as a 32 bit tga file; that’ll be 64 megabytes. Now save it as a png and it’ll be <60 KB; as a jpg, <110 KB. This is an extreme example, but it shows there can be a >1000x difference in size between compressed and uncompressed images, so 43x isn’t too bad.

However, GPUs can’t use images compressed in these formats. There are several reasons for this, but the most basic issue is that a GPU needs to be able to randomly access any texel of the image and get its color value quickly. Decompressing a jpg or png image can take quite a while, even on a modern computer, and especially relative to the time frames a GPU works in (preferably microseconds or less). So GPUs instead use uncompressed or fixed-compression-ratio image formats, where the storage size is determined only by the resolution and format, not by the content of the image. This way, if the GPU needs to access a specific texel, it can quickly calculate where in memory that data is and access it immediately.

By default, when images are imported into Unity at runtime, they are decoded to an uncompressed format and served to the GPU. This is because the step of re-compressing them into a GPU-native compressed format isn’t free; in some cases it can take literally minutes for a single texture, for some formats and larger image sizes. The most recent versions of Unity do have real-time compressors available for runtime importing that take only a few milliseconds per image, though at the cost of significantly reduced quality compared to in-editor compression.

They also default to using mipmaps, which increase the space used by the texture by ~33%. So a 4096x4096 image will end up at around 85 megabytes.
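
As a rough back-of-the-envelope check (assuming an uncompressed 32 bit RGBA format):

// Uncompressed 4096x4096 RGBA32 texture with a full mip chain.
long basePixels = 4096L * 4096L;     // 16,777,216 texels
long baseBytes  = basePixels * 4;    // 4 bytes per texel = 67,108,864 bytes (64 MB)
long withMips   = baseBytes * 4 / 3; // mip chain adds ~1/3 -> ~89.5 million bytes (~85 MB)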

As a side note, this is a big reason why web browsers eat up so much memory too. To display a compressed image you have to decompress it, and web browsers always use the uncompressed version of the image while displaying it.

So what’s the fix? There are a few options, though you should temper your expectations for how much less memory you’ll be using: GPU compression formats are generally either 6:1 (without alpha) or 4:1 (with alpha) ratios, meaning you’re likely looking at reducing that 3.5 GB to just under 1 GB.

One is, as mentioned, runtime compression. This works, but do expect it to take a few extra milliseconds per image imported on a mobile device, and for the image quality to be significantly reduced. Search around on the forums and you’ll find examples of how to do it. It used to be that you did this by creating a Texture2D in the compressed image format you want (likely ETC2 if you need transparency) and then importing an image into it, but I think there are slightly easier options today.
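
For example, Texture2D.Compress can do the re-compression after loading; a rough sketch, where imageBytes is a placeholder for the downloaded jpg/png bytes:

// Decode the downloaded bytes, then re-compress to a GPU block-compressed format in place.
// Note: block compression generally wants dimensions that are multiples of 4.
Texture2D tex = new Texture2D(2, 2, TextureFormat.RGBA32, false);
tex.LoadImage(imageBytes);  // resizes the texture and fills it with the decoded image
tex.Compress(false);        // pass true for higher quality, at the cost of more time
tex.Apply(false, true);     // upload to the GPU and release the CPU-side copy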

Another option would be to have multiple versions of the images on the server: small thumbnail versions and full-size versions. Load all of the thumbnails to start with, and if you need a full-size version of an image, only load it when needed and unload it as soon as you don’t need it anymore. This lets you keep the full-quality version of the image and avoid the cost of compressing it, but of course you’ll have to load the images each time they’re needed, which also isn’t free. You could also keep some cache of them and unload old ones once you’ve loaded too many.
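
A minimal sketch of that cache idea (the budget of 16 full-size textures and the API shape are made up, just to show the general pattern; error handling omitted):

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Networking;

// Very simple FIFO cache for full-size textures; thumbnails would stay resident elsewhere.
public class TextureCache : MonoBehaviour
{
    const int MaxFullSizeTextures = 16; // made-up budget
    readonly Dictionary<string, Texture2D> cache = new Dictionary<string, Texture2D>();
    readonly Queue<string> loadOrder = new Queue<string>();

    public IEnumerator GetFullSize(string url, System.Action<Texture2D> onLoaded)
    {
        if (cache.TryGetValue(url, out Texture2D cached))
        {
            onLoaded(cached);
            yield break;
        }

        using (UnityWebRequest request = UnityWebRequestTexture.GetTexture(url))
        {
            yield return request.SendWebRequest();
            Texture2D tex = DownloadHandlerTexture.GetContent(request);

            // Evict the oldest texture once we're over budget
            // (assumes nothing on screen is still using it).
            if (loadOrder.Count >= MaxFullSizeTextures)
            {
                string oldest = loadOrder.Dequeue();
                Destroy(cache[oldest]); // frees the texture's memory
                cache.Remove(oldest);
            }

            cache[url] = tex;
            loadOrder.Enqueue(url);
            onLoaded(tex);
        }
    }
}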

The last option is to not store them on the server as png images at all, but to use asset bundles where they’ve been pre-compressed into GPU-friendly formats. You can also use the ASTC format in that case, which has both better quality and better compression ratios than the ETC2 format you’d be limited to when using runtime compression.
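
A sketch of loading a texture out of a bundle served from the web (the bundle URL and asset name are placeholders; the bundle itself would be built ahead of time with ASTC/ETC2 texture compression):

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class BundleTextureLoader : MonoBehaviour
{
    public IEnumerator LoadFromBundle(string bundleUrl, string textureName, System.Action<Texture2D> onLoaded)
    {
        using (UnityWebRequest request = UnityWebRequestAssetBundle.GetAssetBundle(bundleUrl))
        {
            yield return request.SendWebRequest();
            AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(request);

            // The texture inside the bundle is already in a GPU-friendly compressed
            // format, so there is no runtime decode/re-compress cost here.
            Texture2D tex = bundle.LoadAsset<Texture2D>(textureName);
            onLoaded(tex);

            // Unload the bundle container but keep the loaded texture alive.
            bundle.Unload(false);
        }
    }
}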


First I just want to say, you always give very detailed and thoughtful responses (I am somewhat new to posting, but have lurked for a long time). I can see the time you put into your response. And thank you for that.

At this point I would happily take just under 1 GB. I did previously consider thumbnails and caching, but was hoping to avoid getting into the weeds on it just because there is so much else to implement. If it comes down to “this idea is simply unviable without thumbnails/caching”, then that’s my answer on where my efforts will be directed next.

For runtime compression I have been experimenting with the following:

// Decode the downloaded jpg/png bytes into a texture, then re-compress it.
Texture2D tex = new Texture2D(2, 2, TextureFormat.RGB24, false);
tex.LoadImage(downloadedImages[index].FileStreamResult);
tex.Compress(false);    // runtime compression to a GPU block-compressed format
tex.Apply(false, true); // upload and release the CPU-side (readable) copy

(In my tests, simply setting a compressed format in the Texture2D constructor didn’t seem to have any effect. Texture2D.format was still reporting RGB24 even when specifically passing TextureFormat.ETC_RGB to the constructor, which is why there is the additional call to Texture2D.Compress.)

This works fast enough, and image quality is high, but it doesn’t reduce the footprint as much as is needed. Still, a step in the right direction.

Something that remains unclear to me: when I take a representative image (92 KB) from the group of 33, import it into the project, and set compression to “None” in the inspector, the resulting texture shows a size of 2.0 MB, which is only 22x. I guess it is probably along the lines of something I was reading elsewhere: textures created from images imported through the editor are treated differently than anything done at runtime. I just wish it were clear how to achieve those results programmatically. I will keep looking.

Also I would have never considered storing them on the server as assets directly. I appreciate this idea and will explore it more.

With preprocessing of the test files, I got the texture memory down from 3.5 GB to 0.86 GB, with very little noticeable visual difference… then an additional test using Texture2D.Compress pushed it down further to 0.55 GB, with no visual change at all.

Thank you so much for the tips, bgolus.
