Texture2D memory

Hello,

I’ve been doing some memory analysis and I have some questions regarding raw texture size for Texture2D.

I create a Texture2D from an external png image using the LoadImage method.
In this example, the texture is 2048x2048 and the texture format is ARGB32.

I would expect the raw size in memory to be: 2048 pixels x 2048 pixels x 4 channels x 1 byte/channel (ARGB32 is 8 bits per channel) = 16 MB
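To make my expectation explicit, here's the calculation I'm doing (just a sketch; the 4 bytes/pixel is my reading of ARGB32 as 4 channels at 8 bits each):

long width = 2048, height = 2048;
long bytesPerPixel = 4; // ARGB32: 4 channels x 1 byte/channel
long expectedBytes = width * height * bytesPerPixel; // 16,777,216 bytes
UnityEngine.Debug.Log($"Expected raw size: {expectedBytes / (1024f * 1024f):F2} MB"); // 16.00 MB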

In the Memory Profiler I see this:

(114.4 MB managed + 114.4 MB native)

MipCount is 1 and mipChain is false, so I don't think mipmaps should be taking any extra space.

My question is: what causes this difference? (114.4 MB is a lot bigger than 16 MB)
Am I missing something? Am I making some mistake in my calculations?

On another note, is it normal for a texture to take 2x the memory (managed + native)?

Thank you for your time.

Those numbers do seem strange.
It's been a while since I've used the Memory Profiler, but the managed and native sizes you see listed may refer to the same bytes.
There's also this in the docs, if you haven't found it yet:

In the pre 1.0 versions of the memory Profiler, the All Objects Table contains Unity Objects that have a managed and a native object as separate entries, and the managed object doesn’t just have a size in the Managed Size column, but also references the Native Object’s size in the Native Sizes column. It’s the same bytes that this size refers to, i.e. a duplication of information but not of actual size in memory.

If you could copy your LoadImage experiments into a Unity 2022.2 project and inspect a snapshot from that in the 1.0 version of the Memory Profiler package, the selection details on the right will give you metadata on the texture, including whether Read/Write is enabled (doubling its size for the CPU memory copy that is retained). With these versions, you'll also get the native size split into CPU and GPU memory.
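If you want a quick sanity check from script too, something like this (a rough sketch, assuming tex is your loaded Texture2D) shows whether the CPU copy is retained and what size Unity tracks for the object at runtime:

using UnityEngine;
using UnityEngine.Profiling;

// After loading the texture:
Debug.Log($"isReadable: {tex.isReadable}"); // false means the CPU copy was released
long trackedBytes = Profiler.GetRuntimeMemorySizeLong(tex);
Debug.Log($"Runtime size: {trackedBytes / (1024f * 1024f):F2} MB");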

114.4 MB ≈ 64 MB (4096x4096 ARGB32) + 50 MB (4096x4096 RGB24)

You haven't specified your Unity version, but if you're using ImageConversion.LoadImage, don't forget to pass true for markNonReadable if you don't need a system-memory copy of the texture. This should cut the size in half.

Check the size of your texture; it might be bigger than you're expecting.
Or maybe it's not exactly a power of two and was upscaled? To validate, check what the Editor Inspector says about that png when you import it normally through the Editor UI.
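You can also log what the texture actually ended up as after loading (assuming tex is the Texture2D that LoadImage filled in):

// Dimensions, format and mip count as Unity sees them at runtime.
Debug.Log($"{tex.width}x{tex.height}, format: {tex.format}, mips: {tex.mipmapCount}");
// A non-power-of-two source may have been resized on import.
bool isPow2 = Mathf.IsPowerOfTwo(tex.width) && Mathf.IsPowerOfTwo(tex.height);
Debug.Log($"Power of two: {isPow2}");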

As for the two different formats (ARGB32 and RGB24), there might be two reasons:

  • The existing format of the Texture2D you were loading into was RGB24
  • Since the GPU-side texture size is calculated rather than measured, older versions of Unity did not always compute the actual size correctly. That might be the case here, and the actual memory size is 128 MB. We've put a lot of effort into 2022.2/2023.1 to correct that.

Thank you for the replies; it makes a lot more sense now. :)


There was some funny business in my own sample that was throwing me off.
However, I still think something odd is going on with the markNonReadable flag.

I did some testing in both Unity 2021 and Unity 2022, and I never see a difference in used memory regardless of whether markNonReadable is true or false.

In this scenario, I'm using a 4000x3750 texture.
4000 x 3750 x 4 bytes/pixel = 57.22 MB

Both 2021 and 2022 show 114.4 MB in the memory profiler.
It looks to me that the memory is indeed duplicated: 57.22 MB x 2 = 114.44 MB.
Passing true or false for markNonReadable makes no difference.

The code I’m using:

byte[] fileData = System.IO.File.ReadAllBytes(filePath);
// Initial size is a placeholder; LoadImage resizes the texture to the image dimensions.
Texture2D tex = new Texture2D(0, 0, TextureFormat.RGBA32, false);
tex.LoadImage(fileData, true); // true = markNonReadable

Could this be a bug?

Are you testing in the Editor or in a built app?

I’m testing in the Editor.

The Editor doesn't release the CPU copy. You need to test it in a player build.


I didn't know about that; that's good to know. Thank you.