I’ve been doing some memory analysis and I have some questions regarding raw texture size for Texture2D.
I create a Texture2D from an external PNG image using the LoadImage method.
In this example, the texture is 2048x2048 and the texture format is ARGB32.
Since ARGB32 stores four channels at one byte each, I would expect the raw size in memory to be: 2048 pixels x 2048 pixels x 4 bytes/pixel = 16 MB
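For reference, the setup looks roughly like this (a minimal sketch; the file path is a placeholder):

```csharp
using System.IO;
using UnityEngine;

public class LoadPngExample : MonoBehaviour
{
    void Start()
    {
        // Read the raw PNG bytes from disk (path is a placeholder).
        byte[] pngBytes = File.ReadAllBytes(
            Path.Combine(Application.persistentDataPath, "image.png"));

        // LoadImage replaces the texture's contents, size, and format
        // with the decoded PNG data.
        var tex = new Texture2D(2, 2, TextureFormat.ARGB32, false);
        tex.LoadImage(pngBytes); // extension method from ImageConversion

        // ARGB32 is 4 channels at 1 byte each = 4 bytes per pixel,
        // so the expected raw size is width * height * 4 bytes.
        long expectedBytes = (long)tex.width * tex.height * 4;
        Debug.Log($"{tex.width}x{tex.height} {tex.format}, " +
                  $"expected raw size: {expectedBytes / (1024f * 1024f):F2} MB");
    }
}
```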
Those numbers do seem strange.
It's been a while since I've used the Memory Profiler, but the managed/native entries you see listed may refer to the same memory.
There's also this in the docs, if you haven't found it yet:
In the pre 1.0 versions of the memory Profiler, the All Objects Table contains Unity Objects that have a managed and a native object as separate entries, and the managed object doesn’t just have a size in the Managed Size column, but also references the Native Object’s size in the Native Sizes column. It’s the same bytes that this size refers to, i.e. a duplication of information but not of actual size in memory.
If you could copy your LoadImage experiments into a Unity 2022.2 project and inspect a snapshot from that in the 1.0 version of the Memory Profiler package, the selection details on the right will give you metadata on the texture, including whether Read/Write is enabled (doubling its size for the CPU memory copy that is retained). With these versions, you'll also get the native size split into CPU and GPU memory.
You haven't specified the Unity version, but if you're using ImageConversion.LoadImage, don't forget to pass true for markNonReadable if you don't need a system-memory copy of the texture. This should cut the memory use in half.
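A minimal sketch of that call (assuming you already have the PNG data as a byte array):

```csharp
using UnityEngine;

public static class TextureLoading
{
    public static Texture2D LoadPngNonReadable(byte[] pngBytes)
    {
        var tex = new Texture2D(2, 2);
        // The third argument is markNonReadable: passing true lets Unity
        // discard the CPU-side copy after upload, keeping only the GPU copy.
        ImageConversion.LoadImage(tex, pngBytes, true);
        return tex;
    }
}
```

Note that once the texture is marked non-readable, you can no longer call GetPixels/SetPixels on it.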
Check the size of your texture; it might be bigger than you're expecting.
Or maybe it's not exactly a power of two and was upsized? To validate, check what the Editor inspector says about that PNG if you import it normally through the Editor UI.
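You can also check the dimensions and format at runtime rather than via the inspector (a quick sketch; the helper name is just for illustration):

```csharp
using UnityEngine;
using UnityEngine.Profiling;

public static class TextureDebug
{
    public static void LogTextureInfo(Texture2D tex)
    {
        // Dimensions and format as Unity actually stored them after LoadImage.
        Debug.Log($"{tex.width}x{tex.height}, format: {tex.format}, mips: {tex.mipmapCount}");

        // Unity's own estimate of the runtime memory used by this object.
        Debug.Log($"Runtime size: {Profiler.GetRuntimeMemorySizeLong(tex)} bytes");
    }
}
```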
As for the exact reason for the different formats (RGBA32 and RGB24), there might be two reasons:
The existing Texture2D you were loading into was already RGB24.
Because the GPU-side texture size is calculated rather than queried, older versions of Unity didn't always calculate the actual size correctly. That might be the case here, and the actual memory size is double what you calculated. We've put a lot of effort into 2022.2/2023.1 to correct that.
There was some funny business in my own sample that was throwing me off.
However, I do think there’s something funny going on with the markNonReadable flag.
I did some testing in both Unity 2021 and Unity 2022, and I never see a difference in used memory regardless of whether markNonReadable is set to true or false.
In this scenario, I’m using a 4000x3750 texture.
4000 x 3750 x 4 bytes/pixel = 57.22 MB
Both 2021 and 2022 show 114.4 MB in the memory profiler.
It looks to me that memory is indeed duplicated: 57.22 MB x 2 = 114.44 MB
Passing markNonReadable as true or false makes no difference.
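For what it's worth, the comparison I'm running looks roughly like this (a sketch; how you source the raw PNG bytes is up to you, e.g. a .bytes TextAsset or File.ReadAllBytes):

```csharp
using UnityEngine;
using UnityEngine.Profiling;

public class MarkNonReadableTest : MonoBehaviour
{
    public TextAsset pngData; // placeholder source for the raw PNG bytes

    void Start()
    {
        var readable = new Texture2D(2, 2);
        ImageConversion.LoadImage(readable, pngData.bytes, false);

        var nonReadable = new Texture2D(2, 2);
        ImageConversion.LoadImage(nonReadable, pngData.bytes, true);

        // If markNonReadable worked, the second texture should report
        // roughly half the size of the first (no retained CPU copy).
        Debug.Log($"readable:     {Profiler.GetRuntimeMemorySizeLong(readable)} bytes");
        Debug.Log($"non-readable: {Profiler.GetRuntimeMemorySizeLong(nonReadable)} bytes");
    }
}
```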