Recently I’ve tried to scale images at runtime in the game (e.g. downscaling a downloaded image from 5600×4300 to 2048×1500 or similar) and noticed that the game consumes more and more memory with each processed image.
I’ve poked around for a while, ensured that all objects derived from UnityEngine.Object are destroyed, and so on. For now, I’ve narrowed the problem down to this snippet of code:
using System;
using UnityEngine;

public class Tester : MonoBehaviour
{
    [SerializeField]
    Texture2D texture;

    void Update()
    {
        texture.GetPixels();
        GC.Collect();
    }
}
If I attach this script to an object in a scene (which contains nothing except the standard camera) and enter play mode, I can see in the Profiler that the editor’s memory consumption grows (not as fast as when I scale images in the game, but it still grows).
The texture assigned to the serialized field is a 4096×4096 single-color image.
I’m using Unity 2018.4.6f1, but the same problem appears in 2019.4.16f1.
using System;
using UnityEngine;

public class Tester : MonoBehaviour
{
    [SerializeField]
    Texture2D texture;

    void Update()
    {
        var colors = texture.GetPixels();
        texture.SetPixels(colors);
        texture.Apply();
        GC.Collect();
    }
}
This variant eats memory very fast. Note that the texture has to be uncompressed and Read/Write enabled for this to work.
Well, I highly doubt that there’s an actual memory leak. The returned array is a managed array, so it can’t really leak. Regardless of your testing setup, you should avoid reading the pixel data every frame: allocating such huge arrays is slow and produces huge amounts of garbage. Furthermore, memory fragmentation can also cause the reserved system memory to grow, since arrays require a contiguous memory area. So even when you call GC.Collect to release the array, there may be a tiny object sitting inside that old space by the time the array is reallocated, so the new array does not fit into the old “gap”. That’s not a memory leak: the reserved system memory stays allocated but is free to be used for smaller objects. Large objects should generally be avoided if possible.
That said, you may want to use GetPixels32 instead, which returns Color32 values instead of Color values; those are 4 times smaller (a Color32 holds 4 bytes, while a Color holds 4 floats, i.e. 16 bytes).
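To illustrate (a sketch of your test script with the 32-bit variants swapped in, not tested against your project): for a 4096×4096 texture this cuts the per-frame allocation from roughly 256 MB (16 bytes per pixel) to roughly 64 MB (4 bytes per pixel).

```csharp
using System;
using UnityEngine;

public class Tester : MonoBehaviour
{
    [SerializeField]
    Texture2D texture;

    void Update()
    {
        // Color32[] costs 4 bytes per pixel instead of 16 for Color[].
        var colors = texture.GetPixels32();
        texture.SetPixels32(colors);
        texture.Apply();
        GC.Collect();
    }
}
```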
Note that in your last test code, your GC.Collect call cannot collect the array you allocated in the current frame, as it’s still referenced by your local variable. So you will always have at least two versions of your array in memory, and the Collect call can only collect the array from the previous frame.
As I said, you should avoid reallocating such a huge array all the time. Even if you need to update the data every frame, there’s no need to reallocate the data array. I’ve actually used such an approach in my parallel-step Mandelbrot renderer.
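For the case where you generate the pixel data yourself (as in the Mandelbrot example), the buffer can be allocated once and reused; only the GetPixels* calls force a fresh allocation. A minimal sketch of that reuse pattern (the fill loop is just a placeholder):

```csharp
using UnityEngine;

public class Generator : MonoBehaviour
{
    [SerializeField]
    Texture2D texture;

    Color32[] buffer; // allocated once, reused every frame

    void Start()
    {
        buffer = new Color32[texture.width * texture.height];
    }

    void Update()
    {
        // Fill the existing buffer in place instead of allocating a new array.
        for (int i = 0; i < buffer.Length; i++)
            buffer[i] = new Color32((byte)(Time.frameCount & 255), 0, 0, 255);

        texture.SetPixels32(buffer);
        texture.Apply();
    }
}
```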
This is just test code. In the real application I download an image (approximately once every minute or two), then I need to resize it (because it has a huge pixel size and cannot be rendered on mobile devices “as is”; they support textures only up to 2048 pixels), and after that I can show it.
The problem is that after every iteration of this cycle, the application’s reserved memory grows (by approximately 150 megabytes), and eventually the app is killed by the Android OS, which is of course undesirable.
Now I’m trying to narrow down the memory problem. I’ve tried turning off resizing completely (just trying to show the downloaded texture). It does not show the texture (that’s expected), but it also does not raise memory consumption, so I think the problem is with the resizing itself.
Well, I’ve thought about heap fragmentation too. If that’s the case, that’s a pity.
Yeah, tried that too. It really does consume less memory (after every image, the heap grows by about 20-30 megabytes), but the problem still persists.
Good note, thanks. Nevertheless, to my understanding, the heap size should grow to some value and then stop there (but in my case I’m experiencing constant growth of the heap).
Unfortunately I’m not the boss here: AFAIK, there’s no way to read a Texture2D’s pixel data into a preallocated array.
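Not into a preallocated managed array, but if your Unity version is 2018.2 or newer, Texture2D.GetRawTextureData&lt;T&gt;() returns a NativeArray that is a direct view of the texture’s own memory, so no managed allocation happens at all. A sketch, assuming the texture is in TextureFormat.RGBA32 so each element maps to a Color32:

```csharp
using Unity.Collections;
using UnityEngine;

public class RawReader : MonoBehaviour
{
    [SerializeField]
    Texture2D texture; // assumed to be TextureFormat.RGBA32, Read/Write enabled

    void Update()
    {
        // A view over the texture's own buffer - no managed array is allocated.
        NativeArray<Color32> pixels = texture.GetRawTextureData<Color32>();
        Color32 first = pixels[0];

        // Writes through the view take effect after Apply() uploads them.
        texture.Apply();
    }
}
```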
Did you try the attached project or only the script?
If the project, what version of Unity are you using? Maybe the effect does not arise in all Unity versions.
I only ran the script, not the project. I’m kind of reluctant to run actual code that someone tells me is causing a problem. I did look at the source and can’t see any differences.
I’m using Unity 2019.4.7. My machine is a late-model laptop, but not a super-fast one. The GC on a slower computer may not keep up with your demands. I don’t know much about the C# GC, but the Java GC can fall far behind your memory allocations, even if you explicitly call it and ask it to do a collection. The doc on the C# GC suggests it will honor your call to Collect before returning, which would suggest that any leak you have isn’t in the code you’ve posted. However, if it only schedules a collection and returns before that’s done, it’s entirely possible that allocating big chunks sixty or more times each second will cause the GC to lag.
Try this: add a counter to your Update method and have it just return immediately after it has been called a few hundred times. If the GC is lagging, when your Update method stops allocating memory, you should see the memory usage suddenly drop when GC catches up. If you do, you’re not leaking memory at all. You’re just allocating it so fast that the GC can’t keep up with you.
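Something along these lines (my sketch of the counter idea; the cutoff of 300 frames is arbitrary):

```csharp
using System;
using UnityEngine;

public class Tester : MonoBehaviour
{
    [SerializeField]
    Texture2D texture;

    int frames;

    void Update()
    {
        if (frames++ >= 300)
            return; // stop allocating; watch whether memory drops afterwards

        var colors = texture.GetPixels();
        texture.SetPixels(colors);
        texture.Apply();
        GC.Collect();
    }
}
```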
Please try it if you don’t mind. Maybe something differs in the texture settings or in the project settings.
I’ve added a counter to make sure there are only 10 iterations. After ten very laggy frames (predictably), Unity ate about 3 GB of memory and stayed there. You can see the results in the Profiler window (screenshot attached). Memory allocation grows almost linearly for several frames and then stays steady.
Also, I’ve made a screenshot of the detailed memory view in the Profiler window: it appears that most of the memory goes to the Managed Heap. The question is why it’s not garbage collected.
Also, I’ve left Unity running for a while after that, and memory consumption does not drop, so the GC is not trying to return that memory.
That’s assuming the heap section the array was in hasn’t since been abandoned because the managed heap had to grow for new allocations. If it has been abandoned, a newer section with a higher address range is created and becomes the active heap section where new allocations go. Abandoned heap sections are not reused to place smaller allocations in; they just wait for all objects in them to be collected, and then the section is returned to the OS. You can see this happening in the memory map of the Memory Profiler package, and we’ll make inspecting managed heap fragmentation way more obvious in that package going forward.
P.S. Your description is accurate for our native bucket allocators, though.
I’ve used the Memory Profiler too (and I find it a very useful tool). Here are the Tree Map and Memory Map screens from it (I did not attach the whole capture because it’s about 2.6 GB in size).
If this really is a fragmentation problem, can I do something to avoid it? The task is to take an image downloaded from the network, resize it (because of its huge pixel size), and then use it as a Texture.
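One way to sidestep the managed heap entirely is to do the resize on the GPU: Graphics.Blit scales the source texture into a RenderTexture of the target size, and you only read back the (already small) result if you need it as a Texture2D. A sketch, assuming the downloaded image is already available as a Texture2D:

```csharp
using UnityEngine;

public static class TextureScaler
{
    // Downscales `source` to width x height on the GPU, then reads the result back.
    public static Texture2D Resize(Texture2D source, int width, int height)
    {
        RenderTexture rt = RenderTexture.GetTemporary(width, height, 0);
        Graphics.Blit(source, rt); // GPU scales the image; no managed allocations

        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = rt;

        var result = new Texture2D(width, height, TextureFormat.RGBA32, false);
        result.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        result.Apply();

        RenderTexture.active = previous;
        RenderTexture.ReleaseTemporary(rt);
        return result;
    }
}
```

ReadPixels here only copies the 2048-sized result, so the huge managed arrays for the original image are never created. Remember to Destroy() the previous texture when replacing it.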
I’ve tried using the SixLabors.ImageSharp library for resizing the image; it does not have this problem, but it’s very slow. Maybe someone knows a good native library for working with images that has Unity bindings? I’ve looked at ImageMagick, but did not find native libraries built for Android ARMv7.
Aside from that, and maybe a reason that others in this thread couldn’t reproduce this: this could come down to Mono 3.5 allocation behavior and not be reproducible with 4.x on versions newer than 2018.4 (it might also be specific to the combination of 3.5 + Android, as there were platform-specific details in the Mono implementations back then). Gotta double-check that still, though.
Good point, thanks. I’ll try it and report back afterwards.
And about the resize: in the main project (not this test one) I’m using it, of course, to change the texture’s pixel size. But I still need the pixel data somewhere to perform the image scaling.