The following code, run on UWP, raises memory usage by ~100 MB every iteration until the application crashes.
What am I missing here?
We’re seeing behavior where we download smaller files and save them to disk, but never get that memory back. This is the smallest reproducible example, and it looks to be the root cause of our memory issues.
This is running on a desktop with plenty of memory, but our application runs on a HoloLens, where we really only have 900 MB; the Unity engine takes up about 350 MB by default, leaving us with just 550 MB for our app.
UnityWebRequest.Get() uses DownloadHandlerBuffer, which keeps the downloaded bytes in memory.
If you are saving the download to a file, you should use DownloadHandlerFile instead; it gives you a small memory footprint.
Also, it is good practice to dispose of the UnityWebRequest once you no longer need it, either by putting the request object in a “using” block or by calling Dispose() explicitly. The memory where the data is stored is native memory, so the GC might not run. Calling GC.Collect() explicitly could help, but it’s costly and I would advise against it.
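A minimal sketch of that advice (assuming Unity 2020.1+, where `UnityWebRequest.Result` exists; the URL and file name are placeholders):

```csharp
using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;

public class FileDownloader : MonoBehaviour
{
    IEnumerator Download(string url)
    {
        string path = Path.Combine(Application.persistentDataPath, "download.bin");

        // DownloadHandlerFile streams bytes straight to disk instead of
        // keeping the whole file in a native buffer.
        using (var request = new UnityWebRequest(url, UnityWebRequest.kHttpVerbGET))
        {
            request.downloadHandler = new DownloadHandlerFile(path);
            yield return request.SendWebRequest();

            if (request.result != UnityWebRequest.Result.Success)
                Debug.LogError(request.error);
        }
        // Leaving the using block disposes the request and frees its native memory.
    }
}
```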
Using the DownloadHandlerFile does fix this issue.
But it seems strange to me that we have no way to force the memory to be reclaimed when we know we’re about to crash from running out of it.
I’ve added System.GC.Collect(100, System.GCCollectionMode.Forced, true); after writing the file to disk with File.WriteAllBytes, but that doesn’t seem to do much.
The above code crashes after about 10 iterations (~1 GB of app memory usage) on a PC with 32 GB of memory.
Running this on a HoloLens 1 crashes the application after about 20 iterations, at around 1.3 GB.
The wider issue is that we use a GLTF model loader from the MRTK codebase:
Essentially, a GLTF is a large JSON file. The JSON file embeds textures and mesh data as base64 strings. When we load the model, the mesh data is read as a base64 string and extracted into a byte array (this has a large memory footprint). It seems to me that any time we allocate a large buffer (like a MemoryStream or a byte array), there’s a risk of never getting that memory back for later use.
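As an illustration of the streaming alternative, plain .NET can decode base64 incrementally instead of materialising the whole array at once (a sketch; `DecodeBase64ToFile` is a hypothetical helper, not part of the glTF loader):

```csharp
using System.IO;
using System.Security.Cryptography;

public static class Base64Streaming
{
    // Decodes base64 text from an input stream straight to a file, so only
    // one small chunk of the payload is alive at any time.
    public static void DecodeBase64ToFile(Stream base64Text, string outputPath)
    {
        using (var output = File.Create(outputPath))
        using (var decode = new CryptoStream(output, new FromBase64Transform(), CryptoStreamMode.Write))
        {
            base64Text.CopyTo(decode); // copies in ~80 KB chunks by default
        }
    }
}
```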
Specific things I can point to in the GLTF plugin:
Reading a large file into memory is the wrong approach in general. You should open a file stream and process the data in pieces. You can also process data in pieces by using DownloadHandlerScript.
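A rough sketch of the DownloadHandlerScript approach (the class name and the 64 KB buffer size are my own choices, not from the thread):

```csharp
using System.IO;
using UnityEngine.Networking;

// A DownloadHandlerScript receives data in chunks via ReceiveData, so the
// full file never sits in memory at once. Here each chunk goes to a file.
public class StreamToFileHandler : DownloadHandlerScript
{
    private readonly FileStream stream;

    // Passing a preallocated buffer makes Unity reuse it for every chunk.
    public StreamToFileHandler(string path) : base(new byte[64 * 1024])
    {
        stream = new FileStream(path, FileMode.Create, FileAccess.Write);
    }

    protected override bool ReceiveData(byte[] data, int dataLength)
    {
        stream.Write(data, 0, dataLength); // process only this chunk
        return true;                       // keep the download going
    }

    protected override void CompleteContent()
    {
        stream.Dispose();
    }
}
```

You would assign an instance of this handler to `request.downloadHandler` before calling SendWebRequest().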
No. It means that loading large files into memory is bad in general and should be avoided. They should be read and processed on the fly, unless you have a vast amount of memory (but then you could say the same files aren’t really large).
Also, before you call GC.Collect, you have to make sure you release all references to the memory you no longer need.
I appreciate your feedback and help on this, and I agree with your assessment that in general we should stream instead of reading the file content into memory all at once.
In the above examples (where I wasn’t yet using DownloadHandlerFile), if I call Dispose() on the web requests, should that release all references to the memory so it can be properly GCed? I understand that this is bad practice.
Calling Dispose() should free the internal buffer inside the DownloadHandler, which will be the size of the file. The rest of the UWR object has a fairly small memory footprint.
One problem you have is that your “request” variable is a member of the class and you never null it, so the UWR stays alive (not eligible for GC) until you start another request; without an explicit Dispose(), that also means all the downloaded data stays in memory.
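In other words, the release order matters: drop every reference to the large data first, then collect. A sketch (the field and method names mirror the discussion, not your actual code):

```csharp
using System.IO;
using UnityEngine.Networking;

public class DownloadCache
{
    // Mirrors the "request" member field from the discussion above.
    private UnityWebRequest request;

    public void SaveAndRelease(string path)
    {
        // Accessing .data copies the native buffer into a new managed array,
        // so read it once and keep the reference local.
        byte[] payload = request.downloadHandler.data;
        File.WriteAllBytes(path, payload);

        payload = null;     // drop the managed copy
        request.Dispose();  // free the native buffer inside the handler
        request = null;     // the field no longer keeps the UWR alive

        // Only after all references are gone can a forced collection
        // actually reclaim the large array.
        System.GC.Collect();
    }
}
```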
It doesn’t really behave much differently. Memory keeps going up until it reaches ~1.2 GB. I then pause that ugly download-and-dump loop (not an application pause, I just skip the logic) and I see that the GC does recover some memory, but leaving it running does not reduce memory back to what the application used at boot.
This line is a problem. Accessing data on the DownloadHandler allocates a new byte array each time, and that array is only released by the GC. So you are essentially paying for your file twice: the native memory used internally, which is released on Dispose(), and the managed array, which is released whenever the GC decides to run (or when it is forced via GC.Collect()).
Hi, we’re seeing the same behaviour in UWP (HoloLens) using HttpClient. My current suspicion is that there is a large object heap issue in UWP/IL2CPP where some large buffers never get cleared. If I download objects of 0–200 KB and access them as a byte array (or stream them to disk), everything appears to be fine; once we hit objects of 200 KB+, the memory climbs.
I could do this reliably with a test app on HoloLens using the following steps:
download a 1.6 MB PNG
access the data as a byte[]
dispose of the stream and de-reference the array
wait a few seconds and start again.
Over a couple of minutes the memory use pushes up towards 900 MB.
If I convert the same image to a JPG (220 KB) and perform the same process, memory doesn’t appear to grow (or nowhere near as fast).
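The repro steps above, sketched with .NET HttpClient (the URL is a placeholder; this is my reading of the steps, not the original test app):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class LohRepro
{
    // Runs the download/release loop described above.
    public static async Task RunAsync(string url, int iterations)
    {
        using (var client = new HttpClient())
        {
            for (int i = 0; i < iterations; i++)
            {
                // download the ~1.6 MB PNG and access it as a byte[]
                byte[] data = await client.GetByteArrayAsync(url);

                data = null; // de-reference the array

                // wait a few seconds and start again
                await Task.Delay(TimeSpan.FromSeconds(3));
            }
        }
    }
}
```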