Unity 64-bit still has out-of-memory error

I have created a point cloud viewer using the free point cloud asset from the Asset Store. I wrote my own file format reader and it works well for small data sets. However, point clouds can get really big. When I try to read a large point cloud, I quickly run into an “OutOfMemoryException: Out of Memory” error. Everything I have found here on the forums says that 64-bit Unity should fix this problem, and I understand why, but it is not fixing it for me. I am using the 64-bit Unity editor and have my project set to build a 64-bit executable. When I try to allocate nearly 2 GB for storage, I still get the out-of-memory exception. I get the same result when playing from the editor and when running my standalone executable. I have 16 GB of RAM in my computer. I have thought about loading in smaller chunks, but ultimately I have to have it all in RAM eventually. Does anyone know of something I’m missing in Unity? I can read this data set into multiple other applications on my computer just fine.

“Stop trying to load that much data” is not an acceptable answer for this project.

This has nothing to do with having a 32-bit or 64-bit architecture. In general, .NET arrays are limited to Int32.MaxValue elements, which is about 2 billion (2,147,483,647). In addition, the actual memory a single object can use is also usually limited to roughly that many bytes (about 2 GB). I’ve read that somewhere around .NET 4.5+ this limitation was removed for x64 builds when you specify gcAllowVeryLargeObjects in the configuration. However, I haven’t found anything for the Mono side, especially the version Unity uses, so I would guess the same limitations apply there.

Since the loading code allocates a single Vector3 array to hold all points at once (which is pretty wasteful, given that the data is then split into several meshes anyway), the size of that array can easily run into this per-object memory limit.

492,866,607 points × 3 floats × 4 bytes = 5,914,399,284 bytes (almost 6 GB of memory). The “colors” array is even worse, since a Color has 4 float values instead of 3, so it takes another 7,885,865,712 bytes (about 7.9 GB).
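To make the numbers concrete, here is the same arithmetic spelled out (the sizes follow directly from the struct layouts: Vector3 = 3 floats, Color = 4 floats, Color32 = 4 bytes):

// Rough size math, using long arithmetic to avoid int overflow.
long points        = 492866607L;
long vertexBytes   = points * 3 * sizeof(float); // 5,914,399,284 B ≈ 5.9 GB
long colorBytes    = points * 4 * sizeof(float); // 7,885,865,712 B ≈ 7.9 GB
long color32Bytes  = points * 4 * sizeof(byte);  // 1,971,466,428 B ≈ 2.0 GB
// For comparison: a single 2 GiB object can hold at most about
// 178 million Vector3s (2147483648 / 12), far short of 492 million.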

The loading code could be made much more efficient:

  • Don’t read the whole file into a single array. Just create the separate meshes as you read in the data.
  • The Mesh class now has a SetVertices method which can take a List<Vector3>. This allows you to reuse the same list for every mesh. It also has a SetColors method. Since the index buffer is the same for every mesh (just an array of increasing integers), you can create it once and reuse it for every mesh (see the sketch after this list).
  • Using Color32 instead of Color will decrease the size of the color data by a factor of 4. The color values in the .off format are specified as byte values anyway.
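As an illustration of that reuse pattern, here is a minimal sketch (the class and member names are mine, not the asset’s): one shared vertex/color list pair and one precomputed index buffer that get flushed into a new Mesh for each chunk.

using System.Collections.Generic;
using UnityEngine;

public class ChunkMeshBuilder
{
    // 65000 points per mesh keeps each chunk under the old 16-bit index limit.
    public const int PointsPerMesh = 65000;

    public readonly List<Vector3> Verts  = new List<Vector3>(PointsPerMesh);
    public readonly List<Color32> Colors = new List<Color32>(PointsPerMesh);
    readonly int[] indices = new int[PointsPerMesh];

    public ChunkMeshBuilder()
    {
        // The index buffer is just 0..n-1, so it is built exactly once.
        for (int i = 0; i < indices.Length; i++)
            indices[i] = i;
    }

    // Turns the points collected so far into a Mesh and clears the lists
    // so they can be reused for the next chunk.
    public Mesh Flush()
    {
        var mesh = new Mesh();
        mesh.SetVertices(Verts);
        mesh.SetColors(Colors);

        int[] idx = indices;
        if (Verts.Count < PointsPerMesh)          // shorter final chunk
        {
            idx = new int[Verts.Count];
            System.Array.Copy(indices, idx, idx.Length);
        }
        mesh.SetIndices(idx, MeshTopology.Points, 0);

        Verts.Clear();
        Colors.Clear();
        return mesh;
    }
}

Because the lists and the index array are reused, the only per-chunk allocations are the Mesh itself and, for the final shorter chunk, one small index array.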

The only thing you can’t easily do “on the fly” is calculate the overall min value and adjust all points at the same time, since the data is now parsed in chunks. Of course you could calculate the min / max / average as you parse the file, but at the end you would need to revisit every created mesh, read its vertices, update the points and write them back.
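Such a second pass could look roughly like this (a sketch with hypothetical names; createdMeshes is assumed to be a list of the meshes collected during loading and min the minimum computed while parsing):

// Shifts every mesh so the overall minimum ends up at the origin.
void RecenterMeshes(List<Mesh> createdMeshes, Vector3 min)
{
    foreach (Mesh mesh in createdMeshes)
    {
        Vector3[] verts = mesh.vertices;   // copies the vertex data out
        for (int i = 0; i < verts.Length; i++)
            verts[i] -= min;
        mesh.vertices = verts;             // writes the adjusted points back
        mesh.RecalculateBounds();
    }
}

If the offset is only there to bring the cloud near the origin for viewing, simply moving a parent Transform by -min gives the same visual result without touching the meshes, though it doesn’t help if the raw coordinates are so large that float precision inside the vertex data itself becomes a problem.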

However, even though he had calculating the min position in mind, he doesn’t actually do it at all. Also, the way he calculates the min value makes no sense: if the actual min value is (0,0,0), he will end up using an arbitrary point, due to:

if (minValue.magnitude == 0)
    minValue = point;
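
A more robust way to track the minimum (again just a sketch, with hypothetical names) is to seed it from the first parsed point and take the component-wise minimum from then on:

bool haveFirstPoint = false;
Vector3 minValue = Vector3.zero;

void TrackMin(Vector3 point)
{
    if (!haveFirstPoint)
    {
        minValue = point;                          // seed from the first real point
        haveFirstPoint = true;
    }
    else
    {
        minValue = Vector3.Min(minValue, point);   // component-wise minimum
    }
}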

While this point cloud loader seems to be working for many people, it’s actually a very bad implementation. I know that SetVertices probably wasn’t available when this script was written, but the overall concept isn’t very robust.

Putting the pieces together, you could try something along these lines:
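This is only a rough, untested sketch, not the original asset’s code. It assumes a plain text format with one “x y z r g b” line per point (adjust the parsing to your own format) and builds the meshes chunk by chunk with the ChunkMeshBuilder sketched earlier:

using System.Globalization;
using System.IO;
using UnityEngine;

public class StreamingPointCloudLoader : MonoBehaviour
{
    public string filePath;          // path to the point file ("x y z r g b" per line assumed)
    public Material pointMaterial;   // a material suitable for rendering points

    static readonly char[] Separators = { ' ', '\t' };

    void Start()
    {
        var builder = new ChunkMeshBuilder();   // from the sketch above
        int meshCount = 0;

        using (var reader = new StreamReader(filePath))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                string[] parts = line.Split(Separators, System.StringSplitOptions.RemoveEmptyEntries);
                if (parts.Length < 6)
                    continue;                    // skip header / malformed lines

                var point = new Vector3(
                    float.Parse(parts[0], CultureInfo.InvariantCulture),
                    float.Parse(parts[1], CultureInfo.InvariantCulture),
                    float.Parse(parts[2], CultureInfo.InvariantCulture));
                var color = new Color32(
                    byte.Parse(parts[3]), byte.Parse(parts[4]), byte.Parse(parts[5]), 255);

                builder.Verts.Add(point);
                builder.Colors.Add(color);

                // As soon as a chunk is full, turn it into a mesh and drop the raw data.
                if (builder.Verts.Count == ChunkMeshBuilder.PointsPerMesh)
                    CreateMeshObject(builder.Flush(), meshCount++);
            }
        }

        if (builder.Verts.Count > 0)             // leftover points in the last chunk
            CreateMeshObject(builder.Flush(), meshCount);
    }

    void CreateMeshObject(Mesh mesh, int index)
    {
        var go = new GameObject("PointChunk_" + index);
        go.transform.SetParent(transform, false);
        go.AddComponent<MeshFilter>().sharedMesh = mesh;
        go.AddComponent<MeshRenderer>().sharedMaterial = pointMaterial;
    }
}

This way the only raw point data held in managed memory at any time is one chunk’s worth, plus the meshes themselves; there is never a second full copy of the cloud sitting in giant temporary arrays.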