Meshes using way more data than expected / calculated

So I’ve been recently working on a mobile game that uses procedural mesh generation. I’ve got it working really well, except that the memory is way over the top.

I created a simple test case to demonstrate this issue.

If you paste the code below onto a GameObject in a blank scene, it will create 16,000 meshes, each with 289 vertices and 1,536 triangle indices. If we assume this accounts for the majority of the mesh data, and that floats and ints are 4 bytes each, then the size per mesh should be about 1,536 × 4 (for the indices) + 289 × 3 × 4 × 2 (three components per vertex, 4 bytes each, once for positions and once for normals), which equals 13,080 bytes per mesh. Multiply this by the total mesh count and we get 209,280,000 bytes, or about 209 megabytes.
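For anyone who wants to double-check the arithmetic, here is the same estimate as a tiny standalone C# program (plain arithmetic, no Unity required):

```csharp
using System;

class MeshMemoryEstimate {
    static void Main() {
        const int meshCount = 16000;
        const int vertsPerMesh = 289;     // (16 + 1) * (16 + 1)
        const int indicesPerMesh = 1536;  // 16 * 16 * 6
        const int bytesPerFloat = 4, bytesPerInt = 4;

        int indexBytes  = indicesPerMesh * bytesPerInt;           // 6,144
        int vertexBytes = vertsPerMesh * 3 * bytesPerFloat * 2;   // positions + normals = 6,936
        int perMesh     = indexBytes + vertexBytes;               // 13,080
        long total      = (long)perMesh * meshCount;              // 209,280,000

        Console.WriteLine($"{perMesh} bytes per mesh, {total} bytes total");
    }
}
```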

If we run the actual code, we end up with over 700 megabytes of RAM used.

Am I misunderstanding something, or is Unity actually creating that much more data for each mesh?

Here is the code so you can try this yourself. Just put it in a blank object in a blank scene, and set the globalMaterial variable to some material from the editor.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class MeshTester : MonoBehaviour {

    public Material globalMaterial;

    const int chunkSize = 16;

    // Use this for initialization
    void Start () {
        InitializeGameObjects();
    }

    void InitializeGameObjects() {
        for (int x = 0; x < 40; x++) {
            for (int y = 0; y < 10; y++) {
                for (int z = 0; z < 40; z++) {
                    var g = new GameObject();

                    g.AddComponent<MeshRenderer>().sharedMaterial = globalMaterial;
                    g.transform.position = new Vector3(x * chunkSize, y * chunkSize, z * chunkSize);

                    var mesh = new Mesh();

                    var verts = new Vector3[(chunkSize + 1) * (chunkSize + 1)];
                    var tris = new int[chunkSize * chunkSize * 6];

                    int i = 0;

                    // Heightfield vertices: Perlin noise drives the Y coordinate.
                    for (int x1 = 0; x1 < chunkSize + 1; x1++)
                        for (int z1 = 0; z1 < chunkSize + 1; z1++)
                            verts[i++] = new Vector3(
                                x1 + x * chunkSize,
                                y * chunkSize + Mathf.PerlinNoise((x1 + x * chunkSize) * 0.1f, (z1 + z * chunkSize) * 0.1f) * 5.0f,
                                z1 + z * chunkSize);

                    i = 0;

                    // Two triangles (six indices) per grid cell.
                    for (int x1 = 0; x1 < chunkSize; x1++) {
                        for (int z1 = 0; z1 < chunkSize; z1++) {
                            tris[i + 0] = Get1DIndex(x1, z1);
                            tris[i + 2] = Get1DIndex(x1 + 1, z1);
                            tris[i + 1] = Get1DIndex(x1, z1 + 1);

                            tris[i + 3] = Get1DIndex(x1 + 1, z1 + 1);
                            tris[i + 5] = Get1DIndex(x1, z1 + 1);
                            tris[i + 4] = Get1DIndex(x1 + 1, z1);

                            i += 6;
                        }
                    }

                    mesh.vertices = verts;
                    mesh.triangles = tris;
                    mesh.RecalculateNormals();

                    g.AddComponent<MeshFilter>().sharedMesh = mesh;
                }
            }
        }
    }

    int Get1DIndex(int x, int z) {
        return z + x * (chunkSize + 1);
    }
}
After further investigation, it appears that it is the MeshRenderer, not the mesh itself, that is using up most of the RAM. This still raises the question: what the heck is going on?

Once the mesh is created and you no longer need to modify it through code, call Mesh.UploadMeshData(true) to free up the memory.
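For anyone following along, a minimal sketch of that call (the follow-up posts confirm the advice refers to Mesh.UploadMeshData; a readable mesh keeps a CPU-side copy of its data in addition to the GPU copy, and passing true releases the CPU copy):

```csharp
// After the mesh is fully built and assigned, push it to the GPU and
// release the CPU-side (readable) copy. true = markNoLongerReadable,
// so the vertices can no longer be modified or read back from script.
mesh.vertices = verts;
mesh.triangles = tris;
mesh.RecalculateNormals();
mesh.UploadMeshData(true);
```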


Thanks Karl. I tried what you said, and I'm not sure it made much of a difference, if any. The odd thing is that Xcode shows about 800 MB of memory usage on iOS, but Instruments only shows about 200 MB worth of persistent allocations, which would make a lot more sense imo.

My bad! Actually UploadMeshData(true) does help, just not as much as I would've hoped.

It shrank the memory usage by about 200 MB. This is great!... but the memory usage is still very high.

A few other things I have noticed:
* There appears to be a lot of overhead per MeshRenderer. If I render the same number of triangles but with 1/8 as many MeshRenderers (for example), I get substantially less memory usage, about 300 MB less. More MeshRenderers are of course expected to mean more memory, but with this big a difference while rendering essentially the same thing, one has to wonder what is going on.
* Instruments reports substantially less memory usage than Xcode does. On iOS, Xcode reports about 800 MB of memory usage after using UploadMeshData(true), while Instruments reports about 200 MB in persistent allocations. Why the big difference?
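The first observation (fewer renderers, less memory) suggests merging neighbouring chunks into a single mesh. A hedged sketch using Unity's Mesh.CombineMeshes, assuming `chunks` holds the MeshFilters created earlier (the helper name is illustrative):

```csharp
// Merge several chunk meshes into one, so e.g. 8 chunks cost a single
// MeshRenderer/MeshFilter instead of eight.
Mesh MergeChunks(List<MeshFilter> chunks) {
    var combine = new CombineInstance[chunks.Count];
    for (int i = 0; i < chunks.Count; i++) {
        combine[i].mesh = chunks[i].sharedMesh;
        combine[i].transform = chunks[i].transform.localToWorldMatrix;
    }
    var merged = new Mesh();
    // 289 verts * 8 chunks stays well under the 65,535-vertex limit of
    // the default 16-bit index format, so no IndexFormat change is needed.
    merged.CombineMeshes(combine);
    return merged;
}
```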

There will be overhead with more mesh renderers; I can't say how much. You may find Graphics.DrawMesh and Graphics.DrawMeshInstanced are better. Have you tried the Profiler and the Memory Profiler?
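For reference, a rough sketch of the DrawMeshInstanced suggestion. Note it only pays off when many chunks can share one mesh and differ only by transform, which isn't directly the case here (each chunk has its own Perlin heights); `chunkMesh` and `allMatrices` are assumed fields, and the material needs GPU instancing enabled:

```csharp
// Render one mesh at many positions with no GameObjects, MeshRenderers
// or MeshFilters. DrawMeshInstanced accepts at most 1023 matrices per call,
// so split larger sets into batches. Must be called every frame.
void Update() {
    for (int i = 0; i < allMatrices.Count; i += 1023) {
        int count = Mathf.Min(1023, allMatrices.Count - i);
        Graphics.DrawMeshInstanced(chunkMesh, 0, globalMaterial,
                                   allMatrices.GetRange(i, count));
    }
}
```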

Ok, so I did another test. If I use the Memory Profiler you mentioned, it only accounts for about 260 MB of RAM, whereas Xcode reports the app using about 1 GB of RAM. Do you have any idea why there is such a big difference? I should mention that I am using unmanaged code, but even if I profile it in Instruments, it still only shows about 370 MB worth of persistent allocations.

I also have tried DrawMesh, but the performance decreased significantly.

Is there anyone on the Unity dev team that would be able to look at the Mesh internals and explain to me where all of the memory is going?

You would need to file a bug report, although I suspect this is not a bug.
Have you tried using the Profiler to see what is using the memory? A quick test showed the Mesh using 267.7 MB for me. Use the detailed view and click "Take Sample" to get further information.

Well it wouldn't be a bug if it actually took up that much memory. If you run it on an iOS device you will get a much higher memory usage.

Have you attached the profiler to the iOS device?

Yes, it reports a similar amount of memory usage to what you are saying. The problem is that Xcode reports much higher memory usage, as does the device. The iOS device and Xcode are reporting around 800 MB of memory usage, while the Unity profiler connected to the iOS device only reports about 200-300 MB.

Does that make sense?

What do you mean by the device? (Where are you seeing ram usage?).

I'm too lazy to look it up right now, but I think I've recently seen something about Xcode reporting being wrong because of a Unity bug or something.

If you look at the Debug Session tab in Xcode while you have a device connected with the running app, you will see memory usage. I know this is the actual device memory usage because I get low memory notifications on the device, which shouldn't happen with only 200-300 MB of memory usage.

Are you using iOS 12?

We're still trying to figure out with Apple whether this is some issue with Xcode/Instruments memory measurement or an actual bug on our side.

Thanks. I just tried running on an iOS 11 device. I do get lower memory usage, but only by about 150 MB. The memory usage is still over 800 MB on iOS 11 in my game.

Can you file a bug report and include all your findings? The iOS team will have to look at it.