Can a dev explain what Mesh.UploadMeshData does?

The function Mesh.UploadMeshData is undocumented; however, it sounds useful for optimizing memory usage. What does it do, exactly? I've posted this question to Unity Answers as well.

Does this function upload the mesh's RAM buffers into VRAM and then clear them so as not to waste memory? I'm creating a mesh from a script and will only ever remove and replace it, never modify it. Is there anything to gain from calling this function immediately after mesh generation?

I’d appreciate answers from the dev/staff or someone with Unity source code access. Thanks.

Hi,

It looks like a documentation bug! Thanks for pointing that out.

It copies the mesh to the GPU memory and frees it from the main memory. Therefore, you will not be able to modify it once it’s “uploaded”.


Great, this is exactly what I hoped it did. Thanks for the response.

One more question, Tautvydas: the function takes an argument, "bool markNoLongerReadable". What happens if I pass in false? Is the mesh then still readable somehow?

Hi,

That's correct; I somehow left that part out of my post. It will only free the main memory the mesh occupies if you pass in "true".
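Putting the two answers together, a minimal sketch of the usage (the class name and triangle data are mine; Mesh.UploadMeshData and its markNoLongerReadable parameter are the real Unity API):

```csharp
using UnityEngine;

public class UploadAfterGeneration : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh();
        mesh.vertices = new[] { Vector3.zero, Vector3.up, Vector3.right };
        mesh.triangles = new[] { 0, 1, 2 };
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;

        // Send the data to the GPU now rather than at first visibility.
        // Passing true also frees the CPU-side copy, so the mesh can no
        // longer be read or modified from script; pass false to keep it
        // readable.
        mesh.UploadMeshData(true);
    }
}
```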

Hi,

I'd like to ask about this method. I have a project where I need to create a huge number of meshes procedurally: it's a market with all the products on the shelves, placed by the user. I combine their meshes to reduce draw calls. I noticed that doing so produces performance spikes each time a combined mesh is first seen by the camera. I was searching for a solution and ended up calling mesh.UploadMeshData(false) on each combined mesh. The spikes vanished, but is this a good way to go in such situations?
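For context, the pattern described above would look roughly like this (a sketch: CombineMeshes and UploadMeshData are real Unity calls, while the helper name and surrounding code are illustrative):

```csharp
using UnityEngine;

public static class ShelfMeshCombiner
{
    // Combine the product meshes into one and upload it immediately,
    // so the upload cost is paid here instead of at first visibility.
    public static Mesh CombineAndUpload(MeshFilter[] filters)
    {
        var combine = new CombineInstance[filters.Length];
        for (int i = 0; i < filters.Length; i++)
        {
            combine[i].mesh = filters[i].sharedMesh;
            combine[i].transform = filters[i].transform.localToWorldMatrix;
        }

        var combined = new Mesh();
        combined.CombineMeshes(combine);

        // false: keep the CPU-side copy readable, since the combined mesh
        // will be rebuilt whenever the user moves products around.
        combined.UploadMeshData(false);
        return combined;
    }
}
```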

I believe normally, a mesh’s data is only sent when it is first seen. I have seen reports of some people making a mesh visible to an off-screen camera in order to instantiate it first.

In my procedural engine, I've taken the approach of 'rendering' directly into the combined mesh and tracking the offsets. When an object is no longer 'visible' (or needs to be detached for some dynamic operation), I move its vertices to (0,0,0), and I 'double buffer' by writing into alternate meshes that are swapped in whenever the triangles change.

I am not certain how this affects your situation, but I believe your UploadMeshData is triggering the same thing as the ‘fake camera trick’ that I had seen someone else use. I am just looking into UploadMeshData now though.

I'm making procedural terrain and have noticed that, according to the profiler, if I call UploadMeshData(whatever) at the end of each mesh generation, the amount of memory consumed by the meshes doubles! ~50 MB of meshes becomes ~100 MB.

Shouldn’t it be the other way around, at least with UploadMeshData(true)?

Ok. According to the profiler, a mesh that hasn't been seen yet (i.e. hasn't been uploaded to the GPU) takes 48.9 KB, while a mesh that has been uploaded takes 311.6 KB. This explains why UploadMeshData(whatever) increases memory consumption: if I just rotate my camera around so that every mesh has been visible at least once, I get the same memory consumption as if I had called UploadMeshData() on each mesh.

What I don’t get is why there’s no difference between UploadMeshData(true) and UploadMeshData(false).

False alarm. UploadMeshData(true) does reduce memory consumption (311 KB → 263 KB per mesh), but only in builds, not in the editor.
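If you want to check from script whether the CPU-side copy is still accessible, Unity exposes the Mesh.isReadable property (a real API; whether it behaves the same in the editor as in builds may differ, per the observation above):

```csharp
mesh.UploadMeshData(true);
Debug.Log(mesh.isReadable); // reports whether the CPU-side copy is still accessible
```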


Why would I use this feature, and how can it help optimise performance in my game?

It lets you control when a scripted Mesh's data is sent to the GPU and what happens afterwards. Normally, a mesh isn't sent to the GPU when its vertices are assigned, but when it is first seen by a camera; that first visibility effectively calls this function (or one like it). You can use it to 'preload' those meshes and avoid stuttering, if that's a problem you're seeing when many scripted meshes (e.g. terrain chunks or spawned items) suddenly become visible.

The argument changes things a bit: Unity normally keeps a copy of each script-created mesh in memory, e.g. mesh.vertices returns a copy of the vertices from that in-memory mesh. If you pass "true" to this function, the in-memory copy is released and mesh.vertices is no longer accessible.

As an example, suppose you use mesh collision for each terrain chunk, those chunks update often from explosions/modification, and you combine meshes after a period of inactivity to reduce draw calls while staying mindful of the time it takes to assign a large collision mesh. If the terrain chunks are generated on the fly from noise, or you manage pools of vertices manually, this function can help optimize memory usage.

In that case, it keeps you from storing a second in-memory copy of mesh data that is transient anyway (the individual chunks), and it gives you a sort of flow control over the larger combined meshes: you could upload N per frame, a few frames before they are seen.
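That "upload N per frame" flow control could be sketched as a coroutine (hypothetical: the queue, budget, and class name are mine; UploadMeshData is the real call):

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class MeshUploadQueue : MonoBehaviour
{
    public Queue<Mesh> pending = new Queue<Mesh>();
    public int uploadsPerFrame = 4; // tune against your frame budget

    IEnumerator Start()
    {
        while (true)
        {
            // Spread uploads over frames to avoid one large spike.
            for (int i = 0; i < uploadsPerFrame && pending.Count > 0; i++)
                pending.Dequeue().UploadMeshData(false);
            yield return null;
        }
    }
}
```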

I think this represents a sort of intermediate step for procedural/dynamic generation where you want a real GameObject for something like a mesh collider or need to support legacy/pre-DX11/pre-OpenGL Core. Otherwise, mesh colliders are slow and it would be better to use the source data for collision detection (heightmap/noise) and the Graphics.DrawProcedural* functions.

I’m not entirely certain here, but if you are doing your own frustum culling (e.g. spherical culling on a planet), there may be another benefit to incorporating this.

Hi!
We have a bug in Pawarumi: randomly, all the bullets disappear for a single frame. The bullets are rendered by a MeshRenderer with a dynamic Mesh set on the MeshFilter.
In the Awake() of our bullet renderer, I have this:

        mesh = new Mesh();
        mesh.name = "Bullets";
        mesh.bounds = new Bounds(Vector3.zero, new Vector3(100, 1, 100));
        mesh.MarkDynamic();
        GetComponent<MeshFilter>().mesh = mesh;

Our Update() does this:

    // rendering
    List<Vector3> vertices = new List<Vector3>(4 * 512); // 512 quads should be a good start
    List<Vector4> texCoords = new List<Vector4>(4 * 512);
    List<Color32> colors = new List<Color32>(4 * 512);
    List<int> indices = new List<int>(6 * 512);

    void Update()
    {
        vertices.Clear();
        texCoords.Clear();
        indices.Clear();
        colors.Clear();

        // blahblah, fills in the buffers

        mesh.Clear();
        mesh.SetVertices(vertices);
        mesh.SetUVs(0, texCoords);
        mesh.SetColors(colors);
        mesh.SetTriangles(indices, 0);
    }

Should I use UploadMeshData() here? Could it fix our bug?

PS: I believe the bug started with our update to 5.5, but I’m not entirely sure.

EDIT: Nevermind, I finally found out what the bug was. It was a matter of rendering order.

BTW, I'm not sure if it's supposed to work that way, but I had a surface shader with rendering order set to Transparent+1 and a regular shader with rendering order set to Overlay, and the second was rendered before the first! I switched the surface shader to a regular one and the problem was gone.

And I'm still interested to know whether I should call UploadMeshData() manually or not.
