What is the order/connection of array elements in mesh.vertices?

I’m simulating mesh subdivision, but I only need to know the position of each mesh vertex: I interpolate between vertices, check whether the distance is big enough to create a new (temporary) vertex, add it to listOfSubdivisionVertices, and return that list to my algorithm.

But I recently found that mesh.vertices contains 24 elements (thread for that) for a regular 3D cube with 8 vertices. So 16 of those elements (normals + UVs) would be unnecessary data & wasted computation time, but I couldn’t find anywhere what the order of these vertex arrays is.

Like:

{
   vertex1
   normal1
   uv1

   vertex2
   normal2
   uv2

   etc
}

or:

{
   vertex1
   vertex2  
   vertex3

   normal1
   normal2
   normal3

   uv1
   uv2
   uv3
}

So: is it the first layout (the data for a single vertex grouped together) or the second layout (an array of vertex Vector3s, an array of normal Vector3s & an array of UV Vector3s)?

Well, almost everything you have concluded so far is wrong ^^. The vertices array only contains positions. It does not contain normals or uv information; those are stored in separate arrays. Note that this layout is only for scripting usage. The actual vertex format used by the GPU may look completely different.

So a single vertex is made up of a position (from the vertices array), a normal (from the normals array) and a UV coordinate (from the uv array). They are grouped by the same index: index 0 in the vertices array belongs to index 0 in the normals and uv arrays (or any other additional vertex attribute like secondary uv, tangents, colors, …). That’s how a vertex is actually defined inside the Mesh class.
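In scripting terms it looks like this (a minimal sketch in Unity C#; “meshFilter” is just a placeholder for whatever reference you have):

Mesh mesh = meshFilter.sharedMesh;
Vector3[] positions = mesh.vertices;   // one position per vertex
Vector3[] normals   = mesh.normals;    // same length as positions (if present)
Vector2[] uvs       = mesh.uv;         // same length as positions (if present)

for (int i = 0; i < positions.Length; i++)
{
    // vertex i is defined by the i-th element of every attribute array
    Debug.Log($"vertex {i}: pos {positions[i]}  normal {normals[i]}  uv {uvs[i]}");
}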

How the vertices are actually ordered is not specified anywhere. They could be in any order. It doesn’t really matter in which order they are, since the triangles array forms the actual triangles by providing the 3 vertex indices each triangle should be made of.
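For example (a minimal sketch, assuming it runs in a MonoBehaviour on an object with a MeshFilter):

Mesh mesh = GetComponent<MeshFilter>().sharedMesh;
Vector3[] verts = mesh.vertices;
int[] tris = mesh.triangles;           // 3 indices per triangle

for (int t = 0; t < tris.Length; t += 3)
{
    // the three corners of this triangle, looked up through the index list
    Vector3 a = verts[tris[t]];
    Vector3 b = verts[tris[t + 1]];
    Vector3 c = verts[tris[t + 2]];
}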

The “unnecessary data” is not really unnecessary. A cube has 6 sides and therefore each side needs distinct normal vectors. Since the normal vectors are defined at the vertex, you need 3 different versions of the same vertex position since at each corner 3 different faces meet. So you have 3*8 vertices or 24. Another way to look at it is that a cube is made up of 6 unrelated quad meshes. Each quad has 4 vertices. Those 4 vertices will all have the same normal vector in order for the face to appear “flat”. Now you simply have 6 faces so you need 6 * 4 vertices == 24.
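You can quickly verify the count yourself (a small sketch, run in play mode):

var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
Mesh cubeMesh = cube.GetComponent<MeshFilter>().sharedMesh;
Debug.Log(cubeMesh.vertexCount);        // 24 for Unity's built-in cube
Debug.Log(cubeMesh.triangles.Length);   // 36 indices, i.e. 12 triangles, 2 per face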

Just to be clear about that: where a certain vertex is located in the vertices array is completely irrelevant as long as the triangles reference the correct vertices. So one triangle may use vertex #0, #5 and #11 while another triangle may use vertex #2, #3, #4. You could even jumble all the vertices up; as long as you also adjust the triangles array to still reference the correct vertices, the mesh would not change at all. The vertices array is literally just a bucket of vertices. The triangles array is what actually introduces order into the mix.

If you want to examine the vertices and triangles of a Mesh in Unity, you can use my UVViewer editor window. Once you have it in an “Editor” folder you can open the editor window through the menu. Now just select any GameObject with a MeshRenderer in the scene and you can view the UV map(s) of that mesh. I’ve also added a triangle list view which may be useful here. Here’s the result in Unity 2020.2.3f1 (the version I currently had at hand).

As you can see, the first triangle is made up of vertices (0, 2, 3) and the second triangle of vertices (0, 3, 1). Therefore vertices 0, 1, 2 and 3 belong to one face of the cube, since those two triangles share two vertices. The next face is made up of vertices 4, 5, 8, 9, the next one of 6, 7, 10, 11.

Keep in mind that this is not a fixed rule in any way. That order may even change from Unity version to Unity version. In fact the default sphere mesh has changed several times over the years because they changed the UV map and where the seams are. They did this to get better lightmap UV coordinates. So you should never rely on a certain vertex order for whatever you want to do.

A long time ago I made this MeshHelper class which provides a 2-way and 3-way subdivide method. However, the Mesh class has changed quite a bit since then. Back then we only had 16 bit index buffers. Support for more UV channels as well as different mesh topologies was also added later, which isn’t considered by my class. However, it should still work for the default cube and simple triangle meshes.
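If you just need the general idea, here’s a minimal one-level subdivision sketch (this is not the MeshHelper class, just an illustration of the same approach; it only handles positions, so normals, uv, colors etc. would need the same per-attribute treatment, and all names are purely illustrative):

using System.Collections.Generic;
using UnityEngine;

public static class SimpleSubdivide
{
    // Splits every triangle into 4 by inserting the midpoint of each edge.
    // Midpoints on shared edges are reused via the dictionary.
    public static void Subdivide(Mesh mesh)
    {
        var verts = new List<Vector3>(mesh.vertices);
        int[] oldTris = mesh.triangles;
        var newTris = new List<int>(oldTris.Length * 4);
        var midpointCache = new Dictionary<(int, int), int>();

        int GetMidpoint(int a, int b)
        {
            var key = a < b ? (a, b) : (b, a);        // order-independent edge key
            if (midpointCache.TryGetValue(key, out int index))
                return index;
            verts.Add((verts[a] + verts[b]) * 0.5f);  // new vertex halfway along the edge
            midpointCache[key] = verts.Count - 1;
            return verts.Count - 1;
        }

        for (int t = 0; t < oldTris.Length; t += 3)
        {
            int a = oldTris[t], b = oldTris[t + 1], c = oldTris[t + 2];
            int ab = GetMidpoint(a, b);
            int bc = GetMidpoint(b, c);
            int ca = GetMidpoint(c, a);

            // the original triangle is replaced by 4 smaller ones
            newTris.AddRange(new[] { a, ab, ca });
            newTris.AddRange(new[] { ab, b, bc });
            newTris.AddRange(new[] { ca, bc, c });
            newTris.AddRange(new[] { ab, bc, ca });
        }

        mesh.SetVertices(verts);
        mesh.SetTriangles(newTris, 0);
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
    }
}

Note that new vertices are appended to the same vertices list and the triangles list is rebuilt to reference them by index, which is the only thing that actually matters for the mesh.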

[ SOLVED ]

EDIT: new accepted answer, keeping this one (my own) to archive comments below

Found that the 2nd option is the correct one after some more digging

For every vertex there can be a normal, texture coordinates, a color and a tangent. These are optional, and can be removed at will. All vertex information is stored in separate arrays of the same size, so if your mesh has 10 vertices, you would also have 10-size arrays for normals and other attributes.

Docs
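To make the “separate arrays of the same size” layout concrete, here’s a minimal sketch that builds a single quad by hand (all names are just for illustration; attach it to a GameObject that has a MeshFilter and a MeshRenderer):

using UnityEngine;

public class QuadBuilder : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh();

        // 4 vertices, so every attribute array also has 4 elements
        mesh.vertices = new[]
        {
            new Vector3(0, 0, 0),
            new Vector3(1, 0, 0),
            new Vector3(0, 1, 0),
            new Vector3(1, 1, 0),
        };
        mesh.normals = new[]
        {
            Vector3.back, Vector3.back, Vector3.back, Vector3.back,
        };
        mesh.uv = new[]
        {
            new Vector2(0, 0), new Vector2(1, 0),
            new Vector2(0, 1), new Vector2(1, 1),
        };

        // two triangles referencing those vertices by index
        mesh.triangles = new[] { 0, 2, 3, 0, 3, 1 };

        GetComponent<MeshFilter>().mesh = mesh;
    }
}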