I’ve been working on a game that generates and renders fractal terrain at runtime using meshes. On very small power-of-two-sized maps everything was fine, but as soon as I increased the size to 128x128 the application became a slideshow.
I culled my project down to a bare-bones test case, but I still can’t see why performance is so low with only ~33k triangles. I’m hoping I’ve missed something simple.
Admittedly I have a pretty old machine: a dual-core 3.1 GHz CPU with 4 GB of RAM and DirectX 9. It does have an NVIDIA GTX 460 GPU, however.
I created a new project with three objects and one script: the default camera, an empty game object with my script attached, and a third empty game object. The third one is a prefab with a MeshRenderer, MeshCollider, and MeshFilter attached. I disabled vsync in the project settings.
Here’s my lone script:
using UnityEngine;
using System.Collections;

public class gen : MonoBehaviour
{
    public Transform mt;            // prefab with MeshFilter/MeshCollider/MeshRenderer

    private Mesh mesh;
    private MeshFilter mf;
    private MeshCollider mc;
    private MeshRenderer mr;
    private int size = 129;

    void Start()
    {
        CreatePlaneMesh();

        // One instance (= one GameObject) per grid cell: 129 * 129 = 16,641 objects.
        for (int x = 0; x < size; x++)
        {
            for (int z = 0; z < size; z++)
            {
                Instantiate(mt, new Vector3(x * 3f, 0f, z * 3f), Quaternion.identity);
            }
        }
    }

    public void CreatePlaneMesh()
    {
        mesh = new Mesh();
        mf = mt.GetComponent<MeshFilter>();
        mc = mt.GetComponent<MeshCollider>();
        mr = mt.GetComponent<MeshRenderer>();
        mf.mesh = mesh;

        // A single 2x2 quad (two triangles) on the XZ plane.
        Vector3[] vertices = new Vector3[]
        {
            new Vector3( 1, 0,  1),
            new Vector3( 1, 0, -1),
            new Vector3(-1, 0,  1),
            new Vector3(-1, 0, -1),
        };

        Vector2[] uv = new Vector2[]
        {
            new Vector2(1, 1),
            new Vector2(1, 0),
            new Vector2(0, 1),
            new Vector2(0, 0),
        };

        int[] triangles = new int[]
        {
            0, 1, 2,
            2, 1, 3,
        };

        Vector3[] normals = new Vector3[]
        {
            new Vector3(0, 1, 0),
            new Vector3(0, 1, 0),
            new Vector3(0, 1, 0),
            new Vector3(0, 1, 0),
        };

        mesh.vertices = vertices;
        mesh.uv = uv;
        mesh.triangles = triangles;
        mesh.normals = normals;
        mesh.RecalculateNormals(); // note: this overwrites the normals assigned just above
        mc.sharedMesh = mesh;
    }
}
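In case it's relevant: to rule out per-object overhead, I've also been sketching a chunked variant where each block of quads becomes one combined mesh instead of one GameObject per quad. This is just my rough sketch, not part of the test project above — `ChunkGen`, `BuildChunk`, and `chunkSize` are my own placeholder names, and the quad layout/spacing matches the script above.

```csharp
using UnityEngine;

// Sketch: one Mesh (and one GameObject) per chunkSize x chunkSize block of
// quads, instead of 16,641 separate objects. Placeholder names throughout.
public class ChunkGen : MonoBehaviour
{
    public Material mat;            // material to apply to each chunk
    private int size = 129;
    private int chunkSize = 32;     // quads per chunk side (32*32*4 verts stays well under the mesh vertex limit)

    void Start()
    {
        for (int cx = 0; cx < size; cx += chunkSize)
            for (int cz = 0; cz < size; cz += chunkSize)
                BuildChunk(cx, cz);
    }

    void BuildChunk(int startX, int startZ)
    {
        // Clamp the last chunk so a 129-wide grid still works.
        int w = Mathf.Min(chunkSize, size - startX);
        int h = Mathf.Min(chunkSize, size - startZ);

        var vertices  = new Vector3[w * h * 4];
        var uv        = new Vector2[w * h * 4];
        var triangles = new int[w * h * 6];

        int v = 0, t = 0;
        for (int x = 0; x < w; x++)
        {
            for (int z = 0; z < h; z++)
            {
                // Same 2x2 quad at 3-unit spacing as the original script.
                float px = (startX + x) * 3f, pz = (startZ + z) * 3f;
                vertices[v + 0] = new Vector3(px + 1, 0, pz + 1);
                vertices[v + 1] = new Vector3(px + 1, 0, pz - 1);
                vertices[v + 2] = new Vector3(px - 1, 0, pz + 1);
                vertices[v + 3] = new Vector3(px - 1, 0, pz - 1);
                uv[v + 0] = new Vector2(1, 1);
                uv[v + 1] = new Vector2(1, 0);
                uv[v + 2] = new Vector2(0, 1);
                uv[v + 3] = new Vector2(0, 0);
                // Same winding as the original single quad.
                triangles[t + 0] = v + 0; triangles[t + 1] = v + 1; triangles[t + 2] = v + 2;
                triangles[t + 3] = v + 2; triangles[t + 4] = v + 1; triangles[t + 5] = v + 3;
                v += 4; t += 6;
            }
        }

        var mesh = new Mesh();
        mesh.vertices = vertices;
        mesh.uv = uv;
        mesh.triangles = triangles;
        mesh.RecalculateNormals();

        var go = new GameObject("chunk");
        go.AddComponent<MeshFilter>().mesh = mesh;
        go.AddComponent<MeshRenderer>().material = mat;
        // One MeshCollider per chunk instead of one per quad.
        go.AddComponent<MeshCollider>().sharedMesh = mesh;
    }
}
```

I haven't profiled this version properly yet, so I can't say how much it helps on my machine.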
No Update methods, nothing else in the project. When I run it in the web player I get sub-20 frames per second, which seems very low for fewer than 25k triangles on screen. The docs say you should keep the triangle count under about 3 million on a desktop, so I'm really confused. It pegs one of my CPU cores at 100%, so I'm guessing this is CPU-limited rather than GPU-limited.
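For what it's worth, I read the frame rate off a throwaway counter along these lines (just a sketch; the smoothing constant is arbitrary and the starting value only avoids a divide-by-zero on the first frames):

```csharp
using UnityEngine;

// Throwaway FPS readout: smooths the frame time with an exponential moving
// average and draws the reciprocal on screen.
public class FpsCounter : MonoBehaviour
{
    private float smoothedDelta = 1f / 60f; // seeded so 1/x is sane at startup

    void Update()
    {
        smoothedDelta = Mathf.Lerp(smoothedDelta, Time.deltaTime, 0.05f);
    }

    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 150, 25), (1f / smoothedDelta).ToString("F1") + " fps");
    }
}
```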
Is there some simple gotcha I’ve overlooked here? I plan to cull distant/hidden objects in the future, but even so this FPS seems far too low.