Is there a difference between using Unity's terrain system vs. importing my own mesh to use as terrain with a mesh collider? Is one better for performance?
It's mostly about ease of use through tools. Unity seems to want to catch up with other engines that already have a non-destructive layering system for terrain modifications, and although they haven't yet, if they do, the tooling will be much better compared to the amount of custom tools you'd have to write to achieve the same workflow. If you want to build small architectural or non-organic surfaces, the tools to support this are probably better in your favorite modeling app.
One significant difference, though, is that heightmap-based terrains are by nature unable to model overhangs and similar structures. They also tend to introduce artifacts along very steep height edges, as their resolution often just naively uses a regular grid. There are ways to generate cleaner meshes, but the Unity terrain doesn't do this. So with meshes you have the option to manually add detail where it's needed, if that's your case. If you intend to modify your terrain at runtime, heightmap-based terrains are easier.
Thanks, this helped.
Terrain will be far better performance-wise due to it being instanced. It basically takes a small set of tile meshes and then instances those all over, applying the height in the shader. Plus the built-in LOD. If you need 3D terrain, though, you are stuck with meshes.
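For anyone curious what "instancing a small set of tiles" looks like in practice, here's a minimal sketch of the general idea (my own illustration, not Unity's actual terrain internals; the heightmap displacement would happen in the patch material's vertex shader, which isn't shown):

```csharp
using UnityEngine;

// Illustrative sketch only: draw one flat patch mesh many times with GPU
// instancing, the way a heightmap terrain can reuse a single tile.
// The material would sample the heightmap in its vertex shader to displace
// each patch; that shader is not part of this sketch.
public class InstancedPatchGrid : MonoBehaviour
{
    public Mesh patchMesh;          // one flat tile, e.g. 32x32 quads
    public Material patchMaterial;  // needs "Enable GPU Instancing" ticked
    public int gridSize = 16;       // 16x16 = 256 instances
    public float patchWorldSize = 64f;

    Matrix4x4[] matrices;

    void Start()
    {
        matrices = new Matrix4x4[gridSize * gridSize];
        for (int z = 0; z < gridSize; z++)
            for (int x = 0; x < gridSize; x++)
                matrices[z * gridSize + x] = Matrix4x4.TRS(
                    new Vector3(x * patchWorldSize, 0f, z * patchWorldSize),
                    Quaternion.identity,
                    Vector3.one);
    }

    void Update()
    {
        // One call draws up to 1023 instances of the same mesh/material.
        Graphics.DrawMeshInstanced(patchMesh, 0, patchMaterial, matrices);
    }
}
```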
Terrain colliders are also optimized; they perform better than regular mesh colliders.
Is the performance with a mesh something I should worry about for a large open-world game? Is there anything I can do to improve the performance if I use a mesh?
Unless Unity's terrain system performance has improved dramatically in the last year, meshes are actually far better for performance. At least that was my conclusion after my research for our large open-world game Eastshade. We used Unity terrain for the first two years, then ended up switching to meshes, and it was absolutely the right decision. This is mostly because of how many draw calls even a basic blobby Unity terrain creates, due to the way it chunks the pieces. Also, the real-time edge collapsing as you get further away is extremely heavy, much heavier than LODing a large chunk with a simple distance check. It's especially bad when you consider just how many triangles the terrain system uses to give definition, since the topology is nothing but even, indiscriminate squares. If the geometry is a regular mesh instead, you can use a good edge-collapse optimizer (most 3D packages have one these days) to shave off 75% of the triangles with no visual change, leaving most geometry where it's needed. I wrote a big article on foliage optimization, and while most of it is not applicable here, since we're talking about the terrain/ground itself, there is a small section on terrain. I'll paste the relevant text:
Another cool thing about meshes is you can reuse them and arrange them more easily. We have large cliffs in Eastshade that we copied around to make interesting areas quickly. There are also just so many more tools available to make the terrain look good when you're authoring regular meshes. So my vote is for meshes :). But every game is different.
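To make the "LODing a large chunk with a simple distance check" idea concrete, here's a rough sketch (my own illustration, not Eastshade's actual code; the class and field names are made up). It just toggles between a full-resolution chunk and a decimated version based on camera distance:

```csharp
using UnityEngine;

// Rough sketch of a distance-check LOD swap for a large terrain chunk.
// highDetail is the full mesh, lowDetail is the edge-collapsed version
// exported from your modeling package.
public class SimpleChunkLOD : MonoBehaviour
{
    public Renderer highDetail;
    public Renderer lowDetail;
    public float switchDistance = 300f;
    public float checkInterval = 0.5f; // no need to test every frame

    float timer;

    void Update()
    {
        timer += Time.deltaTime;
        if (timer < checkInterval) return;
        timer = 0f;

        Camera cam = Camera.main;
        if (cam == null) return;

        bool near = (cam.transform.position - transform.position).sqrMagnitude
                    < switchDistance * switchDistance;
        highDetail.enabled = near;
        lowDetail.enabled = !near;
    }
}
```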
I also have found meshes to be much faster, but that's with a well-researched, tested, and complicated pipeline. The only saving I see with terrain is memory: the memory required for the meshes can be noticeable in a very large world, but it's still not much compared to the overall savings in draw calls.
Edit: Also, according to some experts, LODs are not always a good thing, and there are often better ways to accomplish more optimized results.
It would be interesting to have an update on these performance comparisons with URP. From what I understand, the performance gained from going from Unity Terrain to a mesh was due to the reduction in draw calls; however, the SRP Batcher (used with URP) only slows down when there is a change in shader, not a change in material, so maybe the performance gain is now less significant?
djweinbaum and ron-bohn, maybe you have an opinion on this, or you might even have tested it?
I haven't touched the terrain system in years, so I wouldn't know. But I have to imagine the edge-collapsing computations are still pretty expensive. I also needed an order of magnitude more triangles for the same definition, since the terrain system made evenly-sized quads (instead of a software mesh optimizer, which puts edges where you need them). The difference was often huge, like 100k triangles vs. 6k triangles, and the mesh still had more definition. I don't know if the edge collapser is smarter now.
We can debate this for years; it's about what works for "you", so test things out. Mesh, as far as "look/definition" goes, is great, but is it right for "your" game/workflow? Ball is in your court…
Just conveying my personal findings. There is an easy way for anybody to see for themselves: put a mesh with one material in a scene, hit Play, and open up the Stats window. Do the same with a terrain, and you'll see the difference. In some cases "my way" might be over-optimization for, say, a linear game that uses occlusion culling and such, but if you're building an open world of any significant size, every little thing adds up. Some of the more advanced open-world tools, such as World Streamer, use meshes by default, for example.
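If you want numbers you can compare outside the editor (the Stats overlay is editor-only), a throwaway script along these lines will log average frame time in a build; the names here are mine, just for illustration:

```csharp
using UnityEngine;

// Logs the average frame time over a fixed window, as a rough stand-in
// for eyeballing the editor Stats overlay when comparing a mesh scene
// against a terrain scene.
public class FrameTimeProbe : MonoBehaviour
{
    const int WindowSize = 120; // frames to average over
    float accumulated;
    int frames;

    void Update()
    {
        accumulated += Time.unscaledDeltaTime;
        frames++;
        if (frames < WindowSize) return;

        float avgMs = (accumulated / frames) * 1000f;
        Debug.Log($"Avg frame time: {avgMs:F2} ms ({1000f / avgMs:F0} fps)");
        accumulated = 0f;
        frames = 0;
    }
}
```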
It absolutely matters what kind of pipeline you are using, target device(s), etc. However, draw calls are draw calls, and if you're on a small or solo team, there are several universal concepts that have applied for quite some time. Terrains can be much more convenient in some situations.
Just a use-case example from a world I'm currently working on: I have about 5x5 sq km of buildings using a single atlas that take less of a performance hit than a 1x1 sq km terrain using a single texture.
Thanks to all of you for the comments. I’ll try swapping my Unity terrain for a mesh in Cradle of Chaos tonight.
Thank you for this information, very helpful. Are you using your own solutions for streaming and vegetation?
And may I say how beautiful your game is, simply gorgeous! HDRP?
I am currently using World Streamer, and it uses mesh terrains for distant terrains (as LODs). Near terrains are Unity terrains. This is their default setup. However, with World Streamer (since it has nothing to do with terrains; it streams scenes) you can do whatever you want.
I am also considering getting rid of all my Unity terrains and just using mesh terrains. Once you have foliage (and other detail) over it, it is hard to tell the difference, I think.
Thanks, guys, for the info! This made a HUGE improvement for me. My project is a VR project (URP), so everything beyond 50 or so meters looks like crap anyway, so I don't really care about distant fidelity. And this is with instancing checked and in the editor (it would improve more in a build).
P.S. You have to add about 10 more fps though; for some reason, when I use Windows Snip, I lose about 10 fps. And ignore the ground detail, I just need to move up the rocks.
[Stats window screenshots: Unity terrain vs. mesh terrain]
EDIT:
I did a proper build, and I can say I am not going back to Unity Terrain any time soon. I gained a good 3-6 ms (and over 6 ms in some places! I know, it was hard for me to believe too). For those who only know fps, this is roughly 20-40 fps.
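(Rough math behind that ms-to-fps conversion, assuming a baseline of around 60 fps: fps = 1000 / frame time in ms, so 16.6 ms is 60 fps, 16.6 − 3 = 13.6 ms is about 74 fps, and 16.6 − 6 = 10.6 ms is about 94 fps, which is where a gain of roughly 20-40 fps comes from. The exact fps gain depends on your baseline frame time.)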
Another huge thing I noticed was my loading times. They went from about 1.5-2 minutes to <5 seconds.
I should point out that this approach isn’t without pain points and will require you to do additional work. But I think this is well worth it.
For streaming I just used the regular built-in LoadAdditiveAsync. For the vegetation, well, I give a pretty good rundown of it in the foliage optimization article I linked above.
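For reference, a bare-bones version of that kind of additive streaming with today's SceneManager API might look something like this (a sketch with placeholder scene names, not the actual Eastshade code):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal additive scene streaming sketch: load a terrain chunk scene in the
// background when it is needed and unload it when it is no longer needed.
public class SimpleStreamer : MonoBehaviour
{
    public string chunkSceneName = "Terrain_Chunk_03"; // placeholder name

    public void LoadChunk()
    {
        if (!SceneManager.GetSceneByName(chunkSceneName).isLoaded)
            SceneManager.LoadSceneAsync(chunkSceneName, LoadSceneMode.Additive);
    }

    public void UnloadChunk()
    {
        if (SceneManager.GetSceneByName(chunkSceneName).isLoaded)
            SceneManager.UnloadSceneAsync(chunkSceneName);
    }
}
```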
And thank you! No, it's just the built-in deferred renderer (Unity 2018). I wrote all custom surface shaders, but used Unity's standard spec and lighting model for the most part.
I am currently using World Streamer (which uses Unity's async loading), but I've been thinking about making my own. WS is fine, but I think I can do something simpler (using the same async method), and I like the total control when it comes to extending and troubleshooting.
Yeah, I have been looking at your devlog; it is a gold mine for those of us who are less experienced. Thank you very much for sharing!
You said “And thank you! No, it's just the built-in deferred renderer (Unity 2018)”. 2018? My friend, you are far behind the times. You should always use an LTS build to begin with, but 2018 is far behind. At least use the 2019 LTS version.
2018 is still under LTS. I am on 2020 LTS, and it is far more unstable than any other version of Unity I've used. Of course, it has great features, but I spend a good chunk of my day staring at progress bars or recovering from crashes. I think 2018 is still the most stable; 2019 was better than 2020 when I used it. I can understand why most of the veterans stick to older LTS versions. Hope things will be better with 2021.
TBH the 2019 LTS is the best out there, hands down. You should try it; it's very stable and compliant… Also, your crashes could be due to version conflicts from the version you're using… Don't use 2020 and above. Why? The bugs aren't worked out yet; it's too early a version…