What are the advantages of converting a terrain (for example a terrain created using the Gaia asset tool) into a mesh?
Does it improve performance in some way? Can you have bigger maps without affecting FPS?
One thing that doesn’t look very good on the maps I make is the way grass pops into view in the distance as the camera moves forward. Would converting to a mesh help this?
I use a mesh for the collider on my terrains. Terrain colliders give me strange results with physics (for example, a rigidbody passes partially through the terrain and then gets flung back out). Using a mesh also lets you edit the "terrain" in a 3D modelling program like Blender, which often gives you much better control for fine adjustments. And if you have vertical surfaces, you will get texture stretching, since terrains are textured as if they were a plane, so people often use a mesh for the cliffs and UV map them in Blender or whatever.
No; adjusting the detail distance on the terrain will. In fact, with a mesh you cannot paint grass and trees the same way.
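If it helps, the detail distance can also be tweaked from a script. This is just a rough sketch (the 400 is an arbitrary example value, tune it per scene):

using UnityEngine;

public class GrassDistanceTweak : MonoBehaviour
{
    void Start()
    {
        // Push out the distance at which grass/detail objects are drawn,
        // and keep full detail density (lower it to trade looks for FPS).
        Terrain terrain = Terrain.activeTerrain;
        terrain.detailObjectDistance = 400f;  // example value
        terrain.detailObjectDensity = 1f;
    }
}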
If you use a different terrain shader (like Distingo or RTP), you can easily avoid the texture stretching on the more vertical faces.
There are advantages to using meshes over terrain, but some of that is best used during the design phase. For example, if you want overhangs, you can’t directly do that with Unity terrain, but you can easily do that with a mesh.
With performance, a mesh generally becomes less performant the bigger it gets, but if your terrain isn't all that big, a mesh could actually be faster. In fact, many mobile games use a mesh instead of terrain for the performance, especially since mobile games tend to be smaller in scope (though of course not always). The thing about using a mesh, though, is that you have to figure out the shader for it, assuming you want the material to behave like a terrain's: you need a shader that combines multiple textures with a splatmap. There are alternatives, but that is generally one of the best ways to get the same look.
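To illustrate what that splatmap blend actually does, here is just the core math; a real terrain-style shader would run this per pixel on the GPU, and this CPU version is only for clarity:

using UnityEngine;

public static class SplatBlendExample
{
    // control is the splatmap sample; c0..c3 are samples of the four layer textures.
    // final = control.r * tex0 + control.g * tex1 + control.b * tex2 + control.a * tex3
    public static Color Blend(Color control, Color c0, Color c1, Color c2, Color c3)
    {
        return c0 * control.r + c1 * control.g + c2 * control.b + c3 * control.a;
    }
}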
You also have to be careful with light sources, depending on how your game is configured; having many lights in your scene together with a huge mesh can cause performance issues.
Terrains have to be computed again for depth, for lighting (in forward rendering it's horrific), and for shadows, so you end up paying the cost to generate the terrain from that camera view several times. It's rubbish. You're just pissing away precious milliseconds using it.
It does have the benefit of built-in level of detail and easy authoring, but its sole reason to exist, in my view, is to sacrifice CPU time in exchange for your development time, assuming you would otherwise set up correct LODs for your meshes and so on.
We actually found the opposite. We couldn't get consistent results with physics and mesh colliders for terrains; fast objects would just pass right through them. Terrains have a thickness setting, which is cheaper than adjusting time steps and detection modes and works much more consistently and reliably for collisions.
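For reference, these are the knobs I mean by "time steps and detection modes"; a rough sketch with illustrative values (the terrain thickness setting saves you from having to touch them):

using UnityEngine;

public class FastProjectileSetup : MonoBehaviour
{
    void Awake()
    {
        // Continuous detection stops fast bodies tunnelling through thin colliders, but costs more CPU.
        GetComponent<Rigidbody>().collisionDetectionMode = CollisionDetectionMode.ContinuousDynamic;

        // A smaller fixed timestep means more physics steps per second and fewer
        // missed contacts, again at a CPU cost.
        Time.fixedDeltaTime = 0.01f;  // example value; the default is 0.02
    }
}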
A mesh takes a single draw call vs multiple ones, which can sometimes speed up performance. A terrain has LOD, so more draw calls but fewer polygons, which can also sometimes speed up performance. One thing you’ll definitely want to do with a terrain is set the “Pixel Error” field to something higher, like 10 or 20. The default for some reason is 1, which means the LOD hardly ever kicks in.
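If you'd rather set it from code than the inspector, something like this works (a sketch; 20 is just an example value):

using UnityEngine;

public class TerrainPixelError : MonoBehaviour
{
    void Start()
    {
        // Same as the "Pixel Error" field in the terrain settings:
        // higher values let the terrain LOD simplify more aggressively.
        Terrain.activeTerrain.heightmapPixelError = 20f;
    }
}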
Unless your platforms are mobile or Intel integrated GPUs, it's hardly ever an issue. We're at the point now where next-generation consoles will be rivalling something along the lines of a Titan.
So I'll take my card as an example (a Radeon 390): I've stress tested it up to around 60 million tris (via segmented terrains), and from 120 FPS it knocks off around 20-30 (that's with lighting, post-processing etc. enabled)… Unity's terrain (on a 14 km² example) usually takes up 400-500K tris with no LODs…
I've still got one hell of a budget there; I mean that's with occlusion culling effectively disabled (frustum at 5K). Although on DX11-compatible GPUs I'm only able to get away with around 2.5-3K draw calls before the frame rate effectively tanks. But with DX12 etc. even that limitation is getting pushed out…
I've not tried it on a base unit (my minimum test platform), which is a GTX 470, but even so I'd have to be going at it VERY hard to even have 8 million tris in a scene… For the number of draw calls they add, I believe LODs are often nothing but a waste of time, although for mass foliage I will always use them…
You'd have to make one HELL of a game, or be aiming at some really poor hardware, for it to matter.
I think it's hard to have a rule about it. I know that for the last few years devs have been saying "screw polycount, all we care about now is draw call count", but it's not always quite that simple. If it were, then every game would just use one giant skinned mesh with one giant texture atlas. In my current game, I use a Minecraft-like chunk system to divide the world into chunks so that I can occlude/LOD parts of it. If I just merge every chunk in the world together and turn them all on at full LOD, it absolutely tanks performance, despite the fact that technically it's only a few draw calls. Remember, if you make everything one giant mesh, it has to render the entire mesh, including everything behind you, so you don't even get decent camera frustum culling.
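For the curious, the chunk toggling boils down to something like this (a simplified sketch, not my actual code; the names and the 300 m view distance are just placeholders):

using System.Collections.Generic;
using UnityEngine;

public class ChunkCuller : MonoBehaviour
{
    public List<Renderer> chunks = new List<Renderer>();  // one renderer per world chunk
    public float viewDistance = 300f;                      // placeholder value

    void Update()
    {
        // Only keep nearby chunks enabled; because each chunk is its own renderer,
        // Unity's frustum culling can also skip the ones behind the camera.
        Vector3 camPos = Camera.main.transform.position;
        foreach (var chunk in chunks)
            chunk.enabled = (chunk.bounds.center - camPos).sqrMagnitude < viewDistance * viewDistance;
    }
}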
Very interesting, thanks for sharing! Do you know what “The Forest” uses, terrain or meshes? And do you use terrain colliders or mesh colliders for your game?
I also found the performance of Unity terrain to tank with very low pixel error settings (which is sad, because that's about the only setting where it didn't look crappy). I guess maybe it's not drawing the polygons that's tanking the performance (like you said, it shouldn't be). Maybe the terrain culling has more work to do at low pixel error levels? Just a guess though.
What HW are you using? Because I have mine set to 1 and the basemap distance set to something like 7000 (overridden from script), and I'm getting 60+ FPS…
It heavily depends on the density of the terrain mesh in Unity; if you use a 4096 heightmap, say, my system will tank as well… I tend to split it into many 256 / 512 tiles if I can…
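By "overridden from script" above I mean roughly this (a sketch; the values are the ones I mentioned, tune them for your own scene):

using UnityEngine;

public class TerrainOverrides : MonoBehaviour
{
    void Start()
    {
        var t = Terrain.activeTerrain;
        t.heightmapPixelError = 1f;   // keep the geometry detailed
        t.basemapDistance = 7000f;    // push out the switch to the low-res base map
    }
}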
RTP has a whole terrain tessellation system to compensate for pixel error, so you can offload work from the CPU. There's a video where the developer discusses it.
No idea what The Forest does; you should ask @larsbertram1, as he is more familiar with it.
My game currently doesn't use terrains because of the performance overheads and how difficult they are to work with. It's all meshes with mesh colliders. I don't really want to use mesh colliders because they're quite slow, but I believe Unity might open up the height collider (aka the terrain collider) on its own at some point. We'll see.
I'm currently using RTP with tessellation and it is indeed a lifesaver. I use a low-res heightmap with pixel error at the max of 200, and it looks better than it did when I had a super high-res heightmap and a pixel error of 1, at a fraction of the performance cost. The only downside is that tessellation is a DX11-only feature, so it doesn't work on some graphics cards.
I have an i7 with a GTX 670. I use 4K heightmaps from World Machine, so I guess that's the reason then. Tiles would interfere with Distingo (the shader I use) because it currently has a large per-instance cost in the editor.
Thanks, I’ll check it out tomorrow!
What's keeping you from using the terrain collider without the terrain renderer? I've tried it (add a normal terrain, deactivate the Terrain component, and have another object render the mesh), and this seemed to work and be faster than the mesh collider (at least if you use a lower-resolution heightmap for the collider terrain, like 512*512). It just seems like a super clunky workflow, because it was hard to line up the terrain with the mesh (I didn't find a better solution than eyeballing it 0_o).
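In case it's useful, the setup I mean looks roughly like this (a sketch; the field names are placeholders for whatever objects you actually have, and you may still need a manual offset to line things up):

using UnityEngine;

public class TerrainColliderOnly : MonoBehaviour
{
    public Terrain colliderTerrain;  // low-res terrain used purely for its TerrainCollider
    public Transform visualMesh;     // the mesh that actually gets rendered

    void Start()
    {
        // Disabling the Terrain component stops it from drawing,
        // while the TerrainCollider on the same object keeps working.
        colliderTerrain.enabled = false;

        // Crude way to line the collider up with the visual mesh.
        colliderTerrain.transform.position = visualMesh.position;
    }
}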
@larsbertram1: well, consider yourself asked. I'm curious because The Forest has the best-looking terrain I've seen in Unity so far, and I still get good performance.
I thought I read reviews talking about how terrible The Forest's performance was. Did they smooth it out post-release, or am I just mixed up?
The DX11 thing is why I haven’t dug into this yet. Being able to crank pixel error way up really makes a noticeable difference, especially on lower end hardware. Do you have special handling for older cards?
I’ll move this up the priority list. Thanks for the tip.
Currently no, but according to the RTP developer, I could just have two versions of the RTP shader precompiled, one for DX11 and one for DX9, and I could add that to an options window or something. The DX9 version would just have to set the pixel error back to something low and would be slower and uglier, but hopefully good enough. My game’s still got a ways to go before it’s ready to release, so I’m not worrying about it too much at the moment. By the time I release, DX11 cards might be more ubiquitous.
Edit: To be clear, if you haven't used RTP: it has a "shader setup" window where you choose the features you want to use. If you don't want tessellation, you just uncheck it, and then it works fine with DX9. But it has to recompile the shader every time you turn something on or off, so it's not something a player could do on their end. You'd need to have two precompiled shaders and write some code to swap between them.
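The swap itself could be as simple as something like this (a sketch of the idea, not something I've actually built; the shader names are made-up placeholders for the two precompiled RTP variants):

using UnityEngine;

public class TerrainShaderSwitcher : MonoBehaviour
{
    public Material terrainMaterial;

    void Start()
    {
        // Shader model 5.0 (graphicsShaderLevel 50) roughly corresponds to DX11-class hardware.
        bool dx11 = SystemInfo.graphicsShaderLevel >= 50;

        terrainMaterial.shader = Shader.Find(dx11
            ? "Hypothetical/RTP_Tessellation"   // placeholder name for the DX11 variant
            : "Hypothetical/RTP_Basic");        // placeholder name for the DX9 fallback

        // Without tessellation, drop the pixel error back down so the geometry still looks okay.
        if (!dx11)
            Terrain.activeTerrain.heightmapPixelError = 5f;
    }
}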