We’re looking at performance issues in our game and I’ve discovered that Terrain.UpdateMaterials (which runs as part of the culling step) is taking around 2 ms a frame, which is particularly annoying since we never change those materials.
Anyone else seen anything similar or know how to fix this?
We’re using Unity 2018.3.2f1, though the problem has been there throughout previous versions. We use a custom terrain material, assigned roughly as in the sketch below.
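For reference, the material is hooked up along these lines (a minimal sketch, not our actual code; `customTerrainMaterial` is a placeholder for our material asset):

```csharp
using UnityEngine;

// Sketch only: a custom material assigned to a terrain via materialTemplate.
// In Unity 2018.x the materialType must also be set to Custom.
[RequireComponent(typeof(Terrain))]
public class CustomTerrainMaterial : MonoBehaviour
{
    [SerializeField] Material customTerrainMaterial; // placeholder material asset

    void Start()
    {
        Terrain terrain = GetComponent<Terrain>();
        terrain.materialType = Terrain.MaterialType.Custom;
        terrain.materialTemplate = customTerrainMaterial;
    }
}
```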
I don’t have a sample project, but I am seeing a similar loss in performance (about 1ms per camera, with no GI enabled) on Terrain.UpdateMaterials. Could you perhaps shed some light on what this function does/when it runs?
Hi, I noticed the same problem here. I’m using Unity 2018.4.0f1.
I was testing the performance of my own terrain shader: a modified version of the standard terrain shader with tessellation and a TerrainLayer mask texture enabled, running in the built-in render pipeline.
I was expecting a performance loss from the added features, but ironically Loading.ReadObject doesn’t show up inside Terrain.UpdateMaterials for me, so I actually see a 16+ ms performance gain. I really don’t understand why.
I tried enabling and disabling GI, but the results are the same.
Hi @richardkettlewell, I have submitted a bug report (case number 1204901) regarding this issue.
We noticed that Loading.ReadObject doesn’t show up in the built player, so it seems to be editor-only.
I don’t see Loading.ReadObject on a newly created terrain or after importing a heightmap, but it appears once I’ve painted TerrainLayers on the terrain and splatmaps exist.
I also failed to reproduce the disappearance of Loading.ReadObject when using my own terrain shader, which I mentioned previously.
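In case it helps anyone reproduce this, the painted state can be approximated in script roughly like the sketch below (I’m assuming the scripted path triggers the same splatmap work as painting in the editor, which I haven’t verified; `layer` is any TerrainLayer asset with an albedo texture):

```csharp
using UnityEngine;

// Sketch: add a single TerrainLayer and fill the splatmap so the terrain
// has alphamap/splatmap data, as it would after painting in the editor.
[RequireComponent(typeof(Terrain))]
public class PaintTerrainLayer : MonoBehaviour
{
    [SerializeField] TerrainLayer layer; // any TerrainLayer asset

    void Start()
    {
        TerrainData data = GetComponent<Terrain>().terrainData;
        data.terrainLayers = new[] { layer };

        int res = data.alphamapResolution;
        var map = new float[res, res, 1];
        for (int y = 0; y < res; y++)
            for (int x = 0; x < res; x++)
                map[y, x, 0] = 1f; // fully weight the single layer
        data.SetAlphamaps(0, 0, map);
    }
}
```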
Bump.
We are using Unity 2018.4.18f1, and Terrain.UpdateMaterials takes 4 ms (64 calls, because our map is tiled) while the GenerateBaseMap call inside Terrain.UpdateMaterials takes only 0.01 ms!
It looks like the problem is some bad C# code running inside Terrain.UpdateMaterials, not the generation itself.
We are also using GPU-computed basemaps, by overriding the BaseMap and BaseMapGen dependencies in the shader.
I appear to be experiencing the same issue, both with the standard terrain shader and with MicroSplat. Each terrain chunk takes 0.3 ms per frame, adding a good 8-10 ms every frame to Camera.Render, with the vast majority of that time consumed by Loading.ReadObject. I’m using the legacy pipeline and have tried with and without runtime GI; I see it in both. Edit: seeing this in 2019.4.0f1. I’m also using MapMagic, but I don’t see why that would cause this.
Unfortunately, case 1204901 was closed by QA because the reporter didn’t come back to answer a follow-up question. For progress to be made on this, someone will need to report a bug and see it through to the point where QA can reproduce it and send it to the dev team to fix.
This crazy CPU cost has forced me to basically re-implement the terrain using compute shaders. Almost 3 ms on Switch with 9 terrains. That’s all you need to repro it (sketch below): create a scene with 10+ terrain components, run on Switch (or XB1, or PS4; it’s not much better there either) and watch it eat away at your main thread in the profiler.
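For anyone who wants a quick repro scene, a sketch like this is all it takes (sizes, resolutions and positions are arbitrary):

```csharp
using UnityEngine;

// Sketch: spawn 10 terrain components in a row, then watch
// Terrain.UpdateMaterials in the profiler on the target device.
public class TerrainRepro : MonoBehaviour
{
    void Start()
    {
        for (int i = 0; i < 10; i++)
        {
            var data = new TerrainData
            {
                heightmapResolution = 513,
                size = new Vector3(500, 100, 500)
            };
            GameObject go = Terrain.CreateTerrainGameObject(data);
            go.transform.position = new Vector3(i * 500f, 0f, 0f);
        }
    }
}
```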
It seems that whatever code is responsible for setting the material parameters needed for terrain rendering is doing it very, very badly. The only way to make that bar go down is to assign a null material to the terrain component (sketch below).
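The workaround, as a sketch (it obviously breaks the terrain’s visuals, so it’s only good for confirming where the cost comes from):

```csharp
using UnityEngine;

// Sketch: clearing materialTemplate on every active terrain makes the
// Terrain.UpdateMaterials bar shrink in the profiler, at the cost of the
// terrain no longer rendering with its intended material.
public class TerrainMaterialWorkaround : MonoBehaviour
{
    void Start()
    {
        foreach (Terrain terrain in Terrain.activeTerrains)
            terrain.materialTemplate = null;
    }
}
```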
It happens on PC too, which is where I’m experiencing it. I’ll get around to that report and repro scene eventually; I had to spend yesterday on my actual paying job.