The following page says objects with non-uniform scaling hurt performance, but doesn’t say whether this has an effect each frame or only when the object is first used: Unity - Transforms
Does anyone know whether it hurts performance each frame?
Honestly, don’t even worry about it unless you have like 10,000 instances of this object in the scene. There are endless micro-optimizations you can make, like writing “++i” instead of “i++” in for loops. Optimizations like this affect performance sooooooooo little that they’re really not worth paying any mind to unless your game desperately needs optimizing. Even if it does, there are better things to optimize than non-uniform scales.
Though to answer your question, the docs say this:
“… In order to transform vertex normals correctly, we transform the mesh on the CPU and create an extra copy of the data. Normally we can keep the mesh shared between instances in graphics memory, but in this case you pay both a CPU and memory cost per instance.”
I’m not 100% sure, but it sounds to me like the performance hit is only incurred once, each time the scale is changed.
If it’s only a one-time thing, then it’s not a problem. But I’ve noticed that my framerate seems to suffer more than it should as I add new objects to a scene, and I thought it might be due to my heavy use of rescaled objects (using the same object at different scales in different parts of the scene to create easy variety). But it could be due to a lot of other factors.
Instantiating objects is expensive. If you can get away with it, create all of your objects in advance, set them all to inactive, then activate them when they should appear.
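Here’s a minimal sketch of that idea, assuming a single prefab. `SimplePool`, `Spawn`, and `Despawn` are made-up names for illustration, not Unity API:

```csharp
// Pre-instantiate a fixed number of copies, deactivate them, and
// reactivate one when needed instead of calling Instantiate at runtime.
using System.Collections.Generic;
using UnityEngine;

public class SimplePool : MonoBehaviour
{
    public GameObject prefab;   // the object you would otherwise Instantiate
    public int poolSize = 50;   // tune to the most you expect on screen at once

    private readonly List<GameObject> pool = new List<GameObject>();

    void Awake()
    {
        // Pay the instantiation cost up front, e.g. during a loading screen.
        for (int i = 0; i < poolSize; i++)
        {
            GameObject go = Instantiate(prefab);
            go.SetActive(false);
            pool.Add(go);
        }
    }

    // Returns an inactive instance, or null if the pool is exhausted.
    public GameObject Spawn(Vector3 position)
    {
        foreach (GameObject go in pool)
        {
            if (!go.activeInHierarchy)
            {
                go.transform.position = position;
                go.SetActive(true);
                return go;
            }
        }
        return null;
    }

    // Instead of Destroy(), deactivate so the instance returns to the pool.
    public void Despawn(GameObject go)
    {
        go.SetActive(false);
    }
}
```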
When the framerate drops, does it stay dropped or is it just a stutter?
It stays low. And right now I’m just putting objects into the scene via the editor, so there’s no runtime instantiation. It just seems like the lag increases more than it should for the number of objects in the scene.
Do they have realtime shadows? Expensive materials? Anything in an update loop?
Also, are they all the same prefab or many different prefabs?
There were a lot of shadow casters, but disabling shadows had no noticeable effect on the framerate. Most of the objects use the Standard shader. A large percentage of them recycle the same small set of meshes (lots of rocks using the same mesh at different scales, capsule objects at different scales, etc.). One possible problem is the number of unnecessary polygons in the rock meshes (the rocks are at least half buried underground, but the mesh still has a bottom side); but each rock has only a couple dozen polys or so, which makes that an unlikely major cause.
Yeah, I don’t think the polys that are obstructed are costing you much, especially if they’re culled as back-faces.
To re-quote the docs:
“… in this case you pay both a CPU and memory cost per instance.”
“This case” being the case where you have non-uniform scales. If all your objects have different scales, there is a small performance hit per instance. That said, I would expect the mesh to be updated only once, not per frame. If it’s not too much work, you might try setting all the objects to the same uniform scale just to see whether that’s the problem.
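If you want to find the non-uniformly scaled objects before changing anything, a throwaway script along these lines would log them (`NonUniformScaleFinder` is a hypothetical name; it just walks every Transform in the loaded scene):

```csharp
using UnityEngine;

public class NonUniformScaleFinder : MonoBehaviour
{
    void Start()
    {
        foreach (Transform t in FindObjectsOfType<Transform>())
        {
            Vector3 s = t.localScale;
            // Approximate comparison; exact float equality is fragile.
            if (!Mathf.Approximately(s.x, s.y) || !Mathf.Approximately(s.y, s.z))
            {
                Debug.Log("Non-uniform scale on: " + t.name + " " + s, t);
            }
        }
    }
}
```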
Doesn’t it have to do with dynamic batching too?
Objects that have a non-uniform scale might not be dynamically batched.
I remember seeing this somewhere. Then I found this thread:
So I don’t know if this restriction still exists.
It’s not listed in the current manual as a restriction of dynamic batching: