Transforming the entire scene seems to hit performance really hard

Hi,

In my scene graph hierarchy I have my entire world under one node and transform that node every frame in response to user input (no non-uniform scaling is involved). When I looked at the profiler recently, I found that this seems to be a very bad idea: the CPU spends quite a long time (10%) in set_localScale, set_localPosition and set_localEulerAngles, which doesn’t make much sense to me. I guess I’ll have to rewrite some code to move only the camera in order to avoid this, but I’d be very interested in the reason, since to me it looks like nothing more than setting three transform matrices (and maybe multiplying them together). I don’t have any collision meshes in my scene that would have to be updated. So, what happens when I set the local transform properties?

Thanks,
iko

When you move a parent object, all of its children effectively move with it. It might seem like you are only moving a single object along the y-axis, for example, but the engine needs to update the world positions, scales and rotations of every child object too, so that features like rendering and physics have precise information and so that you can query where any object is in world coordinates. Every object has changed position, after all.

It is impossible to move an object without changing its own matrix. Moving a parent is just a convenient way of moving one object and letting Unity handle all the child updates for you.
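One plausible model of where the cost comes from (a minimal sketch in Python, not Unity’s actual internals — the `Node` class, caching scheme and counters are all hypothetical): many engines cache each node’s world matrix and invalidate every descendant’s cache when a parent’s local transform changes, so a single write to the root triggers work proportional to the number of children the next time those matrices are needed:

```python
# Minimal scene-graph sketch with cached world matrices (illustrative only).
# Setting the root's local transform dirties every descendant, which is the
# per-child cost a profiler would attribute to the setter call.

def mat_mul(a, b):
    # 3x3 row-major matrix multiply (2D homogeneous coordinates)
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.local = IDENTITY
        self.parent = parent
        self.children = []
        self._world = None          # cached world matrix
        self.recomputes = 0         # counts cache rebuilds, for illustration
        if parent:
            parent.children.append(self)

    def set_local(self, m):
        self.local = m
        self._invalidate()          # one write, but the whole subtree dirties

    def _invalidate(self):
        self._world = None
        for c in self.children:
            c._invalidate()

    def world(self):
        if self._world is None:
            self.recomputes += 1
            parent_world = self.parent.world() if self.parent else IDENTITY
            self._world = mat_mul(parent_world, self.local)
        return self._world

root = Node("root")
children = [Node("child%d" % i, root) for i in range(1000)]
for c in children:
    c.world()                       # warm every cache once

root.set_local(translation(1, 0))   # "move the whole world"
for c in children:
    c.world()                       # every child rebuilds its world matrix

print(sum(c.recomputes for c in children))  # 2000
```

Whether Unity does this work eagerly in the setter or lazily on first access is an implementation detail; either way the cost scales with the size of the subtree, not with the single matrix you wrote.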

Yeah, well, this is what doesn’t make sense to me. I know OpenGL programming quite well, and I’m pretty sure DirectX is not much different in this respect: when you traverse your scene graph for rendering, all you do is multiply transformation matrices onto the matrix stack. You don’t move every single object in the scene; you move only the parent object, which changes just one transformation matrix. This traversal is done every frame for every object anyway, unless objects are merged to save draw calls. So to my knowledge there is not a single additional operation: it’s just the parent object’s matrix that changes. The relative positions of all child objects within that parent stay the same, so their absolute positions change implicitly. Unless there are optimizations going on that I don’t know about, I can’t explain the extra effort spent on the child objects. The only thing I can imagine is computing the absolute positions of all objects for physics simulation, where you need world transformations, but as I wrote, I have no physics in my game.
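The fixed-function traversal being described can be sketched like this (illustrative Python, not actual OpenGL or Unity API code): rendering walks the hierarchy and multiplies each local matrix onto the current stack top, so "moving the world" is a single matrix write and the children’s stored local data is untouched until the next draw walk:

```python
# Sketch of a matrix-stack render traversal (the glPushMatrix/glMultMatrix
# pattern). Changing the root's local matrix is one write; child world
# matrices are only derived transiently during the walk.

def mat_mul(a, b):
    # 3x3 row-major matrix multiply (2D homogeneous coordinates)
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

class Node:
    def __init__(self, local=IDENTITY):
        self.local = local
        self.children = []

def render(node, stack_top, drawn):
    # push: combine the parent's accumulated matrix with this node's local one
    world = mat_mul(stack_top, node.local)
    drawn.append(world)
    for child in node.children:
        render(child, world, drawn)

root = Node()
child = Node(translation(1, 0))     # child sits 1 unit right of its parent
root.children.append(child)

root.local = translation(5, 0)      # "move the world": exactly one write
drawn = []
render(root, IDENTITY, drawn)

print(drawn[1][0][2])               # child drawn at world x = 6
print(child.local[0][2])            # child's stored local data still 1
```

In this model the poster is right that no per-child state changes; the counterpoint in the thread is that an engine which also exposes world-space queries, culling and physics cannot rely on deriving world matrices only transiently at draw time.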

Unity has no way of knowing whether you might decide to use physics at any time, so whether you use physics yourself isn’t really relevant; you’re always “using” physics in the sense that the physics engine is always there and active.

–Eric

So you’re saying that even if no collision meshes are present as components on the GameObjects, they exist and are updated nonetheless, and this is the reason for the massive overhead?

No; I would suggest that the world positions of all objects are always required, so you can’t just update the transform matrix of the parent object.

–Eric

I’d like to add that changing the “initial” matrix in OpenGL is nearly the same as moving the camera around (except that scale won’t work, AFAIK). It is important for Unity to keep all world-coordinate matrices properly updated, because not only can anything potentially use this data, the coordinates are also sent to the rendering pipeline after some processing (such as culling and converting world coordinates to eye coordinates).

I’ve had very little contact with OpenGL myself, but I believe that when rendering in OpenGL it feels more like you’re moving the world around a fixed viewpoint. This concept is somewhat hidden in higher-level engines, where it is disguised as the “camera” (the “initial” matrix).
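The equivalence being discussed can be shown numerically (a small sketch in 2D homogeneous coordinates, not engine code): moving the whole world by some offset transforms points exactly as a view matrix that is the inverse of a camera moved the opposite way:

```python
# "Move the world" vs. "move the camera": the two produce identical
# transformed points, because the view matrix is the inverse of the
# camera's own transform.

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def apply(m, p):
    # apply a 3x3 homogeneous matrix to a 2D point
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

p = (1.0, 2.0)

# Option A: leave the camera alone, move the whole world by (+3, +4).
world_moved = apply(translation(3, 4), p)

# Option B: leave the world alone, move the camera to (-3, -4);
# the view matrix is the camera's inverse, i.e. translation(+3, +4).
view = translation(3, 4)
camera_moved = apply(view, p)

print(world_moved == camera_moved)  # True
```

This is why moving only the camera is so much cheaper in the scenario above: it changes one view matrix instead of invalidating a whole hierarchy.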

One more important thing: I’ve read somewhere that Unity represents scaling as vectors. I don’t fully understand the math, but it seems this can lead to potentially undesirable skewing when you have a hierarchy of non-uniformly scaled objects with rotations. I say potentially because to me it feels like expected behavior, but here and there I see people taken by surprise.
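The math behind that skewing claim can be checked directly (a sketch of the general linear-algebra fact, not of Unity’s specific decomposition): applying a parent’s non-uniform scale outside a child’s rotation yields a combined matrix whose columns are no longer orthogonal, i.e. a shear. Uniform scale does not do this:

```python
# Composing parent scale S with child rotation R as S @ R: if the scale
# is non-uniform, the resulting matrix has non-orthogonal columns (shear).
import math

def columns_dot(scale_x, scale_y, angle):
    c, s = math.cos(angle), math.sin(angle)
    # upper-left 2x2 of S @ R
    m = [[scale_x * c, -scale_x * s],
         [scale_y * s,  scale_y * c]]
    # dot product of the two column vectors; 0 means no shear
    return m[0][0] * m[0][1] + m[1][0] * m[1][1]

uniform = columns_dot(2.0, 2.0, math.pi / 4)      # 0.0: no shear
non_uniform = columns_dot(2.0, 1.0, math.pi / 4)  # -1.5: sheared
print(uniform, non_uniform)
```

A per-node position/rotation/scale representation cannot store that shear exactly, which is one plausible source of the surprises mentioned above.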

@Eric: That’s what it looks like. What I would be interested in is if this is indeed the case and what the reason for this is.

@Kirlim: You’re right, it’s the same: it’s just multiplying one matrix onto the stack, so the overhead I’m experiencing shouldn’t exist in the first place, unless Unity really does maintain all object transformations in world space, as you wrote. I’m not saying you’re wrong, but I would be surprised if that were the case, since it isn’t necessary for the way the render pipeline works, and I can’t see any advantage. But who knows. Maybe I’m missing something.

I was hoping to get this confirmed by a Unity staff member since this would be a very important thing to keep in mind when building scenes.

PS: I changed my code to move only the camera, not the root node, and the performance glitch is gone. While moving the root node took 4 to 6 ms, moving the camera takes an insignificant amount of time. Weird.