How much overhead does Unity impose?

I’m wondering, since it’s a middleware framework, how much overhead is there? I think this would be more of an issue for constrained devices like the iPhone. Will writing something with Unity for iPhone result in a lot of wasted CPU cycles, or is Unity by and large a very efficient, lean-running engine?

Part of this is an engine architecture question and I guess the other part is a question of geometry occlusion algorithms and the complexity of the scene that you’re loading, so I know the answer will depend on a number of factors.

Define wasted CPU cycle?

What I mean is, to what extent does the level of abstraction that Unity provides end up eating up the available power?

Would you describe Unity as being in the performance category of engines or does it favor convenience over performance? I know both can be achieved to some degree.

I guess I am trying to figure out, if you write the same game, once in Unity and once bare bones in raw code, is Unity going to take up 10% more resources? 35%? 50%?

The more it takes up, the less you can get out of the hardware, so I’m trying to figure out what kind of tradeoffs it makes.

You should also consider the time you would spend writing your game in raw code. Unity has full-time programmers working on engine optimization; your game would probably end up slower if you work alone or in a small team.

I think the only portion of the engine that would provide a significant amount of overhead is the scripting engine. As I understand it, the engine itself is written in native code, but any scripts you write will run inside the Mono VM (except where they call out to the native portions of the engine, of course).

If you want hard numbers, look for an analysis of Mono itself; I would guess that’d be pretty close. But since Mono JIT-compiles everything to native code anyway, I expect the difference will be small.

I believe that UT has stated before that C#/JavaScript runs at something like 50% the speed of native C++, which is, as I understand it, extremely good for a scripted environment.

It is the conventional wisdom here in the Unity Community that if you design and optimize your code properly, your code will not be the performance bottleneck. That is, it is pretty easy to have your game be GPU bound and not CPU bound. Overall performance will depend more on poly count, terrain size, draw distance, screen resolution, etc.

It is also pretty easy to write code that chews up all your CPU, particularly by doing too much in Update loops, or through excessive use of some functions like raycasting. But profiling your game can show where those things are happening and let you get rid of them.
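A common fix for that kind of Update-loop cost is to throttle the expensive call so it runs every N frames and reuse a cached result in between. Here is a minimal sketch of the pattern in plain JavaScript; the names (`makeThrottled`, `expensiveQuery`) are made up for illustration and are not Unity API:

```javascript
// Throttle expensive per-frame work (e.g. a raycast) so it runs every
// N frames, caching the result in between. Illustrative pattern only.
function makeThrottled(expensiveQuery, everyNFrames) {
  let frame = 0;
  let cached = null;
  return function update() {
    if (frame % everyNFrames === 0) {
      cached = expensiveQuery(); // the costly call (raycast, search, ...)
    }
    frame++;
    return cached; // most frames just reuse the cached result
  };
}
```

Called once per frame, this turns N raycasts into one, at the cost of the result being up to N frames stale, which is fine for many gameplay checks.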

Every time someone (myself included) has asked, “should we write a C++ plugin instead of using C#/JavaScript for performance,” the answer has been “not unless you have very specific low-level needs such as custom device control, etc.”

Maybe it would be better if I worded it this way. Absent any scripting, and just by loading a typical scene by today’s standards, with typical animation, how much overhead is the architecture of Unity imposing?

That is absent AI, game logic, networking, input, and anything not related to loading and animating the graphics.

I’m aware of the value of such trade offs, I am asking a purely technical question here.

None, or at least not any more than any other engine…it runs at native speed.

–Eric

Does anyone know if it uses a scene graph? What kind of occlusion algorithms? Portals? How is it organizing the geometry? Is it smart about how it batches rendering calls? Etc?

I’m trying to understand how smart it is as a pure graphics engine vs. say just a very elegant tool that allows you to put together a beautiful scene.

I wasn’t able to glean any of this from the docs, but maybe I’m missing this somewhere.

At the moment it only does frustum culling, however the iPhone version supports some form of occlusion culling which will hopefully be rolled back into the main build. From what I’ve seen (haven’t used it yet) you manually specify occlusion zones.
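For what it’s worth, frustum culling itself is conceptually simple: test each object’s bounding volume against the camera’s frustum planes and skip anything fully outside. A toy sketch in plain JavaScript (illustrative names and plane format, not Unity’s implementation):

```javascript
// Toy frustum-culling test: a sphere is culled if it lies entirely on the
// negative side of any frustum plane. Each plane is {nx, ny, nz, d} with
// the normal pointing into the frustum. Illustrative only.
function sphereVisible(planes, cx, cy, cz, radius) {
  for (const p of planes) {
    const dist = p.nx * cx + p.ny * cy + p.nz * cz + p.d;
    if (dist < -radius) return false; // fully outside this plane -> culled
  }
  return true; // inside or intersecting every plane -> draw it
}
```

A real engine tests against all six camera planes (and usually a bounding-volume hierarchy so whole groups can be rejected at once), but the per-object test is this cheap, which is why frustum culling is essentially free.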

On the iPhone we do a lot of engine optimizations specifically for that hardware. We have been working very closely with Imagination Technologies (makers of the MBX chipset used in the iPhone) to get the most out of the chipset. We have also spent a lot of time optimizing vertex formats for optimal processing speed on the iPhone.

Occlusion culling is a PVS solution and has no overhead that would be in any way visible in a profiler. The solution is geared towards long precomputation phases at build time and fast runtime performance.
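To see why a PVS (potentially visible set) has essentially no runtime cost: visibility is solved offline per cell, so at runtime the renderer just looks up which cell the camera is in and draws that cell’s precomputed list. A minimal sketch in plain JavaScript, with invented cell/object names purely for illustration:

```javascript
// Precomputed at build time: for each cell, the set of objects that can
// possibly be seen from anywhere inside it. Names are illustrative,
// not Unity's occlusion data format.
const pvs = {
  cellA: ["room1", "hallway"],
  cellB: ["hallway", "room2"],
};

// At runtime, visibility is a single table lookup -- all the expensive
// visibility computation already happened at build time.
function visibleFrom(cellId) {
  return pvs[cellId] || [];
}
```

The tradeoff is the one mentioned above: a long build-time precomputation phase, plus the restriction that occluders are static, in exchange for near-zero cost per frame.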

Essentially, the kind of performance you can get by rendering using Unity on the iPhone is going to be very difficult to achieve with a raw C++ application, and you will have to know a lot about the specifics of the hardware to beat it.

Thanks for that detail. Is the Mac implementation as well optimized in terms of geometry?

Mac/Windows don’t have the PowerVR’s geometry formats, so this is hard to answer…

Basically, we try to optimize as much as possible. In iPhone’s case, this involved talking to PowerVR guys a lot, lots of benchmarks, timings, peeking into what the graphics drivers do and so on. Quite similar story on the Wii.

In Windows/Mac case, it’s not as focused because there’s no single, known and non-changing hardware target. For example, it does not make sense to disassemble the drivers to see what is happening behind the scenes, because with next driver release it all could be different. So here the optimizations are more “general”: minimize batches, minimize state changes, etc.
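“Minimize batches, minimize state changes” is easy to demonstrate: if draw calls are sorted by material, the renderer binds each shader/texture set once per material instead of once per object. A small sketch in plain JavaScript counting the transitions (illustrative only, not how Unity’s renderer is structured internally):

```javascript
// Count how many times the renderer would have to switch material state
// (bind a different shader/texture set) for a given draw order.
function countStateChanges(draws) {
  let changes = 0;
  let current = null;
  for (const d of draws) {
    if (d.material !== current) {
      changes++; // a state change: bind the new material here
      current = d.material;
    }
  }
  return changes;
}

// Sorting draws by material groups identical state together,
// reducing the number of changes to one per distinct material.
function sortByMaterial(draws) {
  return [...draws].sort((a, b) => a.material.localeCompare(b.material));
}
```

Interleaved materials like wood/metal/wood/metal cost a state change per draw; sorted, the same four draws cost only two. That kind of reordering works regardless of GPU vendor, which is why it’s a safe “general” optimization on PC/Mac.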

In general, this is a hard question to answer… the “overhead” does not have any standard units to measure it with. “Unity has 10.7 millioverheads of overhead!”

Wow, and I heard Torque only has 9.8mohs! :wink:

Yeah I understand that. You basically answered my question. I wanted to know if you did things like batching, etc. Sure you can get more static and direct with a static target like a console. I understand that. That said there are best practices with both NVidia and ATI, so it’s not like you have no options, but I’m gathering that you’ve taken those measures.

@C Jkr,
There is a 30-day trial of Unity Indie for Mac; try it out on various graphics cards and see what you think. Also:
http://unity3d.com/support/documentation/Manual/Advanced.html
http://unity3d.com/unity/features/graphics

re: iPhone. The UT folks are programming gods. I still can’t believe what they’ve done with the iPhone 1.0 release. It’s good stuff.