I’m currently in the beginning phases of developing a game, but have already done some work. I have two choices: 1) Unity3D’s built-in physics, or 2) a much-simplified ‘home-brewed’ approach where performance isn’t an issue. Before embarking down the built-in physics/rigidbody path, I’d like some general answers from the community as to what to expect.
This game could have up to 1000 small objects (maybe more) on screen at once that need collision detection. However, if it makes any difference, the objects are very simple, and the physics doesn’t need to be complicated beyond detecting whether there is a collision or not, and then resolving it fairly simplistically (e.g. stopping the colliding objects).
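To be concrete, the sort of ‘home-brewed’ check I have in mind is just a naive pairwise circle-overlap test; this is only an illustrative sketch (the `Ball` struct and names are made up, not code from my project):

```csharp
// Naive O(n^2) pairwise circle-overlap test -- the "simplified" physics I mean.
struct Ball { public float X, Y, Radius; }

static bool Overlaps(Ball a, Ball b)
{
    float dx = a.X - b.X;
    float dy = a.Y - b.Y;
    float r = a.Radius + b.Radius;
    // Compare squared distances to avoid the square root.
    return dx * dx + dy * dy < r * r;
}

static void ResolveCollisions(Ball[] balls, System.Action<int, int> onHit)
{
    for (int i = 0; i < balls.Length; i++)
        for (int j = i + 1; j < balls.Length; j++)
            if (Overlaps(balls[i], balls[j]))
                onHit(i, j); // e.g. stop both objects
}
```

At 1000 objects that’s ~500k pair checks per frame, which is why I say performance wouldn’t be a concern for something this simple.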
The game needs to be able to run on what I will call ‘generic’ hardware. It should not require someone to go out and get an expensive, fancy graphics card. I don’t know if the graphics card has anything to do with physics or not - you tell me.
My question is simply whether, in your best estimation, based on your experience with Unity and on the very general information I have given, the built-in physics system could or should be considered. Note that I’m asking what the performance would likely be for the system I have described.
Either answer (yes/no) is fine, I just don’t want to bother going down a certain path and coding up tests and profiling if I can get a quick and dirty answer on it beforehand. Note that a ‘no’ answer simply means I will use a ‘home-brewed’ physics approach where performance won’t be a concern, but I will still use Unity for everything else.
A simplified version of this question might be “what are some rough numbers on how many simple, small 2D objects Unity’s physics engine can handle while still remaining reasonably performant (generic, non high-end hardware)?”
Quick addendum: I did search around on the web for some kind of profiling program which would test my generic system and show what the performance would be, but didn’t find anything.
Unity uses PhysX, so it will use the CPU if no compatible Nvidia GPU is found. Even on the CPU, though, you should be able to handle a few hundred objects on common hardware.
The better question is, why do you need so many dynamic objects, and is there a reason you can’t just use Particles for most of them instead? Could you elaborate on your game idea more? Then we might be able to better direct you on the most efficient route for such game mechanics.
(Addendum: Unity Pro has a “Profiler” that lets you see how many milliseconds it’s taking to process each type of operation. So you can see how long it’s taking to calculate all the rigidbody2D physics for example. And thus what the performance is like on your machine.)
Honestly, the best answer would be to just try it and see what happens. That will give you a more accurate answer than what you may find on the web. If you need info on how well it works on slower machines, post a web player and ask for feedback.
You don’t need to build the whole game, just something that spawns 1000 objects, detects collisions, and takes action. That will give you a benchmark to start with.
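A minimal spawner for such a benchmark might look like this (a sketch, assuming `ballPrefab` is a sprite prefab you set up with a CircleCollider2D and Rigidbody2D; all names here are illustrative):

```csharp
using UnityEngine;

// Quick-and-dirty benchmark: spawn N physics objects and watch the frame rate.
public class SpawnBenchmark : MonoBehaviour
{
    public GameObject ballPrefab; // assumed: sprite + CircleCollider2D + Rigidbody2D
    public int count = 1000;
    public float spawnRadius = 10f;

    void Start()
    {
        for (int i = 0; i < count; i++)
        {
            Vector2 pos = Random.insideUnitCircle * spawnRadius;
            Instantiate(ballPrefab, pos, Quaternion.identity);
        }
    }

    void OnGUI()
    {
        // Rough FPS readout; use the Profiler for a real breakdown.
        GUILayout.Label("FPS: " + (1f / Time.smoothDeltaTime).ToString("F0"));
    }
}
```

Drop it on an empty GameObject, add some walls with colliders, and vary `count` until the frame rate dips.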
@zombiegorilla: I had actually just tested it quickly to give my reply. At 1000 spherical colliders with rigidbodies and a sprite, I was hitting around 18 FPS, and this is on an Intel i7 930 @ 3.2 GHz with extremely fast memory, as well as 2x Nvidia GTX 460s (although I’m not sure it was being GPU-accelerated? Wonder if there’s a way to check that…)
500 was well above 30fps. Performance is probably better with box colliders instead too.
Unity physics are run on the CPU 100% of the time. No GPU, nVidia or otherwise, is ever used. Also, Unity uses Box2D for 2D physics, so referring to “the physics engine” is actually ambiguous since there are two.
If you are using 2D physics and you have to have that many 2D rigid-bodies around, then try to use CircleCollider2D, which gives the best performance, and try to get as many of those rigid-bodies sleeping at any one time as possible. Consider starting them asleep as well so you don’t take a huge hit initially until they sleep. Keep an eye on the profiler; it’s the number of contacts being resolved that is generally the killer.
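Starting a body asleep is a one-liner; a minimal sketch (the component name is illustrative):

```csharp
using UnityEngine;

// Start a Rigidbody2D asleep so the solver doesn't pay for it until
// something actually touches it; the body wakes automatically on contact.
[RequireComponent(typeof(Rigidbody2D))]
public class StartAsleep : MonoBehaviour
{
    void Start()
    {
        GetComponent<Rigidbody2D>().Sleep();
    }
}
```

Attach it to each spawned object (or call `Sleep()` right after instantiating).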
Something’s wrong there, since I just tested the same thing and got 400-430fps, with a 2.8GHz Mac Pro and a Radeon 5870 (web player). Most of that is the rendering, though, rather than the physics…disabling the renderers gets 1000fps. But apparently nothing is allowed to run faster than that on OS X, so the “real” speed is probably quite a bit higher. That was using PhysX; switching to Box2D is only slightly faster, interestingly.
You had all 1000 rigidbodies falling into a tightly contained area and colliding with each other and the walls? That’s what my test was, so they were piled high on top of each other, probably not a realistic scenario for what he needs perhaps. Also, my walls were made out of Edge 2D colliders, that may have affected it as well, dunno. Your Mac Pro may be lower frequency, but it might have a better processor architecture than mine if it’s somewhat new. Do you know what CPU it is? The i7 930 is getting a bit old; it’s a 5-year-old architecture now.
PhysX does support GPU-accelerated physics, but only on Nvidia cards, since Nvidia owns and updates it. Of course there is also a software (CPU) mode, but it’s odd that Unity would disable the core GPU-acceleration feature of it; why take away a beneficial feature? Perhaps MelvMay could touch on this?
Just a sprite with a collider (you can toggle between circle or box) and you toggle whether or not there is constant force. Turns red when colliding.
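The “turns red when colliding” part is roughly this kind of component (a sketch, assuming 2D colliders on both objects and a SpriteRenderer on this one):

```csharp
using UnityEngine;

// Tints the sprite red while it is in contact with something.
[RequireComponent(typeof(SpriteRenderer))]
public class CollisionTint : MonoBehaviour
{
    SpriteRenderer sr;

    void Awake() { sr = GetComponent<SpriteRenderer>(); }

    void OnCollisionEnter2D(Collision2D other) { sr.color = Color.red; }
    void OnCollisionExit2D(Collision2D other)  { sr.color = Color.white; }
}
```

(With multiple simultaneous contacts you’d want to count enter/exit events rather than resetting on the first exit, but this is enough for a visual test.)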
On my MBP, it seems to only really start to slow down at around 1500 elements, though that may be because the screen is full and everything is colliding.
Offhand I don’t know exactly what model of CPU; a Xeon of some kind, around 4 years old or so. In any case there’s no way it should be 20+ times faster.
Not in Unity. It’s not that they “took away a feature”, it was never there to begin with. It’s an older version and is customized (they’ve been using it since it was Ageia, before nVidia bought it).