This was one of the main reasons that led me to purchase Unity:
I’m pretty satisfied with Unity iPhone so far, but I just realized I bought it on blind trust, and moreover I’ve never had any point of comparison with other solutions.
So my two questions are simple, and shouldn’t be unfriendly, since UT themselves state a 30-40% performance gain.
Here we go:
Is there any official benchmark proving that Unity iPhone beats plain OpenGL ES code by 40%?
Are there benchmarks of all the solutions that exist for developing games on iPhone (including Unity and even pure Xcode)?
I’m just curious.
Thank you
P.S.: I know benchmarks are not proof, but they’re close to it.
I read a post by a UT member who explained that they have a close relationship with Imagination Technologies (the iPhone’s GPU provider) and, thanks to that, were targeting parts of their code directly at the iPhone’s GPU.
No, that’s definitely not the case. Although we have a good relationship with Imagination Tech, the iPhone OpenGL ES drivers are made by Apple (and there is an explicit statement in the EULA preventing circumvention).
Regarding benchmarks, that is a comparison with GLBenchmark, a site doing OpenGL ES benchmarks for different devices. What we mean is that Unity uses the smallest possible vertex size and good stripping algorithms, which can give us better results than those cited by the benchmarks (I could push around 900 KVerts/s, but in a completely synthetic scenario, of course).
So that would mean you’re beating Apple themselves?
Pretty impressive, but knowing how protective they are, didn’t that cause you some complications with them?
(thanks for the URL)
Edit: Oh, and that would also mean an iPhone game written in pure Xcode will run at a lower fps than one written in Unity… that’s precisely what I wanted to know, and it’s great news.
No, that has nothing to do with Apple. As I said, we do use the OpenGL ES drivers; there’s no way around that. It’s just a matter of how optimally the data is sent to the OpenGL ES driver. GLBenchmark doesn’t really try to push the envelope in this case.
Two things that have been bugging me about these optimizations and which I could really use clarification on:
Joe said something during the beta that gave me the impression that the higher vertex throughput came at the cost of sometimes duplicating vertices that otherwise would not be duplicated. Is this so? If so, under what circumstances are they duplicated? Is the 10,000 number that gets thrown around pre-duplication or post-duplication? Does the number in the stats view reflect pre or post duplication?
How do you know which channels of data can be stripped? For example, if a mesh has two UV channels, are those always sent to the GPU or is Unity being smart and stripping out the second UV channel when the mesh will be rendered with a shader that doesn’t use them?
Can we get some info in an inspector about how Unity will process a mesh (bytes per vertex, which channels may/will be stripped, level of vertex duplication, etc.) that would allow us to tailor our artwork to better take advantage of these optimizations?
Still, I don’t want to hijack this thread, but there’s one more thing on my mind about iPhone performance:
I currently have a skinned character with more than 30 different rich animations, for a total of 700 to 800 frames.
I’m not baking at 30 frames/sec, of course, but there is still a LOT of transformation data.
So far I’ve only imported 4 full animations into Unity, but I haven’t noticed a single kilobyte of VRAM increase.
I’m aiming at 8 characters with 40 animations each (~1000 frames per character). Each character has 40 bones, with an average of 10 animated key bones, and only 2 characters will be on screen at once.
For now I easily hit 30 fps with 2 characters, using just 3 MB of VRAM, which is mainly eaten by my HUD.
-----> Long story short: do you have an estimate of how much VRAM an average single converted FBX Transform[ ] can eat?
It looks very featherweight…
If I’m understanding it properly, the answer is… “It depends”.
First off, OpenGL ES 1.1 doesn’t keep VBOs around [I think; this is another area where my understanding is weak], so it’s not really a question of “how much VRAM does this use” as much as “how much data is submitted for processing each frame”.
Unity is apparently packing things down based on how much precision you actually need to represent a given model accurately: 8, 16, 24, or 32 bits per component of a Vector3, whatever is needed based on the details of the mesh.
So basically, it depends on your mesh. It should have nothing to do with your skeleton, BTW. Unity doesn’t do skinning on the VGP yet; it does it entirely on the CPU.
To clarify, this behaviour is not something in OpenGL ES 1.1; it’s more of an implementation detail of Apple’s iPhone drivers. It’s partly caused by architectural constraints of the graphics chip, partly by some inefficiency (one could also say “stupidity”) in the drivers.
So bones aren’t stored in memory? That sounds kind of dangerous for smooth performance…
If I understood the roadmap correctly, that won’t be the case anymore. Which leads to a variant of my initial question: will it introduce some kind of memory constraint?
I would be very disappointed if I were suddenly forced to cut my characters’ animations in half to avoid memory overload.