Hi,
I might be missing something obvious, but while profiling a Development Build of our application I noticed that, as soon as the FPS drops below ~60, the Profiler stops receiving data. This does not happen when I run the application directly in the Editor, but profiling in the Editor is not what I want, since the Editor introduces too much overhead and we are already tight on performance.
What am I misunderstanding here? Is it supposed to work this way, is it a bug, or have I missed an important part of the documentation?
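In case it matters, the workaround I was considering is to dump the profiler data to a log file from the build instead of relying on the live Editor connection, roughly like this (untested sketch; I'm assuming Profiler.logFile / enableBinaryLog behave on 4.1.5 the way the docs describe, and the file name is my own choice):

```csharp
using UnityEngine;

// Rough sketch of a fallback: write profiler data to a binary log from the
// development build, to be inspected afterwards instead of over the live connection.
// Untested on 4.1.5; the file name is an arbitrary choice of mine.
public class ProfilerLogger : MonoBehaviour
{
    void Start()
    {
        Profiler.logFile = "profiler";     // base name for the log file (hypothetical)
        Profiler.enableBinaryLog = true;   // binary format rather than plain text
        Profiler.enabled = true;           // only has an effect in development builds
    }
}
```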
Lastly, it looks to me as if the total frame time is the higher of the GPU and CPU times, as long as the sum of the two stays below 16 ms. But if the sum exceeds that, the frame time appears to be the sum itself.
E.g. GPU 8 ms and CPU 3 ms => frame time is 8 ms (~120 FPS)
while: GPU 14 ms and CPU 16 ms => frame time is 30 ms (~33 FPS)
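To make that concrete, this is the pattern I think I'm seeing, written out as a toy helper (my own names, not a Unity API; the numbers are just the examples above):

```csharp
using UnityEngine;

// Toy model of the frame-time behaviour I seem to observe (not a Unity API).
public static class FrameTimeModel
{
    // While the frame fits within the vsync budget, CPU and GPU work appear
    // to overlap, so the frame takes as long as the slower of the two:
    public static float OverlappedMs(float cpuMs, float gpuMs)
    {
        return Mathf.Max(cpuMs, gpuMs);   // e.g. max(3, 8) = 8 ms (~120 FPS)
    }

    // Once the budget is exceeded, the frame time looks like the plain sum,
    // as if the two stopped overlapping:
    public static float SerializedMs(float cpuMs, float gpuMs)
    {
        return cpuMs + gpuMs;             // e.g. 16 + 14 = 30 ms (~33 FPS)
    }
}
```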
Are these two questions somehow related?
We are on Unity Pro 4.1.5, and the target platform is Windows 7.
Thanks,
Fabio