Getting the rendering time programmatically

Hello All,

I’m interested in understanding how much the additional HDRP parameters are costing me in terms of time; I’m using Unity purely for 3D simulation. To explain further: the program loops through a specific number of iterations (say 10) to generate 10 different scenes (different in terms of object positions, materials, and parameter values such as shadows), and each time I capture the scene from a RenderTexture to a PNG image.

I’m interested in getting the exact render time of each of the 10 scenes. For example, I want to understand how enabling shadows affects the rendering time.

I tried using UnityStats.renderTime, but found no clear explanation of what it actually measures, so I wasn’t sure about it.

Thanks for your help

Bump:)

I did this recently using FrameTimingManager (Unity - Scripting API: FrameTimingManager). It gives you an accurate GPU time in milliseconds but you may need to render for a set amount of time to get a stable value. Alternatively you could just use the editor profiler or the ‘stats’ overlay in the editor scene window.
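A minimal sketch of that FrameTimingManager approach (assuming a platform and graphics API where it is supported; on unsupported setups `GetLatestTimings` simply returns 0 timings):

```csharp
using UnityEngine;

// Sketch: read the latest CPU/GPU frame timings each frame.
// FrameTimingManager only reports data on supported platforms/APIs.
public class FrameTimeProbe : MonoBehaviour
{
    readonly FrameTiming[] timings = new FrameTiming[1];

    void Update()
    {
        // Ask Unity to capture timing data for the most recent frames.
        FrameTimingManager.CaptureFrameTimings();

        uint count = FrameTimingManager.GetLatestTimings(1, timings);
        if (count > 0)
        {
            // Times are reported in milliseconds.
            Debug.Log($"CPU: {timings[0].cpuFrameTime} ms, GPU: {timings[0].gpuFrameTime} ms");
        }
    }
}
```

As noted above, averaging the values over a number of frames tends to give a more stable reading than a single sample.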

The reason I asked for a new option is that the values in the Profiler contradict the values in the ‘stats’ overlay.

You might look into Unity Game Simulation, there is more information here: https://forum.unity.com/threads/unity-game-simulation-documentation-and-sign-up.888241/

To my understanding, this doesn’t support HDRP directly, am I right?

I’m wondering if “Camera.Render” inside the Profiler does the trick? I mean, does it indicate the rendering time?

If yes, is there a way to get it through code?

Sorry for troubling you guys.

Hi @raghadalghonaim ,
Which Unity version are you on? And is it ok to just get these timings in a development player?
What @tonemcbride mentioned is a nice way to get GPU timings but it’s sadly only available on some devices and graphics APIs yet (like GPU profiling, something we need to sort out). You can get the Render timings on all platforms, at least in development builds, via the Profiling.Recorder API. With that you can record all relevant root Rendering samples while setting it to CollectFromAllThreads. Only issue is that these samples mostly occur both on the Render thread and the Main thread and you can’t separate them with this API. (You can’t even set one “Camera.Render” Recorder to record the main thread and one to record all threads and then subtract the results, because they pointing to the same native recorder… that use-case was somehow overlooked with this API.) We’re working on a new ProfilerRecorder API that doesn’t have that restriction (and multiple other upsides) for 2020.2.

Hello Martin,

I’m using Unity 2019.3.10f. As for your suggestion, I’m not sure I correctly understand what the difference between the main thread and the render thread is. Sorry if this is a dumb question; this is my first time with Unity.

To explain my use case further: each time, I render a single image and I just want to store the rendering time somewhere. I thought this would be an easy thing to do, but I’m not sure why it is that difficult… :frowning:

I’ve been looking into the Profiler for a week now but found no direct way to store its info from a script to JSON or CSV.

I really appreciated your response…

The main thread is where most of your standard MonoBehaviour-type code executes and where most Unity systems run, or at least start their work for each frame (these systems might then continue their work on other threads).

When you ask for some visuals to be presented on screen, e.g. through a Camera, that Camera will then render what it sees, i.e. grab and process all the relevant info about the scene, its view frustum, and the elements in it, including their shaders, materials, and all that. It does some culling and other preprocessing of the data on the CPU to bring it into a format it can then hand over to the GPU. That rendering bit happens in part on the main thread and, if you have Multithreaded Rendering enabled (which should be the default nowadays), in part on the render thread. You can see that a bit more clearly in the CPU Profiler module’s Timeline view.

Now, once the CPU hands over to the GPU, tracking that work on the GPU is where it gets a bit more tricky. In 2020.1 we added GPU recording capabilities to the Recorder API, but those timings come with a bit of a delay, because the GPU work isn’t happening in sync with the main thread, and the information needs to be sent back and synced with the main thread, where you are using the Recorder. There are, however, CPU-side samples that give you some info on how long the GPU was processing a particular frame, e.g. the Gfx.Present sample on the render thread or the Gfx.WaitForPresentOnRenderThread sample on the main thread. However, these timings also potentially include time spent waiting for VSync, and the ones on the main thread might start at a different point in time than when the GPU started processing the frame, so all of this is rather indirect info.
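To peek at that indirect info in code, one rough option is to point a Recorder at one of those wait samples. This is a sketch only: the sample name and whether it shows up at all vary by platform, graphics API, and VSync settings, so it’s worth confirming the marker exists in a Profiler capture of your build first.

```csharp
using UnityEngine;
using UnityEngine.Profiling;

// Sketch: watch "Gfx.WaitForPresentOnRenderThread" as an indirect hint
// of GPU + VSync wait time. Verify in the Profiler that this marker
// actually appears on your target before relying on it.
public class GpuWaitProbe : MonoBehaviour
{
    Recorder waitRecorder;

    void OnEnable()
    {
        waitRecorder = Recorder.Get("Gfx.WaitForPresentOnRenderThread");
        waitRecorder.enabled = true;
    }

    void Update()
    {
        if (waitRecorder.isValid && waitRecorder.sampleBlockCount > 0)
            Debug.Log($"WaitForPresent: {waitRecorder.elapsedNanoseconds / 1e6f} ms");
    }
}
```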

This Unite Now talk goes a bit more into this and into what this whole flow looks like in the Profiler. You can observe how all of this works for your case via the Timeline view, check the short guidance on the meaning of these samples in the Profiler documentation, and then use that info and the samples you’ve identified as most significant for your measuring scenario to record these timings with the Recorder API.


Thanks Martin!

This is so helpful.


@MartinTilo

I’ve been trying to use the Profiler API (2020.3.17f1) to get the render thread timings, but I can’t get it to work. At this point I feel like I’ve tried every permutation of the Profiler APIs with various graphics sampler names:

Recorder rhitime = Recorder.Get("Gfx.PresentFrame");
ProfilerRecorder rhi = ProfilerRecorder.StartNew(ProfilerCategory.Render, "Gfx.PresentFrame");

These both return recorders with valid=true, but always return 0 for all timings.

  1. What platforms is this supposed to support? It doesn’t work on either Editor or Standalone Android for me.
  2. Why are there two APIs? I can’t figure out for the life of me what’s different about them – do all samplers work in both?
  3. Do I need any special settings enabled? I’m using Development mode in standalone, and I have the Profiler open in the Editor.

I’ve just given the ProfilerRecorder side of this a test. I also only get 0 in the Editor, but only because there is no “Gfx.PresentFrame” marker present in my case (I checked the Profiler), so that’s kind of expected. It worked as expected in Windows Standalone and on Android. I used .LastValue instead of .CurrentValue though. If I use the current value I get mostly 0 for “Gfx.PresentFrame” and “Camera.Render”, but sometimes a non-zero value if the last frame’s render thread ends before my script reads the Recorder value. That is to be expected, as my code executes before the rendering code of the current frame, and as the render thread still laps over into the current frame in these instances, it is still writing to the “Current” value, which really is last frame’s value that has not yet been reset because the frame is still “ongoing”.
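In code form, the LastValue approach could look roughly like this (a sketch for the ProfilerRecorder API, assuming the “Gfx.PresentFrame” marker actually exists on your target, which the Editor may not have):

```csharp
using UnityEngine;
using Unity.Profiling;

// Sketch: read last frame's "Gfx.PresentFrame" time via ProfilerRecorder.
// LastValue avoids reading a value the render thread may still be writing.
public class PresentFrameProbe : MonoBehaviour
{
    ProfilerRecorder presentRecorder;

    void OnEnable()
    {
        presentRecorder = ProfilerRecorder.StartNew(ProfilerCategory.Render, "Gfx.PresentFrame");
    }

    void OnDisable()
    {
        presentRecorder.Dispose();
    }

    void Update()
    {
        if (presentRecorder.Valid && presentRecorder.LastValue > 0)
            Debug.Log($"Gfx.PresentFrame: {presentRecorder.LastValue / 1e6f} ms");
    }
}
```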

So, do you use CurrentValue or LastValue?

Because we built ourselves into a corner with the old Recorder API, which, among other things, couldn’t record Counters (they hadn’t been conceived when it was written), is a class (a bit of an issue with DOTS), and just wasn’t properly adjustable to become what is now ProfilerRecorder. It is a bit unfortunate to have both of these at the same time during the transition period. The underlying bits are being reworked and GPU measurement capabilities added so that we’ll be able to deprecate Recorder going forward, which should clarify things again.

The answer to that question depends a bit on the Unity version. IIRC, markers for Counters are not supported by the Recorder API, and GPU measurements for ProfilerRecorder are only supported on newer Unity versions (2021.x+ if I’m not mistaken right now).

Really appreciate the response, thanks!

Unfortunately, I still haven’t been able to get this working on Android standalone. Here’s the code:

ProfilerRecorder gfxTime = ProfilerRecorder.StartNew(ProfilerCategory.Render, "Gfx.PresentFrame");

// Executes every update in a coroutine:
Debug.Log(gfxTime.LastValue);
yield return null;

Is there some way I can sanity check this? Other counters like Triangle Count work just fine. This is on the Oculus Quest 1 and 2.

Based on what you’re saying, it sounds like I should ignore the Recorder API from here on out. It might be nice to actually deprecate that API going forward, because it really wasn’t clear which one was the ‘new’ API and which was the old one.

Weird…

Not too sure, but you could record more than one sample (specifying the count in the constructor) and then print them all out, e.g. getting them via .GetSample, CopyTo, or ToArray.
Before my last post, I checked this situation by plotting the LastValue into a ProfilerCounter, profiling the build, and adding that counter to a custom profiler module. I then checked the value reported via the counter against the time reported for Gfx.PresentFrame in Timeline view. I also checked in Timeline view where my code set the counter value vs. where the frame flip happened on the render thread.
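A sketch of that multi-sample idea (the capacity of 15 frames here is an arbitrary choice, and ToArray is one of the ProfilerRecorder read-back options mentioned above):

```csharp
using UnityEngine;
using Unity.Profiling;

// Sketch: keep the last 15 "Gfx.PresentFrame" samples and dump them all,
// e.g. to compare against the Timeline view frame by frame.
public class PresentFrameHistory : MonoBehaviour
{
    ProfilerRecorder presentRecorder;

    void OnEnable()
    {
        // The third argument is the sample capacity (number of frames kept).
        presentRecorder = ProfilerRecorder.StartNew(ProfilerCategory.Render, "Gfx.PresentFrame", 15);
    }

    void OnDisable()
    {
        // Dump everything recorded so far before disposing.
        foreach (var sample in presentRecorder.ToArray())
            Debug.Log($"Sample: {sample.Value / 1e6f} ms");
        presentRecorder.Dispose();
    }
}
```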

Yep, we’ll do that as soon as ProfilerRecorder has reached full feature parity, i.e. supports GPU profiling, and everything internal that relies on the old API has been rerouted.

This update on FrameTimingManager stats is relevant to this thread as it includes relevant rendering times.