Hello, I’m trying to make my asset support HDRP, but I ran into a performance issue:
In my asset I use a camera to generate a heightmap of the scene.
Currently, I’m using a custom pass (with an Unlit override shader) to implement this inside HDRP.
In my asset, the generation of this heightmap happens very, very frequently (~300 times per second!) and I use it for some general (not graphical) computation that I need to perform five times each frame.
To make the generation process fast in HDRP I set generationCameraHD.hasPersistentHistory to true, and I disabled every HDRP feature for this camera, including: post-processing, exposure, volume evaluation, reflections, shadow mapping, lighting, transparency, blending, fog, sky…
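For reference, this is roughly how I disable those features per camera from script (a minimal sketch; the exact FrameSettingsField names vary between HDRP versions, so treat the list as illustrative, not exhaustive):

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

// Minimal sketch: strip per-camera HDRP features via custom frame settings.
// The FrameSettingsField values below exist in recent HDRP versions but may
// be named differently in yours.
public static class HeightmapCameraSetup
{
    public static void Configure(Camera cam)
    {
        var hd = cam.GetComponent<HDAdditionalCameraData>();
        hd.hasPersistentHistory = true;    // keep history buffers alive between Render() calls
        hd.customRenderingSettings = true; // use per-camera frame settings instead of the defaults

        Disable(hd, FrameSettingsField.Postprocess);
        Disable(hd, FrameSettingsField.ExposureControl);
        Disable(hd, FrameSettingsField.ShadowMaps);
        Disable(hd, FrameSettingsField.AtmosphericScattering); // fog
        Disable(hd, FrameSettingsField.TransparentObjects);
        Disable(hd, FrameSettingsField.Volumetrics);
    }

    static void Disable(HDAdditionalCameraData hd, FrameSettingsField field)
    {
        hd.renderingPathCustomFrameSettingsOverrideMask.mask[(uint)field] = true; // mark the field as overridden
        hd.renderingPathCustomFrameSettings.SetEnabled(field, false);             // and turn it off
    }
}
```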
But even after all these optimizations the generation process still runs ~3× slower compared to the Built-in Render Pipeline!
In the profiler:
What can I do to disable all these unnecessary, time-consuming HDRP processes in the camera rendering cycle?
I have also started considering rendering all the meshes in the scene manually into the render texture without using a camera, but to do this I need a very fast and optimised frustum culling mechanism (like the one cameras have).
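What I have in mind is roughly the sketch below (the class and the height material are hypothetical, and GeometryUtility.TestPlanesAABB is only a coarse per-renderer AABB test, so I expect it to be slower than the camera’s internal culling):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical sketch: render scene meshes into a RenderTexture without a
// camera's rendering loop, using Unity's built-in frustum test for culling.
public class ManualHeightmapRenderer : MonoBehaviour
{
    public Camera sourceCamera;       // used only for its view/projection matrices
    public RenderTexture heightmapRT; // the heightmap target
    public Material heightMaterial;   // unlit shader that writes height

    public void Render(Renderer[] sceneRenderers)
    {
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(sourceCamera);

        var cmd = new CommandBuffer { name = "ManualHeightmap" };
        cmd.SetRenderTarget(heightmapRT);
        cmd.ClearRenderTarget(true, true, Color.black);
        // On some platforms the projection matrix needs GL.GetGPUProjectionMatrix here.
        cmd.SetViewProjectionMatrices(sourceCamera.worldToCameraMatrix, sourceCamera.projectionMatrix);

        foreach (var r in sceneRenderers)
            if (GeometryUtility.TestPlanesAABB(planes, r.bounds)) // coarse AABB-vs-frustum cull
                cmd.DrawRenderer(r, heightMaterial);

        Graphics.ExecuteCommandBuffer(cmd);
        cmd.Release();
    }
}
```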
First, a maybe off-topic question: why do you need to render the depth 5 times each frame? Couldn’t you render once and reuse it? (No offense, you might have a very good reason, it just feels a bit counter-intuitive.)
Now, about the actual optimisation:
- Do you need persistent history? That’s for history buffers, which afaik are used for TAA and other multi-frame effects.
- A big part of the “what is this s***” part is volume evaluation. IIRC it is not possible to disable it completely, but you can set your depth camera’s volume layer mask to “None” to lessen the cost.
- To be sure the camera is not rendering objects twice, set the culling mask to “None”, as the custom pass should be the only thing rendering here (a minimal sketch of both settings follows this list).
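Both settings together look roughly like this (minimal sketch, assuming the camera already has an HDAdditionalCameraData component):

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public static class DepthCameraTrim
{
    // Sketch: "depthCamera" is the heightmap/depth camera.
    public static void Trim(Camera depthCamera)
    {
        var hd = depthCamera.GetComponent<HDAdditionalCameraData>();
        hd.volumeLayerMask = 0;      // volume layer mask "None": skip blending any volumes
        depthCamera.cullingMask = 0; // culling mask "None": the custom pass is the only thing drawing
    }
}
```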
Hello Remy_Unity, thanks for your comment.
Here are some clarifications regarding your comments:
Unfortunately, I have a good reason to do so. I emulate a physical sensor in a synthetic environment (my Unity scene). This sensor has to run at close to 300 Hz (approximately 5 times per rendered frame), and it has to measure a large number of distances each time. I found a way to do this using simple rendering instead of a lot of raycasts. In Built-in RP projects it runs close to 10 times faster than raycasting with the Job System.
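For context, the core of the trick looks roughly like this (the names are mine; the synchronous ReadPixels is deliberate, because I need the distances on the CPU several times within the same frame):

```csharp
using UnityEngine;

// Rough sketch of the sensor emulation: render depth/height into a small
// RenderTexture, then read it back and treat each pixel as one distance sample.
public class SensorEmulator : MonoBehaviour
{
    public Camera sensorCamera;    // camera looking along the sensor axis
    public RenderTexture sensorRT; // small single-channel float RT, e.g. 256x256 RFloat

    Texture2D readback;

    void Awake()
    {
        readback = new Texture2D(sensorRT.width, sensorRT.height, TextureFormat.RFloat, false);
        sensorCamera.targetTexture = sensorRT;
    }

    // Called up to 5 times per rendered frame (~300 Hz simulated rate).
    public float[] Sample()
    {
        sensorCamera.Render(); // one draw pass instead of thousands of raycasts

        var prev = RenderTexture.active;
        RenderTexture.active = sensorRT;
        readback.ReadPixels(new Rect(0, 0, sensorRT.width, sensorRT.height), 0, 0); // synchronous GPU->CPU readback
        RenderTexture.active = prev;

        var pixels = readback.GetRawTextureData<float>(); // one distance per pixel
        return pixels.ToArray();
    }
}
```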
I found that setting hasPersistentHistory to true saves a lot of camera preparation work when I call camera.Render() several times a frame (I really don’t know why). If I leave the camera without hasPersistentHistory, then I get something in the profiler that looks like the following:
I already did it:
[Sorry for my English]
Yeah, I don’t see how it could get much better.
I haven’t tried this, but maybe setting the camera to “Fullscreen Passthrough”, which totally disables the HDRP loop for this camera, and manually rendering the objects yourself in the camera’s command buffers, could give you better performance. But you’ll have to do the culling yourself, and there’s no SRP Batcher to help you either…
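Something along these lines (a completely untested sketch: it assumes command buffers attached to the camera still execute in passthrough mode, and the CameraEvent choice is a guess):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

// Untested sketch of the "Fullscreen Passthrough" idea: HDRP skips its whole
// loop for this camera, and a camera command buffer does the drawing instead.
public class PassthroughHeightmap : MonoBehaviour
{
    public Camera cam;
    public RenderTexture heightmapRT;
    public Material heightMaterial;    // hypothetical unlit height shader
    public Renderer[] renderersToDraw; // culling is now your job

    void Start()
    {
        cam.GetComponent<HDAdditionalCameraData>().fullscreenPassthrough = true;

        var cmd = new CommandBuffer { name = "PassthroughHeightmap" };
        cmd.SetRenderTarget(heightmapRT);
        cmd.ClearRenderTarget(true, true, Color.black);
        foreach (var r in renderersToDraw)
            cmd.DrawRenderer(r, heightMaterial); // no SRP Batcher on this path
        cam.AddCommandBuffer(CameraEvent.AfterEverything, cmd); // event choice is a guess
    }
}
```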
Getting back to the “render 5 times per frame” subject: what changes between these 5 renders? Is the camera moving?
I think I finally found how to deal with this HDRP overhead, but it is not a very elegant way:
I wrote a script that inserts blocks of code into the HDRP core code. The script adds a new boolean property called “skipHdrp” to HDAdditionalCameraData, and then injects a very basic SRP implementation into the “Render” method in the Runtime\RenderPipeline\HDRenderPipeline.cs file. I also added a condition that decides, based on the camera’s skipHdrp property, whether to run the HDRP path or my SRP code section. I hope HDRP’s core code signature will not change drastically.
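The injected section has roughly this shape (a simplified sketch of what my script pastes into the per-camera loop; skipHdrp is the property my patch adds, and the “DepthOnly” pass tag is just an example that depends on your shaders):

```csharp
// Simplified sketch of the injected fast path inside HDRenderPipeline.Render,
// placed at the top of the per-camera loop. "renderContext" is the method's
// ScriptableRenderContext parameter.
var hdData = camera.GetComponent<HDAdditionalCameraData>();
if (hdData != null && hdData.skipHdrp) // "skipHdrp" is added by my patch script
{
    if (!camera.TryGetCullingParameters(out var cullingParams))
        continue;
    CullingResults cull = renderContext.Cull(ref cullingParams);

    renderContext.SetupCameraProperties(camera);

    var cmd = CommandBufferPool.Get("SkipHdrp");
    cmd.SetRenderTarget(camera.targetTexture);
    cmd.ClearRenderTarget(true, true, Color.clear);
    renderContext.ExecuteCommandBuffer(cmd);
    CommandBufferPool.Release(cmd);

    var sorting   = new SortingSettings(camera) { criteria = SortingCriteria.CommonOpaque };
    var drawing   = new DrawingSettings(new ShaderTagId("DepthOnly"), sorting); // pass tag: example only
    var filtering = new FilteringSettings(RenderQueueRange.opaque);
    renderContext.DrawRenderers(cull, ref drawing, ref filtering);

    renderContext.Submit();
    continue; // skip the rest of HDRP for this camera
}
```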