HDRP high GPU memory issue

I’ve heard quite a lot of people mention that HDRP has a high GPU memory usage issue. One of them told me it’s caused by the dynamic resolution feature: it allocates a big render texture but doesn’t release it properly, so it stays in GPU memory forever. Are there any plans to properly fix this issue and to reduce GPU memory usage as much as possible?

HDRP uses a render graph internally in order to make the most efficient possible use of GPU memory. However, the cost can still be quite high when many features are enabled. The best way to reduce memory usage in this case is to either lower the resolution or disable unused features.
Regarding the dynamic resolution issue you are describing: internally, HDRP keeps render textures at the highest size ever rendered and then renders into a partial viewport for lower-resolution renders (another off-screen camera or a dynamically downscaled render, for example). The reason is to avoid reallocating every render texture every frame, which would have a very high cost. The result is that we have to pay the memory cost of the highest resolution you need to render.
One case where this behavior is not desirable is when you only very rarely render at a much higher resolution (for an upsampled capture, for example), in which case the extra GPU memory is wasted. For this we have the ResetRTHandleReferenceSize API, which resets the internal render textures to a specific maximum size.
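For example, something along these lines (a minimal sketch; the helper class and method names are just placeholders):

    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.HighDefinition;

    public static class RTHandleMemoryHelper
    {
        // Call after a one-off high-resolution render (e.g. an upsampled capture)
        // to shrink HDRP's internal render textures back to the screen size.
        public static void ShrinkToScreenSize()
        {
            var hdrp = RenderPipelineManager.currentPipeline as HDRenderPipeline;
            hdrp?.ResetRTHandleReferenceSize(Screen.width, Screen.height);
        }
    }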

@Julien_Unity I’m reviving this thread to avoid creating a similar one and losing the context of the previous discussion.

I’m reading the above and I’m still a little unclear on how the HDRP RT buffers are allocated when upscaling is used.

For example… let’s say I’m using DLSS with a 0.5 scaling factor, and my screen resolution is 4K. I notice that a huge number of HDRP RT buffers are allocated at 4K, consuming massive memory (2.5GB in total). But it’s also clear from the frame debugger that most passes render at 1/4 of the buffer size (into the bottom-left corner) - my upscale injection point is “After Post Process”. So conceivably almost no buffers would need to be allocated at 4K, except for the final frame buffer and a few others.
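(For reference, here is roughly how I’m comparing the two sizes. RTHandleSizeLogger is just a throwaway script of mine, and I’m assuming RTHandles.maxWidth/maxHeight reflect the internal reference size:)

    using UnityEngine;
    using UnityEngine.Rendering;

    public class RTHandleSizeLogger : MonoBehaviour
    {
        void Update()
        {
            // Size HDRP actually renders at after dynamic resolution / DLSS scaling.
            Vector2Int scaled = DynamicResolutionHandler.instance.GetScaledSize(
                new Vector2Int(Screen.width, Screen.height));

            // Reference size the internal RTHandles are allocated at.
            Debug.Log($"RTHandle reference size: {RTHandles.maxWidth}x{RTHandles.maxHeight}, " +
                      $"scaled render size: {scaled.x}x{scaled.y}");
        }
    }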

So is this what ResetRTHandleReferenceSize() is supposed to deal with? Can I call it, or something else, to tell HDRP: “Hey, I’m now using DLSS with a 0.5 upscaling factor, let’s get rid of this huge memory waste”?

If we can save basically 1.5GB of currently purely wasted GPU memory, that would be a game changer.

@mgeorgedeveloper1 This is unfortunately not how the system works.
It might be technically doable to have some lower-resolution render textures for everything that happens before the upsampling (depth buffer, gbuffers, …), but HDRP actually shares a lot of RTs between passes at various stages of the pipeline, so it would be rather tricky to split them cleanly into separate groups. Maybe once render graph is better integrated this will become an option.
For now (HDRP 14 and even 17), the final output resolution determines the size of all internally used render textures.

Nevertheless, ResetRTHandleReferenceSize is a very useful function. Think of the user changing the screen resolution or window size.
But as it is not straightforward, I will leave some findings/thoughts here (with a combined sketch after the list):

  • Screen.SetResolution does not change the resolution or window size immediately. I assume Unity waits for the next v-sync event, but I don’t know for sure.
  • So calling ResetRTHandleReferenceSize() right afterwards in code will most likely fail, as there are safety checks for whether the resolution has really changed, etc.
  • Instead of calling it right afterwards, you can delay it using Invoke("MyResetFunction", 1.0f/30.0f);. Using 1.0f/30.0f gives Unity plenty of time to actually change the resolution (one frame at 30Hz).
  • The next hurdle on my way to making it actually work was getting a reference to the render pipeline… so I’ll just leave the solution here: HDRenderPipeline refToPipeline = (HDRenderPipeline)RenderPipelineManager.currentPipeline;
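
Putting these together, here is a minimal sketch of how I wired it up (class and method names are placeholders, and the 1/30 s delay is the same rough guess as above):

    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.HighDefinition;

    public class ResolutionSwitcher : MonoBehaviour
    {
        // Placeholder entry point: call this instead of Screen.SetResolution directly.
        public void ApplyResolution(int width, int height, bool fullscreen)
        {
            // Screen.SetResolution only takes effect on a later frame...
            Screen.SetResolution(width, height, fullscreen);
            // ...so delay the RTHandle reset by roughly one frame at 30 Hz.
            Invoke(nameof(ResetRTHandleSize), 1.0f / 30.0f);
        }

        void ResetRTHandleSize()
        {
            // Grab the active HDRP instance and shrink its internal render
            // textures to the new reference size.
            var hdrp = RenderPipelineManager.currentPipeline as HDRenderPipeline;
            hdrp?.ResetRTHandleReferenceSize(Screen.width, Screen.height);
        }
    }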