When rendering ambient occlusion (AO), Unity first renders an extra depth-only pass over all geometry to produce a depth texture. It does this because, on modern tiled architectures, the exact set of buffers isn’t guaranteed (nor even that the depth buffer exists at all in any subsequently addressable memory).
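For context, in BIRP that extra pass is triggered as soon as something requests a depth texture from the camera; the post-processing stack’s AO effect effectively does the equivalent of this internally when enabled (a minimal illustration, not the stack’s actual code):

```csharp
using UnityEngine;

// In BIRP, setting this flag is what makes Unity schedule the extra
// depth-only pass over all opaque geometry (rendered via each shader's
// ShadowCaster pass) into _CameraDepthTexture before the main pass.
[RequireComponent(typeof(Camera))]
public class DepthTextureRequest : MonoBehaviour
{
    void OnEnable()
    {
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
    }
}
```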
Assuming we’re targeting a broad range of modern high-end devices (high-end Vulkan Android, high-end Metal iOS, PC, Mac, and WebGPU): is there a more efficient way to “get hold of a depth buffer” for AO than rendering all the geometry twice, given that our geometry is very dense and accounts for almost all of our frame time? For example, could we use MRTs, bind an F32 depth texture as a second render target, write our own depth buffer as part of the single main pass, and then modify the AO code to consume that?
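To make the proposal concrete, here is a rough sketch of the sort of thing I mean, assuming our opaque shaders were modified to also output linear depth to SV_Target1; the class and field names are illustrative, not an existing API, and I haven’t verified the AO shader would accept the substituted texture unmodified:

```csharp
using UnityEngine;

// Sketch of the single-pass idea: render the camera into two color targets
// (scene color + our own R32F depth) so no second geometry pass is needed,
// then point the AO effect at the depth target. Final blit to screen omitted.
[RequireComponent(typeof(Camera))]
public class SinglePassDepthMRT : MonoBehaviour
{
    RenderTexture m_ColorRT, m_DepthRT;

    void OnEnable()
    {
        var cam = GetComponent<Camera>();
        m_ColorRT = new RenderTexture(cam.pixelWidth, cam.pixelHeight, 24,
                                      RenderTextureFormat.ARGBHalf);
        m_DepthRT = new RenderTexture(cam.pixelWidth, cam.pixelHeight, 0,
                                      RenderTextureFormat.RFloat);
        m_ColorRT.Create();
        m_DepthRT.Create();

        // MRT: SV_Target0 = scene color, SV_Target1 = our F32 depth.
        // Shaders that don't write SV_Target1 leave garbage in m_DepthRT.
        cam.SetTargetBuffers(
            new[] { m_ColorRT.colorBuffer, m_DepthRT.colorBuffer },
            m_ColorRT.depthBuffer);
    }

    void OnPreRender()
    {
        // Hand the single-pass depth to anything sampling _CameraDepthTexture
        // (whether the AO effect accepts this unmodified would need checking).
        Shader.SetGlobalTexture("_CameraDepthTexture", m_DepthRT);
    }
}
```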
We’re currently using BIRP and the Post Processing Stack for this. Would moving to URP actually eliminate this issue, i.e. make things nearly 2x faster, the way they are when we skip AO on super-dense geometry? Or is it more that Unity really wants us to move, we’ll need to do it sometime before Unity 7, and it would hopefully be a bit faster? I completely believe there are many good things about URP; I just want to understand whether this very specific issue would change.
“No, that’s not possible” is a perfectly acceptable answer; I haven’t really worked at the “what exactly is the GPU doing here” level since PS3/X360.