Rendering the depth buffer without an additional DepthOnly geometry pass for Camera.depthTextureMode.Depth

When rendering AO, Unity renders an extra pass of all geometry in order to render a depth texture first. This is because the exact set of buffers (and even the existence of the depth buffer at all in any sort of subsequently addressable memory) isn’t guaranteed on modern tiled architectures.

Assuming we’re targeting a broad range of modern high-end devices (high-end Vulkan Android, high-end Metal iOS, PC, Mac and WebGPU), is there any more efficient way to “get hold of a depth buffer” for AO than rendering all the geometry twice, given that our geometry is very dense and accounts for almost all of our frame time? E.g. could we use MRTs, binding an F32 depth texture as a second render target, render our own depth buffer as part of the single main pass, and then modify the AO code to read that?
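To make the MRT idea concrete, here’s roughly what I’m picturing on the C# side. This is a rough, untested sketch: MrtDepthCapture is a made-up name, it assumes our opaque shaders are modified to also write linear eye depth to SV_Target1, and it assumes the AO shader is patched to sample the resulting texture instead of Unity’s own.

```csharp
// Rough sketch only (BIRP). Assumes every opaque shader in the scene also
// writes linear eye depth to SV_Target1 in its forward pass, and that the
// AO shader is patched to read the texture we set below.
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class MrtDepthCapture : MonoBehaviour   // hypothetical name
{
    RenderTexture colorRT;
    RenderTexture depthRT;   // our own F32 "depth", written as a colour target
    Camera cam;

    void OnEnable()
    {
        cam = GetComponent<Camera>();
        colorRT = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.ARGBHalf);
        depthRT = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.RFloat);
    }

    void OnPreRender()
    {
        // Bind colour + F32 depth as MRTs sharing one hardware depth buffer,
        // so the main geometry pass produces both in a single draw of the scene.
        cam.SetTargetBuffers(
            new RenderBuffer[] { colorRT.colorBuffer, depthRT.colorBuffer },
            colorRT.depthBuffer);
    }

    void OnPostRender()
    {
        // Hand the depth we wrote ourselves to whatever samples _CameraDepthTexture
        // (the stock AO effect would still need its shader pointed at this).
        Shader.SetGlobalTexture("_CameraDepthTexture", depthRT);
        Graphics.Blit(colorRT, (RenderTexture)null);   // composite colour to screen
    }

    void OnDisable()
    {
        colorRT.Release();
        depthRT.Release();
    }
}
```

The shader-side change would just be adding a second fragment output (e.g. an extra `out float : SV_Target1`) to the opaque shaders; the open question is whether the AO effect can be fed a texture like this without Unity still insisting on its own depth pass.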

We’re currently using BIRP and the PostProc stack for this. Would moving to URP eliminate this issue? As in, actually make things almost 2x faster, the way they get when we don’t use AO on super-dense geometry, as opposed to just “Unity really wants us to do it, we’ll need to do it sometime before Unity 7 anyway, and it will hopefully be a bit faster”. I totally believe there are many good things about URP, but I want to understand whether this very specific issue would change.

Possible answers include “no, that’s not possible” - I haven’t really worked at the “what exactly is the GPU doing here” level since PS3/X360.

I pose one question to you: how is the depth buffer supposed to be made if the geometry in it is never sent to be drawn? Sadly, depth buffers don’t just appear magically, though I wish they did. To create the depth texture, Unity MUST render the whole scene with its own shader (the depth/depth-normals pass) to output depth information. Your idea about storing buffers may have some validity in a lightmap-style approach, but the way you describe the size of your geo makes me think the topology might not allow that, at least at a reasonable resolution. At the end of the day, the only solution I can imagine for removing the need to draw the depth buffer is one in the object’s shader, not a screen-space one.

Though I am assuming your camera and geometry move. If your scene isn’t changing relative to the camera, you COULD cache the depth buffer by rendering your own.
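Something along these lines is what I mean by caching it. Rough sketch only: the names are made up, and it assumes a hand-written depth-only replacement shader that just outputs per-fragment depth, similar to what Unity’s internal depth pass does.

```csharp
// Rough sketch: render the scene's depth once and reuse it
// (only valid for a static camera and static geometry).
using UnityEngine;

public class CachedSceneDepth : MonoBehaviour   // hypothetical name
{
    public Shader depthOnlyShader;   // assumed hand-written depth-only replacement shader
    public Camera sceneCamera;
    RenderTexture cachedDepth;

    void Start()
    {
        cachedDepth = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.RFloat);

        // Draw the static scene once with the depth-only replacement shader.
        var previousTarget = sceneCamera.targetTexture;
        sceneCamera.targetTexture = cachedDepth;
        sceneCamera.RenderWithShader(depthOnlyShader, "RenderType");
        sceneCamera.targetTexture = previousTarget;
    }

    void Update()
    {
        // Reuse the cached result every frame instead of re-drawing the scene for depth.
        Shader.SetGlobalTexture("_CameraDepthTexture", cachedDepth);
    }
}
```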

To be clear, the goal is to avoid drawing twice, not to avoid drawing at all. Traditionally, depth buffers do just appear when rendering polygons with depth write enabled; they’re a side effect of rendering with the depth-buffer algorithm. It’s just that on many architectures they now only exist in tile memory unless you resolve them.
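Which is why I was hoping that rendering into an explicitly addressable depth attachment would be enough to “resolve” it, rather than needing a second geometry pass. A rough sketch of what I mean, untested: whether a depth-format texture can be sampled directly afterwards is platform-dependent, the class name is made up, and the AO shader would still need to be pointed at it.

```csharp
// Rough sketch: render into a depth-format RenderTexture so the hardware depth
// buffer ends up in addressable memory, then sample it after the main pass.
// Whether this works without an extra resolve is platform/driver dependent.
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class AddressableDepthTarget : MonoBehaviour   // hypothetical name
{
    RenderTexture colorRT;
    RenderTexture depthRT;
    Camera cam;

    void OnEnable()
    {
        cam = GetComponent<Camera>();
        colorRT = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.ARGBHalf);
        depthRT = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.Depth);
    }

    void OnPreRender()
    {
        // Colour goes to colorRT; the real hardware depth buffer goes to depthRT.
        cam.SetTargetBuffers(colorRT.colorBuffer, depthRT.depthBuffer);
    }

    void OnPostRender()
    {
        // On platforms that allow sampling a depth-format texture, this is the
        // resolved depth buffer, with no second geometry pass involved.
        Shader.SetGlobalTexture("_CameraDepthTexture", depthRT);
        Graphics.Blit(colorRT, (RenderTexture)null);
    }

    void OnDisable()
    {
        colorRT.Release();
        depthRT.Release();
    }
}
```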