Hi,
Is there any way to reuse the scene depth to mask a render texture in a custom pass? I use a layer mask to render some geometry to a texture, and I want the scene geometry that is in front of it to mask my texture.
I tried comparing the scene depth and the mesh depth like so, but with no luck
// Depth from the camera depth buffer (the scene)
float depth = LoadCameraDepth(varyings.positionCS.xy);
// Depth from the custom pass depth buffer (the masked geometry)
float d = LoadCustomDepth(posInput.positionSS);
float alphaFactor = (d < depth) ? 1 : 0;
My custom pass volume settings:
Thanks
Hello,
By any chance, would you be able to set the target depth buffer to the camera depth buffer when rendering your objects (and override the depth state so you don't write to the camera depth buffer)? With this, the objects you render in your mask would be able to perform depth-testing against the scene (but not against each other).
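As a rough sketch of what that depth state override could look like (the variable name is illustrative; you would pass this as the stateBlock of your renderer list):

// Depth-test against the bound depth buffer without writing to it;
// writeEnabled = false is what keeps the camera depth buffer untouched.
var maskDepthState = new RenderStateBlock(RenderStateMask.Depth)
{
    depthState = new DepthState(false, CompareFunction.LessEqual) // writeEnabled: false
};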
Otherwise, I don't see anything wrong with the code you posted (although I'm wondering why you're using two different coordinate types to load the custom and camera depth buffers?). I suggest debugging the depth you're sampling; you can use our conversion function LinearEyeDepth to convert the raw depth value to a depth in view space, like here: HDRP-Custom-Passes/Assets/CustomPasses/TIPS/Resources/TIPS.shader at master · alelievr/HDRP-Custom-Passes · GitHub. It will be easier to visualize in this format.
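For example, a minimal debug sketch along those lines (the 100-unit divisor is an arbitrary range chosen just to bring the values into something displayable):

// Convert the raw, non-linear depth buffer value into view-space depth
float rawDepth = LoadCameraDepth(varyings.positionCS.xy);
float eyeDepth = LinearEyeDepth(rawDepth, _ZBufferParams);
// Output it as grayscale, divided by an arbitrary max distance
return float4(eyeDepth.xxx / 100.0, 1);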
I actually used the example you mentioned, and they use these different coordinates too; that's why mine looks like that.
So, using the LinearEyeDepth trick, I was able to see that the depth texture from the scene looks fine, but the depth from the mesh renders as fully white. So I guess that's why the depth testing is not working.
I thought my camera depth target was already set to the camera, no?
About overriding the depth state, I'm not sure if I'm doing this correctly (probably not), but I create my depth buffer like so:
_rtDepth = RTHandles.Alloc(
    Vector2.one, TextureXR.slices, dimension: TextureXR.dimension,
    colorFormat: GraphicsFormat.R16_UInt, useDynamicScale: true, isShadowMap: true,
    name: "Depth Mask", depthBufferBits: DepthBits.Depth16
);
Then I render the objects like this:
var result = new RendererListDesc(_shaderTags, cullingResult, camera.camera)
{
    rendererConfiguration = PerObjectData.None | PerObjectData.LightProbe | PerObjectData.LightProbeProxyVolume | PerObjectData.Lightmaps,
    renderQueueRange = RenderQueueRange.all,
    sortingCriteria = SortingCriteria.BackToFront,
    excludeObjectMotionVectors = false,
    layerMask = maskLayer,
    stateBlock = new RenderStateBlock(RenderStateMask.Depth) { depthState = new DepthState(true, CompareFunction.LessEqual) },
};
CoreUtils.SetRenderTarget(cmd, _rt, _rtDepth, ClearFlag.All);
HDUtils.DrawRendererList(renderContext, cmd, RendererList.Create(result));
What I meant is that when calling your SetRenderTarget() you pass your custom color buffer and the camera depth buffer; because the camera depth buffer already contains the scene's objects, the objects you're drawing are depth-tested against the scene. The problem is that your objects won't be rendered into your custom depth buffer, so if you need that custom depth buffer for other steps of your effect, you can't do it this way.
You can get the camera depth buffer using this function: https://docs.unity3d.com/Packages/com.unity.render-pipelines.high-definition@7.1/api/UnityEngine.Rendering.HighDefinition.CustomPass.html#UnityEngine_Rendering_HighDefinition_CustomPass_GetCameraBuffers_RTHandle__RTHandle__
Thank you so much for your help, but sorry, I still can't get it to work.
I tried passing the camera depth like you said with the GetCameraBuffers() method, but I still get the same result, where my custom color buffer renders as if there was nothing in front.
RTHandle source;
RTHandle depth;
// Retrieve the camera color and depth buffers
GetCameraBuffers(out source, out depth);
// Render the objects in the layer mask into a mask buffer
var result = new RendererListDesc(_shaderTags, cullingResult, camera.camera)
{
    rendererConfiguration = PerObjectData.None | PerObjectData.LightProbe | PerObjectData.LightProbeProxyVolume | PerObjectData.Lightmaps,
    renderQueueRange = RenderQueueRange.all,
    sortingCriteria = SortingCriteria.BackToFront,
    excludeObjectMotionVectors = false,
    layerMask = maskLayer,
    stateBlock = new RenderStateBlock(RenderStateMask.Depth) { depthState = new DepthState(true, CompareFunction.LessEqual) },
};
CoreUtils.SetRenderTarget(cmd, _rt, depth, ClearFlag.All);
HDUtils.DrawRendererList(renderContext, cmd, RendererList.Create(result));
// Composite the mask over the camera color buffer in a fullscreen pass
var compositingProperties = new MaterialPropertyBlock();
compositingProperties.SetTexture("_Mask", _rt);
HDUtils.DrawFullScreen(cmd, fullScreenMat, source, compositingProperties, shaderPassId: 0);
Is that what you meant?
When calling CoreUtils.SetRenderTarget, you're clearing all the targets you bind (so the camera depth buffer too), which means there is no more depth information left in the camera depth buffer. You can set the clear flags to ClearFlag.Color so it only clears the color.
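In other words, the binding line becomes:

// Clear only the color target, keeping the camera depth buffer intact
CoreUtils.SetRenderTarget(cmd, _rt, depth, ClearFlag.Color);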
Ahhh, that was it, it works now! Thanks a lot!
Hey @antoinel_unity,
I'm now rendering objects with a command buffer in the custom pass because I don't want them to be rendered by the camera as-is. It was working fine in Unity 2019.3.0f5, but now in f6 it broke.
The UVs for the depth and the UVs for the color no longer seem to match. In the fullscreen pass I need to multiply the UV by _RTHandleScale.xy for the depth to work, but not for the color. Since they are the same render target, I cannot set one UV for the color and another one for the depth, right?
EDIT: I made it work using CoreUtils.SetRenderTarget instead of using the SetRenderTarget straight from the command buffer.
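If it helps anyone else, the change amounts to something like this (a sketch; the buffer variables are the ones from my earlier snippets):

// Before (set straight on the command buffer; viewport not set, so UV scaling broke):
// cmd.SetRenderTarget(_rt, depth);
// After (CoreUtils also sets the viewport):
CoreUtils.SetRenderTarget(cmd, _rt, depth, ClearFlag.Color);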
Hey,
When sampling RTHandles in shaders, you must always multiply your raw UVs by _RTHandleScale.xy: when you have multiple cameras (scene view + game view at the same time, for example), this keeps you from sampling outside of the current camera's viewport. Note that you can also use a Load operation with the screen coordinate; in that case you don't need to scale anything.
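A small sketch of both options (assuming a _Mask texture bound as an RTHandle, with uv and positionSS coming from the fullscreen pass as in the snippets above):

TEXTURE2D_X(_Mask); // a camera-scaled RTHandle texture

// Option 1: sample with UVs scaled by _RTHandleScale
float2 scaledUV = uv * _RTHandleScale.xy;
float4 maskSample = SAMPLE_TEXTURE2D_X_LOD(_Mask, s_linear_clamp_sampler, scaledUV, 0);

// Option 2: load with integer screen coordinates, no scaling needed
float4 maskLoad = LOAD_TEXTURE2D_X(_Mask, positionSS.xy);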
And yes, CoreUtils.SetRenderTarget also sets the viewport, which avoids writing to the full render target and the scaling issues that can cause.