Hey, the title pretty much says it all. I'm trying to use the camera depth texture inside another shader, which is why I'm trying to Blit the camera depth into a RenderTexture via OnRenderImage.
The only problem is that OnRenderImage is never called in the HD Render Pipeline project.
I created a new default 3D project (no HD render pipeline) and there the OnRenderImage method is called as always.
Where can I hook up the blit in the HD Render Pipeline project?
Alright, after a lot of searching I found out that I can use a custom Post-processing Stack effect to blit the depth into a RenderTexture like this:
C# part:
using System;
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

[Serializable]
[PostProcess(typeof(DepthExporterRenderer), PostProcessEvent.BeforeStack, "Custom/DepthExport")]
public sealed class DepthExporter : PostProcessEffectSettings
{
    //public RenderTextureParameter depthTexture;
}

public sealed class DepthExporterRenderer : PostProcessEffectRenderer<DepthExporter>
{
    public override DepthTextureMode GetCameraFlags()
    {
        // Ask the camera to render a depth texture so _CameraDepthTexture is available
        return DepthTextureMode.Depth;
    }

    public override void Render(PostProcessRenderContext context)
    {
        var sheet = context.propertySheets.Get(Shader.Find("Hidden/Custom/DepthShader"));
        //sheet.properties.SetFloat("_Blend", settings.blend);
        // Blit the source through the depth-visualizing shader into the destination
        context.command.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);
    }
}

[Serializable]
public sealed class RenderTextureParameter : ParameterOverride<RenderTexture> {}
and the following shader:
Shader "Hidden/Custom/DepthShader"
{
HLSLINCLUDE
#include "PostProcessing/Shaders/StdLib.hlsl"
TEXTURE2D_SAMPLER2D(_CameraDepthTexture, sampler_CameraDepthTexture);
float4 Frag(VaryingsDefault i) : SV_Target
{
float depth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, i.texcoordStereo));
return float4(depth, depth, depth, 0);
}
ENDHLSL
SubShader
{
Cull Off ZWrite Off ZTest Always
Pass
{
HLSLPROGRAM
#pragma vertex VertDefault
#pragma fragment Frag
ENDHLSL
}
}
}
Though I do have another problem now: the displacement I'm using is based on this depth texture, and that seems to change every render although nothing is moving.
I'm going to open another thread for that case.
These are my guesses. Post a gif if it’s not too much trouble and I can tell ya if that’s the case or not.
Typical Depth Buffer “Jitter” Scenario:
Typically it will "jitter" because of when (i.e. at what point during the render process) you are accessing the current camera's depth texture. The depth buffer gets filled as opaque geometry is drawn to the screen and isn't "filled" until that has been done. So if your shader is used as part of the opaque geometry queue, the depth buffer for that frame has not been filled yet and it is probably using the depth buffer from the previous frame; your depth buffer is basically lagging one frame behind. To fix this, you generally set the queue of the shader to "Transparent", since transparent geometry gets rendered after opaque geometry (at which point the depth buffer will be filled); see the snippet below.
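In ShaderLab terms, that queue change is just the Tags block; a minimal sketch (the rest of the shader is omitted):

SubShader
{
    // Transparent queue = drawn after opaques, so the depth buffer is already filled
    Tags { "Queue"="Transparent" "RenderType"="Transparent" }
    // ... rest of the shader ...
}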
Your situation:
I believe the situation is similar for you. You copy the depth buffer via post-processing to another buffer and use that in your shader, but post-processing is done after opaque/transparent geometry is drawn, which means your shader has already been used for rendering and you are copying what is now the previous frame's depth buffer into your intermediate buffer. By the time the next frame comes around and your shader is used in the render pass again, it is reading that old depth data from the intermediate buffer.
Side Note:
It might also "jitter" because there is also the SceneView camera, which has its own depth pass, so you might actually be using the SceneView camera's depth buffer at times in addition to your main camera's. This should be rectified when entering Play Mode without the SceneView active.
EDIT:
The jitter issue was a little different from what I expected due to vertex displacement. You can see the "jitter" here: Does the depth texture slightly change in each render, although the whole scene stays still?
@wyattt can you confirm whether the above is the correct current way to do depth for the new pipelines, e.g. if I wanted to do a standard color-by-depth shader like the depth texture example in the documentation? I'm really struggling to recreate an underwater fog/coloration shader.
This is how you should currently do it (minus the post-processing and intermediate buffer part though).
Off the top of my head, here are the steps you’d have to take:
SRP and Depth in custom Shader (similar to non-SRP workflow):
- Enable depth on your camera or in the pipeline settings asset if there is one (for Lightweight specifically)
- Set the “Queue” Tag to “Transparent”
- Add sampler2D _CameraDepthTexture to your shader
- Sample the camera’s depth buffer using the LinearEyeDepth function that Desoxi used above
- Use the linearized depth value to do your depth-based coloring (see the sketch below this list)
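Putting those steps together, here's a rough, untested sketch of what such a shader could look like in plain CG/HLSL (the shader name, tint color, and the 20-unit falloff distance are placeholders I made up):

Shader "Custom/DepthTintSketch"
{
    SubShader
    {
        // Transparent queue so the opaque depth buffer is already filled when this draws
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }

        Pass
        {
            ZWrite Off
            Blend SrcAlpha OneMinusSrcAlpha

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _CameraDepthTexture;

            struct v2f
            {
                float4 pos       : SV_POSITION;
                float4 screenPos : TEXCOORD0;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                // Screen-space position used to look up the depth texture
                o.screenPos = ComputeScreenPos(o.pos);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Sample the camera depth texture at this fragment's screen position
                float rawDepth   = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos));
                float sceneDepth = LinearEyeDepth(rawDepth);

                // Simple depth-based coloring: fade a tint in over an arbitrary 20-unit range
                float fade = saturate(sceneDepth / 20.0);
                return fixed4(0.0, 0.2, 0.6, fade);
            }
            ENDCG
        }
    }
}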
SRP and Depth in a Shader Graph:
- In your Shader Graph, add a Texture2D property via the Blackboard
- Set the reference to “_CameraDepthTexture”
- Set “Exposed” toggle to false
- Drag the property into your Shader Graph workspace
- Add a Sample Texture 2D Node and plug the _CameraDepthTexture node into the Texture2D input port
- You now need to sample the depth texture using screen space UVs, so you’ll need to add a ScreenPosition node and plug that into the UV port of the Sample Texture2D Node. This will give you the screen position of the current mesh fragment
- The output of the Sample Texture 2D node will now give you the stored non-linear depth value for the screen pixel where the current mesh fragment is going to be drawn. You'll have to linearize this yourself until our Shader Graph Depth node makes it into a package release; you can look at the Unity shader source to get the code for that function (see the linearization snippet after this list)
- Do your depth-based coloring with the linearized depth value
- Create a Material out of your Shader Graph
- Set the Render Queue to Transparent via the Material Inspector
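For the linearization step mentioned above, these are the helpers as they appear in UnityCG.cginc; you can rebuild them with math nodes or a custom node, assuming _ZBufferParams (Unity's built-in projection vector) is available to you, otherwise pass the near/far plane values in yourself:

// _ZBufferParams: x = 1 - far/near, y = far/near, z = x/far, w = y/far

// 0..1 linear depth (0 at the camera, 1 at the far plane)
float Linear01Depth(float rawDepth)
{
    return 1.0 / (_ZBufferParams.x * rawDepth + _ZBufferParams.y);
}

// Linear depth in eye-space (world) units
float LinearEyeDepth(float rawDepth)
{
    return 1.0 / (_ZBufferParams.z * rawDepth + _ZBufferParams.w);
}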
For both cases, you might also need to compute the depth for your mesh fragment and compare that to the depth value stored in the depth buffer to get the color comparisons you want.
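That comparison is basically the soft-particle / intersection trick. Roughly, as a fragment-shader sketch (assuming the screenPos interpolant from the earlier example; _FadeDistance is a made-up property):

// Scene depth behind this fragment vs. the fragment's own eye depth
float sceneDepth    = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
float fragmentDepth = i.screenPos.w; // eye depth of the surface being drawn (from ComputeScreenPos)
// 0 where the surface touches the scene geometry, 1 once it is _FadeDistance units in front of it
float separation    = saturate((sceneDepth - fragmentDepth) / _FadeDistance);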
Brilliant thanks very much for detailing the steps clearly for me and anyone else who reads this!
HDRP now supports depth RenderTextures by default if you want to access them outside of shaders; look here: Feedback Wanted: Scriptable Render Pipelines page-20#post-3707713
Hey, I'm using LWRP 3.0, upgrading the project from a Shader Forge one. In Shader Forge I had "write to depth buffer" selected and the differently alpha'd objects blended as expected. When I moved to Shader Graph I got this sorting issue. Can someone please offer a solution? I can't really modify the mesh to remove inside faces at this stage in the project. (FYI it's a hologram, so I kind of need it all to be transparent to add fade-in and fade-out effects.)