I'm porting a former ImageEffect to HDRP as a CustomPostProcessVolumeComponent and I'm running into various issues.
When I follow these instructions I am unable to add the post effect to the BeforePP list, only to the AfterPP one. That seems to be fine for my needs, but having raw, non-tonemapped values could be beneficial in some cases.
Is there a way to change the order of post-processing components? Each one seems to have a predefined priority, and the component's position in the inspector has no impact.
“The injectionPoint override allows you to specify where in the pipeline HDRP executes the effect. There are currently three injection points:”.
This line in the example lets you set the execution point:
public override CustomPostProcessInjectionPoint injectionPoint =>
    CustomPostProcessInjectionPoint.AfterPostProcess;
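For reference, here is roughly where that override sits in a component. This is only a minimal sketch along the lines of the documented grayscale-style template; the class name, menu path, shader name and the _Intensity property are placeholders:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

[System.Serializable, VolumeComponentMenu("Post-processing/Custom/MyEffect")]
public sealed class MyEffect : CustomPostProcessVolumeComponent, IPostProcessComponent
{
    public ClampedFloatParameter intensity = new ClampedFloatParameter(0f, 0f, 1f);

    Material m_Material;

    // AfterPostProcess receives tonemapped values; BeforePostProcess and
    // AfterOpaqueAndSky are the other currently available injection points.
    public override CustomPostProcessInjectionPoint injectionPoint =>
        CustomPostProcessInjectionPoint.AfterPostProcess;

    public bool IsActive() => m_Material != null && intensity.value > 0f;

    public override void Setup()
    {
        // "Hidden/Shader/MyEffect" is a placeholder shader name.
        m_Material = new Material(Shader.Find("Hidden/Shader/MyEffect"));
    }

    public override void Render(CommandBuffer cmd, HDCamera camera, RTHandle source, RTHandle destination)
    {
        if (m_Material == null)
            return;
        m_Material.SetFloat("_Intensity", intensity.value);
        m_Material.SetTexture("_InputTexture", source);
        HDUtils.DrawFullScreen(cmd, m_Material, destination);
    }

    public override void Cleanup() => CoreUtils.Destroy(m_Material);
}

As you mention, the effect still has to be added to the matching list in the HDRP settings before it executes.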
I remember it worked fine for me sometime in August last year, but things might have changed since then.
Check Keijiro's HDRP post-processing projects for blit examples.
You need to reserve an RTHandle for each intermediate render pass, e.g. one for your horizontal pass and one for your vertical pass, and then render the final result to the destination, or whatever you call the incoming image (RTHandle) in your Render() override. Use HDUtils.DrawFullScreen to render each shader pass.
At least that's how I've done it; with no proper documentation available it's mostly guesswork and trial and error. I'm not sure it's the right way, but it worked for me; there's a rough sketch below.
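Something along these lines (untested here; the pass indices, property names and field names are placeholders, and I'm using a MaterialPropertyBlock so each DrawFullScreen call records its own copy of the textures and the passes don't overwrite each other's inputs):

// Inside the volume component; GraphicsFormat needs using UnityEngine.Experimental.Rendering;
RTHandle m_TempRT1, m_TempRT2;
MaterialPropertyBlock m_Prop = new MaterialPropertyBlock();

public override void Render(CommandBuffer cmd, HDCamera camera, RTHandle source, RTHandle destination)
{
    if (m_Material == null)
        return;

    // Scale-based handles track the camera size, so a lazy one-time allocation is enough.
    if (m_TempRT1 == null)
        m_TempRT1 = RTHandles.Alloc(Vector2.one, colorFormat: GraphicsFormat.R16G16B16A16_SFloat, name: "Blur H");
    if (m_TempRT2 == null)
        m_TempRT2 = RTHandles.Alloc(Vector2.one, colorFormat: GraphicsFormat.R16G16B16A16_SFloat, name: "Blur V");

    // Pass 0: horizontal blur of the camera color into the first intermediate target.
    m_Prop.SetTexture("_InputTexture", source);
    HDUtils.DrawFullScreen(cmd, m_Material, m_TempRT1, m_Prop, 0);

    // Pass 1: vertical blur into the second intermediate target.
    m_Prop.SetTexture("_InputTexture", m_TempRT1);
    HDUtils.DrawFullScreen(cmd, m_Material, m_TempRT2, m_Prop, 1);

    // Pass 2: composite the blurred result with the original image into the destination.
    m_Prop.SetTexture("_InputTexture", source);
    m_Prop.SetTexture("_MaskTexture", m_TempRT2);
    HDUtils.DrawFullScreen(cmd, m_Material, destination, m_Prop, 2);
}

public override void Cleanup()
{
    // Release the intermediate targets along with the material.
    RTHandles.Release(m_TempRT1);
    RTHandles.Release(m_TempRT2);
    CoreUtils.Destroy(m_Material);
}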
But it just doesn't work. I made a few checks to confirm the materials and shaders are fine, and they are.
For example, when I change the assignment to var lastRT = source.rt; then "_MaskTexture" is set and I can use it in the shader, but the way I do it now the texture stays black.
If I call HDUtils.DrawFullScreen(cmd, m_Material, destination, _prop, 1); I also get what I expect.
The way I create the RTHandles is taken from the example you posted, the Streak effect.
public RTHandle GetNewRTHandle(HDCamera camera)
{
    var width = camera.actualWidth;
    var height = camera.actualHeight;
    const GraphicsFormat RTFormat = GraphicsFormat.R16G16B16A16_SFloat;
    var rt = RTHandles.Alloc(scaleFactor: Vector2.one, colorFormat: RTFormat); // RTHandles.Alloc(width, height, colorFormat: RTFormat);
    rtHandles.Add(rt);
    return rt;
}
public override void Render(CommandBuffer cmd, HDCamera camera, RTHandle source, RTHandle destination)
{
    if (m_Material == null)
        return;

    m_Material.SetFloat("_Pixel", Pixel.value);
    m_Material.SetFloat("_Amount", Amount.value);
    m_Material.SetFloat("_Amount2", SecondAmount.value);
    m_Material.SetFloat("_Threshold", Threshold.value);
    m_Material.SetTexture("_InputTexture", source);

    if (rth1 == null)
        rth1 = GetNewRTHandle(camera);
    if (rth2 == null)
        rth2 = GetNewRTHandle(camera);

    // First pass into the intermediate target.
    HDUtils.DrawFullScreen(cmd, m_Material, rth1, _prop, 1);

    var lastRT = rth1;
    //for (int i = 0; i < Iterations.value; i++)
    //{
    //    cmd.Blit(tmp1, tmp2, blurMaterial, 0);
    //    cmd.Blit(tmp2, tmp1, blurMaterial, 1);
    //}

    // Composite pass: the intermediate result should end up in _MaskTexture,
    // but it stays black here.
    m_Material.SetTexture("_MaskTexture", lastRT);
    HDUtils.DrawFullScreen(cmd, m_Material, destination);
}
Alright, this shows that posting the full source is beneficial. The issue was the difference between LOAD_TEXTURE2D_X and LOAD_TEXTURE2D: for my RTHandles I have to use LOAD_TEXTURE2D. I don't plan for VR so I don't mind, but maybe there is also a way to allocate the RTHandles in a VR-compatible way in case that ever becomes necessary.
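If that ever changes, my understanding is that LOAD_TEXTURE2D_X expects the XR-aware layout (a texture array when single-pass stereo is possible) that the camera color target is allocated with, while RTHandles.Alloc defaults to a plain Tex2D. A sketch of an allocation that should match, using the TextureXR helpers (untested in VR on my side; the name is a placeholder):

var rt = RTHandles.Alloc(
    scaleFactor: Vector2.one,
    slices: TextureXR.slices,          // 2 with single-pass instanced stereo, otherwise 1
    dimension: TextureXR.dimension,    // Tex2DArray when XR is supported, otherwise Tex2D
    colorFormat: GraphicsFormat.R16G16B16A16_SFloat,
    useDynamicScale: true,
    name: "My Intermediate RT");

With the slices and dimension coming from TextureXR, LOAD_TEXTURE2D_X should then work for the intermediate handles as well.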
Here is some insight
Good to hear you got it working. Yes, it would be good to also have the shader side of the code visible, as quite a few things have changed compared to the built-in render pipeline.