Hey! I say "again" in the title because there is already a post with the same title, with a solution that left me unsatisfied.
I am trying to make a custom post-processing effect in HDRP that pixelizes the image. I've been successful in doing this in the past by blitting the source texture into a small-resolution temporary render texture with point filtering, and then blitting that one into the destination.
Eventually I plan to process the image with my own shader via a material, but for now, even without any material, this just won't work.
Back in the v2 Post Processing Stack this was really easy:
public override void Render(PostProcessRenderContext context)
{
    // Downscale into a small point-filtered temporary, then upscale back.
    RenderTexture temp = RenderTexture.GetTemporary(smallWidth, smallHeight);
    temp.filterMode = FilterMode.Point;
    context.command.Blit(context.source, temp);
    context.command.Blit(temp, context.destination);
    // Return the temporary to the pool once the blits are recorded.
    RenderTexture.ReleaseTemporary(temp);
}
This worked OK.
Now I'm trying to do this in a CustomPostProcessVolumeComponent:
public override void Render(CommandBuffer cmd, HDCamera camera, RTHandle source, RTHandle destination)
{
    RTHandle rt = RTHandles.Alloc(
        scaleFactor: Vector2.one * 0.25f,
        filterMode: FilterMode.Point,
        wrapMode: TextureWrapMode.Clamp,
        dimension: TextureDimension.Tex2D);
    HDUtils.BlitCameraTexture(cmd, source, rt);
    HDUtils.BlitCameraTexture(cmd, rt, destination);
    rt.Release();
}
The other post with the same title established that cmd.Blit() won't work at all in this case, and it seems that HDUtils.BlitCameraTexture does the extra legwork needed here. Why? What does it do exactly?
And in the end it gives me a gray screen. Why? BlitCameraTexture()ing directly from source to destination works fine, and setting the scale factor to (1, 1) makes no difference either. Does the command buffer only accept one blit in this case? Am I using RTHandles correctly? Is there something I'm not seeing?
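Edit: one (unverified) guess on my part — since Render() only records commands into the buffer, which execute later, maybe rt.Release() is destroying the texture before the blits ever run. The HDRP custom post-process examples allocate resources in Setup() and free them in Cleanup(), so I'd expect the correct pattern to look roughly like the sketch below (smallRT is my own field name; other required members like IsActive() and the injection point are omitted for brevity, and I haven't confirmed this fixes the gray screen):

```csharp
RTHandle smallRT; // allocated once, not per frame

public override void Setup()
{
    smallRT = RTHandles.Alloc(
        scaleFactor: Vector2.one * 0.25f,
        filterMode: FilterMode.Point,
        wrapMode: TextureWrapMode.Clamp,
        dimension: TextureDimension.Tex2D);
}

public override void Render(CommandBuffer cmd, HDCamera camera, RTHandle source, RTHandle destination)
{
    // The blits are only recorded here and execute later,
    // so smallRT must stay alive past this method.
    HDUtils.BlitCameraTexture(cmd, source, smallRT);
    HDUtils.BlitCameraTexture(cmd, smallRT, destination);
}

public override void Cleanup()
{
    smallRT.Release();
}
```

If that's the actual problem, it would also explain why a direct source-to-destination blit works: no intermediate texture is released mid-flight.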