Pixelize Effect in URP

First, I’d like to say that I’m a complete noob in URP.
Following some outdated tutorials, I created a simple pixelize effect using a URP ScriptableRenderPass. I switched RenderTargetHandle for RTHandle, because I don’t like seeing warnings in my console.

using System;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

[Serializable]
public class PixelizeEffectPass : ScriptableRenderPass {

    private const string PIXEL_BUFFER_PROPERTY = "_PixelBuffer";

    private RenderTextureDescriptor descriptor;
    private RTHandle cameraColorTarget;
    private RTHandle pixelBuffer;
    private int pixelScreenHeight, pixelScreenWidth;

    private readonly CustomPostProcessRendererFeature.PixelizePassSettings settings;

    public PixelizeEffectPass(CustomPostProcessRendererFeature.PixelizePassSettings settings) {
        this.settings = settings;
        renderPassEvent = settings.renderPassEvent;
    }

    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData) {
        pixelScreenHeight = settings.pixelScreenHeight;
        pixelScreenWidth = (int)(pixelScreenHeight * renderingData.cameraData.camera.aspect + 0.5f);

        descriptor = renderingData.cameraData.cameraTargetDescriptor;

        descriptor.height = pixelScreenHeight;
        descriptor.width = pixelScreenWidth;
        descriptor.depthBufferBits = 0;

        RenderingUtils.ReAllocateIfNeeded(ref pixelBuffer, descriptor, FilterMode.Point,
            TextureWrapMode.Clamp, name: PIXEL_BUFFER_PROPERTY);
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData) {
        var cameraData = renderingData.cameraData;
        if (cameraData.camera.cameraType != CameraType.Game) {
            return;
        }

        CommandBuffer cmd = CommandBufferPool.Get();

        using (new ProfilingScope(cmd, new ProfilingSampler("Pixelize Effect Pass"))) {
            Blitter.BlitCameraTexture(cmd, cameraColorTarget, pixelBuffer);
            Blitter.BlitCameraTexture(cmd, pixelBuffer, cameraColorTarget);
        }

        context.ExecuteCommandBuffer(cmd);
        cmd.Clear();

        CommandBufferPool.Release(cmd);
    }

    public void SetTarget(RTHandle cameraColorTarget) {
        this.cameraColorTarget = cameraColorTarget;
    }

}

But there’s an issue: it only applies to the camera color target, so it can only be used before post-processing, not after it. I’m curious whether there is a way to make it apply after post-processing. Once again, I am a complete noob; I just followed some basic tutorials and then used the Unity reference to move away from the obsolete parts. Also, I didn’t manage to make it work with materials using RTHandle, so I just shrink the texture and then stretch it back with two blits.

It works flawlessly without post-processing or before it, but if I put it after, the image isn’t pixelized. And when I put it before, the post-processing gets applied on top and isn’t pixelized, unlike the objects, shadows, etc., so the whole thing looks bad (e.g. a round bloom on top of a pixelized sphere; that’s probably the easiest way to test it).

I thought I’d ask, because I might be missing something, like making this a post-processing effect or modifying some other texture.

You could bind the _CameraColorTarget texture that post-processing outputs to (note: not necessarily the same as your variable). Can you also show the RenderFeature in which you are calling ‘SetTarget’? (Which target you are using isn’t visible in the code above.)

As an aside, this way of doing low-res rendering is inefficient. You render the scene at full resolution into the camera target, blit it to a low-resolution texture, and then blit it back. It works, but there’s wastage. To get around that inefficiency in ProPixelizer’s next update, I’m using a pass to intercept and change the targets before DrawOpaques, so that the standard draw calls are redirected to the low-resolution buffer. Afterwards I blit the low-res target back into the original high-res buffer with point sampling.
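ProPixelizer’s actual interception code isn’t shown here, but a related version of the idea can be sketched with public URP 14 APIs: a custom pass that targets a low-res buffer before opaques and draws the opaque renderers into it itself via DrawRenderers, rather than redirecting URP’s own opaque pass. The class name and Setup signature below are made up for illustration; depth handling and the blit-back pass are omitted.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Hypothetical sketch, not ProPixelizer's code: re-draw opaques into a
// low-res color/depth pair instead of the full-res camera target.
public class LowResOpaquePass : ScriptableRenderPass {

    private RTHandle lowResColor, lowResDepth;

    public void Setup(RTHandle color, RTHandle depth) {
        lowResColor = color;
        lowResDepth = depth;
        // Run before URP's own opaque pass.
        renderPassEvent = RenderPassEvent.BeforeRenderingOpaques;
    }

    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData) {
        // Subsequent draws from this pass land in the low-res buffers.
        ConfigureTarget(lowResColor, lowResDepth);
        ConfigureClear(ClearFlag.All, Color.clear);
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData) {
        // Draw opaque geometry with the UniversalForward shader pass.
        var drawingSettings = CreateDrawingSettings(new ShaderTagId("UniversalForward"),
            ref renderingData, SortingCriteria.CommonOpaque);
        var filteringSettings = new FilteringSettings(RenderQueueRange.opaque);
        context.DrawRenderers(renderingData.cullResults, ref drawingSettings, ref filteringSettings);
    }

}
```

A second pass later in the frame would then blit lowResColor back into the camera target with point sampling (FilterMode.Point on the RTHandle), much like the second blit in the code above.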


Here’s the code:

using System;
using UnityEngine;
using UnityEngine.Rendering.Universal;

public class CustomPostProcessRendererFeature : ScriptableRendererFeature {

    [Serializable]
    public class PixelizePassSettings {
        public RenderPassEvent renderPassEvent = RenderPassEvent.AfterRenderingPostProcessing;
        [Min(0)] public int pixelScreenHeight = 270;
    }

    [SerializeField] private bool enablePixelizePass;
    [SerializeField] private PixelizePassSettings settings;

    private PixelizeEffectPass pixelizeEffectPass;

    public override void SetupRenderPasses(ScriptableRenderer renderer, in RenderingData renderingData) {
        // Guard against the pass being null when the feature is disabled.
        if (pixelizeEffectPass != null && renderingData.cameraData.cameraType == CameraType.Game) {
            pixelizeEffectPass.ConfigureInput(ScriptableRenderPassInput.Color);
            pixelizeEffectPass.SetTarget(renderer.cameraColorTargetHandle);
        }
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData) {
#if UNITY_EDITOR
        if (renderingData.cameraData.isSceneViewCamera) {
            return;
        }
#endif
        if (enablePixelizePass) {
            renderer.EnqueuePass(pixelizeEffectPass);
        }
    }

    public override void Create() {
        if (enablePixelizePass) {
            pixelizeEffectPass = new(settings);
        }
    }

}

I’m sorry, but I’m not fluent enough in the URP reference to implement that part. I was already sure it was inefficient, though, since I looked at the Frame Debugger a lot while trying to make it work at all. So I understand that this can probably be done much more efficiently: right now it renders three times, at full resolution, at the smaller resolution, and at full resolution again. But I don’t really know how to make it render at the smaller resolution from the beginning and then stretch it to full resolution just once.

I only glanced at the code. What is the effect supposed to look like? If it’s a low-res, pixelized version of the entire screen, you can turn down the render scale and use point filtering.

By its nature, it will apply after everything is rendered, including post-processing.
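A minimal sketch of that idea, assuming URP 14’s UniversalRenderPipelineAsset exposes both renderScale and upscalingFilter (verify these exist in your URP version); the component name and the Update-driven approach are just for illustration:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Hypothetical helper: derive renderScale from a target pixel height and
// force nearest-neighbour upscaling, instead of hand-tuning the slider.
public class PixelRenderScale : MonoBehaviour {

    [SerializeField, Min(1)] private int pixelScreenHeight = 270;

    private void Update() {
        if (GraphicsSettings.currentRenderPipeline is UniversalRenderPipelineAsset urp) {
            // URP clamps renderScale (roughly 0.1..2), so very low target
            // resolutions on high-DPI screens may not be reachable this way.
            urp.renderScale = Mathf.Clamp(pixelScreenHeight / (float)Screen.height, 0.1f, 1f);
            urp.upscalingFilter = UpscalingFilterSelection.Point; // crisp pixels
        }
    }

}
```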



Well, it doesn’t work by itself, since it’s relative to the resolution. But I made it scale based on the resolution in code, by modifying renderScale there. So it’s mostly working as intended.
Still, I haven’t seen anyone use render scale for this before. Why? Are there any downsides?

Just checked a bunch of things, and what I’ve noticed so far is that it messes with the perceived bloom intensity (the lower the scale, the more lit the scene becomes with the same bloom settings; I’m referring to the default bloom effect). Another thing is that renderScale doesn’t seem to go lower than 0.1, which makes it impossible to use for a really low-res pixelation effect (and can lead to inconsistencies between screen resolutions, e.g. 192x108 is 0.1 for Full HD, but at 4K that pixel resolution is impossible, as the lowest would be 384x216). At this point, my desired resolution of 480x270 would be impossible at 5K. It seriously doesn’t seem like a scalable, production-ready approach to me.

Are there any alternatives?

That was one of the ways I’ve done it, at least for small demos.

Alternatives:

- Render your camera to a texture → you won’t be limited to a slider scale; just set the exact values.
This worked exceptionally well for getting otherwise impossible visuals and performance on mobile years ago.

- Render your camera to a texture, but this time use a UV quantization shader. I’ve used this to create censorship-like effects. It shouldn’t mess with post-processing, since you can choose to render at full resolution.
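The first alternative can be sketched as follows. This assumes a second, UI-only setup for display: a full-screen RawImage on an overlay canvas; the component name and serialized fields are made up for the example:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: render the world camera into a small RenderTexture
// and show it through a full-screen RawImage. Point filtering keeps the
// upscale crisp (nearest neighbour), at any exact resolution you choose.
public class LowResRenderer : MonoBehaviour {

    [SerializeField] private Camera worldCamera;   // renders the scene
    [SerializeField] private RawImage output;      // stretched over the screen on an overlay canvas
    [SerializeField, Min(1)] private int pixelHeight = 270;

    private RenderTexture lowRes;

    private void OnEnable() {
        int pixelWidth = Mathf.RoundToInt(pixelHeight * (float)Screen.width / Screen.height);
        lowRes = new RenderTexture(pixelWidth, pixelHeight, 24) {
            filterMode = FilterMode.Point   // nearest-neighbour upscale
        };
        worldCamera.targetTexture = lowRes;
        output.texture = lowRes;
    }

    private void OnDisable() {
        worldCamera.targetTexture = null;
        if (lowRes != null) {
            lowRes.Release();
            Destroy(lowRes);
        }
    }

}
```

Note that with this setup, post-processing on the world camera happens at the low resolution, and anything on the overlay canvas stays full-res.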

Also, having read your post in some more detail, have a look at this. I was having issues with my own instancing renderer feature not being able to inject into the “AfterPostProcessing” render pass until I followed Unity’s source.

Can someone provide an example of a renderer feature that implements this? I would like to achieve the same effect as using renderScale with the upscale filter set to nearest neighbour, but with a specific resolution. I’m using Unity 2022.3.7f1 (URP 14.0.8).
Thanks


Thank you for the answer, but I already know that the renderer camera data in URP does not include post-processing data, so even if I did this after post-processing, it wouldn’t be included in the output.