Getting a pixel look in 3D

Hey folks!

I’m working on a 3D game where I want to give it a nice pixely look.

Here’s what it looks like so far

To achieve this, I’m currently rendering to a render texture and then filling the screen with it. This gives everything a nice, blocky, aliased look while still being affected by lighting. Perfect. I love it.

The problem, though, is that this is a blanket approach: everything in the game world is affected by it. I can see myself wanting finer control, so that some things in the world (like the 3D text objects I plan to add) can be left untouched by the effect to help legibility.

So, how can I do this more smartly? Is there a way to do this on a per-object basis while still achieving the overall look above? What would you suggest?

Thanks for your help!


IIRC: layers and camera culling, with masking.

Cam A renders everything HD.
Cam B renders “B” [with A masked out] to a render texture + overlay.

Could you step me through that in more detail, please? With the render texture approach I’m stretching it out to cover the entire screen, so I’m unsure how I’d mix in anything that wasn’t affected by it. A few people I’ve talked to have alluded to a multi-camera approach but suggested it has drawbacks around the depth sorting of objects.

If there are any tutorials or examples you can link me to that would be fantastic!
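For reference, the two-camera recipe above might look something like the sketch below. The layer name “HiRes” and the overlayCamera reference are assumptions for illustration, and the depth drawback applies: the overlay camera clears its depth buffer, so high res objects can’t depth sort against the low res scene.

```csharp
// Minimal sketch of the layer/culling approach. Assumes a layer named
// "HiRes" exists and that the low res render texture is still blitted to
// the screen (as in the OnGUI approach already in use).
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class TwoCameraPixelSketch : MonoBehaviour
{
    public Camera overlayCamera;   // "Cam B": draws only the HiRes layer
    public RenderTexture lowResRT; // low resolution target for "Cam A"

    void Start()
    {
        Camera mainCam = GetComponent<Camera>();

        // Cam A: everything except the HiRes layer, rendered into the low
        // res texture that gets stretched over the screen.
        mainCam.cullingMask = ~LayerMask.GetMask("HiRes");
        mainCam.targetTexture = lowResRT;
        lowResRT.filterMode = FilterMode.Point; // hard pixel edges

        // Cam B: only the HiRes layer, drawn on top at full resolution.
        // Clearing depth only keeps the composited low res image visible,
        // but it also means HiRes objects can't hide behind low res ones.
        overlayCamera.cullingMask = LayerMask.GetMask("HiRes");
        overlayCamera.clearFlags = CameraClearFlags.Depth;
        overlayCamera.depth = mainCam.depth + 1;
    }
}
```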

There’s an approach for handling off-screen particles rendered at a reduced resolution: it takes the depth texture from the main camera and either does the depth test in the shader or blits the depth to the second camera, to allow for mismatched resolutions. Usually this goes from a high resolution camera’s depth to a low resolution one, and then composites the low resolution result back into the high.

See this project (which may or may not work anymore).
https://github.com/slipster216/OffScreenParticleRendering

For you this would be taking a low resolution camera’s depth to a higher resolution target, but the idea is the same. This would allow you to handle depth occlusions between mismatched resolutions, at least between the main scene and one higher resolution layer. It really only works for sorting against opaque stuff though; the higher resolution stuff will render over anything that’s transparent.

Here’s an example I know does work, as I wrote it for someone else recently. It’s kind of an ugly implementation, but functional.

4191097–371143–LowResOffscreen.zip (7.26 KB)

Example of what the above project looks like (with an added line to make the offscreen texture use point sampling). It’s the opposite of what you’re looking to do, but again, could be made to work.


Hello bgolus!

This looks really encouraging. When you say you added a line to make the offscreen texture use point sampling, where does that line go and what does it actually look like?

Thanks!

On line 68 of LowResOffscreenCamera.cs, add:
offscreenRT.filterMode = FilterMode.Point;

Ah, think I figured it out!

(screenshot of the result)

You beat me by minutes :stuck_out_tongue:

@bgolus I’m a bit lost when you say I want the opposite. What’s preventing me from simply putting the majority of my scene objects on the offscreen layer and having them appear pixelated?

The problem I’m having with the overall approach, though, is that the object I want to keep at a decent resolution is 3D text. If I leave it on the default layer, it fails to render in front of things correctly.

Any tips, or am I looking at the precise reason why I need to do things the other way round?

The idea behind this technique is that you can composite opaque or transparent things against opaque geometry of a different resolution. If there is transparent geometry in the first pass this fails, as the second pass has nothing in the depth buffer to test against. The depth buffer is what allows opaque objects to sort against each other properly regardless of the order they’re actually rendered in; transparencies don’t write to the depth buffer, and instead sort between each other based on the order they’re rendered, or against what has already been written to the depth buffer.

Here’s the steps I was thinking you’d probably need to do:

  • Render low resolution main scene with main camera
  • Copy depth from low resolution scene to a global _MainCameraDepthTexture render texture
  • Blit depth & low res color to higher resolution target
  • Render high resolution stuff

Mixing low and high transparent stuff would require rendering each low resolution thing out into its own render texture and drawing them back in individually in back to front order. Plausible, but potentially very expensive. For the style you’re going for it seems like keeping everything opaque in the low res would be okay for the look, or kind of ignore the rare cases of the high res stuff overlapping.

Basically, if your text isn’t drawn using an opaque shader with a shadow caster pass (which is what’s used to fill the depth texture), other geometry won’t be able to composite against it when the text is rendered first.

Can you post the code you’re using to do the low resolution camera right now?

@bgolus Sure thing. I appreciate your help. It seems like I’ve bitten off a bit more than I can chew (I’ve never touched depth buffers or anything like that) but, if it’s possible, I think it would be a really nice aesthetic.

Here’s the original setup of the camera:

Feeds into a “low res” render texture which is then referenced in a script. The script then works as follows:

using UnityEngine;
using System.Collections;

public class Pixelation : MonoBehaviour
{
    public RenderTexture renderTexture;

    void Start()
    {
        // Float division here; with integer division the aspect ratio
        // would truncate (e.g. 1920 / 1080 would give 1).
        float realRatio = (float)Screen.width / Screen.height;
        renderTexture.width = NearestSuperiorPowerOf2(Mathf.RoundToInt(renderTexture.width * realRatio));
    }

    void OnGUI()
    {
        GUI.depth = 20;
        GUI.DrawTexture(new Rect(0, 0, Screen.width, Screen.height), renderTexture);
    }

    int NearestSuperiorPowerOf2(int n)
    {
        return (int)Mathf.Pow(2, Mathf.Ceil(Mathf.Log(n) / Mathf.Log(2)));
    }
}

I then have the render texture asset itself set to a filter mode of ‘point’ to give everything a hard pixel look. The end result is the original image in the thread.

Try this script, and remove the render texture from your main camera. You control the resolution with the first setting on the script. It currently has a bug if you disable / enable it at runtime, because Screen.width and Screen.height return the wrong values and I don’t know why.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class LowResOffscreenCamera : MonoBehaviour {

    public int lowResHeight = 160;
    public int highResMult = 2;

    public LayerMask offscreenLayers;
    public Shader depthCopyShader;
    public Shader depthFillShader;
    public Shader compositeShader;

    private Material depthCopyMat;
    private Material depthFillMat;
    private Material compositeMat;

    private new Camera camera;
    private Camera lowCamera;
    private Camera highCamera;

    private CommandBuffer mainCameraDepthCopy;
    private CommandBuffer highFill;

    private RenderTexture lowRT;
    private RenderTexture highRT;

    void Awake () {
        camera = GetComponent<Camera>();

        lowCamera = new GameObject("Offscreen Camera", typeof(Camera)).GetComponent<Camera>();
        lowCamera.enabled = false;
        lowCamera.transform.SetParent(camera.transform, false);
        lowCamera.CopyFrom(camera);
        lowCamera.renderingPath = RenderingPath.Forward;
        lowCamera.cullingMask ^= offscreenLayers; // remove the high res layers from the low res camera
        lowCamera.depthTextureMode |= DepthTextureMode.Depth;

        depthCopyMat = new Material(depthCopyShader);
        depthFillMat = new Material(depthFillShader);
        compositeMat = new Material(compositeShader);

        // Expose the low res camera's depth globally so the high res pass
        // can depth test against it.
        mainCameraDepthCopy = new CommandBuffer();
        mainCameraDepthCopy.name = "Copy Depth Texture from Main Camera";
        mainCameraDepthCopy.SetGlobalTexture("_MainCameraDepthTexture", BuiltinRenderTextureType.Depth);

        lowCamera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, mainCameraDepthCopy);

        highCamera = new GameObject("Offscreen Camera", typeof(Camera)).GetComponent<Camera>();
        highCamera.enabled = false;
        highCamera.transform.SetParent(camera.transform, false);
        highCamera.CopyFrom(camera);
        highCamera.renderingPath = RenderingPath.Forward;
        highCamera.cullingMask = offscreenLayers;
        highCamera.depthTextureMode = DepthTextureMode.None;
        highCamera.useOcclusionCulling = false;
        highCamera.backgroundColor = Color.clear;
        highCamera.clearFlags = CameraClearFlags.Nothing;

        highFill = new CommandBuffer();
        highFill.name = "Fill Color & Depth";
        highCamera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, highFill);

        camera.cullingMask = 0; // the main camera renders nothing itself; it only drives the two offscreen cameras
    }

    void OnEnable()
    {
        float ratio = (float)Screen.width / (float)Screen.height;

        int lowResWidth = Mathf.RoundToInt((float)lowResHeight * ratio);
        int highResWidth = Mathf.Min(Screen.width, lowResWidth * highResMult);
        int highResHeight = Mathf.Min(Screen.height, lowResHeight * highResMult);

        lowRT = new RenderTexture(lowResWidth, lowResHeight, 24);
        lowRT.filterMode = FilterMode.Point;
        lowRT.Create();

        highRT = new RenderTexture(highResWidth, highResHeight, 24);
        highRT.filterMode = FilterMode.Point;
        highRT.Create();

        highFill.Clear();
        // Fill the high res target with the low res color and depth before
        // the high res layers render, so they can sort against the scene.
        highFill.Blit(lowRT, BuiltinRenderTextureType.CurrentActive);
        highFill.Blit(BuiltinRenderTextureType.Depth, BuiltinRenderTextureType.CurrentActive, depthFillMat);
    }

    void OnDisable()
    {
        Destroy(lowRT);
        Destroy(highRT);
    }

    void OnPreRender()
    {
        lowCamera.projectionMatrix = camera.projectionMatrix;
        highCamera.projectionMatrix = camera.projectionMatrix;

        lowCamera.targetTexture = lowRT;
        highCamera.targetTexture = highRT;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        lowCamera.Render();
        highCamera.Render();
        Graphics.Blit(highRT, dst);
    }
}
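The three shaders the script expects (depthCopyShader, depthFillShader, compositeShader) aren’t shown here; presumably they’re included in the attached zip. As a rough illustration only, a depth-fill blit shader of the kind referenced could look something like this sketch (not the actual shader from the attachment):

```shaderlab
Shader "Hidden/DepthFillSketch"
{
    SubShader
    {
        Pass
        {
            ZWrite On
            ZTest Always
            ColorMask 0 // write depth only; color is blitted separately
            Cull Off

            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            // Depth from the low res camera, made global by the command
            // buffer in the script above.
            sampler2D _MainCameraDepthTexture;

            // Write the sampled depth straight into the depth buffer.
            fixed4 frag (v2f_img i, out float outDepth : SV_Depth) : SV_Target
            {
                outDepth = tex2D(_MainCameraDepthTexture, i.uv).r;
                return 0;
            }
            ENDCG
        }
    }
}
```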

@bgolus Hello. Thank you so much for this. I’ve tried to integrate it into your test scene for now. I put the script you outlined onto the main camera. How did you envision the layers working? As in, should high res objects be on the ‘offscreen layer’ or low res objects?

At the moment I’m getting a result where it isn’t clear what is being affected:

Here’s how the main camera looks for the above:

In this case the higher res stuff uses the layer. Technically both the low and high res are rendered “off screen” with this script, so it’s poorly named.

Note that, like with my previous version, objects in each layer won’t be affected by shadows from the other. Fixing that is a lot more work.