Unity RenderPass: Reading from input buffers

So, I’ve been working on writing an SRP lately, using the (fairly nice) RenderPass API. From what I understand, each subpass is given attachments as either inputs or outputs. I’m writing a deferred renderer, and need to read from a GBuffer. I have thus created a lighting subpass that outputs to my lighting buffer and takes in inputs from my albedo, normals, etc.

My problem, however, is that I can’t seem to find any examples or documentation on how to actually read from my inputs.

The documentation states:

The rendering results of previous subpasses are only available within the same screen-space pixel coordinate via the UNITY_READ_FRAMEBUFFER_INPUT(x) macro in the shader.

There is, however, no documentation on the macro itself. It does not exist in UnityCG.cginc, UnityShaderUtilities.cginc, or UnityShaderVariables.cginc, which were the only includes in my shader. From experimentation, it appears the macro is used as UNITY_READ_FRAMEBUFFER_INPUT(inputName, uv), but this results in an error stating that “_UnityFBInputinputName” is undeclared. Declaring Texture2D _UnityFBInputinputName resolves the error, but the result is always grey. (I know for a fact that ‘inputName’, which in my case is ‘g_Normals’, is populated: I can display it on the screen.)

Is there another macro I need to be using alongside this? Neither UNITY_FRAMEBUFFER_INPUT nor UNITY_DECLARE_FRAMEBUFFER_INPUT did anything. Could someone point me toward some examples, or at least explain how to use UNITY_READ_FRAMEBUFFER_INPUT? The documentation seems lacking on this particular macro.

Thanks in advance!


(Because of course I find it immediately after posting this question)

I figured I would leave this up as interim documentation while these macros aren’t in the Unity docs. Also, I am much more familiar with D3D and OpenGL renderers than with Vulkan or Metal, although the usage of the macros should be the same (only their implementations differ). I’m developing primarily for DX11 on an NVIDIA card, but hopefully (most of) this will apply no matter what renderer you’re using. I’m also leaving this question open in case anyone with more knowledge or a different renderer has something to point out.

UNITY_READ_FRAMEBUFFER_INPUT is defined in HLSLSupport.cginc, along with its companion declaration macros (UNITY_DECLARE_FRAMEBUFFER_INPUT_FLOAT and friends).

(Where idx is the index of the input in the input array, not its name.)
I’m not quite sure why I thought it would take the name of the variable; I guess I was tired. These macros take the index of the input as it appears in the subpass’s input array.
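The macro list itself is easiest to show directly. This is a sketch from memory of what HLSLSupport.cginc provides; check your Unity install’s copy of that file for the authoritative list, since exact names may differ between versions:

```hlsl
// Declaration macros, one per input type; 'idx' is the attachment's
// index in the subpass input array, not a variable name.
// (Reconstructed from memory of HLSLSupport.cginc; verify locally.)
UNITY_DECLARE_FRAMEBUFFER_INPUT_FLOAT(idx)
UNITY_DECLARE_FRAMEBUFFER_INPUT_HALF(idx)
UNITY_DECLARE_FRAMEBUFFER_INPUT_INT(idx)
UNITY_DECLARE_FRAMEBUFFER_INPUT_UINT(idx)

// Multisampled variants (Vulkan/Metal only), e.g.:
UNITY_DECLARE_FRAMEBUFFER_INPUT_FLOAT_MS(idx)

// The read macros themselves:
UNITY_READ_FRAMEBUFFER_INPUT(idx, v2fname)
UNITY_READ_FRAMEBUFFER_INPUT_MS(idx, sampleIdx, v2fname)
```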

The actual read macro is somewhat misrepresented in the docs, as it actually takes two parameters: UNITY_READ_FRAMEBUFFER_INPUT(idx, v2fvertexname).
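To make that concrete, here is a minimal sketch of a deferred-lighting fragment shader reading two GBuffer inputs. The input indices (0 for albedo, 1 for normals) and the v2f layout are my assumptions; the second macro argument is the interpolant carrying SV_Position, whose .xy the fallback path uses as pixel coordinates:

```hlsl
// Sketch: lighting subpass fragment shader (indices/names assumed).
#include "HLSLSupport.cginc"

UNITY_DECLARE_FRAMEBUFFER_INPUT_FLOAT(0)   // e.g. albedo attachment
UNITY_DECLARE_FRAMEBUFFER_INPUT_FLOAT(1)   // e.g. normals attachment

struct v2f
{
    float4 pos : SV_Position;  // pixel coords land in pos.xy
};

float4 frag(v2f i) : SV_Target
{
    float4 albedo  = UNITY_READ_FRAMEBUFFER_INPUT(0, i.pos);
    float4 normals = UNITY_READ_FRAMEBUFFER_INPUT(1, i.pos);
    // ...lighting math using albedo/normals goes here...
    return albedo;
}
```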

Vulkan and Metal additionally have _MS versions of all of these macros for multisampling. Their UNITY_READ_FRAMEBUFFER_INPUT_MS(idx, sampleIdx, v2fname) takes a third parameter specifying which sample to read. These do not exist on D3D or other renderers.
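For example, a manual resolve that averages every sample of a multisampled input might look like this (Vulkan/Metal only; the sample count is something you would pass in yourself, e.g. via a shader constant):

```hlsl
// Sketch: averaging all samples of multisampled input 0 (names assumed).
UNITY_DECLARE_FRAMEBUFFER_INPUT_FLOAT_MS(0)

float4 ResolveInput0(float4 pos, int sampleCount)
{
    float4 sum = 0;
    for (int s = 0; s < sampleCount; ++s)
        sum += UNITY_READ_FRAMEBUFFER_INPUT_MS(0, s, pos);
    return sum / sampleCount;
}
```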

D3D, OpenGL, etc. use the general fallback for obtaining renderpass inputs. The Declare macros all declare a sampler-less Texture2D object, and the Read macro performs Texture2D.Load().
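Paraphrased, the generic fallback behaves roughly like the following (this is the shape of the expansion, not a verbatim copy of HLSLSupport.cginc; the real definitions use token pasting and extra variants):

```hlsl
// Approximate shape of the D3D/GL fallback path (paraphrased sketch):
#define UNITY_DECLARE_FRAMEBUFFER_INPUT_FLOAT(idx) \
    Texture2D _UnityFBInput##idx
#define UNITY_READ_FRAMEBUFFER_INPUT(idx, v2fname) \
    _UnityFBInput##idx.Load(uint3((v2fname).xy, 0))
```

This also explains the “_UnityFBInputinputName is undeclared” error from my original question: the macro pastes its first argument onto _UnityFBInput, so it has to be a numeric index, not a name.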

This is an important distinction when working with RenderPasses and Framebuffers, vs RenderTextures!

Most of the time, texture data is obtained via Texture2D.Sample(SamplerState s, float2 uv), which is fed UV coordinates in the [0, 1] range.

Texture2D.Load(int3 coords) is different. It returns the exact texel, given pixel coordinates in the [0, texture size) range (with the mip level in coords.z), and performs no filtering or address wrapping/clamping.

These coordinates can be obtained via the VPOS or SV_Position (fragment shader) semantics, or, if you know the texture size (usually _ScreenParams.xy), UV coordinates can simply be multiplied by that size and cast to an int.
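Both routes look like this in practice (texture name assumed; _ScreenParams comes from UnityShaderVariables.cginc, pulled in by UnityCG.cginc):

```hlsl
// Sketch: two ways to get pixel coordinates for Load (names assumed).
Texture2D _MainTex;

// 1) Directly from the rasterizer: SV_Position's .xy already holds the
//    pixel coordinate of the fragment being shaded.
float4 FragFromPos(float4 pos : SV_Position) : SV_Target
{
    return _MainTex.Load(int3(pos.xy, 0));
}

// 2) From [0,1] UVs, scaled by the target size (for a fullscreen pass,
//    _ScreenParams.xy is the render target's width/height).
float4 FragFromUV(float4 pos : SV_Position,
                  float2 uv  : TEXCOORD0) : SV_Target
{
    int2 pixel = int2(uv * _ScreenParams.xy);
    return _MainTex.Load(int3(pixel, 0));
}
```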

(On a slightly unrelated note, it might be a good idea to use Load in fullscreen effects anyway, if you don’t actually need filtering, since it’s a fair bit faster than Sample() and consists of a single fetch from memory, while Sample() does considerably more work.)

There is an additional restriction to using these macros:

The rendering results of previous subpasses are only available within the same screen-space pixel coordinate

This means no convolutions, no refraction or distortion; nothing but the previous value of the pixel you are on. Accessing other pixels is entirely dependent on the renderer, the hardware, and stupid things like per-pixel race conditions, and you probably won’t get anything at all. Practically all of the ‘interesting’ post-FX will therefore require you to bind your RenderTargetAttachment to a RenderTexture first, and things like that should probably be given a bit of distance from your SRP anyway. Deferred lighting, megatexturing, etc. can still be done.

This is by no means an exhaustive documentation of this macro, and I’m sure I made some mistakes, so please leave a comment or edit if you have more info or I made an error.

Good luck and Happy rendering!