I wanted to try making a “damage warning” vignette for the player, since it seemed like a simple starting point for a fullscreen effect. To do this, I obviously need to draw the entire screen and simply “overlay” my effect on top.
To make it easy to add such effects in reaction to gameplay and the environment, I opted for a custom post-processing effect that I could then toggle using the volume system in HDRP.
Thus, I decided to start exploring.
In prior tutorials, I was able to create distortion effects with this method when the shader sits on an object in the world:
Essentially, it samples the camera’s scene color and adds a moving offset to the sample position. (The example came from a water shader I made by following another tutorial.)
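In HLSL terms, my mental model of that graph is roughly the following. This is just a sketch, not the actual generated code: SHADERGRAPH_SAMPLE_SCENE_COLOR is what I understand the Scene Color node to compile down to in HDRP, and _DistortionStrength / _DistortionSpeed are made-up stand-ins for my material properties.

```hlsl
// Sketch of the world-object distortion, not the actual generated code.
// SHADERGRAPH_SAMPLE_SCENE_COLOR is the helper I understand the Scene
// Color node to resolve to in HDRP; _DistortionStrength and
// _DistortionSpeed are hypothetical material properties.
float3 DistortedSceneColor(float2 screenUV)
{
    // Animate the sample position over time for the wavy water look.
    float2 offset = sin(_Time.y * _DistortionSpeed + screenUV.yx * 20.0)
                    * _DistortionStrength;
    return SHADERGRAPH_SAMPLE_SCENE_COLOR(screenUV + offset);
}
```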
Since it seemed like the right place to start, I did the following with my HDRP fullscreen post-process Shader Graph:
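(The graph is essentially just Screen Position → Scene Color → Base Color, which in the same sketch terms as above would be:)

```hlsl
// What I believe the fullscreen graph reduces to: sample the scene
// color at the current screen position and output it untouched.
float3 Passthrough(float2 screenUV)
{
    return SHADERGRAPH_SAMPLE_SCENE_COLOR(screenUV);
}
```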
By all logic I am aware of, this should simply output the contents of the screen unchanged. I started by injecting the effect After Post Process, but the screen was entirely black, so I started moving the injection point around.
The screen is entirely black regardless of where it is injected.
I am using stock Unity HDRP. Is this a bug? All my research on this topic has been murky.
If this is not a bug, is there a proper way to sample the camera’s render buffer in a fullscreen post-process shader?
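For reference, the hand-written custom post-process examples I have found (like the grayscale sample in the HDRP docs) sample the source buffer along these lines, with the C# volume component binding the camera color source to _InputTexture. Is the Shader Graph fullscreen path supposed to map onto something equivalent?

```hlsl
// Sampling pattern from the HDRP custom post-process docs example.
// Assumes the includes and Varyings struct from that template; the
// volume component binds the camera color source to _InputTexture.
TEXTURE2D_X(_InputTexture);

float4 CustomPostProcess(Varyings input) : SV_Target
{
    UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);

    // Convert the 0..1 texcoord into pixel coordinates for LOAD.
    uint2 positionSS = input.texcoord * _ScreenSize.xy;
    float3 sourceColor = LOAD_TEXTURE2D_X(_InputTexture, positionSS).xyz;

    // Passthrough for now; the vignette math would go here.
    return float4(sourceColor, 1);
}
```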