I’m trying to find a way to post-process the final framebuffer image. I want to apply a brightness filter that affects both the scene and Screen Space Overlay UI elements. However, I can’t figure out how to do that without using a multi-camera setup.
How do you implement a post-processing step that affects “Screen Space Overlay” elements with a single camera only?
I’ve tried several different approaches, but I haven’t found anything that both works and doesn’t make the UI setup much more complicated. Here is what I’ve tried so far…
What does not work
OnRenderImage: it’s called before the Screen Space Overlay UI is rendered.
CommandBuffer API with CameraEvent.AfterEverything: also called before the Screen Space Overlay UI is rendered (a sketch of this attempt follows below).
My problem with these two approaches is that “Screen Space Overlay” Canvases haven’t been rendered yet at that point, which means my post-processing doesn’t affect the UI.
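For reference, here is a minimal sketch of the CameraEvent.AfterEverything attempt (the component and material names are mine, not an official pattern; brightnessMaterial is assumed to hold a fullscreen brightness shader). The Frame Debugger shows the blits executing before the overlay canvases are drawn:

using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class BrightnessAfterEverything : MonoBehaviour
{
    public Material brightnessMaterial; // assumed fullscreen brightness shader
    private CommandBuffer _buffer;

    private void OnEnable()
    {
        _buffer = new CommandBuffer { name = "Brightness" };

        // Copy the camera target to a temp RT, then blit it back through the material
        int tempId = Shader.PropertyToID("_BrightnessTemp");
        _buffer.GetTemporaryRT(tempId, -1, -1);
        _buffer.Blit(BuiltinRenderTextureType.CameraTarget, tempId);
        _buffer.Blit(tempId, BuiltinRenderTextureType.CameraTarget, brightnessMaterial);
        _buffer.ReleaseTemporaryRT(tempId);

        // Despite the name, this still runs before "Screen Space Overlay" canvases
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterEverything, _buffer);
    }

    private void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterEverything, _buffer);
    }
}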
What does work
It does work when I change all “Screen Space Overlay” Canvases to “Screen Space Camera” and use a multi-camera setup like:
Camera 1 = 3D content
Camera 2…N = UI
Camera N+1 = Post-process only
The rendering flow is then as follows:
3D content camera is rendered
HUD camera is rendered
Pause camera is rendered (if the game is paused)
Post-processing camera is rendered, which applies the image effect to everything that has been drawn before
Any Canvas still set to “Screen Space Overlay” would render last, after the post-process, which is why they all had to be converted
Unfortunately, this approach has a few downsides:
It adds a significant amount of overhead (each camera performs several fullscreen clears and copies in deferred rendering), and I’d like to keep the framerate at an acceptable level.
Using multiple cameras means I have to maintain both the Canvas “Sorting/Order in Layer” and the Camera “Depth” setting to draw the UI in the correct order.
The UI setup is more complicated in general with Screen Space Camera.
I’m probably missing something relatively obvious, but this seems extremely complicated for such a simple problem.
I would be interested in an answer as well. I don’t want to set my UI to Screen Space Camera, because then it gets affected by things like motion blur too, which I don’t want. Ideally, I’d like to apply post to the UI as a whole before it gets rendered to the overlay, because I’d like different bloom settings for the UI than for the 3D scene.
Aaand I hit a similar issue. Looking through the Frame Debugger, it looks like the screen overlay is rendered AFTER image effects, and I have overlay objects that need to be processed by an image effect along with everything else in the scene.
I looked at ReCore Definitive Edition (Xbox One) yesterday. The brightness settings in that game don’t affect the UI either. Perhaps it’s not worth the trouble to come up with a cumbersome approach that doesn’t fit well into the engine. It really bothers me though, because it’s actually such a simple problem.
There’s no solution. Basically, you can’t (or at least, you couldn’t at the time of writing my previous post) apply post effects to an overlay canvas. So you’ll have to replace it with a world-space canvas placed very close to the camera, along the lines of the sketch below.
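If you go that route, a rough sketch of pinning a World Space canvas to the camera might look like this (the names and the offset are illustrative, and you’d still need to scale the canvas rect to fill the frustum at that distance):

using UnityEngine;

public class PinCanvasToCamera : MonoBehaviour
{
    public Canvas canvas;
    public Camera targetCamera;

    private void Start()
    {
        // Switch the canvas to world space and pin it just in front of the camera,
        // so it follows the view but is drawn (and post-processed) like scene geometry
        canvas.renderMode = RenderMode.WorldSpace;
        canvas.worldCamera = targetCamera;

        Transform t = canvas.transform;
        t.SetParent(targetCamera.transform, worldPositionStays: false);
        t.localPosition = new Vector3(0f, 0f, targetCamera.nearClipPlane + 0.05f);
        t.localRotation = Quaternion.identity;
    }
}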
Here’s an excerpt from my code (using Unity 2019.4.4f1) as an example for anyone else coming to this thread.
// Note: _commandBuffer, _meshQuad, _materialSafeZone and _safeZonePct are fields on this component
private void Start()
{
    StartCoroutine(ScaleScreenCoroutine());
}

private IEnumerator ScaleScreenCoroutine()
{
    while (true)
    {
        // Wait until rendering is complete
        yield return new WaitForEndOfFrame();

        // Create the commands to grab the screen and draw it on a quad
        if (_commandBuffer == null)
        {
            CreateCommandBuffer();
        }

        // Execute the commands
        Graphics.ExecuteCommandBuffer(_commandBuffer);
    }
}
private void CreateCommandBuffer()
{
    _commandBuffer = new CommandBuffer();
    _commandBuffer.name = "Shrink rendering to safe zone";

    // Grab the screen to a temp render texture
    int screenGrabId = Shader.PropertyToID("_ScreenGrabTempTexture");
    _commandBuffer.GetTemporaryRT(screenGrabId, -1, -1, 0, FilterMode.Bilinear);
    _commandBuffer.Blit(BuiltinRenderTextureType.CameraTarget, screenGrabId);

    // Fill the screen with black
    _commandBuffer.SetRenderTarget(BuiltinRenderTextureType.CameraTarget);
    _commandBuffer.ClearRenderTarget(clearDepth: false, clearColor: true, Color.black);

    // Set the quad to be pulled in by the safe zone amount
    float scaleFullScreen = 2f; // The quad spans (-0.5, -0.5) to (0.5, 0.5) and clip space spans (-1, -1) to (1, 1), so scaling by 2 fills the screen
    float showHalfScreenPct = 1f - _safeZonePct; // Safe zone is the fraction of the half-width to pull in (i.e. how much becomes a black bar)
    float scale = showHalfScreenPct * scaleFullScreen;
    _commandBuffer.SetViewProjectionMatrices(Matrix4x4.Scale(new Vector3(scale, scale, 1f)), Matrix4x4.identity);

    // Draw the screen on the quad
    _commandBuffer.SetGlobalTexture("_ScreenGrabTex", screenGrabId); // Set the SafeZone.shader input parameter
    _commandBuffer.DrawMesh(_meshQuad, Matrix4x4.identity, _materialSafeZone);

    // Release the temp render texture
    _commandBuffer.ReleaseTemporaryRT(screenGrabId);
}
Hi. If I try to run the code, I get the error message “The name ‘_commandBuffer’ does not exist in the current context”.
Should I define it before calling it?
Thanks.
First, thank you for the idea. I evaluated your approach for the game I work on, which recently got an Xbox port, and I wanted to use it to create a software fix for overscan. Because our UI currently uses the overlay render mode, it seemed like a possible way.
With your approach it would actually work, if the player is only allowed to use a gamepad. Here is the problem: the command buffer takes the finished frame, scales it, and replaces it. The actual UI is therefore not scaled. In hindsight that should have been quite obvious, but it didn’t come to mind before implementing it. So if you use this solution in combination with a mouse, the visuals and the functional buttons no longer match.
We support keyboard and mouse on console, so we cannot use it. We will instead introduce a separate camera to render all UI and then manipulate its viewport to fix the overscan; the mouse coordinates will then match the UI elements. A sketch of that follows below.
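A minimal sketch of that viewport fix, assuming a dedicated uiCamera and a 5% margin (both assumptions):

using UnityEngine;

public class OverscanViewportFix : MonoBehaviour
{
    public Camera uiCamera; // renders all Screen Space Camera UI (assumption)
    [Range(0f, 0.1f)]
    public float margin = 0.05f; // illustrative inset per screen edge, as a fraction

    private void OnEnable()
    {
        // Shrink the viewport instead of rescaling the finished frame;
        // UI raycasts go through this camera, so mouse positions stay correct
        uiCamera.rect = new Rect(margin, margin, 1f - 2f * margin, 1f - 2f * margin);
    }
}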
And for people who want to try the original approach: you need an additional shader and a material. The shader can be a pass-through vertex shader plus a fragment shader that just takes the basic UVs and reads the color from the global texture (which needs to be declared in the shader), along the lines of the sketch below.
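A minimal sketch of such a shader, assuming the quad has standard 0-to-1 UVs (the shader name is made up; _ScreenGrabTex matches the SetGlobalTexture call in the code above):

Shader "Hidden/SafeZonePassThrough"
{
    SubShader
    {
        Cull Off ZWrite Off ZTest Always
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _ScreenGrabTex; // global texture set by the command buffer

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert(appdata_base v)
            {
                v2f o;
                // Uses the view/projection matrices set by SetViewProjectionMatrices
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord.xy;
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                // Pass through: just read the grabbed screen color
                return tex2D(_ScreenGrabTex, i.uv);
            }
            ENDCG
        }
    }
}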
Sorry to necro this, but I ended up massaging this method a bit for some accessibility post-processing. I made it so that a mesh isn’t needed. Here’s the code for the CreateCommandBuffer() function:

private void CreateCommandBuffer()
{
    _commandBuffer = new CommandBuffer();
    _commandBuffer.name = "ColorBlindFilter";

    // Grab the screen to a temp render texture
    int screenGrabId = Shader.PropertyToID("_ScreenGrabTempTexture");
    _commandBuffer.GetTemporaryRT(screenGrabId, -1, -1);
    _commandBuffer.Blit(BuiltinRenderTextureType.CameraTarget, screenGrabId);

    // Blit back through the post-processing material; the showDifference flag
    // selects the shader pass (you might not need that)
    _commandBuffer.Blit(screenGrabId, BuiltinRenderTextureType.CameraTarget, material, showDifference ? 1 : 0);

    _commandBuffer.ReleaseTemporaryRT(screenGrabId);
}
And the material can be created in code from a shader; you will need to have written the shader that does your post-processing, of course.

material = new Material(Shader.Find("Hidden/ChannelMixer")); // "Hidden/ChannelMixer" is the post-processing shader
Importantly, if you’re doing this, depending on the platform you might end up with a vertically inverted render. To avoid this, make sure you use the built-in UNITY_UV_STARTS_AT_TOP preprocessor macro to flip the y UVs as needed. For example:
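A sketch of the usual pattern inside the fragment shader (this assumes a _MainTex sampler with its _MainTex_TexelSize declared; adjust to your shader’s names):

// Flip the sampled V coordinate when the platform's UV origin is at the top
// and the source texture has a negative texel height
float2 uv = i.uv;
#if UNITY_UV_STARTS_AT_TOP
if (_MainTex_TexelSize.y < 0)
    uv.y = 1.0 - uv.y;
#endif
fixed4 col = tex2D(_MainTex, uv);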
Since I’m using this for previewing accessibility conditions in the editor, I can’t vouch for its performance. But it “runs fine on my computer,” for what that’s worth.