I’m experimenting with post-processing on mobile and trying to use a “render to texture, then display a full-screen quad” approach.
After trying a few things I decided to go with Camera.Render(). It requires the camera to be disabled and the rendering to be triggered manually. The code looks like this:
// Setting things up: the render texture, the screen quad, and disabling the camera
private void Start()
{
    _quadMesh = CreateQuadMesh();
    _renderTexture = CreateRenderTexture();
    material.SetTexture("_MainTex", _renderTexture);

    _camera = GetComponent<Camera>();
    _camera.targetTexture = _renderTexture;
    _camera.enabled = false;
}

// Manually render the disabled camera into the render texture every frame
public void LateUpdate()
{
    _camera.Render();
}

// After all rendering is done, display our fullscreen quad
public void OnRenderObject()
{
    material.SetTexture("_MainTex", _renderTexture);
    material.SetPass(0);
    Graphics.DrawMeshNow(_quadMesh, Matrix4x4.identity);
}
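For completeness, CreateQuadMesh() and CreateRenderTexture() are nothing special - roughly this (simplified sketch; it assumes the fullscreen shader uses the quad's vertex positions directly as clip-space coordinates and has culling off):

private Mesh CreateQuadMesh()
{
    // Fullscreen quad in clip space, with UVs covering the whole texture
    var mesh = new Mesh();
    mesh.vertices = new[]
    {
        new Vector3(-1f, -1f, 0f),
        new Vector3( 1f, -1f, 0f),
        new Vector3( 1f,  1f, 0f),
        new Vector3(-1f,  1f, 0f)
    };
    mesh.uv = new[]
    {
        new Vector2(0f, 0f),
        new Vector2(1f, 0f),
        new Vector2(1f, 1f),
        new Vector2(0f, 1f)
    };
    mesh.triangles = new[] { 0, 1, 2, 0, 2, 3 };
    return mesh;
}

private RenderTexture CreateRenderTexture()
{
    // Screen-sized render texture with a 16-bit depth buffer
    var rt = new RenderTexture(Screen.width, Screen.height, 16);
    rt.Create();
    return rt;
}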
But the screen just turns black, as if nothing gets rendered into the texture:
There’s a GUI button there just to make sure everything is rendered below the final Canvas.RenderOverlays call (we’ll get there in a minute).
The interesting thing is that if I have the Frame Debugger enabled, it renders the scene and applies the post-process effect (in this case just a simple color-to-grayscale conversion) as it should.
When I disable the Frame Debugger, or don’t manually click through every separate pass, it renders nothing but black again (though the draw passes are there the whole time).
Sometimes it shows passes like this, without any actual rendering.
I tried using the approach described here: Post Process Mobile Performance : Alternatives To Graphics.Blit , OnRenderImage ?
But using the GPU-powered Graphics.Blit results in a shifted texture (it ends up in the top-right quadrant of the screen) and occasional “Assertion failed (!m_CurrentCamera.IsNull())” errors in the console. It also doesn’t apply the post-effect itself, leaving the picture in color:
// Grab a temporary render texture and redirect the camera into it
public void OnPreRender()
{
    _renderTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 16);
    _camera.targetTexture = _renderTexture;
}

// Blit the result to the screen through the post-effect material, then release the temporary RT
private void OnPostRender()
{
    _camera.targetTexture = null;
    Graphics.Blit(_renderTexture, null, material, 0);
    RenderTexture.ReleaseTemporary(_renderTexture);
}
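For reference, the surrounding component in this variant is minimal; a trimmed-down sketch (the class name is just for illustration, and the material is the same grayscale material, assigned in the Inspector):

using UnityEngine;

// Trimmed-down context for the OnPreRender/OnPostRender snippet above;
// "MobilePostEffect" is only an illustrative name.
[RequireComponent(typeof(Camera))]
public class MobilePostEffect : MonoBehaviour
{
    [SerializeField] private Material material;

    private Camera _camera;
    private RenderTexture _renderTexture;

    private void Start()
    {
        _camera = GetComponent<Camera>();
    }

    // OnPreRender() / OnPostRender() from the snippet above go here
}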
The interesting thing is that if I call Graphics.DrawMeshNow inside a coroutine with a WaitForEndOfFrame “callback”, it renders the texture correctly, but on top of the Overlay Canvas:
public void LateUpdate()
{
    _camera.Render();
}

// Moved the OnRenderObject code into this coroutine
private IEnumerator WaitAndRender()
{
    while (true)
    {
        yield return new WaitForEndOfFrame();

        material.SetTexture("_MainTex", _renderTexture);
        material.SetPass(0);
        Graphics.DrawMeshNow(_quadMesh, Matrix4x4.identity);
    }
}
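The coroutine itself is started once, roughly like this (sketch; it’s just one extra line in the Start() from the first snippet):

private void Start()
{
    // ... same setup as in the first snippet ...
    StartCoroutine(WaitAndRender());
}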
The quad is rendered on top of the Overlay Canvas.
I tried using different Unity messages for rendering, and I’m positive that LateUpdate + OnRenderObject should work (I’ve seen a working example at one of the gamedev conferences), but it only renders inside that WaitForEndOfFrame coroutine.
I think the problem lies somewhere in that strange Frame Debugger report where only Canvas.RenderOverlays gets rendered, but I don’t know how to solve it.
I tried this with Unity 5.5 and Unity 2017.2 - the behavior is completely identical in both versions.