Post Processing with Graphics.DrawMeshNow. Problems with render-to-texture

I’m experimenting with post processing on mobile and trying to use a “Render-To-Texture and Display A Full Screen Quad” method.

After trying a few things I decided to go with Camera.Render(). It requires a disabled camera and a manual call to this rendering routine. The code looks like this:

// Set up the render texture and fullscreen quad, and disable the camera
private void Start()
{
    _quadMesh = CreateQuadMesh();
    _renderTexture = CreateRenderTexture();

    material.SetTexture("_MainTex", _renderTexture);

    _camera = GetComponent<Camera>();
    _camera.targetTexture = _renderTexture;
    _camera.enabled = false;
}

// Manually render the disabled camera into the render texture every frame
public void LateUpdate()
{
    _camera.Render();
}

// After all rendering is done, display our fullscreen quad
public void OnRenderObject()
{
    material.SetTexture("_MainTex", _renderTexture);
    material.SetPass(0);
    Graphics.DrawMeshNow(_quadMesh, Matrix4x4.identity);
}
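
The two helpers aren't shown above, so for completeness, a minimal sketch of what CreateQuadMesh can look like, assuming the material's shader passes the vertex positions straight through to clip space (and has Cull Off, as post-effect shaders usually do):

// Sketch: a fullscreen quad defined directly in clip space, so it can be
// drawn with Matrix4x4.identity and no camera transformation.
private Mesh CreateQuadMesh()
{
    var mesh = new Mesh();
    mesh.vertices = new[]
    {
        new Vector3(-1f, -1f, 0f), // bottom-left
        new Vector3(-1f,  1f, 0f), // top-left
        new Vector3( 1f,  1f, 0f), // top-right
        new Vector3( 1f, -1f, 0f)  // bottom-right
    };
    mesh.uv = new[]
    {
        new Vector2(0f, 0f),
        new Vector2(0f, 1f),
        new Vector2(1f, 1f),
        new Vector2(1f, 0f)
    };
    mesh.triangles = new[] { 0, 1, 2, 0, 2, 3 };
    return mesh;
}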

But the screen turns black, as if nothing gets rendered into the texture:

There’s a GUI button in the scene just to make sure everything is rendered below the final Canvas.RenderOverlays call (we’ll get there in a minute).

The interesting thing is that if I have the Frame Debugger enabled, it renders the scene and applies the post-process effect (in this case just a simple color-to-grayscale conversion) like it should.

When I disable the Frame Debugger, or don’t manually step through every separate pass, it renders nothing but black again (though the draw passes are there the whole time).

Sometimes it shows passes like this, without actual rendering.

I tried the approach described in this thread: Post Process Mobile Performance : Alternatives To Graphics.Blit , OnRenderImage ?
But using GPU-powered Graphics.Blit results in a shifted image (top-right quadrant of the screen) and occasional “Assertion failed (!m_CurrentCamera.IsNull())” errors in the console. It also doesn’t apply the post-effect itself, leaving the picture colored:

// Redirect the camera into a temporary render texture just before it renders
public void OnPreRender()
{
    _renderTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 16);
    _camera.targetTexture = _renderTexture;
}

// After the camera has rendered, blit the result back to the screen
private void OnPostRender()
{
    _camera.targetTexture = null;
    Graphics.Blit(_renderTexture, null, material, 0);
    RenderTexture.ReleaseTemporary(_renderTexture);
}

The interesting thing is that if I call Graphics.DrawMeshNow inside a coroutine with a WaitForEndOfFrame “callback”, it renders the texture correctly, but on top of the Overlay Canvas:

public void LateUpdate()
{
    _camera.Render();
}

// Moved OnRenderObject to this coroutine
// (started once, e.g. StartCoroutine(WaitAndRender()) from Start() - not shown above)
private IEnumerator WaitAndRender()
{
    while (true)
    {
        yield return new WaitForEndOfFrame();

        material.SetTexture("_MainTex", _renderTexture);
        material.SetPass(0);
        Graphics.DrawMeshNow(_quadMesh, Matrix4x4.identity);
    }
}

The quad is rendered on top of the Overlay Canvas.

I tried using different Unity Messages for rendering, and I’m positive that LateUpdate + OnRenderObject should work (I’ve seen a working example at a gamedev conference), but it only renders correctly inside that WaitForEndOfFrame coroutine.

I think the problem is related to that strange Frame Debugger report where only Canvas.RenderOverlays gets rendered, but I don’t know how to solve it.
I tried Unity 5.5 and Unity 2017.2; the behavior is completely identical in both versions.


Not the answer to your question, but calling Graphics.Blit in OnRenderImage does exactly what you described. What are you trying to achieve, exactly?
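
For reference, the usual pattern is along these lines:

private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // Unity provides the camera's rendered image and a destination;
    // Blit runs the material's shader over a fullscreen pass.
    Graphics.Blit(source, destination, material);
}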
UPD: Saw the link you posted to the other thread. The claim there is absolutely false: the camera renders into an internal render texture by default; no ReadPixels on the CPU is involved.

I needed to render a custom tessellated mesh to enable various “shockwave” or “raindrop” effects by moving the vertex texture coordinates, or the vertices themselves. I also needed to be able to downscale the very first render of the camera. OnRenderImage doesn’t allow downscaling only the 3D scene; it’s either everything, UI included, or nothing.
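
For illustration, a sketch of the downscaling half of that, assuming a hypothetical downscaleFactor field - the 3D scene renders into a smaller texture, while the quad (and the UI) still covers the full screen:

// Sketch: create the camera's target at a fraction of screen resolution.
// downscaleFactor is a hypothetical field (2 = half resolution per axis).
private RenderTexture CreateRenderTexture()
{
    var rt = new RenderTexture(
        Screen.width / downscaleFactor,
        Screen.height / downscaleFactor,
        16, RenderTextureFormat.Default);
    rt.filterMode = FilterMode.Bilinear; // smooth upscale when drawn fullscreen
    rt.Create();
    return rt;
}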

I managed to get my setup working in the end. The problem was that I didn’t know there has to be SOME camera in the scene that targets the backbuffer, even if it doesn’t render anything. Unity needs it to know that something should actually be rendered to the backbuffer; if no camera in the scene renders to it, nothing gets sent there at all.
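
For anyone hitting the same wall, a sketch of such a do-nothing camera (hypothetical settings, set up in code here; the same can be configured in the Inspector):

// Sketch: a minimal extra camera that draws no objects itself, but targets
// the backbuffer so Unity still presents a frame to the screen.
var dummy = new GameObject("BackbufferCamera").AddComponent<Camera>();
dummy.cullingMask = 0;                       // cull everything - renders nothing
dummy.clearFlags = CameraClearFlags.Nothing; // don't wipe the fullscreen quad
dummy.targetTexture = null;                  // null target = backbuffer
dummy.depth = 100;                           // render after the main camera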

Do you have an idea how to access this render texture for use in a native rendering plugin, for example? Camera.targetTexture is null if you haven’t set it to something beforehand…

Thanks
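
A sketch of one way this can work, assuming the render texture is created on the C# side and kept alive while the plugin uses it; MyNativePlugin and SetTextureFromUnity are hypothetical names:

using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class NativeTextureBridge : MonoBehaviour
{
    // Hypothetical native entry point - the plugin stores the pointer
    // and renders into the texture on its own.
    [DllImport("MyNativePlugin")]
    private static extern void SetTextureFromUnity(IntPtr texture, int width, int height);

    public RenderTexture renderTexture;

    private void Start()
    {
        renderTexture.Create(); // make sure the GPU resource actually exists
        SetTextureFromUnity(renderTexture.GetNativeTexturePtr(),
                            renderTexture.width, renderTexture.height);
    }
}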