@Mikeysee
I used OnRenderImage() before, and the cost of reading back from the framebuffer is so large that even a greyscale post effect was slow on my mobile (iPhone 4S, ~10 FPS).
I ended up not using OnRenderImage(), but Awake(), OnPreRender() & OnPostRender() instead.
Awake()
-create a new RenderTexture:
myTargetTexture = new RenderTexture((int)(Screen.width * RTTScale), (int)(Screen.height * RTTScale), 24, RenderTextureFormat.ARGB32);
-call Shader.SetGlobalTexture("_ScreenTexture", myTargetTexture);
any shader that declares sampler2D _ScreenTexture can then sample the screen as rendered by this camera
-create a material for the final render (e.g. bloom/blur/distort… anything that needs the screen render texture you created); see the sketch after this list
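Putting the Awake() steps together, a minimal sketch (the class name and the postFXShader field are just placeholders I made up for whatever post-FX shader you use; wire it up however fits your project):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class MobilePostFX : MonoBehaviour
{
    public float RTTScale = 1.0f;   // render texture scale relative to screen size
    public Shader postFXShader;     // placeholder: your bloom/blur/distort shader

    Camera _camera;
    RenderTexture myTargetTexture;
    Material postFX_material;

    void Awake ()
    {
        _camera = GetComponent<Camera>();

        // create the texture the camera will render into
        myTargetTexture = new RenderTexture((int)(Screen.width * RTTScale), (int)(Screen.height * RTTScale), 24, RenderTextureFormat.ARGB32);

        // any shader declaring sampler2D _ScreenTexture can now sample it
        Shader.SetGlobalTexture("_ScreenTexture", myTargetTexture);

        // material used by the final Graphics.Blit pass
        postFX_material = new Material(postFXShader);
    }
}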
OnPreRender()
-set the camera's targetTexture to the RenderTexture you created
After OnPreRender(), the camera renders everything included in its culling mask to that RenderTexture instead of the framebuffer.
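For completeness, OnPreRender() is just one line, using the fields from the sketch above:

void OnPreRender ()
{
    // redirect this frame's rendering into the render texture instead of the framebuffer
    _camera.targetTexture = myTargetTexture;
}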
After all rendering is done, Unity calls OnPostRender().
OnPostRender()
-set the camera's targetTexture to null, so any new rendering actually writes to the framebuffer again
-call Graphics.Blit to do the final post-FX render
Graphics.Blit draws a new quad that fits your camera and, because the destination is null, writes to the framebuffer.
void OnPostRender ()
{
    _camera.targetTexture = null;
    //postFX_material.SetPass (0); // not needed: Blit applies the material pass itself
    Graphics.Blit(myTargetTexture, null, postFX_material, 0);
}
This method runs at 60 FPS on the iPhone 4S, and the actual performance cost is only fill rate (fragment shader complexity).
Please correct me if I am wrong. I would also like to know the best-performing way to do post FX in Unity!