There’s usually nothing needed to make image effects work for multi-pass stereo rendering; they usually “just work” since each eye is treated exactly the same as a non-VR rendering would be. That shader would not work with single pass though, as there’s special handling that needs to happen for that.
However, I highly suggest you not use a vignette image effect with VR. There are a ton of small issues with this, not the least of which is that using any image effects is extremely expensive on the Daydream and not recommended. You’re better off doing this with an object attached to the camera, like a low poly sphere that renders with a very high render queue (ie: “Queue”=“Overlay”), and if possible with the polygons at the center of the view removed to reduce the amount of overdraw. That’ll be way cheaper as it doesn’t require a render texture swap, only applies to part of the screen, and won’t have as many problems with the asymmetric camera projections.
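As a rough sketch of what the shader on that sphere might look like (the shader name is made up, and using vertex alpha to fade the vignette toward the center is just one option; removing the center polygons entirely is cheaper still):

```shaderlab
Shader "Custom/VignetteOverlay"
{
    Properties
    {
        _Color ("Vignette Color", Color) = (0,0,0,1)
    }
    SubShader
    {
        // Draw after everything else, on top of the scene
        Tags { "Queue"="Overlay" "RenderType"="Transparent" }
        ZTest Always
        ZWrite Off
        // Standard alpha blending so the mesh's vertex alpha can
        // fade the vignette out toward the center of the view
        Blend SrcAlpha OneMinusSrcAlpha

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _Color;

            struct appdata
            {
                float4 vertex : POSITION;
                fixed4 color : COLOR; // vertex alpha = vignette strength
            };

            struct v2f
            {
                float4 pos : SV_POSITION;
                fixed4 color : COLOR;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.color = v.color;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return fixed4(_Color.rgb, _Color.a * i.color.a);
            }
            ENDCG
        }
    }
}
```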
Oh man, I originally had my screen space “effects” (mainly tint/fading out to a color) done through a canvas stuck in front of the camera. But then I swapped it out for Graphics.Blit since I thought it would be cleaner and more performant.
If I have an effect that tints every pixel on the screen uniformly, is it still better to do it as an in-game object/texture in front of the camera? I understand now that a vignette effect would be better off that way.
A Blit() is essentially a quad drawn over the screen, but it’s passing in a copy of the screen as it was just before Blit() gets called, which requires making a copy of the entire screen. That copy can be expensive both in the time it takes to make it (especially on mobile, and even more so when AA is enabled) and in the fact that you now have an extra copy of the screen taking up memory.
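For reference, this is the usual OnRenderImage / Blit pattern being described (class and material names here are just illustrative):

```csharp
using UnityEngine;

// Typical image-effect setup: Unity hands OnRenderImage a copy of the
// screen in `source`, which is the render texture copy described above.
[RequireComponent(typeof(Camera))]
public class TintBlit : MonoBehaviour
{
    public Material tintMaterial; // material whose shader samples _MainTex

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Draws a full-screen quad with tintMaterial, reading `source`
        // as _MainTex and writing the result into `destination`.
        Graphics.Blit(source, destination, tintMaterial);
    }
}
```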
However, sometimes what you’re doing isn’t possible to replicate with the basic blend modes, and sometimes the cost of the “hardware” blend can be more than a shader-based blend.
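As a concrete example of leaning on the hardware blend instead of a Blit: a uniform multiplicative tint can be a full-screen quad with Blend DstColor Zero, no screen copy needed. (A sketch with a made-up shader name; for fading to a color you’d use standard alpha blending with a solid-color quad instead.)

```shaderlab
Shader "Custom/ScreenTint"
{
    Properties { _Color ("Tint", Color) = (1,1,1,1) }
    SubShader
    {
        Tags { "Queue"="Overlay" }
        ZTest Always
        ZWrite Off
        // Multiplies whatever is already on screen by _Color;
        // the blend hardware does the "effect" for free
        Blend DstColor Zero

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _Color;

            float4 vert (float4 vertex : POSITION) : SV_POSITION
            {
                return UnityObjectToClipPos(vertex);
            }

            fixed4 frag () : SV_Target
            {
                return _Color;
            }
            ENDCG
        }
    }
}
```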
The real win with something like vignetting is that you can get away with not applying the effect to the entire screen. With a custom mesh with the “center” polygons removed, any pixels the mesh doesn’t cover are a little less work for the GPU, as long as you’re not using so many polygons in your vignette mesh as to become vertex limited.
It should be faster, but there’s always a chance it isn’t.