Implementing Vignette for Stereoscopic / VR?

Hi,

I’m working on a Daydream game, so no single-pass stereo rendering unfortunately (it’s very buggy on Daydream atm).

I have a screen space effects shader that includes the vignette part of this shader that I found:

Shader "Custom/Vignette" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _VignettePower ("VignettePower", Range(0.0,6.0)) = 5.5
    }
    SubShader
    {
        Pass
        {

        CGPROGRAM
        #pragma vertex vert_img
        #pragma fragment frag
        #pragma fragmentoption ARB_precision_hint_fastest
        #include "UnityCG.cginc"

        uniform sampler2D _MainTex;
        uniform float _VignettePower;

        // vert_img (from UnityCG.cginc) already outputs a v2f_img struct
        // with position and uv, so no custom vertex output struct is needed.

        float4 frag(v2f_img i) : SV_Target
        {
            float4 renderTex = tex2D(_MainTex, i.uv);
            // Distance from the screen center; the 1.25 scale pushes the
            // falloff out so the corners darken sooner.
            float2 dist = (i.uv - 0.5f) * 1.25f;
            // Quadratic falloff: 1 at the center, darker toward the edges.
            // saturate() keeps the multiplier from going negative at high powers.
            float falloff = saturate(1.0 - dot(dist, dist) * _VignettePower);
            renderTex *= falloff;
            return renderTex;
        }

        ENDCG
        }
    }
}

Is there a way that I can adapt this to work for non-single pass stereoscopic rendering?

Thanks!

There’s usually nothing needed to make image effects work for multi-pass stereo rendering; they usually “just work,” since each eye is rendered exactly the same way a non-VR camera would be. That shader would not work with single pass, though, as there is special handling needed for that.

However, I highly suggest you not use a vignette image effect in VR. There are a ton of small issues with this, not the least of which is that using any image effect is extremely expensive on the Daydream and not recommended. You’re better off doing this with an object attached to the camera, like a low-poly sphere that renders with a very high render queue (i.e. “Queue”=“Overlay”), ideally with the polygons at the center of the view removed to reduce the amount of overdraw. That’ll be way cheaper, as it doesn’t require a render texture swap, only applies to part of the screen, and won’t have as many problems with the asymmetric camera projections.
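In case it helps, here’s a rough sketch of what the shader on that overlay mesh could look like. The multiplicative blend (DstColor Zero) darkens whatever was already drawn behind it, like a vignette does; the shader name, property name, and the UV-based darkening in frag are all illustrative — the actual falloff depends on how your sphere mesh is UV-mapped:

```
Shader "Custom/VignetteOverlay" {
    Properties {
        _Darkness ("Darkness", Range(0,1)) = 0.8
    }
    SubShader {
        // Draw after everything else; no depth write or test needed.
        Tags { "Queue"="Overlay" "RenderType"="Transparent" }
        Blend DstColor Zero   // framebuffer *= our output
        ZWrite Off
        ZTest Always
        Cull Off

        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float _Darkness;

            struct v2f {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert (appdata_img v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target {
                // Illustrative: assumes the mesh's v coordinate goes from
                // 0 at the outer edge to 1 at the inner hole.
                fixed shade = lerp(1.0 - _Darkness, 1.0, i.uv.y);
                return fixed4(shade, shade, shade, 1);
            }
            ENDCG
        }
    }
}
```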


Oh man, I originally had my screen space “effects” (mainly tinting/fading out to a color) done through a canvas stuck in front of the camera. But then I swapped that out for Graphics.Blit since I thought it would be cleaner and more performant.

If I have an effect that tints every pixel on the screen uniformly, is it still better to do it as in-game object/texture in front of the camera? I understand now that a vignette effect would be better off that way.

Thanks

A Blit() is essentially a quad drawn over the screen, but it’s passing in a copy of the screen as it was just before Blit() was called, which requires copying the entire screen. That copy can be expensive, both in the time it takes to make (especially on mobile, and even more so when AA is enabled) and in that you now have an extra copy of the screen taking up memory.

However, sometimes what you’re doing isn’t possible to replicate with the basic blend modes, and sometimes the cost of the “hardware” blend can be more than a shader-based blend.
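A uniform tint or fade is one of the cases the basic blend modes handle fine — a flat-colored quad parented to the camera with standard alpha blending, something like this sketch (shader and property names are just for illustration):

```
Shader "Custom/TintOverlay" {
    Properties {
        _TintColor ("Tint Color", Color) = (0,0,0,0)
    }
    SubShader {
        // Draw over everything; alpha controls the fade strength.
        Tags { "Queue"="Overlay" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha   // standard "hardware" alpha blend
        ZWrite Off
        ZTest Always

        Pass {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _TintColor;

            // Just output the flat tint color; with alpha 0 the quad is
            // invisible, with alpha 1 it fully covers the screen.
            fixed4 frag (v2f_img i) : SV_Target {
                return _TintColor;
            }
            ENDCG
        }
    }
}
```

Animating _TintColor’s alpha from 0 to 1 then gives you a fade-out without ever copying the screen.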

The real win with something like vignetting is that you can get away with not applying the effect to the entire screen. With a custom mesh with the “center” polygons removed, any pixels the mesh doesn’t cover are a little less work for the GPU, as long as you’re not using so many polygons in your vignette mesh that you become vertex limited.

It should be faster, but there’s always a chance it isn’t. 😛


Ok, thanks for all the advice and explanations. Given that I am able to get the same effect with a texture in front of the camera, I’ll go with that.

Thanks bgolus!