Supporting Single Pass Stereo Rendering in Image Effects

Hi Unity,

A few questions regarding this new feature in 5.4:

  • Do you have any technical note about upgrading existing image effects to support the new Single Pass Stereo Rendering in VR?

  • This option does not show up in build settings in the editor (Mac) using 5.4.0f3 and Gear VR. Any hint?

  • On the “Single-Pass Stereo & Image Effects” thread on Unity Discussions it seems that using TRANSFORM_TEX in the vertex shader to get the screen buffer position is enough, so what’s the difference from the UnityStereoScreenSpaceUVAdjust macro?

  • Will the above macro work for the depth texture as well? I already have the inverted-Y check; should that check be left as is?

Here’s the vertex code:

    struct v2f {
        float4 pos : SV_POSITION;
        float2 uv : TEXCOORD0;
        float2 depthUV : TEXCOORD1;
    };

    v2f vert(appdata v) {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        o.depthUV = v.texcoord;
        o.uv = o.depthUV;

        #if UNITY_UV_STARTS_AT_TOP
        if (_MainTex_TexelSize.y < 0) {
            // Depth texture is inverted WRT the main texture
            o.depthUV.y = 1.0 - o.depthUV.y;
        }
        #endif
        return o;
    }

Thanks in advance.

Hi

We’re busy working on an update for the Unity docs that will describe the issues with upgrading Image Effects to support Single-Pass Stereo.

Single-Pass Stereo is currently only supported for DX11 on Windows and on PS4, not for Gear VR.

The UnityStereoScreenSpaceUVAdjust() macro compiles out to a simple pass-through of the UV in non-Single-Pass Stereo modes. TRANSFORM_TEX does a very similar thing, but it always applies the transform, even when the transform is an identity one that doesn’t affect the result, and so can cost some shader cycles in the non-Single-Pass Stereo case. For example, if you share the same shader between Gear VR and desktop VR, Gear VR would waste extra cycles if you used TRANSFORM_TEX when it was only required for Single-Pass Stereo.
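As a rough illustration (assuming the conventional _MainTex / _MainTex_ST property pair; adapt the names to your own shader), the two approaches look like this in a vertex shader:

    // TRANSFORM_TEX: always applies the _MainTex_ST scale/offset,
    // even when it is an identity transform.
    o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);

    // UnityStereoScreenSpaceUVAdjust: same result in Single-Pass Stereo,
    // but compiles out to a plain pass-through in other modes.
    o.uv = UnityStereoScreenSpaceUVAdjust(v.texcoord, _MainTex_ST);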

UnityStereoScreenSpaceUVAdjust() can be used in either the vertex or the pixel shader to adjust the UV; it just depends on what happens to the UV once it reaches the pixel shader. Is it used to read several different textures (only some of which might be packed render textures)? Is further maths performed on the UV? Just make sure to match the appropriate scale and offset values to the texture the UV will be used with.
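For example, a sketch of applying the adjustment in the pixel shader when sampling the depth texture (assuming the engine-provided _CameraDepthTexture with a matching _CameraDepthTexture_ST declared in the shader):

    // Adjust the UV with the depth texture's own scale/offset before
    // sampling, so the per-eye remap matches the texture being read.
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture,
        UnityStereoScreenSpaceUVAdjust(i.depthUV, _CameraDepthTexture_ST));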

Finally, UnityStereoScreenSpaceUVAdjust(uv, texture_ST) should only be used if you are rendering your image effect with Graphics.Blit(), not with some other method such as drawing a quad using the low-level graphics API. For the non-Graphics.Blit() case you will want to use UnityStereoTransformScreenSpaceTex(uv) instead, which also compiles out to a pass-through in non-Single-Pass Stereo but assumes that the UV will be used with a packed render texture.
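A minimal fragment shader sketch for the non-Graphics.Blit() case, assuming the quad’s UV covers the full screen and _MainTex is a packed (double-wide) render texture:

    fixed4 frag(v2f i) : SV_Target {
        // Remaps the full-screen UV into the current eye's half of the
        // packed render texture; a pass-through outside Single-Pass Stereo.
        float2 uv = UnityStereoTransformScreenSpaceTex(i.uv);
        return tex2D(_MainTex, uv);
    }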

Hope this helps.

Robin


Thanks for the explanation Robin, and glad to see you’re going to update the docs.

One more question: have the current Image Effects in 5.4 been upgraded to use single pass stereo, for example Global Fog?

Most of the Standard Assets Image Effects have been updated to support Single-Pass Stereo. There are a couple that we are still working on, which required us to expose more data from the engine to fix properly, but we hope to get these fixed up soon.

Global Fog should, I believe, work. However, I’m aware of a bug we still have to track down: in Single-Pass Stereo mode, everything rendered after it (things like the GUI) appears to render upside down.

Cheers

Robin

ok, thank you.

Any news regarding documentation and possible samples?

Is there any ETA for when the cinematic image effects package will support Single-Pass Stereo?

Sorry if this question is almost too basic, but how do you enable single pass stereo rendering to work with the Vive?

You can enable it in Player Settings.

Any luck with the Global Fog in Single-Pass Stereo issue? I submitted a bug report two months ago but haven’t received anything since. (Case 815234)


+1 for volume fog, that would be very useful

It would also be very useful if Screen Space Reflections worked with single pass stereo. Anyway, Unity 5.4 has been an awesome upgrade; with just single pass stereo and GPU instancing my framerate has skyrocketed, like having a new computer! Fantastic job, guys.

Global Fog is not working for me in 5.5.0f3 with Single Pass Stereo Rendering. I planned to use it in combination with Beautify, as that asset works great with Single Pass, but the one component still missing from my levels is fog :( Any updates, or possible issues with the version I am using? Thank you.

Is it documented yet? I can’t find it anywhere, and I really need a bit of documentation to adapt my HBAO asset for SPSR.

The post you just quoted has all the info you need to port to SPSR. If you need a proper example, you can look at the code in the post-processing stack, as most of the effects there support SPSR 🙂


Any news on getting Global Fog to work with single pass rendering (VR)?

Hi @Chman ,

I’ve been able to adjust all my UVs properly.

However, I’m still unable to get the proper resolution for rendering image effects in SPSR. I’m using command buffers, so the only way I can get the resolution at which I should render my post FX seems to be to read Camera.pixelWidth and Camera.pixelHeight.
The problem is that they never seem to return the proper resolution, at least not what’s really used internally for rendering.
The same goes for Unity 5.6 and 2017.1.

The first inconsistency I’ve noticed is that Camera.pixelWidth is not the same in Multi Pass as in Single Pass; in Single Pass it is half the Multi Pass width. I find this quite strange, as the resolution per eye should be the same in both Multi and Single Pass.

Second, Camera.pixelWidth and Camera.pixelHeight are not consistent between the Oculus SDK and OpenVR. Maybe this is expected, though I don’t see why, since the Oculus resolution stays the same.

The values returned always seem to be downscaled. The width and height are far lower than what’s used internally for rendering into textures, which makes sense with warping.

One way to get the proper resolution is to read destination.width and destination.height in OnRenderImage() (applying this resolution to my command buffer render textures fixes all the problems), but this is a no-go when using command buffers as it adds an unnecessary blit.

Is there something I’m missing? A workaround?


Hi,

I have used the functions described in the docs and still get a black image in the right eye with Single Pass Instanced in MockVR in Unity 2019.3.

Is there anything new I am missing?

Thanks in advance