Rendering to VR Viewport in SRP

I am trying to create a custom SRP that renders to a VR camera, but I cannot find any documentation on it. I have currently implemented what the LWRP does, but I am getting the warning "Scriptable Render Pipeline stereo support requires single-pass stereo." even though single-pass is enabled, and all that is output is a black screen. This call returns false when I try to get the culling parameters:

!camera.TryGetCullingParameters(IsStereoEnabled(camera), out cullingParameters)

which is causing a black screen to be output in the game window. Here is the full code of the render function:

   void RenderSingleCamera(ScriptableRenderContext context, Camera camera)
    {
        // Culling
        ScriptableCullingParameters cullingParameters;
        bool isStereo = IsStereoEnabled(camera);
        // If camera is invalid do something
        if (!camera.TryGetCullingParameters(IsStereoEnabled(camera), out cullingParameters))
            return;

        cullingParameters.shadowDistance = Mathf.Min(shadowDistance, camera.farClipPlane);
       
        // Debug sampling for frame
        cameraBuffer.BeginSample("Main Camera");
       

#if UNITY_EDITOR
        if (camera.cameraType == CameraType.SceneView)
        {
            ScriptableRenderContext.EmitWorldGeometryForSceneView(camera);
        }
#endif

        cullResults = context.Cull(ref cullingParameters);

           
        if (cullResults.visibleLights.Length > 0)
        {
            // Setup lights
            ConfigureLights();

            if (mainLightExists)
                RenderCascadedShadows(context);
            else
            {
                cameraBuffer.DisableShaderKeyword(cascadedShadowsHardKeyword);
                cameraBuffer.DisableShaderKeyword(cascadedShadowsSoftKeyword);
            }

            // Generate Shadow Map
            RenderShadowMap(context);
        }
        else // No visible lights
        {
            cameraBuffer.SetGlobalVector(lightIndicesOffsetAndCountId, Vector4.zero);
            cameraBuffer.DisableShaderKeyword(cascadedShadowsHardKeyword);
            cameraBuffer.DisableShaderKeyword(cascadedShadowsSoftKeyword);
        }
       

        // Get camera VP matrices
        context.SetupCameraProperties(camera, IsStereoEnabled(camera));

#if UNITY_EDITOR
        if(UnityEditor.Handles.ShouldRenderGizmos())
        {
            context.DrawGizmos(camera, GizmoSubset.PreImageEffects);
            context.DrawGizmos(camera, GizmoSubset.PostImageEffects);
        }
#endif

        if (IsStereoEnabled(camera))
        {
            context.StartMultiEye(camera);
        }

        /*
        // get post process stack attached to camera if there is one
        var SRPCamera = camera.GetComponent<SRPCamera>();
        SRPPostProcessingStack activeStack = SRPCamera ? SRPCamera.PostProcessingStack : defaultStack;
        */
        SRPPostProcessingStack activeStack = null;

        bool scaledRendering = renderScale < 1f && camera.cameraType == CameraType.Game;

        int renderWidth = camera.pixelWidth;
        int renderHeight = camera.pixelHeight;
        if(scaledRendering)
        {
            renderWidth = (int)(renderWidth * renderScale);
            renderHeight = (int)(renderHeight * renderScale);
        }

        bool renderToTexture = scaledRendering || activeStack;

        // Render to texture
        if (renderToTexture)
        {
            cameraBuffer.GetTemporaryRT(cameraColourTextureId, renderWidth, renderHeight, 0, FilterMode.Bilinear);
            cameraBuffer.GetTemporaryRT(cameraDepthTextureId, renderWidth, renderHeight, 24, FilterMode.Point, RenderTextureFormat.Depth);
            cameraBuffer.SetRenderTarget(cameraColourTextureId, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store,
                                         cameraDepthTextureId, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store);
        }

        // Clear framebuffer
        CameraClearFlags clearFlags = camera.clearFlags;
        cameraBuffer.ClearRenderTarget((clearFlags & CameraClearFlags.Depth) != 0, (clearFlags & CameraClearFlags.Color) != 0, camera.backgroundColor);
        context.ExecuteCommandBuffer(cameraBuffer);
        cameraBuffer.Clear();


        // Pass uniforms to shader
        cameraBuffer.SetGlobalVectorArray(visibleLightColoursId, visibleLightColours);
        cameraBuffer.SetGlobalVectorArray(visibleLightDirectionsOrPositionsId, visibleLightDirectionsOrPositions);
        cameraBuffer.SetGlobalVectorArray(visibleLightAttenuationsId, visibleLightAttenuations);
        cameraBuffer.SetGlobalVectorArray(visibleLightSpotDirectionsId, visibleLightSpotDirections);
        cameraBuffer.SetGlobalVectorArray(visibleLightOcclusionMasksId, visibleLightOcclusionMasks);
        globalShadowData.z = 1f - cullingParameters.shadowDistance * globalShadowData.y;
        cameraBuffer.SetGlobalVector(globalShadowDataId, globalShadowData);
        context.ExecuteCommandBuffer(cameraBuffer);
        cameraBuffer.Clear();


        // Setup render criteria
        SortingSettings sortSettings = new SortingSettings(camera);
        sortSettings.criteria = SortingCriteria.CommonOpaque;

        DrawingSettings drawSettings = new DrawingSettings(new ShaderTagId("SRPDefaultUnlit"), sortSettings);
        drawSettings.enableDynamicBatching = useDynamicBatching;
        drawSettings.enableInstancing = useInstancing;
        // Pass through light indices for GPU
        drawSettings.perObjectData |= PerObjectData.LightData | PerObjectData.LightIndices;
        drawSettings.perObjectData |= PerObjectData.ReflectionProbes | PerObjectData.Lightmaps | PerObjectData.LightProbe | PerObjectData.ShadowMask | PerObjectData.OcclusionProbe | PerObjectData.OcclusionProbeProxyVolume;

        FilteringSettings filterSettings = new FilteringSettings(RenderQueueRange.opaque);
        context.DrawRenderers(cullResults, ref drawSettings, ref filterSettings);

        context.DrawSkybox(camera);

        if(activeStack)
        {
            activeStack.RenderAfterOpaque(postProcessingBuffer, cameraColourTextureId, cameraDepthTextureId, renderWidth, renderHeight);
            context.ExecuteCommandBuffer(postProcessingBuffer);
            postProcessingBuffer.Clear();
            cameraBuffer.SetRenderTarget(cameraColourTextureId, RenderBufferLoadAction.Load, RenderBufferStoreAction.Store,
                                        cameraDepthTextureId, RenderBufferLoadAction.Load, RenderBufferStoreAction.Store);
            context.ExecuteCommandBuffer(postProcessingBuffer);
            postProcessingBuffer.Clear();
        }

        // Draw Transparent Objects
        sortSettings.criteria = SortingCriteria.CommonTransparent;
        filterSettings.renderQueueRange = RenderQueueRange.transparent;
        context.DrawRenderers(cullResults, ref drawSettings, ref filterSettings);

        // Draw non-unlit shaded objects
        DrawDefaultPipeline(context, camera);

        if (renderToTexture)
        {
            // Post processing
            if (activeStack)
            {
                activeStack.RenderAfterTransparent(postProcessingBuffer, cameraColourTextureId, cameraDepthTextureId, renderWidth, renderHeight);
                context.ExecuteCommandBuffer(postProcessingBuffer);
                postProcessingBuffer.Clear();
            }
            else
                cameraBuffer.Blit(cameraColourTextureId, BuiltinRenderTextureType.CameraTarget);

            cameraBuffer.ReleaseTemporaryRT(cameraColourTextureId);
            cameraBuffer.ReleaseTemporaryRT(cameraDepthTextureId);
        }

        cameraBuffer.EndSample("Main Camera");
        context.ExecuteCommandBuffer(cameraBuffer);
        cameraBuffer.Clear();


        if (IsStereoEnabled(camera))
        {
            context.StopMultiEye(camera);
            context.StereoEndRender(camera);
        }


        context.Submit();

       
       
        // Release Shadow Map
        if(shadowMap)
        {
            RenderTexture.ReleaseTemporary(shadowMap);
            shadowMap = null;
        }
        if (cascadeShadowMap)
        {
            RenderTexture.ReleaseTemporary(cascadeShadowMap);
            cascadeShadowMap = null;
        }
    }

Fixed it: the cameras had an invalid viewport rectangle.
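For anyone else hitting this: the fix above suggests a quick sanity check before culling. The snippet below is a hypothetical helper (the name `HasValidViewport` is mine, not part of any Unity API) that flags a camera whose normalized viewport rect is empty or falls outside [0, 1], which is what made `TryGetCullingParameters` return false here.

```csharp
// Hypothetical sanity check: warn when a camera's normalized viewport
// rect is degenerate or out of bounds, which breaks stereo culling setup.
static bool HasValidViewport(Camera camera)
{
    Rect r = camera.rect;
    return r.width > 0f && r.height > 0f &&
           r.x >= 0f && r.y >= 0f &&
           r.xMax <= 1f && r.yMax <= 1f;
}
```

Usage inside the render loop, before culling:

```csharp
if (!HasValidViewport(camera))
    Debug.LogWarning($"Camera '{camera.name}' has an invalid viewport rect {camera.rect}.");
```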

I’m running into the same issue. What was your fix?

I was missing “cmd.SetRenderTarget(BuiltinRenderTextureType.CurrentActive);” for multi-pass.

But single-pass won’t even enable with the MockHMD, and on the Quest only the left eye renders in single-pass. Multi-pass works fine on the Quest.

Reference for others: after you call “SetupCameraProperties”, choose what to do depending on the following…

For MultiPass to work on PC, you need to make sure you set the viewport:

cmd.SetViewport(camera.pixelRect);

For SinglePass, SinglePass Instanced, or MultiView to work on PC, Quest, etc., you need to use this clear method instead:

CoreUtils.SetRenderTarget(cmd, BuiltinRenderTextureType.CurrentActive, ClearFlag.All, camera.backgroundColor.linear);
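Putting those two cases together, here is a rough sketch of how the target setup after `SetupCameraProperties` could branch. This is my reading of the steps above, not a verified implementation; `cmd` and `IsStereoEnabled` follow the naming used earlier in the thread.

```csharp
// Sketch of stereo-aware target setup after SetupCameraProperties.
// Assumes `cmd` is a CommandBuffer and IsStereoEnabled(camera) is the
// same helper used in the render function earlier in the thread.
context.SetupCameraProperties(camera, IsStereoEnabled(camera));

if (IsStereoEnabled(camera))
{
    context.StartMultiEye(camera);
    // SinglePass / SinglePass Instanced / MultiView: rebind the active
    // stereo target and clear it through CoreUtils so both eyes are handled.
    CoreUtils.SetRenderTarget(cmd, BuiltinRenderTextureType.CurrentActive,
                              ClearFlag.All, camera.backgroundColor.linear);
}
else
{
    // MultiPass on PC: the per-eye viewport must be set explicitly.
    cmd.SetRenderTarget(BuiltinRenderTextureType.CurrentActive);
    cmd.SetViewport(camera.pixelRect);
    cmd.ClearRenderTarget(true, true, camera.backgroundColor);
}
context.ExecuteCommandBuffer(cmd);
cmd.Clear();
```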

I would just like to thank you, this really helped me!

Any idea what the difference between CommandBuffer.SetRenderTarget and CoreUtils.SetRenderTarget is?

If you notice, CoreUtils.SetRenderTarget actually takes in a CommandBuffer object, so my guess is that it is doing CommandBuffer.SetRenderTarget with some extra logic. I don’t see anything in the docs that states what the difference is, but it must have some CPU- or GPU-side logic that takes into account rendering two eyes on a single surface.
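For what it’s worth, reading the SRP Core package source suggests `CoreUtils.SetRenderTarget` is roughly a `SetRenderTarget` followed by a clear driven by the `ClearFlag`. A simplified sketch follows; it is not the actual implementation, which also handles load/store actions, mip levels, and depth slices.

```csharp
// Simplified sketch of what CoreUtils.SetRenderTarget appears to do.
// The real version in the SRP Core package has many overloads that also
// take load/store actions, a mip level, a depth slice, etc.
public static void SetRenderTarget(CommandBuffer cmd, RenderTargetIdentifier buffer,
                                   ClearFlag clearFlag, Color clearColor)
{
    cmd.SetRenderTarget(buffer);
    if (clearFlag != ClearFlag.None)
    {
        cmd.ClearRenderTarget(
            (clearFlag & ClearFlag.Depth) != 0,   // clear depth?
            (clearFlag & ClearFlag.Color) != 0,   // clear color?
            clearColor);
    }
}
```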

I’ll probably open-source my SRP later, where I’ve figured out some things. You can also copy URP directly into a Unity project and then debug / view what it does in some special places. I think this may be where I saw it.

Also, modern mobile GPUs and APIs use something called render passes and tiled rendering. These take in clear colors and render-target states used in tiled rendering.

Notice how CoreUtils.SetRenderTarget takes in a clear color, etc. It’s likely setting up some state required for tiled rendering in Unity’s system. Maybe the single-pass rendering modes flip a switch in Unity so it starts using render passes and thus needs some extra state to be set.

This is speculation, but I’ve been doing native D3D12 and Vulkan work, so it seems like something that might make sense.