Command Buffer Blit flipping render textures in scene view or game view

Hi All,
I have been working on a project for generating pixel-perfect sharp shadows. I am generating shadow volumes and using a custom render feature in URP 11 to inject the volumes into the _ScreenSpaceShadowmapTexture.

It works swimmingly, and I am able to get it to render fine in the game view.
The problem lies here:
When I blit the texture into the existing _ScreenSpaceShadowmapTexture, I use a custom blit shader that flips the y coordinate like this:

if (_ProjectionParams.x < 0)
    o.texcoord.y = 1 - o.texcoord.y;

This works out great in the game view, but it flips the texture in the scene view.
I know this is not the biggest problem, but it is quite annoying, and I would like to fix it.

Here is my render pass code for reference:
Render Pass

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

namespace StencilShadowGenerator.Core.RenderFeature
{
    class ShadowVolumeRenderPass : ScriptableRenderPass
    {
        private readonly Material _occluderMaterial;
        private readonly Material _shadowMaterial;
        private readonly Material _blitMaterial;
        private readonly ShaderTagId _volumeShader;
        private readonly List<ShaderTagId> _occluderShaders;
       
        private RenderTargetIdentifier _shadowMap;
        private RenderTargetHandle _tempTarget;
        private ShadowVolumeRenderingSettings _settings;
        private FilteringSettings _filteringSettings;

        public ShadowVolumeRenderPass(ShadowVolumeRenderingSettings settings)
        {
            _settings = settings;
           
            // set up render textures
            _tempTarget = RenderTargetHandle.CameraTarget;
            _shadowMap = new RenderTargetIdentifier(Shader.PropertyToID("_ScreenSpaceShadowmapTexture"));
           
            // set up materials
            _occluderMaterial = new Material(Shader.Find("Hidden/ShadowVolumes/White"));
            _shadowMaterial = new Material(Shader.Find("Hidden/ShadowVolumes/ShadowRender"));
            _blitMaterial = new Material(Shader.Find("Hidden/ShadowVolumes/BlitFlip"));

            // set up shader tags
            _volumeShader = new ShaderTagId("ShadowVolume");
            _occluderShaders = new List<ShaderTagId>
            {
                new ShaderTagId("UniversalForward"),
                new ShaderTagId("UniversalForwardOnly"),
                new ShaderTagId("LightweightForward"),
                new ShaderTagId("SRPDefaultUnlit")
            };

            // set up render pass and filter settings
            renderPassEvent = RenderPassEvent.BeforeRenderingOpaques;
            _filteringSettings = new FilteringSettings(RenderQueueRange.opaque);
        }

        public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
        {
            RenderTextureDescriptor cameraTextureDescriptor = renderingData.cameraData.cameraTargetDescriptor;
            cameraTextureDescriptor.depthBufferBits = 0;
            cmd.GetTemporaryRT(_tempTarget.id, cameraTextureDescriptor, FilterMode.Point);
            ConfigureTarget(_tempTarget.Identifier());
        }

        public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
        {
            ConfigureClear(ClearFlag.All, Color.white);
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            if (!_shadowMaterial) return;
           
            CommandBuffer cmd = CommandBufferPool.Get();
            using (new ProfilingScope(cmd, new ProfilingSampler("Shadow Volume Rendering")))
            {
                // prepare and clear buffer
                context.ExecuteCommandBuffer(cmd);
                cmd.Clear();
               
                // set matrices
                Camera camera = renderingData.cameraData.camera;
                cmd.SetViewProjectionMatrices(camera.worldToCameraMatrix, camera.projectionMatrix);
               
                // draw occluders
                DrawingSettings occluderSettings = CreateDrawingSettings(_occluderShaders,
                    ref renderingData, SortingCriteria.CommonOpaque);
                occluderSettings.overrideMaterial = _occluderMaterial;
                context.DrawRenderers(renderingData.cullResults, ref occluderSettings, ref _filteringSettings);
               
                // draw shadow volume stencil
                DrawingSettings volumeSettings = CreateDrawingSettings(_volumeShader,
                    ref renderingData, SortingCriteria.CommonOpaque);
                context.DrawRenderers(renderingData.cullResults, ref volumeSettings, ref _filteringSettings);
               
                // draw shadow material using fullscreen quad
                cmd.SetViewProjectionMatrices(Matrix4x4.identity, Matrix4x4.identity);
                cmd.DrawMesh(RenderingUtils.fullscreenMesh, Matrix4x4.identity, _shadowMaterial);
                cmd.SetViewProjectionMatrices(camera.worldToCameraMatrix, camera.projectionMatrix);
               
                // blit to shadow texture
                cmd.Blit(_tempTarget.Identifier(), _shadowMap, _blitMaterial);
            }

            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }

        public override void OnCameraCleanup(CommandBuffer cmd)
        {
            cmd.ReleaseTemporaryRT(_tempTarget.id);
        }
    }
}

Here is an example of the differences. Left is scene view, right is game view.


There’s a #define for it.

#if UNITY_UV_STARTS_AT_TOP
o.texcoord.y = 1 - o.texcoord.y;
#endif

Here is the result when using that define instead of my flip code.
It seems to have the exact same result :frowning:

Another thing you can try: because you’re in a render pass, you can call Blit(cmd, ...) instead of cmd.Blit(...). This causes it to call ScriptableRenderer.SetRenderTarget before the cmd.Blit().
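For reference, the in-pass version looks roughly like this (a sketch; the exact overload signature varies between URP versions):

```csharp
// Inside Execute(): ScriptableRenderPass.Blit routes through the renderer,
// calling ScriptableRenderer.SetRenderTarget before the underlying cmd.Blit,
// so URP gets a chance to account for any y-flip itself.
Blit(cmd, _tempTarget.Identifier(), _shadowMap, _blitMaterial);
```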

If that doesn’t work, I’d just hack it. Add a shader constant like “_EditorYFlip” and set it if !Application.isPlaying (it should have almost zero runtime overhead since the driver will compile it to a preshader, but if you want to be extra sure, use a macro define and a shader variant instead).
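A sketch of that hack on the C# side, assuming a hypothetical `_EditorYFlip` float property that the blit shader’s vertex stage reads (e.g. `o.texcoord.y = lerp(o.texcoord.y, 1 - o.texcoord.y, _EditorYFlip);`):

```csharp
// Set once before the blit. _EditorYFlip is a made-up property name;
// 1 enables the flip in the editor, 0 disables it in a player build.
_blitMaterial.SetFloat("_EditorYFlip", Application.isPlaying ? 0f : 1f);
```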

Hmmm, that’s unfortunate.
The Blit() method has the same result. It’s definitely strange that it works the opposite way in the scene view and game view.
Does this potentially change depending on the platform it’s built for? I sure hope not lol.

Also, if I were to manually change the flip setting like that, it would just be flipped in the scene view while the game is running, because Application.isPlaying is true even when the scene view renders. That’s not really optimal. I am making this as a tool for others and would definitely prefer not to have such a strange effect.

It seems a bit awkward that this is not well documented or accounted for. Is there a way for me to tag/ask a Unity dev about this? I might also just file a bug report.

There’s camera.cameraType == CameraType.SceneView. But that shouldn’t be needed; it should just work the same for all camera types.
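If you do end up special-casing it, a sketch of that check inside `Execute`, driving the hypothetical “_EditorYFlip” property mentioned earlier:

```csharp
// Flip only for scene-view cameras; game and preview cameras keep the
// default orientation. CameraType is a UnityEngine enum.
bool sceneView = renderingData.cameraData.camera.cameraType == CameraType.SceneView;
_blitMaterial.SetFloat("_EditorYFlip", sceneView ? 1f : 0f);
```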

Yup; that sounds like the right thing to do.

Out of curiosity, what are you doing for edge extrusion? My unfinished stencil shadow feature does that part in a compute shader. The one already on the asset store just renders every edge as a quad and lets the vertex shader make the non-light-facing edges degenerate (which breaks on some skinned meshes, but works on mobile and older hardware).

7602370--943639--upload_2021-10-25_16-52-16.jpg
ahhhh what a beautiful thing. cool looking shadows with ugly looking code lol

I have tried a few approaches. The best so far seems similar to the one on the asset store. I was using the job system/Burst compiler for a bit, but I removed that for now. If you look at the commit history you might find it lol.
https://github.com/rhedgeco/UnityShadowVolumeGenerator

Currently I do an incredibly inefficient nested loop to match up edges on a mesh.
I first split each triangle so that no vertices are shared, and then I generate quads between all the edges. It definitely costs more in memory (it’s actually not too bad), but it’s SUPER cheap at runtime when all I have to do is displace vertices that face away from the light source in the vertex shader.
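The triangle-splitting step described above can be sketched like this (assuming a plain `Mesh`; the names are illustrative):

```csharp
// Duplicate vertices so that no triangle shares a vertex with another.
// This costs memory, but lets the vertex shader displace each face
// independently of its neighbours at runtime.
Vector3[] srcVerts = mesh.vertices;
int[] srcTris = mesh.triangles;
var splitVerts = new Vector3[srcTris.Length];
var splitTris = new int[srcTris.Length];
for (int i = 0; i < srcTris.Length; i++)
{
    splitVerts[i] = srcVerts[srcTris[i]]; // one unique vertex per index
    splitTris[i] = i;
}
```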

Edit:
What I’m REALLY waiting for is the ability to batch draw calls using cmd.DrawMesh so that I can draw each shadow volume in a loop by just accessing the generated shadow mesh. Currently it has to be an object that exists in the scene with a renderer component so that I can use context.DrawRenderers().


That rendering method works fine for static meshes, but can break if you have skinned meshes with skinny arms and legs (like me). The problem is that per-vertex normals aren’t perfectly accurate anymore; you actually need the normals of triangles on either side of the edge which depend on 4 different vertex positions. That’s what eventually got me to do it in a compute shader (old games did it all on the CPU, but obviously that’s not scalable).

But as long as you don’t have skinned meshes (or all your skinned meshes are 2005-Unreal-Engine levels of swoll), the vertex shader method works, and it lets you take advantage of all the batching/culling already in the render pipelines. And since your method looks targeted at mobile/etc, compute shaders might not be available or fast enough anyway.

Edge matching can be sped up with a hash or by sorting them (Burst NativeMultiHashMap<> is kinda terrible, so I ended up using the latter): https://gitlab.com/burningmime/urpg/-/blob/master/Packages/shadow/src/internal/FindEdgesJob.cs
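As a sketch of the hashing variant (plain C#; `OnSharedEdge` is a hypothetical callback that pairs up the two matching half-edges):

```csharp
// Match opposite half-edges with a dictionary keyed on the undirected
// vertex-index pair, turning the O(n^2) nested loop into roughly O(n).
var edgeMap = new Dictionary<long, int>();
for (int t = 0; t < tris.Length; t += 3)
{
    for (int e = 0; e < 3; e++)
    {
        int a = tris[t + e];
        int b = tris[t + (e + 1) % 3];
        long key = a < b ? ((long)a << 32) | (uint)b
                         : ((long)b << 32) | (uint)a;
        if (edgeMap.TryGetValue(key, out int half))
            OnSharedEdge(half, t + e); // found the matching half-edge
        else
            edgeMap[key] = t + e;
    }
}
```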

It’s fun seeing different approaches to this, and it’s cool you took the time to write yours up as an actual tutorial. It can actually be used for soft shadows too, via a screen-space blur that takes the depth/normal of the center pixel into account. Performance is not terrible, especially compared to techniques that take multiple PCF samples, but of course you won’t get contact hardening.

I appreciate the support and tips!
And yeah, as it stands it REALLY doesn’t support skinned meshes, considering I just create a renderer with a custom-generated mesh at runtime. I’ll probably change that soon though lol.

Maybe it could still work with skinned meshes. If the problem is that normals get messed up during deformation, couldn’t you just run through and apply a fresh normal calculation? Then you could skip converting to an entirely different shadow volume generation method. I have no idea though, since I haven’t gotten there yet.

In my mind you already have access to the triangle list, and the order of the vertices determines face direction. So you could just do a cross product on the edge vectors and calculate perfect face normals for each triangle, right?

Exactly! But you need to do that every frame, based on the skinned positions of the triangles.
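A sketch of that per-frame recomputation, assuming the skinned result has been baked into a `Mesh` (e.g. via `SkinnedMeshRenderer.BakeMesh`):

```csharp
// Recompute a flat normal per triangle from the current skinned positions;
// the winding order of the indices determines which way each face points.
Vector3[] v = baked.vertices;
int[] tris = baked.triangles;
var faceNormals = new Vector3[tris.Length / 3];
for (int t = 0; t < tris.Length; t += 3)
{
    Vector3 n = Vector3.Cross(v[tris[t + 1]] - v[tris[t]],
                              v[tris[t + 2]] - v[tris[t]]);
    faceNormals[t / 3] = n.normalized;
}
```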

Well I guess I’ll investigate the speed of reprocessing normals every frame, and we will have a speed competition :wink: