Convert object UV to screen space UV to apply distortion in a specific area

Hello community,

I’m stuck trying to implement the following effect:

  • I have a quad in the scene with a material and shader assigned
  • in the shader I stretch and distort the UVs in object space (0–1 range)
  • then I need to grab the screen color behind the quad and apply this UV distortion to those pixels (taking into account the position, rotation, and scale of the quad)

I’m using Shader Graph and the “Scene Color” node to get the pixel color from the background. If I plug my distorted UVs directly into this node then (obviously) I get the whole screen rendered on this quad (normalized screen-space UVs directly mapped to normalized object-space UVs).

I guess I need to somehow convert or remap my distorted object UVs to screen-space UVs and then plug them into the Scene Color node. But I cannot figure out how to do that. Please help!

First, this is a Shader Graph question, and there’s a Shader Graph specific forum you should ask these questions in.

Ignoring that, you cannot use the UVs of an object for screen space distortion. Using the object’s UVs as an input to the Scene Color node just means you’ll see the contents of the screen displayed on that object. You need to use the Screen Position node, and distort those. If you’re using the object’s UVs to derive the distortion, that might still work out okay as long as the object (and the camera) aren’t rotated.
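As a minimal sketch of that idea in a BIRP GrabPass fragment shader (the `_NoiseTex` and `_Strength` properties here are hypothetical examples, and `i.grabPos` is assumed to come from `ComputeGrabScreenPos` in the vertex shader):

```hlsl
// Sketch: distort the screen-space sample position, not the object UVs.
half4 frag (v2f i) : SV_Target
{
    // Derive an offset from the object's own UVs...
    float2 offset = (tex2D(_NoiseTex, i.uv).xy - 0.5) * _Strength;
    // ...but apply it to the screen-space position behind the quad.
    float2 screenUV = i.grabPos.xy / i.grabPos.w;
    return tex2D(_GrabTexture, screenUV + offset);
}
```

Because the offset is added in screen space, it ignores the quad's orientation, which is why this only looks right while the object and camera aren't rotated.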


Thanks bgolus! Sorry about Shader Graph, I just used it to quickly try different approaches. For me this is a more general issue, not specific to Shader Graph. It would be the same if I wrote the shader in a text editor.

I thought I could multiply the UVs by the MVP matrix to convert them from object to clip space, but apparently that doesn’t work.

I tried using Screen Position as you suggested and the background renders correctly, but the distortion is applied in screen space instead of object space. Basically it is the same problem, except in this case I need to convert the distortion from screen to object space…

This is what I’m trying to achieve:

You actually kind of had the right idea. The only thing you missed is that you’re not transforming a “UV” from object space to clip space, but a synthetic object-space position to clip space.

Shader "Unlit/ScreenSpaceUVDistortion"
{
    Properties
    {
        _Distortion ("Texture", 2D) = "white" {}
        _DistortionStrength ("Strength", Range(0,1)) = 0.5
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        LOD 100

        GrabPass {}

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
                float4 grabPos : TEXCOORD1;
            };

            sampler2D _GrabTexture;

            sampler2D _Distortion;
            float _DistortionStrength;

            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                o.grabPos = ComputeGrabScreenPos(o.pos);
                return o;
            }

            half4 frag (v2f i) : SV_Target
            {
                // Decode the distortion texture into a synthetic object space
                // position: the 0-1 texture value minus 0.5 matches the default
                // quad's -0.5 to 0.5 local xy range.
                float4 distortionPos = float4(tex2D(_Distortion, i.uv).xy - 0.5, 0.0, 1.0);
                // Transform it from object to clip space, then to grab pass
                // screen space, exactly like a vertex position.
                float4 clipSpaceDistortion = UnityObjectToClipPos(distortionPos);
                float4 grabDistortion = ComputeGrabScreenPos(clipSpaceDistortion);
                grabDistortion.xy /= grabDistortion.w;

                float2 grabUV = i.grabPos.xy / i.grabPos.w;
                grabUV = lerp(grabUV, grabDistortion.xy, _DistortionStrength);

                half4 col = tex2D(_GrabTexture, grabUV);
                return col;
            }
            ENDCG
        }
    }
}


The “UV texture” should be thought of as an encoded object-space texture. The code assumes you’re using the default quad mesh, which is aligned to the xy plane with coordinates in a -0.5 to 0.5 range. Since the texture stores a 0.0 to 1.0 value, subtracting 0.5 turns it into an “object space” position.
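Stripped to its essentials, the remap the fragment shader performs is just:

```hlsl
// 0-1 encoded texture value -> -0.5 to 0.5 local position on the quad's
// xy plane (the default quad's vertex range), with z = 0 and w = 1 so it
// can be transformed like a regular vertex position.
float2 encoded = tex2D(_Distortion, i.uv).xy;        // 0.0 .. 1.0
float4 objectPos = float4(encoded - 0.5, 0.0, 1.0);  // -0.5 .. 0.5
// A mesh with different local bounds would need a different remap here.
```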


It’s hard to explain how grateful I am :slight_smile:
Thank you so much bgolus! You are the best. As always :wink:

@bgolus any thoughts on why this would behave differently on Android? I’m using the Shader Graph implementation. I see that there are differences between clip space for DirectX and OpenGL, but I would have assumed the shader compiler would spit out proper versions.

Yeah, there’s some weirdness with screen-space UVs between DirectX and OpenGL. The BIRP version should work properly on both APIs, but the Shader Graph one might not, since I’m doing the “ComputeGrabScreenPos” function manually and without the additional API checks. See if it looks correct on OpenGL with that multiply by 1.0, 1.0 removed.

If it does, make a new Vector2 that uses 1.0 for x, and the Camera Node’s Z Buffer Sign as the Y and multiply by that.
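For reference, the per-API check that the built-in helper performs (and that a manual Shader Graph version is missing) looks roughly like the sketch below, simplified from UnityCG.cginc with the single-pass stereo branch omitted. The `Vector2(1.0, Z Buffer Sign)` multiply described above plays the role of the `scale` factor here:

```hlsl
// Simplified from UnityCG.cginc: convert a clip space position into the
// 0-1 screen space used to sample a GrabPass texture, flipping y on
// APIs where UVs start at the top (e.g. Direct3D).
inline float4 ComputeGrabScreenPos (float4 pos)
{
    #if UNITY_UV_STARTS_AT_TOP
        float scale = -1.0;
    #else
        float scale = 1.0;
    #endif
    float4 o = pos * 0.5;
    o.xy = float2(o.x, o.y * scale) + o.w;
    o.zw = pos.zw;
    return o;
}
```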


@bgolus that was it! Thanks so much!