I am trying to write a very simple shader that displays a texture on an object in screen space.
Much like in the legacy render pipeline, I convert the object-space vertex coordinates to clip space, then to viewport space, and use the latter as UVs for sampling my texture.
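Conceptually, the mapping I'm after is this (a minimal sketch; variable names are mine, and I'm assuming the usual URP transform helpers are in scope):

```hlsl
// Clip space -> NDC -> [0,1] viewport space, reused as texture UVs.
float4 positionCS = TransformObjectToHClip(positionOS); // object -> clip space
float2 ndc = positionCS.xy / positionCS.w;              // perspective divide to NDC [-1,1]
float2 uv  = ndc * 0.5 + 0.5;                           // remap [-1,1] to [0,1]
```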
The behaviour is interesting: if I zoom far enough away, it works! The picture on the right is from super far away; the picture on the left is a close-up.
In the middle of the left picture there are a cube, a sphere and a quad with that same material. If it were working, they would be indistinguishable from the plane, but you can actually make them out here. All objects are Unity's defaults.
Does anyone have an idea of what could be going on?
First, why are you dividing output.vertex by output.vertex.w before calling ComputeScreenPos? That function explicitly requires that the divide not be done for it to work. You should be passing the unmodified output.vertex into it.
Second, that function returns a float4 value for a reason. You need to pass it to the fragment shader as a float4, not as a float2.
Really you only need the xy and w values, which is why some Unity shaders reuse the z component for other stuff.
Third, you need to do the divide by w in the fragment shader when calling SAMPLE_TEXTURE2D.
half3 col = SAMPLE_TEXTURE2D(_BaseMap, sampler_BaseMap, input.screenPos.xy / input.screenPos.w).rgb;
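Putting those three points together, a minimal URP pass might look something like this. This is a sketch, not your actual shader: the struct and variable names are illustrative, and it assumes Core.hlsl is included and _BaseMap / sampler_BaseMap are declared with TEXTURE2D / SAMPLER:

```hlsl
struct Attributes
{
    float4 positionOS : POSITION;
};

struct Varyings
{
    float4 positionCS : SV_POSITION;
    float4 screenPos  : TEXCOORD0; // full float4, no divide in the vertex stage
};

Varyings vert(Attributes input)
{
    Varyings output;
    output.positionCS = TransformObjectToHClip(input.positionOS.xyz);
    // Pass the unmodified clip-space position to ComputeScreenPos.
    output.screenPos = ComputeScreenPos(output.positionCS);
    return output;
}

half4 frag(Varyings input) : SV_Target
{
    // Perspective divide happens here, per fragment, after interpolation.
    float2 uv = input.screenPos.xy / input.screenPos.w;
    half3 col = SAMPLE_TEXTURE2D(_BaseMap, sampler_BaseMap, uv).rgb;
    return half4(col, 1);
}
```

Doing the divide per fragment matters because screenPos is interpolated linearly across the triangle; dividing in the vertex shader bakes the perspective divide in before interpolation, which is exactly the distance-dependent distortion you're seeing.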
Ah! Thank you so much!
I got confused by this method, which I found by searching for the function in a new project:
That’s in the shader Universal Render Pipeline/2D/Sprite-Lit-Default.
So, if you happen to know, I'd be super curious why they divide positionCS by w in the vertex function rather than in the frag function, considering the weird output that produces?