per-object screen space uv issue

Hey everyone,

I am currently trying to sample a texture in screen space. This works well:

float4 positionCS = vertexInput.positionCS / vertexInput.positionCS.w; // perspective divide
screenPos = ComputeScreenPos(positionCS).xy;
float aspect = _ScreenParams.x / _ScreenParams.y;
screenPos.x = screenPos.x * aspect; // correct for non-square screens

But I would like to constrain the UV position and scale based on the object’s position and distance from the camera. I found an example, but I ran into some issues and for the moment I don’t see how to fix them. Here’s the code:

float4 positionCS = vertexInput.positionCS / vertexInput.positionCS.w;
screenPos = ComputeScreenPos(positionCS).xy;
float aspect = _ScreenParams.x / _ScreenParams.y;
screenPos.x = screenPos.x * aspect;

// Recenter the screen space UVs on the object's pivot
float4 originCS = TransformObjectToHClip(float3(0.0, 0.0, 0.0));
originCS = originCS / originCS.w;
float2 originSPos = ComputeScreenPos(originCS).xy;
originSPos.x = originSPos.x * aspect;
screenPos = screenPos - originSPos;

// You can match the object's distance like this
float3 cameraPosWS = GetCameraPositionWS();
float3 originPosWS = TransformObjectToWorld(float3(0.0, 0.0, 0.0)); // takes a float3, not a float4
float d = distance(cameraPosWS, originPosWS);
screenPos *= d;

And here’s the issue I am facing. You can notice that when the object is near the screen edges, the texture starts to move. Is there a way to avoid that?

hairyspryarawana

I am using URP but this doesn’t really matter.

This seems to be related to the FOV and the associated distortion, but I don’t see a way to get rid of it for the moment.

There’s not really a solution for this artifact. Because of the perspective projection, as objects get further toward the sides of the screen you can see more of their back side, and in screen space the object gets stretched out, so the screen space distance between the object’s center point and its furthest extents increases.

Extreme example with a 140 degree fov.

But… the code you have above is also slightly wrong, so it’s worse than it should be! You don’t want to multiply the screen position by the distance, you want to multiply it by the depth. The easiest way to get that is to transform your object center’s world space position into view space and use the view space -z. It’s negated because on the GPU, view space is -Z forward, so -viewPos.z will give you a positive value for things in front of the camera. You could also use abs(viewPos.z).

*edit: the depth is also originCS.w in your example (before the divide by w)! That’ll work even better, since it’s also correct for orthographic views, where you don’t want to scale by the depth (originCS.w is 1.0 in that case).
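Putting that together, a rough sketch of the depth-based version, reusing the variable names and URP helpers from the code posted above (TransformWorldToView is the SRP Core helper, assuming it is in scope):

```hlsl
// Recenter on the object pivot as before, but scale by the depth
// rather than the world space distance.
float4 originCS = TransformObjectToHClip(float3(0.0, 0.0, 0.0));

// Clip space w BEFORE the perspective divide: this is the view space depth
// for perspective projections, and 1.0 for orthographic ones.
float depth = originCS.w;

// Equivalent alternative via view space:
// float3 originVS = TransformWorldToView(TransformObjectToWorld(float3(0.0, 0.0, 0.0)));
// float depth = -originVS.z; // view space is -Z forward

originCS = originCS / originCS.w;
float2 originSPos = ComputeScreenPos(originCS).xy;
originSPos.x *= _ScreenParams.x / _ScreenParams.y;

screenPos = (screenPos - originSPos) * depth; // replaces the distance-based scale
```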

Here’s an example of the same setup using the distance like your shader code rather than the depth.


Notice how the screen space grid is even changing scale at the corners! This is what you’re seeing, so it’s doubly bad.

Thanks a lot for the technical and visual explanations @bgolus !

After fixing the issue, the artifact is less visible but, as expected, still there. I found this post about Reducing stretch in high-FOV games using barrel distortion, might be worth a try, what do you think?

Won’t help here at all, not by itself at least.

That’s applying a distortion to the final rendered image to get something that “feels” less distorted as a static image. It can also make people sick in motion. Lots of games already do this to some subtle degree to get a specific visual style, and all VR rendering does something like this to correct for the distortion the physical lenses in the headsets introduce (it also reduces bandwidth requirements for the display).

When you’re computing the screen space position, that’s calculated using the original linear projection / pinhole camera model that all modern GPUs render with.

If you use both, all it means is you get a distorted screen space texture. This is a bad example because the math is wrong, but it gives you an idea of the distortion you’d see. This is just the above image with a Photoshop spherize applied on a larger square canvas.
[attached image: spherized version of the screenshot above]
The spheres now remain circles on screen, but see how the screen space texture starts to bend?

The solution to this is to do the “screen space” texturing in some other space, like view direction or spherical space, but there’s lots of problems there too.

The easiest option is to do something akin to camera facing UVs, where it’s using the vector from the camera position to the object center to determine the “screen space” UVs. But those distort like crazy unless you’re using a fish eye or barrel distortion post process.
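For reference, a rough sketch of those camera facing UVs (URP transform helpers as above; positionWS and the tangent basis construction here are my own assumptions, not code from this thread):

```hlsl
// Build a camera facing basis at the object's pivot, then project vertex
// positions onto it to get object-centered "screen space" UVs.
float3 originWS = TransformObjectToWorld(float3(0.0, 0.0, 0.0));
float3 viewDir = normalize(originWS - GetCameraPositionWS()); // camera -> object center
float3 right = normalize(cross(float3(0.0, 1.0, 0.0), viewDir));
float3 up = cross(viewDir, right);

// positionWS is the vertex world space position (assumed available here)
float3 offsetWS = positionWS - originWS;
float2 uv = float2(dot(offsetWS, right), dot(offsetWS, up));
```

These UVs stay locked to the object as it moves around the screen, which is exactly why they distort under a standard linear projection.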

Thanks again @bgolus for the detailed answer. I think the artifact isn’t visible enough to justify pushing things further and trying to fix this, at least for now.