But I would like to be able to constrain the UV position and scale based on the object’s position and distance from the camera. I found some examples, but I also ran into some issues and for the moment I don’t see how to fix them. Here’s the code:
There’s not really a solution for this artifact. Because of the perspective projection, as objects get further to the sides you can see more of the back side of the object, and in screen space the object gets stretched out, so the screen space distance between the object’s center point and its furthest extents increases.
But… the code you have above is also slightly wrong, so it’s worse than it should be! You don’t want to be multiplying the screen position by the distance; you want to multiply it by the depth. The easiest way to get that is to transform your object center world space position into view space and use the view space -z. That’s negated because on the GPU view space is -Z forward, so -viewPos.z will get you a positive value for things in front of the camera. You could also try abs(viewPos.z).
*edit: the depth is also the originCS.w in your example! That’ll work better since that’s also correct for orthographic views, where you don’t want to scale by the depth at all (originCS.w is 1.0 in that case).
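Something along these lines (a rough sketch only, using Unity URP’s transform helpers; originWS, screenUV, and scaledUV are placeholder names, not the variables from your shader):

```hlsl
// Object center in clip space. TransformWorldToHClip is URP's helper;
// the equivalent is mul(UNITY_MATRIX_VP, float4(originWS, 1.0)).
float4 originCS = TransformWorldToHClip(originWS);

// originCS.w is the view space depth for a perspective camera and 1.0 for an
// orthographic one, so it handles both cases for free.
float depth = originCS.w;

// Perspective-only alternative: view space is -Z forward on the GPU, so negate
// (or abs) the view space z to get a positive depth in front of the camera.
// float depth = -TransformWorldToView(originWS).z;

// Object center as a 0-1 screen UV (y may need flipping depending on platform).
float2 originUV = originCS.xy / originCS.w * 0.5 + 0.5;

// Recenter the screen UVs on the object and scale by the depth, not the distance.
float2 scaledUV = (screenUV - originUV) * depth;
```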
Here’s an example of the same setup using the distance, like your shader code, rather than the depth.
That’s applying a distortion to the final rendered image to get something that “feels” less distorted for a static image. It can also make people sick in motion. Lots of games already do this to some subtle degree to get a specific visual style, and all VR rendering does something like this to correct for the distortion the physical lenses in the headsets introduce (it also reduces bandwidth requirements for the display).
When you’re computing the screen space position, that’s being calculated using the original linear projection / pinhole camera model that all modern GPUs render with.
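For reference, the usual derivation looks something like this (sketch only, Unity-style macro names assumed):

```hlsl
// The screen space UV comes straight from the standard linear (pinhole) projection,
// so any fisheye / barrel distortion applied as a post process never shows up here.
float4 positionCS = mul(UNITY_MATRIX_VP, float4(positionWS, 1.0)); // linear perspective projection
float2 ndc = positionCS.xy / positionCS.w;                         // perspective divide, -1 to +1
float2 screenUV = ndc * 0.5 + 0.5;                                 // remap to 0-1 screen UVs
```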
If you use both, all it means is you get a distorted screen space texture. This is a bad example because the math is wrong, but it gives you an idea of the distortion you’d see. This is just taking the above image and applying a Photoshop spherize on a larger square canvas.
The spheres now remain circles on screen, but see how the screen space texture starts to bend?
The solution to this is to do the “screen space” texturing in some other space, like view direction or spherical space, but there are lots of problems there too.
The easiest option is to do something akin to camera facing UVs, where you use the vector from the camera position to the object center to determine the “screen space” UVs. But those distort like crazy unless you’re using a fisheye or barrel distortion post process.
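A hedged sketch of that idea (originWS and positionWS are illustrative names; _WorldSpaceCameraPos is Unity’s built-in camera position):

```hlsl
// Build a plane facing the camera through the object's center and project onto it,
// instead of projecting onto the screen.
float3 viewDir = normalize(originWS - _WorldSpaceCameraPos);        // camera -> object center
float3 right   = normalize(cross(float3(0.0, 1.0, 0.0), viewDir));  // breaks if viewDir is nearly vertical
float3 up      = cross(viewDir, right);

float3 offset  = positionWS - originWS;                             // fragment relative to object center
float2 uv      = float2(dot(offset, right), dot(offset, up));       // "screen space"-like UVs in world units
```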