I have a problem with spheres becoming extremely elongated as they approach the edge of the screen. If I put a halo around the sphere the halo does not become warped but the sphere does as the camera pans around. Can anyone point me in the direction of what’s going on here? Why is the halo not affected by the perspective distortion?
I’ll answer this myself. If you are having this issue where the mesh of your prefab looks distorted near the edge of the screen, just reduce your field of view. The most effective range appears to be around a 20–30 degree vertical FoV. If you absolutely have to have a wide FoV, you can apply a counter-distortion post-processing effect. Otherwise just go with a small FoV, and maybe increase the distance to the far clipping plane depending on what you are trying to do.
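To put rough numbers on that advice, here's a quick sketch (plain Python, not Unity code; `edge_elongation` and the 16:9 aspect are my own assumptions) estimating how much a small sphere gets stretched at the screen corner for a given vertical FoV:

```python
import math

def edge_elongation(vfov_deg, aspect=16 / 9):
    """Approximate radial stretch of a small sphere's silhouette at the
    screen corner under a linear perspective (pinhole) projection.
    A sphere at off-axis angle theta is elongated by roughly 1/cos(theta)."""
    half_v = math.radians(vfov_deg) / 2
    # Half-extents of the image plane at unit focal distance.
    tan_v = math.tan(half_v)
    tan_h = tan_v * aspect
    # Angle from the optical axis out to the screen corner.
    corner = math.atan(math.hypot(tan_h, tan_v))
    return 1 / math.cos(corner)

for fov in (20, 30, 60, 90):
    print(f"{fov:3d} deg vertical FoV -> corner elongation ~{edge_elongation(fov):.2f}x")
```

At 20–30 degrees the stretch stays within a few percent, while wide FoVs push past 2x, which matches what you see on screen.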
What you’re seeing is the expected result of a linear perspective projection, sometimes called a pinhole camera model.
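The stretch falls straight out of the pinhole model. This sketch (my own illustration; `projected_extents` and the 2-degree angular radius are made up for the example) projects a small distant sphere at various off-axis angles onto an image plane at unit distance and compares its radial and tangential footprints; the aspect ratio comes out to roughly 1/cos(theta):

```python
import math

def projected_extents(theta_deg, alpha_deg=2.0):
    """Project a distant sphere (angular radius alpha) centered at off-axis
    angle theta onto an image plane at unit distance (pinhole model).
    Returns the silhouette's radial and tangential half-extents."""
    theta = math.radians(theta_deg)
    alpha = math.radians(alpha_deg)
    # Radial half-extent: half the difference of tangents along the off-axis direction.
    radial = (math.tan(theta + alpha) - math.tan(theta - alpha)) / 2
    # Tangential half-extent: a small angular offset alpha lands alpha*sec(theta)
    # from the silhouette center on the image plane.
    tangential = alpha / math.cos(theta)
    return radial, tangential

for theta in (0, 20, 40, 60):
    r, t = projected_extents(theta)
    print(f"off-axis {theta:2d} deg: aspect ratio ~{r / t:.2f} "
          f"(1/cos = {1 / math.cos(math.radians(theta)):.2f})")
```

On-axis the silhouette is a circle; 60 degrees off-axis it's already roughly twice as long radially as it is tall. The halo presumably doesn't warp because it's drawn as a screen-space billboard rather than projected geometry.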
All GPUs render using a linear projection, be it perspective, orthographic, or anything in between. Except for raytracing, all hardware rendering is explicitly designed around it: rasterization, the primary method GPUs use for rendering geometry, requires it.
If you want spheres to remain circular in appearance, you either need to stick to a very low FoV or an orthographic camera, or you need to use post-processing to warp the linear perspective render. This usually requires rendering at a much higher resolution, or rendering multiple adjoining views to cover the area the warped view can see, to keep the post-warp resolution high enough not to look blurred. This is what VR does, because it needs to warp the rendered image to correct for the warping caused by the physical lenses in the headset.
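One classic counter-warp is a stereographic remap, since stereographic projection keeps spheres circular on screen. A minimal sketch of the per-pixel radius remap such a post-processing pass might use (my own illustration, not any engine's actual API):

```python
import math

def stereographic_to_linear(r_out):
    """For a radial coordinate r_out in the counter-distorted (stereographic)
    output image, return the radius to sample in the linear perspective
    render. Hypothetical post-process remap, assuming unit focal distance."""
    theta = 2.0 * math.atan(r_out / 2.0)  # angle from the optical axis
    return math.tan(theta)                # radius in the linear source render

for r in (0.0, 0.5, 1.0, 1.5):
    print(r, "->", round(stereographic_to_linear(r), 3))
```

Note how the sample radius grows much faster than linearly toward the edges: that's exactly why the source render needs extra resolution (or multiple adjoining views) out there, and why it breaks down entirely as the sampled angle approaches 90 degrees.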
In the video that I’m trying to post here (embeds are disabled, and the forum isn’t really helping by auto-embedding the link), they say the following:
“This is not simply a screen based effect- we’re not just warping a 2d render. This effect can only be done by having control over the geometry and applying transformation across the whole geometry, which is much more efficient and better quality”
Ah, right. I forgot some do it by warping the vertex positions. It’s similar to something like the curved world shaders out there.
But the GPU itself is still just rendering a linear projection. And while I’ll agree with the video that it’s “more efficient”, you need enough geometry tessellation in the scene to do the warping properly and to avoid PS1-like texture UV issues, and you can’t have t-junctions in your models without the possibility of seams popping open or geometry intersecting. You’ll also never be able to accurately reconstruct the world position from the camera depth texture, even if you know the exact warping algorithm, because only the vertex positions are warped, not the surface between them. That’s still linearly interpolated, and there’s no way for the depth texture or even the fragment shader to perfectly account for it, not without some much more expensive work.
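For reference, the vertex-warping idea can be sketched in a few lines. This is my own toy version of a curved-world-style warp (hypothetical function and curvature constant, not the video's actual method), written in plain Python rather than a vertex shader:

```python
def warp_vertex(pos, cam, curvature=0.002):
    """Curved-world-style vertex warp: bend geometry downward by the square
    of its horizontal distance from the camera. `pos` and `cam` are (x, y, z)
    tuples; `curvature` is a made-up constant for illustration."""
    dx = pos[0] - cam[0]
    dz = pos[2] - cam[2]
    dist_sq = dx * dx + dz * dz  # horizontal distance squared
    return (pos[0], pos[1] - curvature * dist_sq, pos[2])

# A vertex 100 units ahead of the camera gets pushed 20 units down.
print(warp_vertex((0.0, 0.0, 100.0), (0.0, 0.0, 0.0)))
```

Since only vertices pass through this function, a long quad with just four corners won't actually bend, which is the tessellation and interpolation problem described above.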
It’s a cool technique, and while seemingly novel, it actually used to be a common way of rendering point light shadow maps in the early days of real-time shadow rendering. One that fell out of favor as accuracy and the aforementioned mesh limitations became bigger concerns.