How can I calculate the pixel to world unit ratio per-fragment? I thought at first it could be as simple as calculating the camera frustum plane size in world units at the current depth and then using the screen size in pixels to figure it out, but I think that may be a naive approach since I’m sure things like camera FOV and screen aspect ratio need to be factored in.
What I’m trying to do with this ratio is normalize the scale of my displacement shader: the further away from the camera you get, the weaker the distortion should ideally be, because the resolution of the surface gets smaller relative to screen space.
I think that’s right, but I haven’t tested it. If you use that to multiply a texture UV, for example, I think it’ll stay constantly scaled regardless of distance. For a pixel displacement you’ll want to divide the displacement amount by that pixel to world scale.
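A quick numeric sanity check of that idea, in plain Python rather than shader code (the helper name and the FOV/resolution numbers are just for illustration):

```python
import math

def world_units_per_pixel(depth, fov_y_deg, screen_height_px):
    """World-space size of one screen pixel at a given view depth,
    assuming a standard perspective projection with vertical FOV fov_y_deg."""
    frustum_height = 2.0 * depth * math.tan(math.radians(fov_y_deg) / 2.0)
    return frustum_height / screen_height_px

# At twice the depth, each pixel covers twice as many world units...
near = world_units_per_pixel(1.0, 60.0, 1080)
far = world_units_per_pixel(2.0, 60.0, 1080)
assert abs(far - 2.0 * near) < 1e-12

# ...so a displacement divided by this scale is half as strong at twice
# the depth, which is the falloff being described.
displacement = 0.1
assert displacement / far == (displacement / near) / 2.0
```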
Yep, that was a good catch. I missed the .z at the end of that line. Otherwise it’s exactly what the COMPUTE_EYEDEPTH macro is.
Yep. It will be a very large number. I did have it a little wrong though; it needed a fix to handle aspect ratio properly, beyond the obvious issue you noticed. Here’s a shader using the code to scale up a texture UV by distance so it’s always 1 pixel per texel.
@bgolus Yeah that seems right, because the higher the screen resolution is, the smaller worldUnitsPerPixel should get. But it must be some huge number or some really small (<1) number, because it smears the UVs.
Ok so it’s actually a really small number, so I need to somehow remap it between 1 and 0 and multiply my refractUVs by it, OR I need to remap it to 1-to-some-number and divide my UVs by it. But if it represents units per pixel then the upper limit would be infinity, right? Not sure how to scale it…
Ok, been thinking on it. Is the solution that I need to find the rate of change of the worldUnitsPerPixel as depth changes and then somehow apply that to my displacement UV scaling?
@bgolus While messing with this scaling problem I discovered that it looks like your formula doesn’t factor in the camera FOV? I expected the worldUnitsPerPixel to be lower near the edge of the screen with high FOVs (like 150 for example) but it seems to be the same on each fragment?
EDIT: Oops, I should have said it does factor in FOV but not the curvature of the lens?
So you need to think about what the various values mean.
What do the UVs represent? The UV for the grab pass is a 0 to 1 range from one side of the screen to the other.
What does a displacement of “1” mean? Is that one pixel, or 1 screen width, or 1 world space unit? What does that mean for being scaled by distance? By world space per pixel? Do the pixels even matter if you’re already in UV space?
The unity_CameraProjection is there to factor in camera FOV. The thing is real time rendering uses a flat projection, there’s no distortion in the corners like a real camera with a spherical lens would have. The “worldUnitsPerPixel” is constant at a fixed depth. Note, I’m using the term depth here explicitly. Depth and distance are different things.
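That flatness is easy to check numerically. A small Python sketch (not shader code; the helper name and the resolution/FOV numbers are illustrative) of how view-space x relates to pixel x at a fixed depth:

```python
import math

def world_x_at_depth(pixel_x, depth, fov_y_deg, width_px, height_px):
    """View-space x of a pixel column at a fixed view depth, for a flat
    (pinhole) perspective projection. fov_y_deg is the vertical FOV."""
    m11 = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    m00 = m11 * height_px / width_px  # horizontal term includes aspect ratio
    ndc_x = (pixel_x / width_px) * 2.0 - 1.0
    return (ndc_x / m00) * depth

# Width of one pixel in world units, near the center vs near the edge,
# at the same depth and a wide 150 degree FOV:
args = (5.0, 150.0, 1920, 1080)
center = world_x_at_depth(961, *args) - world_x_at_depth(960, *args)
edge = world_x_at_depth(1919, *args) - world_x_at_depth(1918, *args)
assert abs(center - edge) < 1e-9  # constant across the screen at fixed depth
```

Because the projection is a linear map at fixed depth, the per-pixel world size is the same at the center and the edge, even at 150 degrees; a real spherical lens would not behave this way.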
@bgolus I did a bit of research and now I understand that Unity only has flat projection and that actually creates distortion near the edges at high FOV values. So if that’s true, you mentioned distance vs depth, does that mean I should actually use the distance from the fragment in view space to where it gets projected to on the near clipping plane instead of depth? If that’s even possible I mean.
I also understand what you’re saying w/r/t the displacement UVs. I need to decide what a displacement of “1” means. I don’t know how to express it other than this: if I have a sphere with a diameter of 1 world unit, then a pixel with a displacement of (1, 0) should sample the pixel 1.66 world units to its right (relative to the camera). Here’s a picture of what I mean:
So I would need to figure out how much to adjust the current sceneUV based on the worldUnitsPerPixel and the screen resolution, right? Then I guess multiply it by 1.66? I’m going to need to work through this one…
No. Depth is the correct thing to use. However I assume you’re trying to replicate refraction, in which case it’s all a big hack to do the offset grabpass thing anyway, so maybe it would give you results you find more pleasing.
The screen resolution then isn’t a factor at all. You’re dealing with UVs and world space. Never does the actual pixel count need to be thought about, apart from correcting for aspect ratio in the offset.
// displacement offset direction
float2 offsetVector = normalize(viewNormal).xy; // or however you're calculating this
offsetVector.y *= _ProjectionParams.x; // note, y might be upside down from view space to UVs, this should flip it?
// get world space offset in UV space
float2 refractUVOffset = (offsetVector * offsetWorldDistance) / (depth * unity_CameraProjection._m11);
// correct for aspect ratio
refractUVOffset.x *= _ScreenParams.x / _ScreenParams.y; // might have this backwards, or should be refractUVOffset.y? I can never remember.
// add offset to grab tex UVs
sceneUVs += refractUVOffset;
The _ProjectionParams.x line might need to be replaced with this instead. I can never remember which to do when until I try it.
@bgolus So taking your advice, specifically the line float2 refractUVOffset = (offsetVector * offsetWorldDistance) / (depth * unity_CameraProjection._m11); gives me the desired behavior if I’m at roughly 16:9 and at 60 FOV (Editor default). Unfortunately changing the width or the FOV causes the refractUVOffset amount to be too large or too small.
I think I’m missing something that relates the FOV to the screen size ratio? I need to somehow normalize the scaling so at any width/FOV I get the same effect as at 16:9 and 60 degrees.
@bgolus Ok I think I figured out what’s missing. refractUVOffset.y *= _ScreenParams.x / _ScreenParams.y; turns the RenderTexture from a square into the proper shape of the screen so that displacement is correctly proportional.
However it doesn’t account for how the FOV affects the scale of objects in that RenderTexture. Changing the aspect ratio in the Game view doesn’t seem to affect the perspective; it sort of just crops the image. So I think I don’t need to relate FOV to aspect ratio in this additional correction, but then again I don’t really know how FOV affects the projection in Unity.
So to move forward I think I need to understand: What exactly does unity_CameraProjection._m11 represent?
Honestly, whenever I’m doing something like this I have to spend some time remembering what it all is too.
However the basic answer is that _m00 and _m11 are the parts of the projection matrix responsible for the horizontal and vertical FOV. They are not the FOV in and of themselves, but they hold some useful values derived from the FOV.
Let’s step back a little bit and think about what you’re trying to do. You want to know how big 1 world unit is in screen space at a given depth. To know that you need to know the FOV. If you know the FOV and the depth you can find out how wide the view is with some basic trig. This is the TOA of your old right-triangle SOHCAHTOA trig.
tan(angle) = opposite / adjacent
We want the width (opposite) of the screen at a given depth (adjacent) with a specific FOV (angle), so we can solve that with:
tan(FOV / 2) * depth * 2 = screen width
Once you have that screen width, 1 / screenWidth gives you the size of 1 world unit in screen space, at least along one axis. The problem is the FOV is only correct for either the width or the height, and you have to know which. Also tan() is kind of expensive in a shader.
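You can sanity check that formula with a couple of easy angles. A minimal Python version (the function name and sample values are just for illustration):

```python
import math

def frustum_width(fov_deg, depth):
    """tan(FOV / 2) * depth * 2, with FOV in degrees. Use the horizontal
    FOV for width and the vertical FOV for height."""
    return math.tan(math.radians(fov_deg) / 2.0) * depth * 2.0

# A 90 degree FOV at depth 1 sees exactly 2 world units across,
# since tan(45 deg) = 1:
assert abs(frustum_width(90.0, 1.0) - 2.0) < 1e-12

# And 1 / width is the fraction of the screen one world unit covers:
width = frustum_width(90.0, 10.0)  # 20 units wide at depth 10
assert abs(1.0 / width - 0.05) < 1e-12
```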
In that link they describe it as 1 / tan((FOV / 2) * (pi / 180)) because they’re converting from degrees to radians, which I’m glossing over. They’re also using a square aspect ratio in that example, so the FOV is equal for both the horizontal and vertical axes.
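Here’s that 1/tan relationship checked numerically in Python (the function name is just for illustration; for a standard perspective matrix _m11 is the cotangent of half the vertical FOV):

```python
import math

def projection_m11(fov_y_deg):
    """1 / tan((FOV / 2) * (pi / 180)): the _m11 value derived from a
    vertical FOV given in degrees."""
    return 1.0 / math.tan((fov_y_deg / 2.0) * (math.pi / 180.0))

# Editor-default 60 degree FOV: 1 / tan(30 deg) = sqrt(3)
assert abs(projection_m11(60.0) - math.sqrt(3.0)) < 1e-12

# 90 degree FOV: 1 / tan(45 deg) = 1, i.e. frustum height equals 2x depth
assert abs(projection_m11(90.0) - 1.0) < 1e-12

# _m00 is the same value corrected for aspect ratio; for the square
# aspect in that example the horizontal and vertical terms are equal.
m00_square = projection_m11(60.0) * (1.0 / 1.0)
assert m00_square == projection_m11(60.0)
```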
So that’s the basics. Now for answering your most recent questions… I have no idea what that second line is actually doing, or why that would work for you. However it might be stretching because _m11 might be negative (Unity does lots of odd stuff with flipping the projection to deal with different platforms), and the power of 2 is going to make that value positive. That’s just a guess though.