Shader MVP matrices: what does vertex position mean at each step?

I’ve been scouring the web trying to find an explanation of what the position of a vertex actually means before and after each transformation in the standard MVP chain. After reading and trying some things out, I understand it in a general sense:

  • Initially, position is a multiple of the bounds of an object, in local object space
  • After M, position is in world coordinates
  • After V, position is in camera coordinates
  • After P, position is in screen coordinates
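
In Unity shader terms, that chain can be written out one step at a time instead of using the combined UNITY_MATRIX_MVP. A rough sketch (unity_ObjectToWorld replaces _Object2World in newer Unity versions):

    float4 worldPos = mul(unity_ObjectToWorld, v.vertex); // post-M: world space
    float4 viewPos  = mul(UNITY_MATRIX_V, worldPos);      // post-V: camera/view space
    float4 clipPos  = mul(UNITY_MATRIX_P, viewPos);       // post-P: clip space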

What I’m trying to figure out for each of these is what exactly the units mean. Here’s what I understand so far - if anything is incorrect or you can fill in the gaps, I’d appreciate any help :)

Initial

Units are -0.5 to 0.5 along the 3 axes, with -0.5 and 0.5 corresponding to the bounds of the object. No surprises there.

Post-M

Units are in Unity’s world coordinate system. No surprises there.

Post-V

Units are in some kind of camera coordinate system. X and Y appear to use the same scale (square), and a quad spanning an X and Y range of 1 in post-MV space doesn’t quite fill the screen from bottom to top. No idea what units Z is in.

Post-P

X and Y units are in viewport coordinates, with -1 to 1 being the range horizontally and vertically? No idea what units Z is in.

Once you’re in post-P space, it appears you can use _ScreenParams to get the number of pixels and convert from viewport to pixel coordinates in XY.
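
Something like this seems to work (a sketch using standard Unity built-ins; note the Y direction can flip between Direct3D and OpenGL):

    float4 clipPos = mul(UNITY_MATRIX_MVP, v.vertex);
    float2 ndc     = clipPos.xy / clipPos.w;               // perspective divide -> [-1, 1]
    float2 pixel   = (ndc * 0.5 + 0.5) * _ScreenParams.xy; // [0, width] x [0, height]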


So in a general sense, I’m looking for information on what the coordinates mean in each space, and what can be done with them.

I’m trying to create some custom billboarding. Here are a couple examples of what I hope to do with this information:

  • Create a billboard that appears the same size regardless of distance from the camera.
  • Create a billboard whose size is dependent on distance from the camera, but not linearly. E.g. a billboard that appears to be 0 pixels wide at 1000 game units, and 100 pixels wide at 10 game units, increasing along some kind of curve.

OpenGL resources (like the graphics “Red Book”) do a pretty good job of explaining this. It’s standard – nothing to do with Unity.

Initial is the raw model coords, straight from the modelling program. They can be anything, but obviously should be touching/surrounding (0,0,0). The tip of an animated orc’s nose is always (0, 3, 0.4) for every orc, every frame.

After MV, positions are in “world units” in the camera’s local coordinate system – roughly the same as Camera.main.transform.InverseTransformPoint(p), except that Unity’s view space flips Z so the camera looks down -Z. If the camera is 10 meters away, facing you, your position comes out as (0, 0, -10): negative Z is in front of the camera.
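
A minimal sketch of reading that post-MV position in a vertex shader:

    float3 viewPos   = mul(UNITY_MATRIX_MV, v.vertex).xyz; // camera-relative, in world units (meters)
    float  depth     = -viewPos.z;                         // the camera looks down -Z in view space
    float  distToCam = length(viewPos);                    // straight-line distance to the camera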

P accounts for the view angle and converts to generic viewport coordinates, as you say. Depending on the system that’s often -1 to 1 (so (0,0) is centered). The hardware converts that to pixels. In the same way, Z is now “normalized” depth.
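
One caveat worth making explicit: what comes out of P is clip space, i.e. homogeneous coordinates. The -1 to 1 range (normalized device coordinates) only appears after dividing by w, which the GPU does automatically between the vertex and fragment stages. A rough sketch:

    float4 clipPos = mul(UNITY_MATRIX_MVP, v.vertex); // clip space: x and y lie in [-w, w]
    float3 ndc     = clipPos.xyz / clipPos.w;         // NDC: x and y in [-1, 1]; z range is API-dependent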


The standard way to make something the same size at any depth is to set the projection matrix so it doesn’t shrink things with distance. In Unity, it’s simpler to just use a second orthographic camera (which builds the matrix you wanted for you). For funny depth-dependent stuff, the vertex shader should be able to check Z (in meters), or the distance length(xyz) (also in meters), after the MV step.
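
For the billboard examples in the question, a rough vertex-shader sketch – not a definitive implementation. It assumes a quad mesh with local x/y spanning -0.5..0.5 (Unity’s built-in Quad works) and a hypothetical _SizeFactor material property:

    #include "UnityCG.cginc"

    float _SizeFactor; // hypothetical property controlling on-screen size

    struct v2f { float4 pos : SV_POSITION; };

    v2f vert (appdata_base v)
    {
        v2f o;
        // View-space position of the object's pivot.
        float3 pivot = mul(UNITY_MATRIX_MV, float4(0, 0, 0, 1)).xyz;
        float dist = length(pivot); // meters from the camera

        // Multiplying by dist cancels the projection's 1/z shrink, so the
        // quad keeps a constant on-screen size. Swap in any curve of dist
        // for non-linear sizing (e.g. fading to 0 pixels at 1000 units).
        float scale = dist * _SizeFactor;

        // Build the corner in view space so the quad always faces the camera.
        float3 corner = pivot + float3(v.vertex.xy * scale, 0);
        o.pos = mul(UNITY_MATRIX_P, float4(corner, 1));
        return o;
    }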

I am learning transformations too and have run into a strange issue. After applying the MVP transformation in the vertex shader I expect to get vertex coordinates in the [-1, 1] range, but that is only true for an orthographic camera. With a perspective camera it is not. I don’t know exactly what values I get, but they are much bigger than 1. Could anyone please explain why this happens?

The answer above is wrong: after v = PVM * v0, v.x is not in [-1, 1]. Those values are clip-space coordinates, and they only fall into [-1, 1] after dividing by v.w (for an orthographic camera w is 1, which is why it appears to work there). The grey area in my screenshot is [-1, 1].

I used o.vertex = mul(UNITY_MATRIX_MVP, v.vertex); and output o.vertex as a color to do the experiment.
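
For reference, a sketch of the same experiment with the missing perspective divide added, so a perspective camera also lands in [-1, 1] (o.color is an assumed COLOR output in the v2f struct):

    float4 clip = mul(UNITY_MATRIX_MVP, v.vertex);
    o.vertex = clip;
    o.color  = float4(clip.xyz / clip.w * 0.5 + 0.5, 1); // remap NDC [-1, 1] to [0, 1] for display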