How to place vertex at (x,y,10.0f) in ScreenSpace (i.e. along camera.forward) in the vertex shader?

Hi there, I need help understanding how to do the following. In the vertex shader I would like to set the Z position of each vertex, in ScreenSpace, to 10.0f (i.e. at a distance of 10 along the camera's forward direction from the screen). In abstract terms, this means all vertices will be placed on a plane that is parallel to the screen and lies at a distance of 10.0f from it.

Outside shaders, one could do it with the following:

//wPos is the WorldSpace position of the vertex
Vector3 sPos = Camera.main.WorldToScreenPoint(wPos); //sPos is the ScreenSpace position
sPos = new Vector3(sPos.x, sPos.y, 10f); //set the new Z position in ScreenSpace
wPos = Camera.main.ScreenToWorldPoint(sPos); //convert back from Screen to World Space

Now, how can I accomplish the same thing within a vertex shader?
I tried the following, but it does not work:

    GS_INPUT VS_Main(entershader v)
    {
        GS_INPUT output = (GS_INPUT)0;
        v.vertex.z = 10.0f;
        output.pos = mul(_Object2World, v.vertex);
        return output;
    }

Also, this does not work:

    GS_INPUT VS_Main(entershader v)
    {
        GS_INPUT output = (GS_INPUT)0;
        output.pos = mul(_Object2World, v.vertex);
        output.pos.z = 10.0f;
        return output;
    }

In both cases, the vertex Z position is set to 10f in WorldSpace, not ScreenSpace. How can I solve this? I am sure it has to be simple, but I am struggling to get it right.

I appreciate any assistance.

_Object2World only transforms things into world space, not into camera / view / screen space. There are a few ways to do what you want though.

What you’re currently doing is transforming the position into world space, and you can get the camera position in world space as well with _WorldSpaceCameraPos. Then you can find the direction from the camera to each vertex and move the vertices to 10 units away.

// transform model pos to world space pos
float4 worldPos = mul(_Object2World, v.vertex);
// find direction from camera to pos
float3 fromCamDir = normalize(worldPos.xyz - _WorldSpaceCameraPos.xyz);
// move to 10 units away from the camera
worldPos.xyz = _WorldSpaceCameraPos.xyz + fromCamDir * 10.0f;
// transform world space pos to model pos
v.vertex = mul(_World2Object, worldPos);
// transform model pos into projection clip space pos
output.pos = mul(UNITY_MATRIX_MVP, v.vertex);

However, that’ll move things to a radial distance of 10 units, which isn’t the same as viewPos.z = 10.f. We could get the camera direction from the view matrix and do some dot product magic to make that work, but there’s an easier way: transform the positions into view space, adjust them there, and transform back. Unity provides the model-to-view transform (local mesh position to camera-relative position) to shaders as UNITY_MATRIX_MV (model → view).

// transform model pos to view space pos
float4 viewPos = mul(UNITY_MATRIX_MV, v.vertex);
// move 10 units away
viewPos.z = 10.f;
// transform from view space pos to model pos
v.vertex = mul(inverse(UNITY_MATRIX_MV), viewPos);
// transform model pos into projection clip space pos
output.pos = mul(UNITY_MATRIX_MVP, v.vertex);
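If it helps to see that round trip outside a shader, here is a plain-Python sketch of the same idea with toy column-vector matrices (nothing here is Unity API, just the matrix math):

```python
# Plain-Python sketch of the view-space round trip above (toy matrices,
# not Unity's API): transform to view space, set z, transform back.

def mat_vec(m, v):
    # 4x4 matrix times column vector
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# pretend the "model-view" transform is just a translation 5 units down +z
model_view = translate(0, 0, 5)
inv_model_view = translate(0, 0, -5)     # inverse of a pure translation

vertex = [1.0, 2.0, 0.0, 1.0]            # model space position
view_pos = mat_vec(model_view, vertex)   # to view space
view_pos[2] = 10.0                       # force the view space depth
back = mat_vec(inv_model_view, view_pos) # back to model space
print(back)  # x and y survive untouched; only the depth moved
```

In a real shader the inverse of UNITY_MATRIX_MV is of course not a simple translation, which is exactly why the inverse() call below matters.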

However, you don’t really need to transform that back into model space, and that inverse() is kind of expensive. So you could do this:

// transform model pos to view space pos
float4 viewPos = mul(UNITY_MATRIX_MV, v.vertex);
// move to 10 units away
viewPos.z = 10.f;
// transform view space pos into clip space pos
output.pos = mul(UNITY_MATRIX_P, viewPos);

Or, if you really need it back in world or model space I think you can do this:

// transform model pos to view space pos
float4 viewPos = mul(UNITY_MATRIX_MV, v.vertex);
// move 10 units away
viewPos.z = 10.f;
// transform from view space pos to model pos by using the inverse transpose with a transpose mul, might not work though
v.vertex = mul(viewPos, UNITY_MATRIX_IT_MV);

I’m not entirely sure where the code you’re running is, as you’re directly outputting the world space position. Presumably you’re doing the final clip space transform in a geometry shader, though I’m not sure why you don’t just do all of the transforms there.

As a minor note, it’s not 10 “feet”. In 10.0f the f is for “float”, and the units are arbitrary, not an actual real-world distance, though generally things in Unity are scaled so that 1 unit is 1 meter. So 10.0f would actually be about 32.8 feet…

Wow, many thanks for the very detailed and insightful answer. You are skilled and very observant too: indeed I am outputting the WorldSpace position from the vertex shader because I am creating geometry in WorldSpace in the geometry shader (I use the vertex to generate a quad on the fly). Sure, I could (and maybe should) do all the transforms directly in the geometry shader.

In any case, that still does not solve my problem: I need to put the mesh vertices at 10.0f from the screen along the Z axis (in ScreenSpace) before creating the geometry in the geometry shader (no matter whether the repositioning is done in the vertex or the geometry shader). I tried the last code snippet from your answer and indeed it does not work. Then I tried the following, based on your second code snippet (I am disregarding the first, because indeed I don’t need the radial distance from the camera, but the perpendicular distance from the screen):

        // transform model pos to view space pos
        float4 viewPos = mul(UNITY_MATRIX_MV, v.vertex);
        // move to the desired view space depth
        viewPos.z = diststandard;
        // transform from view space pos to model pos
        v.vertex = mul(inverse(UNITY_MATRIX_MV), viewPos);

        output.pos = mul(_Object2World, v.vertex);
        //output.normal = v.normal;
        output.tex0 = float2(0, 0);
        output.color = v.color;

But then I get the error “undeclared identifier ‘inverse’” on d3d11. My next attempt was to substitute the piece

inverse(UNITY_MATRIX_MV)

By this:

UNITY_MATRIX_IT_MV

But still no luck.

Assuming that float4 viewPos = mul(UNITY_MATRIX_MV, v.vertex); is the correct way to get the ScreenSpace position (as with Camera.WorldToScreenPoint outside the shader), I think the whole problem here is how to get back from ScreenSpace to WorldSpace (like Camera.ScreenToWorldPoint) within the shader. I am afraid that part is still not working in the code above.

Try this then:

float4 worldPos = mul(_Object2World, v.vertex);
float3 fromCamDir = normalize(worldPos.xyz - _WorldSpaceCameraPos.xyz);
// Unity's view space looks down -Z, so camera forward is the negated third row
float3 camForward = -normalize(UNITY_MATRIX_V[2].xyz);
// scale the ray so its component along camForward is exactly 10
worldPos.xyz = _WorldSpaceCameraPos.xyz + fromCamDir * (10.0f / dot(fromCamDir, camForward));
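The intent of that scaling, sketched with toy numbers in plain Python (none of this is Unity API): stretch the camera-to-vertex ray until its component along the camera’s forward axis is exactly 10, i.e. a view space depth of 10.

```python
# Toy-number sketch of the perpendicular-distance scaling (plain Python):
# stretch the camera-to-vertex ray until its component along the camera's
# forward axis equals 10.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = dot(v, v) ** 0.5
    return [x / length for x in v]

cam_pos = [0.0, 0.0, 0.0]
forward = [0.0, 0.0, 1.0]                 # camera forward in world space
world_pos = [3.0, 0.0, 4.0]               # some vertex in front of the camera

from_cam = normalize([w - c for w, c in zip(world_pos, cam_pos)])
t = 10.0 / dot(from_cam, forward)         # ray length giving depth 10
new_pos = [c + d * t for c, d in zip(cam_pos, from_cam)]
print(new_pos)  # the component along forward is now 10
```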

Thanks again! I will try it as soon as I get back to my Unity station. In the meanwhile, a quick side note: I wonder if the previous solutions did not work because mul(UNITY_MATRIX_MV, v.vertex) might actually give points that are not in the (Screen.width, Screen.height) pixel range.

Pixel positions haven’t even come into play yet in shaders. That all happens during the rasterization process between the vertex (or geometry) shader and fragment shader. View space is just like world space but with the (0,0,0) position at the camera and rotated. It’s exactly like taking an object in Unity and having it be a child of a camera (except it ignores camera scale).

The problem now is that inverse() doesn’t exist as a built-in, which isn’t too surprising because a general matrix inverse is quite expensive. As for UNITY_MATRIX_IT_MV: either you did mul(UNITY_MATRIX_IT_MV, viewPos) when it needs to be mul(viewPos, UNITY_MATRIX_IT_MV), or my matrix math is just really bad and you need a transposed inverse instead of the inverse transpose for that to work (or both!).
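For what it’s worth, the mul argument order really does matter: treating the vector as a row (mul(v, M)) is the same as multiplying by the transpose (mul(transpose(M), v)), not the same as mul(M, v). A quick check in plain Python with a toy 4x4 matrix (not Unity data):

```python
# mul(v, M) treats v as a row vector; that equals transpose(M) * v as a column.
M = [[1, 2, 0, 0],
     [0, 1, 3, 0],
     [0, 0, 1, 4],
     [0, 0, 0, 1]]
v = [1.0, 2.0, 3.0, 1.0]

col = [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]    # M * v (column)
row = [sum(v[k] * M[k][j] for k in range(4)) for j in range(4)]    # v * M (row)
col_t = [sum(M[k][i] * v[k] for k in range(4)) for i in range(4)]  # transpose(M) * v
print(row == col_t, row == col)  # only the transpose form matches the row form
```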

You are correct, the command “inverse” seems to be missing in current versions, which does not make sense to me. It might be expensive, but if one has to calculate the inverse, it is better to have the command at hand than not. Anyway, I guess you were also correct that I had swapped UNITY_MATRIX_IT_MV and viewPos in the mul when trying my own matrix inversion…

But back to business: I tried your suggestion above, still no luck. However, I was able to make a couple of improvements. First, I rewrote the code so that everything now happens in the geometry shader (before generating the quads), directly using the data passed by the mesh; no initial transformation to world position is needed, which has indeed saved me quite a few calculations, and I already thank you for calling my attention to that.

Secondly, after a few tests, I noticed that the following code in the geometry shader gives the correct Z position in ScreenSpace that I need:

ComputeScreenPos(mul(UNITY_MATRIX_MVP, p[0].pos)).z

Where p[0] is just the original vertex position got from the mesh and passed by the vertex shader with no alteration whatsoever.

Now it is all a matter of finding out how to do the inverse of the operation performed by ComputeScreenPos.
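For reference, my current guess at that inverse, sketched in plain Python (based on my reading of ComputeScreenPos in UnityCG.cginc, ignoring the platform y-flip; the function names are just placeholders, not Unity API):

```python
def compute_screen_pos(clip):
    # what I believe ComputeScreenPos does (minus the y-flip):
    # xy -> 0.5 * xy + 0.5 * w, zw passed through unchanged
    x, y, z, w = clip
    return [0.5 * x + 0.5 * w, 0.5 * y + 0.5 * w, z, w]

def inverse_screen_pos(screen):
    # solving that linear map back for the clip space position
    x, y, z, w = screen
    return [2.0 * x - w, 2.0 * y - w, z, w]

clip = [0.3, -0.2, 5.0, 1.5]
round_trip = inverse_screen_pos(compute_screen_pos(clip))
print(round_trip)  # recovers the original clip position (up to float error)
```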

PS: also, notice that the line of code is far from optimal - there has to be a way to directly transform p[0].pos to screenpos, without having to do the mul(UNITY_MATRIX_MVP, p[0].pos), but that’s of minor importance for the moment.