As you can see, the vertex function didn’t work as expected. In fact, it seems the texcoords end up in model space rather than in UV space. There must be something wrong in the vert function:
The vertex shader’s output is the clip space position, which the GPU uses to determine the screen space position. Normally you’d take the object space vertex positions and convert them into clip space using the UnityObjectToClipPos() function, whereas the trick here is that you’re converting the UVs directly into a clip space position.
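In other words, the trick boils down to this remap (a sketch, assuming the UVs are in the usual 0–1 range):

// UVs are in [0,1]; clip space x/y end up in [-1,1] after the divide by w.
// With w = 1 the perspective divide is a no-op, so this places each
// vertex on screen directly at its UV coordinate.
float4 clipPos = float4(v.texcoord.xy * 2.0 - 1.0, 0.0, 1.0);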
In a Surface Shader, the vertex function only lets you modify the object space vertex data. The Surface Shader’s next step after the custom vertex function is to immediately apply UnityObjectToClipPos() to the v.vertex value. To do what you’re looking to do with a Surface Shader, you’d have to calculate the clip space position in object space. Unfortunately that’s easier said than done, as Unity does not provide the necessary matrices to transform from clip space back to object space. There’s no easy solution apart from calculating the inverse projection matrix in the shader manually (which there are no built-in functions for), or computing it in C# and passing the matrix to the shader.
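The C# side of that could look something like this (just a sketch; the component and the _InverseVP property name are made up for illustration — GL.GetGPUProjectionMatrix is used because the projection matrix actually used on the GPU differs per platform):

using UnityEngine;

// Hypothetical helper: computes the inverse view-projection matrix
// each frame and passes it to the material running the unwrap shader.
public class PassInverseVP : MonoBehaviour
{
    public Material unwrapMaterial;

    void Update()
    {
        Camera cam = Camera.main;
        // Get the projection matrix as the GPU actually sees it
        Matrix4x4 proj = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false);
        Matrix4x4 vp = proj * cam.worldToCameraMatrix;
        unwrapMaterial.SetMatrix("_InverseVP", vp.inverse);
    }
}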
Thanks for your answer. I’m interested in the “inverse projection matrix” solution; it’s fine for me to compute it in a C# script and pass it to the shader.
If I understand what you’re saying, in my custom vertex function I have to “prepare” the v.vertex value so that when the shader applies UnityObjectToClipPos() to it, the resulting value is the clip space position. Am I correct?
Yes. You have to apply the inverse matrix operations to basically “undo” what UnityObjectToClipPos() is going to do to it, so the resulting position is the one you already calculated. Really you want to calculate the inverse of the UNITY_MATRIX_VP (view projection) matrix so you can apply exactly the inverse operations, since that function does this:
inline float4 UnityObjectToClipPos(in float3 pos)
{
    // More efficient than computing M*VP matrix product
    return mul(UNITY_MATRIX_VP, mul(unity_ObjectToWorld, float4(pos, 1.0)));
}
See that page’s notes about the matrix used on the GPU.
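Putting that together, the Surface Shader vertex function could look something like this (a sketch, not tested code — _InverseVP is assumed to be the inverse of the GPU’s view-projection matrix, set from C# with material.SetMatrix):

float4x4 _InverseVP; // inverse of UNITY_MATRIX_VP, supplied from C#

void vert (inout appdata_full v)
{
    // the clip space position we actually want: the mesh unwrapped by its UVs
    float4 clipPos = float4(v.texcoord.xy * 2.0 - 1.0, 0.0, 1.0);
    // undo the view-projection transform, then the object-to-world
    // transform, so that UnityObjectToClipPos() produces clipPos again
    float4 worldPos = mul(_InverseVP, clipPos);
    v.vertex = mul(unity_WorldToObject, worldPos);
}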
As you can see in the following image, the shader outputs vertices in the correct screen position only when the camera projection is set to orthographic. It isn’t a problem for me to use orthographic projection, but I’m wondering why it doesn’t work with perspective projection.
However, the real problem is that no lights are rendered on the model’s surface when unwrapped (and I actually need them). I suppose lighting is computed after the vertices have already been modified.
Is there a Surface Shader-based solution or a workaround for this?
Even thinking about a vertex/fragment solution, the vertex shader will always change the vertex positions before the fragment shader runs, and that would mess up the per-pixel lighting calculation… The only thing I can imagine to solve this problem is to compute both the world space and clip space positions in the vertex shader (in the classic way), then unwrap the vertices and finally compute lighting using the world space positions.
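As a sketch of that idea in a plain vertex/fragment shader (the struct and semantics are illustrative, not from this thread): the vertex shader outputs the unwrapped clip position, while carrying the real world-space data through to the fragment stage for lighting:

struct v2f
{
    float4 pos         : SV_POSITION; // unwrapped clip space position
    float3 worldPos    : TEXCOORD0;   // original world position, for lighting
    float3 worldNormal : TEXCOORD1;
    float2 uv          : TEXCOORD2;
};

v2f vert (appdata_full v)
{
    v2f o;
    // place the vertex on screen by its UVs instead of its position
    o.pos = float4(v.texcoord.xy * 2.0 - 1.0, 0.0, 1.0);
    // keep the unmodified world space data for per-pixel lighting
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    o.worldNormal = UnityObjectToWorldNormal(v.normal);
    o.uv = v.texcoord.xy;
    return o;
}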
Yeah, I don’t think you can do this in a surface shader while still retaining the proper world position data. Since you can only modify the object space vertex positions, and that same data is used to determine the world space, you’re dead in the water.
You’ll have to modify the generated shader code directly rather than trying to stay within a Surface Shader.
I’m not totally sure either. Might be some floating point math problems causing the object to get clipped… you could try:

float4 clipPos = float4(v.uv.x * 2.0 - 1.0, v.uv.y * 2.0 - 1.0, 0.5, 1.0);