Note, that's UNITY_MATRIX_VP, not MVP. So, convert the vertex from model space to world space, manipulate it there, then convert from world space through view/projection to clip space. Boom.
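A minimal sketch of that flow (the offset is just a placeholder for whatever world-space manipulation you actually want, and it assumes a Unity version that exposes unity_ObjectToWorld and UNITY_MATRIX_VP; older versions use _Object2World instead):

float4 vert (float4 vertex : POSITION) : SV_POSITION {
    float4 worldPos = mul(unity_ObjectToWorld, vertex); // model space -> world space
    worldPos.xyz += float3(0.0, 0.5, 0.0);              // manipulate in world space (placeholder offset)
    return mul(UNITY_MATRIX_VP, worldPos);              // world space -> view/projection -> clip space
}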
So the issue was that mul(_Object2World, _World2Object) was not the identity matrix. I was able to fix it by multiplying the _World2Object matrix by unity_Scale.w. That only half worked: some objects rendered correctly and some didn't. I fixed the broken ones by nudging their scale by ±0.0001 (WTF?). It seems to be mostly a random Unity quirk.
Here is a solution that works better than PEOWlfjpwoiqjf's (it did get me on the right track, thanks!).
You still must multiply _World2Object by unity_Scale.w, but this should NOT be applied to the bottom-right value of the matrix.
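A sketch of that with the legacy built-ins (assuming uniform object scaling, which is the case where unity_Scale.w is meaningful):

// Legacy (pre-Unity 5) sketch: build a usable inverse of _Object2World.
float4x4 modelMatrixInverse = _World2Object * unity_Scale.w; // scale every element by unity_Scale.w...
modelMatrixInverse[3][3] = 1.0;                              // ...then restore the bottom-right value to 1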
The answer from @RazHollander13 above works perfectly. I'll add one extra thing on top of it: UnityCG.cginc has UnityWorldToClipPos, a helper function that converts a world position to a clip position.
It is handy for me because what I eventually need after manipulating the world position is a clip position.
#pragma vertex vert
#include "UnityCG.cginc"
struct appdata {
    float4 vertex : POSITION;
};

struct v2f {
    float4 vertex : SV_POSITION;
};

v2f vert (appdata v) {
    v2f o;
    // o.vertex = UnityObjectToClipPos(v.vertex); // The following two lines produce the same clip position
    float3 wv = mul(unity_ObjectToWorld, v.vertex).xyz; // object space to world space (manipulate wv here)
    o.vertex = UnityWorldToClipPos(wv);                 // world space to clip space
    return o;
}