I rendered a scene with several test objects to a texture, in a flat test color (purple).
Now I am trying to render the same objects again, mapping that texture directly back onto them… seems like it should be easy: if I can just get the screen coordinate of each vertex in the vertex shader, I can use it as a UV in 0-1 space to sample my texture from step 1. This is where the trouble begins…
Using the MVP transform, the result does not appear to be in -1 to +1 screen space, but in some other space called clip space, which seems local to each object. My texture from step 1 gets mapped wholly onto each object… no good. It's almost as if it's mapping to the bounds of each object instead of the entire camera space.
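If I'm reading the docs right, clip space isn't actually object-local; it's the camera's space just before the perspective divide, so after the MVP transform x and y range over -w to +w rather than -1 to +1. The -1..+1 range (normalized device coordinates) only appears after dividing by w. A sketch of the relationship, assuming standard Unity matrix conventions:

```hlsl
// Clip space after the MVP transform: xy range is [-w, +w], not [-1, +1].
float4 clipPos = mul(UNITY_MATRIX_MVP, v.vertex);

// Normalized device coordinates: dividing by w gives [-1, +1]...
float2 ndc = clipPos.xy / clipPos.w;

// ...which remaps to [0, 1] for sampling a texture.
float2 uv = ndc * 0.5 + 0.5;
```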
I also tried manually calculating the offset of each vertex's world-space position from the camera: transforming it to view space, then dividing by Z or by range for my own perspective calculation… This gets pretty close to mapping the step 1 texture over the entire screen, but comes with some kind of distortion/warping in a saddle shape…
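My best guess about the warping: dividing in the vertex shader means the rasterizer linearly interpolates an already-divided value, and perspective division is non-linear, so the divide probably needs to happen per-fragment, after interpolation. Roughly (hypothetical names; `i` would be the interpolated input in the fragment stage):

```hlsl
// Per-vertex (wrong place for the divide): the divided result gets
// linearly interpolated across the triangle, which warps large polygons.
o.locPos = mul(UNITY_MATRIX_MVP, v.vertex); // keep the full float4, w included

// Per-fragment (right place): divide only after interpolation.
float2 uv = i.locPos.xy / i.locPos.w * 0.5 + 0.5;
```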
Does anyone know the correct way to get the screen-space position of arbitrary vertices on any/all objects, in -1 to +1 or 0-1 space? Help appreciated. Thanks.
Example code:
void vert (inout appdata_full v, out Input o)
{
    UNITY_INITIALIZE_OUTPUT(Input, o);
    o.worldPos = mul(_Object2World, v.vertex);
    float3 relPos = o.worldPos.xyz - _WorldSpaceCameraPos;
    float dist = length(relPos);
    // UNITY_MATRIX_V is 4x4, so the world position needs w = 1 for the translation part to apply
    o.locPos = mul(UNITY_MATRIX_V, float4(o.worldPos.xyz, 1.0)).xyz; // just V, no P
    o.locPos /= dist;        // "true" perspective by distance instead of /z; trying this since no P in the transform
    o.locPos *= _MetaScalar; // scalar shader param, so I can experiment with it
}
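For comparison, here's roughly what I'd expect a working version to look like: take the full clip-space position per-vertex, carry it through undivided, and do the divide per-fragment. This is a sketch, assuming a surface shader whose `Input` struct has a `float4` member (`scrPos`, my name) to carry the value, and that `_MainTex` holds the texture rendered in step 1:

```hlsl
struct Input
{
    float4 scrPos; // clip-space position, carried to the fragment stage undivided
};

sampler2D _MainTex; // the render texture from step 1

void vert (inout appdata_full v, out Input o)
{
    UNITY_INITIALIZE_OUTPUT(Input, o);
    // Full MVP transform to clip space, then ComputeScreenPos remaps xy
    // toward 0..w so the later divide by w lands in 0-1. No divide here.
    o.scrPos = ComputeScreenPos(mul(UNITY_MATRIX_MVP, v.vertex));
}

void surf (Input IN, inout SurfaceOutput o)
{
    // Perspective divide after interpolation gives stable 0-1 screen UVs.
    float2 uv = IN.scrPos.xy / IN.scrPos.w;
    o.Albedo = tex2D(_MainTex, uv).rgb;
}
```

Note that Unity surface shaders will also auto-fill an `Input` member literally named `float4 screenPos;`, which would avoid the custom vertex function entirely; the explicit version above just makes the mechanics visible.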