Consider the following scene:
I have a projector projecting a texture (a material) onto a mesh. Now I want to deform the mesh, and I want the projected texture to deform with it. But I am not sure about the order in which things are rendered when using a vertex shader. Perhaps the mesh is deformed first and the projected texture applied afterwards; that is not what I want. What I want is for the un-deformed mesh to take the projected texture first, use it as if it were its own texture, and then be deformed, stretching the projected texture with it.
Example:
Will a vertex shader give me this output if the texture is projected onto the mesh?
A projector doesn’t “project a texture”. It applies a full vertex and fragment shader like any other material. You’ll need to use the same deformation in the projected material, which means you probably don’t actually want a projector for this.
Nope, a projected texture is exactly that: it's projected onto surfaces just like by a real-life projector.
One way to accomplish what you want would be to 'bake' the projected UVs into the mesh; that way, when the mesh vertices are displaced, the 'projected' texture will stretch with them. Effectively you wouldn't be using a projector any more, as there would be no point. If the projector itself moves, you'll have to update the baked UVs too.
To do the baking you'll need to look up how projectors/planar projection work. You should then be able to calculate the correct UV for each vertex in your mesh and assign those UVs to the mesh.
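As a rough sketch of the baking idea (the projector window, vertex values, and function names here are made up for illustration and are not Unity data structures), a simple planar projection straight down the z axis could map vertices to UVs like this:

```python
# Sketch of baking planar-projection UVs into a mesh, assuming the
# projector looks straight down the z axis. The vertices and the
# projector window are made-up sample values, not Unity API objects.

def bake_planar_uvs(vertices, proj_min, proj_max):
    """Map each (x, y, z) vertex into the projector's 0..1 UV window."""
    uvs = []
    for x, y, z in vertices:
        # z is ignored: a planar projection only uses the two axes
        # perpendicular to the projection direction.
        u = (x - proj_min[0]) / (proj_max[0] - proj_min[0])
        v = (y - proj_min[1]) / (proj_max[1] - proj_min[1])
        uvs.append((u, v))
    return uvs

# A unit quad seen by a projector window spanning (-1,-1) to (1,1):
quad = [(-1.0, -1.0, 0.0), (1.0, -1.0, 0.0),
        (1.0, 1.0, 0.0), (-1.0, 1.0, 0.0)]
print(bake_planar_uvs(quad, (-1.0, -1.0), (1.0, 1.0)))
```

Once these UVs are assigned to the mesh, displacing the vertices afterwards stretches the texture with them, which is the whole point of baking.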
Ok, so I will go for the UV approach instead of using a projector.
The source texture never moves and is always located "in front of the screen". It is just the iPad 2 camera feed applied to a plane mesh's texture. The plane mesh is "filmed" by an orthographic camera, and this camera has layer -2, so it is effectively a non-moving skybox. The normal camera (which renders the game scene) then uses the other camera's rendered image as its background view.
So as I understand it, the only thing I need to do is project the vertices of the mesh I want to deform to screen coordinates. These screen coordinates are exactly the UV texture coordinates I need (as long as the screen coordinates have the same 0-1 range and origin as the UV coordinate system; I have to check that). I then apply the video texture to the mesh I want to deform, using the projected vertex positions as UV coordinates. After this is done, I can deform the mesh and it will have the desired result.
I am trying to write a shader for this, but so far no luck. The problem is that I don't know the range of the position output of the vertex shader. Some documentation suggests it is between -1 and 1, and other documentation suggests it is between 0 and the screen resolution. I know the UV range is between 0 and 1.
I tried nVidia FX Composer 2.5 to single-step through a generic vertex shader, but it seems it can only debug fragment shaders, not vertex shaders.
If the vertex shader's position range were the same as the UV range (0 to 1), the shader would look like this:
I haven't read the whole thread, but the output of the vertex shader is in clip coordinates, meaning that if you divide the resulting x and y coordinates by the fourth coordinate (w), then x/w and y/w will be between -1 and 1 (covering the whole screen or window).
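The clip-space-to-UV conversion described above can be sketched numerically (plain Python with made-up sample values, just to illustrate the perspective divide and the remap, not Unity code):

```python
# Sketch of the clip-space -> NDC -> UV conversion described above.
# The input values are made-up samples, not from a real projection.

def clip_to_uv(clip):
    """Convert a clip-space position (x, y, z, w) to 0..1 UV coordinates."""
    x, y, z, w = clip
    # Perspective divide: NDC x and y lie in [-1, 1] for on-screen points.
    ndc_x = x / w
    ndc_y = y / w
    # Remap [-1, 1] to the [0, 1] UV range.
    u = 0.5 * ndc_x + 0.5
    v = 0.5 * ndc_y + 0.5
    return u, v

# A vertex at the centre of the screen maps to the middle of the texture.
print(clip_to_uv((0.0, 0.0, 0.5, 1.0)))  # (0.5, 0.5)
# The top-right corner of the view maps to UV (1, 1).
print(clip_to_uv((2.0, 2.0, 1.0, 2.0)))  # (1.0, 1.0)
```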
Ok, I got the texturing bit to work but I ran into a problem regarding object morphing.
I can morph the object in two ways:
-Change the vertex positions in the vertex shader (presumably this makes the in-game collision detection incorrect).
-Change the vertex positions in a Unity script.
The latter works only if I use a built-in shader like Diffuse. If I use my own shader (posted below), the mesh I morphed in the Unity script isn't displayed properly. So presumably I have to upload the morphed mesh to the GPU again so that the vertex shader gets the correct vertex positions as input. Or do I have to morph the mesh both in the Unity script and in the shader for it to work correctly?
Shader here:
Shader "Custom/PlaneEffect" {
    Properties {
        _MainTex("Texture", 2D) = "white" { }
        //Note that these variables will show up in the Unity
        //editor, but we should leave those alone as these
        //variables are set from within the game script.
        _ScaleFacX("ScaleFacX", Float) = 0
        _ScaleFacY("ScaleFacY", Float) = 0
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST;
            float _ScaleFacX;
            float _ScaleFacY;

            struct v2f {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert(appdata_base v) {
                v2f o;
                float2 screenSpacePos;
                //Convert position from object space to clip space.
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                //Perspective divide: convert clip space to normalized
                //device coordinates, which range from -1 to 1 on screen.
                screenSpacePos.x = o.pos.x / o.pos.w;
                screenSpacePos.y = o.pos.y / o.pos.w;
                //If the video texture were not flipped and mirrored,
                //the screen-space range (-1 to 1) would be converted to
                //the UV range (0 to 1) using this formula:
                //o.uv.x = (0.5f * screenSpacePos.x) + 0.5f;
                //o.uv.y = (0.5f * screenSpacePos.y) + 0.5f;
                //However, because the video texture is mirrored and
                //flipped, we have to take that into account:
                o.uv.x = (_ScaleFacX * screenSpacePos.x) + _ScaleFacX;
                o.uv.y = -(_ScaleFacY * screenSpacePos.y) + _ScaleFacY;
                return o;
            }

            half4 frag(v2f i) : COLOR {
                half4 texcol = tex2D(_MainTex, i.uv);
                return texcol;
            }
            ENDCG
        }
    }
}
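The role of the _ScaleFacX/_ScaleFacY constants in the shader above can be sketched numerically (plain Python for illustration; with both factors set to 0.5 this reproduces the standard -1..1 to 0..1 remap, except that V is flipped vertically):

```python
# How the _ScaleFacX / _ScaleFacY constants fold the vertical flip
# into the [-1, 1] -> [0, 1] remap. The sample values are made up.

def ndc_to_flipped_uv(ndc_x, ndc_y, scale_x=0.5, scale_y=0.5):
    """Remap NDC coordinates to UVs, flipping V vertically."""
    u = scale_x * ndc_x + scale_x
    v = -(scale_y * ndc_y) + scale_y
    return u, v

# The bottom of the screen (ndc_y = -1) now samples the top of the
# texture (v = 1), which compensates for the flipped video feed.
print(ndc_to_flipped_uv(0.0, -1.0))  # (0.5, 1.0)
```

Scaling the factors away from 0.5 shrinks or stretches the sampled region, which is why the script computes them at runtime instead of hard-coding 0.5.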
Basically, I had to compute the UV coordinates before deforming the mesh, which means I have to deform the mesh in the vertex shader, not in a Unity3D script. If the UV coordinates are taken from the vertex position after it has been deformed (which is what happens if you deform the mesh in a Unity3D script instead of the vertex shader), the wrong UV coordinates are fetched and nothing is visible on the screen.
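The order of operations described above can be sketched like this (plain Python; `project` here is a made-up stand-in for the MVP transform, not Unity's API):

```python
# Order of operations: compute the UV from the flattened (y = 0)
# vertex first, then output the deformed position.
# project() is a hypothetical stand-in for the MVP transform.

def project(pos):
    # Toy orthographic-style projection for illustration only:
    # clip coordinates equal the input position, with w = 1.
    x, y, z = pos
    return (x, y, z, 1.0)

def shade_vertex(vertex):
    x, y, z = vertex
    # UVs are taken from the undeformed (flat) vertex...
    cx, cy, cz, cw = project((x, 0.0, z))
    u = 0.5 * (cx / cw) + 0.5
    v = 0.5 * (cy / cw) + 0.5
    # ...while the output position keeps the deformation.
    pos = project((x, y, z))
    return pos, (u, v)

# The deformed height (y = 3) does not affect the sampled UV.
pos, uv = shade_vertex((0.0, 3.0, 0.0))
print(pos, uv)  # (0.0, 3.0, 0.0, 1.0) (0.5, 0.5)
```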
Edit2:
This shader takes the original mesh and deforms it (it only works if the mesh was deformed from a plane whose y coordinates are all 0), instead of using a math function in the shader:
/*
This version is the same as PlaneEffect.shader, except that it uses the mesh itself
instead of a math function to deform the mesh. It assumes that the mesh's original
shape is a flat plane where all y coordinates are 0.
*/
Shader "Custom/PlaneEffect2" {
    Properties {
        _MainTex("Texture", 2D) = "white" { }
        //Note that these variables will show up in the Unity
        //editor, but do not change these values manually.
        //The values will be set at runtime. If the shape of the
        //morph is to be changed, change "holeSize" only (in the
        //cs script) and recalculate the remaining variables.
        _ScaleFacXa("ScaleFacXa", Float) = 0
        _ScaleFacYa("ScaleFacYa", Float) = 0
        _ScaleFacXb("ScaleFacXb", Float) = 0
        _ScaleFacYb("ScaleFacYb", Float) = 0
    }
    SubShader {
        Pass {
            Cull Back
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST;
            float _ScaleFacXa;
            float _ScaleFacYa;
            float _ScaleFacXb;
            float _ScaleFacYb;

            struct v2f {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert(appdata_base v) {
                v2f o;
                float2 screenSpacePos;
                float yOffset;
                float4 clipPos;
                //Store the y vertex position, then flatten the
                //vertex back onto the original (undeformed) plane.
                yOffset = v.vertex.y;
                v.vertex.y = 0.0f;
                //Convert the undeformed position from object space
                //to clip space.
                clipPos = mul(UNITY_MATRIX_MVP, v.vertex);
                //Perspective divide: convert clip space to normalized
                //device coordinates, which range from -1 to 1 on screen.
                screenSpacePos.x = clipPos.x / clipPos.w;
                screenSpacePos.y = clipPos.y / clipPos.w;
                //If the video texture were not flipped,
                //the screen-space range (-1 to 1) would be converted to
                //the UV range (0 to 1) using this formula:
                //o.uv.x = (0.5f * screenSpacePos.x) + 0.5f;
                //o.uv.y = (0.5f * screenSpacePos.y) + 0.5f;
                //However, both the Vuforia and the String video
                //texture are flipped, so we have to take that into
                //account. The video texture can also be clipped,
                //which the offset terms account for as well.
                o.uv.x = (_ScaleFacXa * screenSpacePos.x) + _ScaleFacXb;
                o.uv.y = -(_ScaleFacYa * screenSpacePos.y) + _ScaleFacYb;
                //Restore the original y offset and output the
                //deformed position.
                v.vertex.y = yOffset;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                return o;
            }

            half4 frag(v2f i) : COLOR {
                half4 texcol = tex2D(_MainTex, i.uv);
                return texcol;
            }
            ENDCG
        }
    }
}