It is possible to move the position of vertices in a shader, isn’t it?
Can anyone point me to some example code on how to do this?
(I have no experience with shader coding)
I’m currently doing a sort of morphing through Javascript by moving each vertex of my mesh to a point between the corresponding vertices in the source and target meshes. This, for each vertex:
vertPos = Vector3.Lerp( startVertices[ i ], endVertices[ i ], blendValue );
This is quite an expensive process. Would it be faster to do this in a vertex shader?
Are there any disadvantages?
Maybe I could use the Strumpy Shader Editor to make such a shader.
I’ve never used it before, though.
Does anyone know how I can get the vertex positions of my morph target meshes into the shader?
As SpookyCat says, the problem with doing this in a shader is there’s no simple way for a shader to access a list of the new vertex positions, not within Unity anyway. You’d need to encode endVertices as vertex colors within the mesh then decode them in the shader. I’d probably encode them as local space offsets from startVertices (which I assume is the original mesh shape) as you’ll get much higher precision that way. Changing vertices in a shader is simple enough, the manual has example code:
http://unity3d.com/support/documentation/Components/SL-VertexProgramInputs.html
http://unity3d.com/support/documentation/Components/SL-SurfaceShaderExamples.html
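Something along these lines would do the blend in the vertex function of a surface shader (just a rough, untested sketch: the shader name and the _Blend / _OffsetScale properties are made up, and it assumes the offsets were remapped into the 0..1 range before being written into the vertex colors):

Shader "Custom/MorphBlend" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _Blend ("Blend", Range(0,1)) = 0
        _OffsetScale ("Offset Scale", Float) = 5.0
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert vertex:vert

        sampler2D _MainTex;
        float _Blend;
        float _OffsetScale;

        struct Input {
            float2 uv_MainTex;
        };

        void vert (inout appdata_full v) {
            // Vertex colors arrive as 0..1 values; remap to -1..1 and scale
            // to recover the local-space offset toward the morph target.
            float3 offset = (v.color.rgb * 2.0 - 1.0) * _OffsetScale;
            v.vertex.xyz += offset * _Blend;
        }

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    Fallback "Diffuse"
}

Driving _Blend from script with material.SetFloat then animates the morph entirely on the GPU.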
Seems a little crazy to have to encode vertex positions in vertex colors. But if it works, it works.
I guess I can use a Javascript to read the vertex positions and copy them to the vertex colors when the game starts up.
Before I go through the trouble, can anyone confirm that moving vertices around with a shader is faster than doing it through Javascript?
Are there certain things I have to watch out for?
So far I’ve been using a single mesh and morphing it to a blend of two out of six target meshes.
Since I can only encode a single target in the vertex colors, I’m thinking of making duplicates of the source mesh, one for each possible target (given the combinations used in our game, I’d only need six instances).
Alternatively, I was wondering if it wouldn’t be possible to encode the vertices of all six of my morph targets into a texture. Each pixel’s color would represent a vertex position, and I could make a row of pixels for each morph target, for example. Would that work? Can a vertex shader access colors in a texture like that?
Not sure; you’re going to lose a certain amount of precision doing this regardless. Are you certain all your values are within +/- 5 units of the base mesh? If not, they’re going to get clipped and cause nasty errors.
How about, instead of directly storing positions in each vertex color, you store the normalised direction from the base mesh vert to the displaced vert, then store the distance/magnitude as the alpha? You’ll have to compress it and multiply it back up by a fixed amount, but at least then your compression errors will only be along the displacement direction, and the overall displaced mesh should maintain its shape better.
Displacing vertex positions via the shader then would be just as simple.
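The decode for that encoding would look something like this in the vertex function (a sketch only; _MaxDistance is whatever fixed amount the distances were divided by on the CPU side, and the name is made up):

float _Blend;
float _MaxDistance;

void vert (inout appdata_full v) {
    // rgb holds the direction remapped into 0..1, a holds distance / _MaxDistance.
    float3 dir = normalize(v.color.rgb * 2.0 - 1.0);
    float dist = v.color.a * _MaxDistance;
    v.vertex.xyz += dir * dist * _Blend;
}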
However, I’m no GPU sage, but you might be hitting some kind of weird hardware limitation here. In Unity a Color is represented by 4 × 32-bit floating point numbers (I think). I’m not sure that precision is maintained when it’s passed into the shader, as you really don’t need a 32-bit value to represent each colour component.
I think you can; tex2D should work from the vertex shader. If it works, you could also get around any precision errors this way by encoding two textures: one with your displacement normal, and a second with an RGBA-encoded high-precision float for the displacement distance, without needing to limit yourself to +/- 5 units.
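For the basic version you asked about (positions straight into a texture), the vertex function would look something like this (again just a sketch, with several assumptions: _MorphTex and _PosScale are made-up names, the positions are packed into 0..1 with a fixed scale, the texture uses point filtering with no mipmaps and an uncompressed format, and a lookup coordinate has been written into each vertex’s second UV channel from script, u picking the vertex’s pixel and v picking the morph target’s row):

#pragma surface surf Lambert vertex:vert
#pragma target 3.0   // vertex texture fetch needs Shader Model 3.0

sampler2D _MorphTex;   // one row of pixels per morph target
float _Blend;
float _PosScale;       // fixed scale used when packing positions into 0..1

void vert (inout appdata_full v) {
    // texcoord1 is assumed to hold the lookup coordinate written from script.
    float4 lookup = float4(v.texcoord1.xy, 0, 0);
    float3 target = (tex2Dlod(_MorphTex, lookup).rgb * 2.0 - 1.0) * _PosScale;
    v.vertex.xyz = lerp(v.vertex.xyz, target, _Blend);
}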
Thank you for the suggestions, Cameron! I’ll try that!
In the meantime, I have used the normals of the source mesh to store the positions of the target, and the precision was much better (perfect as far as I could see)!
So I’m thinking some “truncating” is happening when the color is transferred to the shader, or in the way the shader reads it.
I’m already setting the normals of the morphed mesh by hand in my Javascript anyway, so they are available.
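For what it’s worth, the vertex function ends up being trivial this way (a rough sketch, assuming the mesh normals now carry the target’s local-space position instead of real normals):

float _Blend;

void vert (inout appdata_full v) {
    // v.normal is assumed to hold the morph target position, written from script.
    v.vertex.xyz = lerp(v.vertex.xyz, v.normal, _Blend);
}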
If I can now figure out how to do…
Except that, of course, when I set the normals in the shader, I lose my morph target coordinates. Doh!
Hm, could I store them in the shader somehow, after reading them for the first time?