I am writing a client that renders an old game engine's formats. The engine has a character animation system that uses bones, which is pretty straightforward. For some cases it also uses vertex animation, where the position of every vertex in the mesh is specified per frame.
The very naive solution is to resubmit the mesh's vertices from the CPU each frame, but this is CPU intensive and slow and requires re-uploading the vertex data every frame. I can benchmark it, but I assume I won't go with this method.
The next thing I can think of is a shader. I could upload all of the frames' vertices at once and then, in the vertex shader, use a frame index to work out which vertex position to actually sample. Does this make sense?
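Roughly what I'm imagining, as a sketch in GLSL, assuming an OpenGL-style setup where every frame's positions are packed end to end into a single buffer texture (the uniform names here are just placeholders, not anything from the engine's format):

```glsl
#version 330 core

// All frames' vertex positions packed into one buffer texture:
// frame 0's vertices, then frame 1's, and so on.
uniform samplerBuffer uFramePositions;
uniform int  uVertexCount;   // vertices per frame
uniform int  uFrame;         // current frame index, set once per draw
uniform mat4 uModelViewProj;

void main()
{
    // gl_VertexID selects this vertex's slot within the current frame's block.
    int index = uFrame * uVertexCount + gl_VertexID;
    vec3 position = texelFetch(uFramePositions, index).xyz;
    gl_Position = uModelViewProj * vec4(position, 1.0);
}
```

That way the only per-frame CPU work would be updating the frame index uniform, and presumably I could also fetch two adjacent frames and mix() between them if playback needs to be smoother than the authored frame rate.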
Has anyone done something similar to this?