I want to map certain changes in the default ARFace model to certain functions - say, when the brows are lowered do a thing, when a smile is ‘detected’ do another thing.
Since the default face is the same model every time, I was thinking this could be done by measuring the distance between two vertices. But after poking through the sample projects and scripts, it seems like the face mesh is generated at runtime, so I'm not sure a simple comparison between two fixed vertex indices would work (e.g. if vertex 10 is more than y units away from vertex 134, it's a smile, so return true) - the specific vertex numbers might change, so that vertex 10 isn't always 'left corner of the mouth'.
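For reference, here's roughly what I had in mind: a minimal sketch using AR Foundation's ARFace.updated callback and ARFace.vertices. The indices (10 and 134) and the distance threshold are just placeholders from my example above, not values I know to be correct - whether those indices reliably land on the mouth corners is exactly what I'm asking about.

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Attached next to the ARFace component on the face prefab.
[RequireComponent(typeof(ARFace))]
public class SmileDetector : MonoBehaviour
{
    // Placeholder indices - I don't know yet whether these actually
    // correspond to the mouth corners on the generated mesh.
    [SerializeField] int leftMouthCornerIndex = 10;
    [SerializeField] int rightMouthCornerIndex = 134;

    // Rough distance threshold (face-local units, i.e. metres) above
    // which I'd call it a smile. Purely a guess for now.
    [SerializeField] float smileDistanceThreshold = 0.05f;

    ARFace face;

    void Awake() => face = GetComponent<ARFace>();

    void OnEnable() => face.updated += OnFaceUpdated;
    void OnDisable() => face.updated -= OnFaceUpdated;

    void OnFaceUpdated(ARFaceUpdatedEventArgs args)
    {
        // Vertices are in the face's local space and refresh every update.
        NativeArray<Vector3> verts = face.vertices;
        if (!verts.IsCreated ||
            verts.Length <= Mathf.Max(leftMouthCornerIndex, rightMouthCornerIndex))
            return;

        float distance = Vector3.Distance(verts[leftMouthCornerIndex],
                                          verts[rightMouthCornerIndex]);

        if (distance > smileDistanceThreshold)
            Debug.Log($"Smile? mouth-corner distance = {distance:F3}");
    }
}
```

This whole approach only makes sense if the mesh topology is stable enough that a given index always sits at the same spot on the face, which brings me to my actual question: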
Is it even possible to consistently get positional data from a vertex at a specific location on the model? If it isn't, I need to reconsider my entire approach… but I know something like this must be possible, since the AR Default Face maps to way more than just the 3 regions that ARCore has. It has a smile expression, mouth opening, brows up/down, etc. How is all of that done?