How would I get started on tracking specific areas of the default ARFace model?

I want to map certain changes in the default ARFace model to certain functions - say, when the brows are lowered do a thing, when a smile is ‘detected’ do another thing.

Since the default face is the same model every time, I was thinking this could be done by measuring the distance between two vertices. But after poking through the sample projects and scripts, it seems like the face mesh is generated at runtime, so I’m not sure a simple comparison between two fixed vertex numbers (e.g. if vertex 10 is more than y units away from vertex 134, it’s a smile, so return true) would work - the specific vertex numbers might change, so that vertex 10 isn’t always ‘left corner of the mouth’.

Is it even possible to consistently get positional data from a vertex at a specific location on the model? If it isn’t, then I need to reconsider my entire approach… but I know something like it is possible, since the AR Default Face maps way more than just the 3 regions that ARCore provides. It has a smile expression, mouth opening, brow up/down, etc. How is all of that done?

Face mesh vertex indices in ARKit and ARCore stay the same across sessions, so you can currently use the approach you described; there is just no guarantee that this will not change in the future.
I used that approach to cut eye and mouth holes out of the ARCore face mesh.
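A minimal sketch of that vertex-distance idea with ARFoundation’s ARFace component, sticking with the vertex numbers from your example - the indices and the 0.06 m threshold are placeholders you would have to tune against the real mesh, not verified landmark values:

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Attach to the face prefab used by ARFaceManager, alongside the ARFace component.
[RequireComponent(typeof(ARFace))]
public class SmileFromVertices : MonoBehaviour
{
    // Placeholder indices - inspect the mesh at runtime to find vertices
    // that actually sit on the mouth corners for your platform.
    const int LeftMouthCornerIndex = 10;
    const int RightMouthCornerIndex = 134;

    // Placeholder threshold in metres; tune it against a neutral face.
    const float SmileThreshold = 0.06f;

    ARFace m_Face;

    void Awake() => m_Face = GetComponent<ARFace>();

    void OnEnable() => m_Face.updated += OnFaceUpdated;
    void OnDisable() => m_Face.updated -= OnFaceUpdated;

    void OnFaceUpdated(ARFaceUpdatedEventArgs args)
    {
        NativeArray<Vector3> vertices = m_Face.vertices;
        if (!vertices.IsCreated || vertices.Length <= RightMouthCornerIndex)
            return;

        // Vertices are in face-local space, so the distance is independent of head pose.
        float mouthWidth = Vector3.Distance(
            vertices[LeftMouthCornerIndex],
            vertices[RightMouthCornerIndex]);

        if (mouthWidth > SmileThreshold)
            Debug.Log("Smile detected");
    }
}
```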

In my experience, the ARCore face mesh is not accurate enough to reliably detect facial expressions.
ARKit blend shapes are better suited to this kind of task, but, of course, that feature is limited to iOS devices with a TrueDepth front-facing camera.
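A rough sketch of reading those blend shape coefficients through ARFoundation’s ARKitFaceSubsystem on an iOS build - the MouthSmileLeft check, the 0.5 threshold, and the FindObjectOfType lookup are just illustrative choices:

```csharp
#if UNITY_IOS && !UNITY_EDITOR
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARKit;

// Attach to the face prefab; only works where the face subsystem is ARKit's.
[RequireComponent(typeof(ARFace))]
public class SmileFromBlendShapes : MonoBehaviour
{
    ARFace m_Face;
    ARKitFaceSubsystem m_FaceSubsystem;

    void Awake() => m_Face = GetComponent<ARFace>();

    void OnEnable()
    {
        var faceManager = FindObjectOfType<ARFaceManager>();
        m_FaceSubsystem = (ARKitFaceSubsystem)faceManager.subsystem;
        m_Face.updated += OnFaceUpdated;
    }

    void OnDisable() => m_Face.updated -= OnFaceUpdated;

    void OnFaceUpdated(ARFaceUpdatedEventArgs args)
    {
        using (var coefficients =
            m_FaceSubsystem.GetBlendShapeCoefficients(m_Face.trackableId, Allocator.Temp))
        {
            foreach (var c in coefficients)
            {
                // Each coefficient runs from 0 (neutral) to 1 (fully expressed).
                if (c.blendShapeLocation == ARKitBlendShapeLocation.MouthSmileLeft &&
                    c.coefficient > 0.5f) // arbitrary threshold for illustration
                {
                    Debug.Log("Smile detected");
                }
            }
        }
    }
}
#endif
```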
