ARFoundation Face Tracking step-by-step

We have just announced the release of the latest version of ARFoundation that works with Unity 2018.3.

For this release, we have also updated the ARFoundation samples project with examples that demonstrate some of the new features. This post explains the Face Tracking feature shown in the ARFoundation samples.

The Face Tracking example scenes currently only work on iOS devices that have a TrueDepth camera: iPhone X, XR, XS, XS Max, or iPad Pro (2018).

The Face Tracking feature is exposed as part of the Face Subsystem, similar to other subsystems: you can subscribe to events that inform you when a new face has been detected. That event provides basic information about the face, including its pose (position and rotation) and its TrackableId (a session-unique id for any tracked object in the system). Using the TrackableId, you can then query the subsystem for more information about that particular face, including a description of its mesh and the blendshape coefficients that describe the expression on that face. There are also events you can subscribe to that let you detect when a face has been updated or removed.

ARFoundation as usual provides some abstraction via the ARFaceManager component. This component can be added to an ARSessionOrigin in the scene, and it will create a copy of the “FacePrefab” prefab and add it to the scene as a child of the ARSessionOrigin when it detects a face. It will also update and remove the generated “Face” GameObjects as needed when the face is updated or removed respectively.
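To get a feel for how these events surface through ARFaceManager, here is a minimal sketch that logs the pose and TrackableId of each tracked face. It assumes an ARFoundation version that exposes the facesChanged event (earlier previews split this into separate added/updated/removed events), so treat it as an illustration rather than the sample code:

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Minimal sketch: attach next to the ARFaceManager on the ARSessionOrigin.
    [RequireComponent(typeof(ARFaceManager))]
    public class FaceEventLogger : MonoBehaviour
    {
        ARFaceManager m_FaceManager;

        void OnEnable()
        {
            m_FaceManager = GetComponent<ARFaceManager>();
            m_FaceManager.facesChanged += OnFacesChanged;
        }

        void OnDisable()
        {
            m_FaceManager.facesChanged -= OnFacesChanged;
        }

        void OnFacesChanged(ARFacesChangedEventArgs args)
        {
            // Each ARFace carries its TrackableId and a transform driven by the subsystem.
            foreach (var face in args.added)
                Debug.Log($"Face {face.trackableId} detected at {face.transform.position}");

            foreach (var face in args.updated)
                Debug.Log($"Face {face.trackableId} updated, rotation {face.transform.rotation}");

            foreach (var face in args.removed)
                Debug.Log($"Face {face.trackableId} removed");
        }
    }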

You usually place an ARFace component on the root of the “FacePrefab” prefab above so that the generated GameObject can automatically update its position and orientation according to the data that is provided by the underlying subsystem.

You can see how this works in the FaceScene in ARFoundation-samples. First, create a prefab named FacePrefab that has an ARFace component on the root of the GameObject hierarchy:

You then plug this prefab reference into the ARFaceManager component that you have added to the ARSessionOrigin:

You will also need to check the “ARKit Face Tracking” checkbox in the ARKitSettings asset as explained here.

When you build this to a device that supports ARKit Face Tracking and run it, you should see the three colored axes from FacePrefab appear on your face, following its position and orientation.

You can also make use of the face mesh in your AR app by getting the information from the ARFace component. Open up FaceMeshScene and look at the ARSessionOrigin. It is set up the same way as before, just with a different prefab referenced by the ARFaceManager. Take a look at this FaceMeshPrefab:

You can see that the MeshFilter on the prefab is actually empty. This is because the ARFaceMeshVisualizer script component will create a mesh and fill in the vertices and texture coordinates based on the data it gets from the ARFace component. You will also notice that the MeshRenderer contains the material this mesh will be rendered with, in this case a texture with three colored vertical bands. You can change this material to create masks, face paint, and so on. If you build this scene to a device, you should see your face rendered with the specified material.
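To illustrate what a visualizer of this kind does under the hood, here is a stripped-down sketch that rebuilds a Mesh from the ARFace data on every update. It assumes an ARFoundation version where ARFace exposes vertices, indices, uvs and an updated event; it is not the actual ARFaceMeshVisualizer source:

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Illustration only: rebuilds a Mesh from ARFace data whenever the face updates.
    // Attach to the face prefab alongside ARFace, MeshFilter and MeshRenderer.
    [RequireComponent(typeof(ARFace), typeof(MeshFilter))]
    public class SimpleFaceMesh : MonoBehaviour
    {
        ARFace m_Face;
        Mesh m_Mesh;

        void Awake()
        {
            m_Face = GetComponent<ARFace>();
            m_Mesh = new Mesh();
            GetComponent<MeshFilter>().sharedMesh = m_Mesh;
        }

        void OnEnable()  { m_Face.updated += OnFaceUpdated; }
        void OnDisable() { m_Face.updated -= OnFaceUpdated; }

        void OnFaceUpdated(ARFaceUpdatedEventArgs args)
        {
            // Copy the native arrays provided by the subsystem into the Mesh.
            m_Mesh.Clear();
            m_Mesh.vertices  = m_Face.vertices.ToArray();
            m_Mesh.triangles = m_Face.indices.ToArray();
            m_Mesh.uv        = m_Face.uvs.ToArray();
            m_Mesh.RecalculateNormals();
        }
    }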

You can also make use of the blendshape coefficients that are output by ARKit Face Tracking. Open up the FaceBlendShapeScene and check out the ARSessionOrigin. It looks the same as before, except that the prefab it references is now the FaceBlendShapes prefab. Take a look at that prefab:

In this case, there is an ARFaceARKitBlendShapeVisualizer, which references the SkinnedMeshRenderer GameObject of this prefab so that it can manipulate the blendshapes that exist on that component. It gets the blendshape coefficients from the ARKit SDK and manipulates the blendshapes on our sloth asset so that it replicates your expression. Build it out to a supported device and try it out!
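If you want to read the raw coefficients yourself instead of driving a SkinnedMeshRenderer, the ARKit face subsystem exposes them per face. Here is a hedged sketch, assuming the ARKit Face Tracking package's ARKitFaceSubsystem.GetBlendShapeCoefficients API (names may vary between preview versions):

    #if UNITY_IOS
    using Unity.Collections;
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARKit;

    // Illustration only: logs the ARKit blendshape coefficients for a tracked face.
    // Assumes the ARKit Face Tracking package, which implements the face subsystem on iOS.
    [RequireComponent(typeof(ARFace))]
    public class BlendShapeLogger : MonoBehaviour
    {
        public ARFaceManager faceManager;  // assign the manager on the ARSessionOrigin

        void Update()
        {
            var face = GetComponent<ARFace>();
            var arkitSubsystem = faceManager.subsystem as ARKitFaceSubsystem;
            if (arkitSubsystem == null)
                return;

            // Query the coefficients for this face by its TrackableId.
            using (var coefficients = arkitSubsystem.GetBlendShapeCoefficients(face.trackableId, Allocator.Temp))
            {
                foreach (var c in coefficients)
                    Debug.Log($"{c.blendShapeLocation}: {c.coefficient}");
            }
        }
    }
    #endif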

For more detailed coverage of the face tracking support, have a look at the docs.

This was a summary of how to create your own face tracking experiences with the new ARFoundation release. Please take some time to try it out and let us know how it works out! As always, I’m happy to post any cool videos, demos or apps you create on x.com






Awesome! Is there any way of exposing the raw TrueDepth video feed from iOS devices?
I know that ARFoundation aims to abstract device-specific features away, but it just seems like such a pity not to have access to this data when it's sitting right there, and face tracking, for example, likely makes use of it anyway.
Thanks for the good work!

ARKit Face Tracking does not make this available to end users but abstracts it away. There are other APIs that will expose it, but it is outside of the scope right now. The depth map may not be as useful as you think it is - they have to use a lot of filtering and smoothing in their SDK to enable these features.

It works perfectly on the iPhone X.
But the camera defaults to the front camera. Can I switch to the back camera?


ARCore has introduced face tracking support. Do you have any plans for this?

What about Android devices? The face tracking samples are not working there so far?


No, we can't currently. It requires the TrueDepth camera, which is currently only available as the front camera on the iPhone X.

Does anyone know how to identify face parts like the eyes, nose, etc.?

How can I switch between the assets that appear on the face? I'm able to switch between different textures or different assets, but not between assets and textures.

I am also trying to determine if eye transforms are supported in ARFoundation or its subsystems.

With the (now deprecated) Unity ARKit Plugin, we could use leftEyePose and rightEyePose on the ARFaceAnchor.

Is there an equivalent with ARFoundation, etc?


Same question here, is it possible to use eye poses with ARFoundation?


I find it completely irresponsible for Unity to deprecate a plugin before its successor has caught up. Eye Gaze tracking is an essential part of the AR experience, and the old ARKit Plugin did this perfectly well.

Not only is eye gaze apparently unsupported by AR Foundation, we also have total radio silence from Unity on the subject. It’s a disgraceful way to treat devs who rely on their technology, the least they could do is reply.


We understand your frustration and want to address your concerns. I am happy to inform everyone that eye transforms and fixation points will be available in the next preview release of ARFoundation and related packages (ARKit-Face-Tracking).

A short preview of how it will work: each ARFace gets three new nullable properties, leftEyeTransform, rightEyeTransform, and fixationPoint. Each of these is exposed as a Unity transform, so to use them you simply add your content as a child of the transform. We have also made a few samples to help with using the new feature.
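As a rough sketch of how this could look in practice (the exact property names may differ in your ARFoundation version; here they are assumed to be leftEye, rightEye and fixationPoint on ARFace, and eyePrefab is a placeholder for your own content):

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Sketch: parent content under the eye transforms exposed by ARFace.
    // The eye transforms are null on devices/packages without eye tracking support.
    [RequireComponent(typeof(ARFace))]
    public class EyeContent : MonoBehaviour
    {
        public GameObject eyePrefab;   // placeholder content to attach to each eye
        GameObject m_LeftEye;
        GameObject m_RightEye;
        ARFace m_Face;

        void Awake()
        {
            m_Face = GetComponent<ARFace>();
        }

        void Update()
        {
            if (m_Face.leftEye != null && m_LeftEye == null)
                m_LeftEye = Instantiate(eyePrefab, m_Face.leftEye);

            if (m_Face.rightEye != null && m_RightEye == null)
                m_RightEye = Instantiate(eyePrefab, m_Face.rightEye);

            if (m_Face.fixationPoint != null)
                Debug.Log($"Fixation point: {m_Face.fixationPoint.position}");
        }
    }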

Unfortunately, I can’t speak to what the timeframe for the release will be but I did want to at least address the concerns surrounding this feature as it is a crucial feature for many AR applications.


Hi, is it compatible with the iPhone 11, since it has a TrueDepth camera? I hesitate to invest in an iPhone X because it is quite expensive and hard to find online, unlike the iPhone 11, which is new and cheaper. Which iPhone will give the best performance?

Plus, do I need a Mac to run the live stream from the iPhone in Unity? I need to check what hardware I need to do some mocap tests (with the Xsens body suit, which does not have software for macOS, and the iPhone X). Thank you so much for your support! It's my first try at mocap.

Hello everyone, I want to create an avatar app using Unity and ARFoundation with ARCore. Is that possible?
I need to use blendshapes with controllers for the eyes, nose, mouth, tongue, and ears, but I don't know where to start!

Hello, is it possible to add an option to not remove the prefab, but just keep it while the tracked face is missing from the camera?
I'm using it for facial capture, and suddenly exposing the actor's face is not acceptable.
Maybe I could catch the remove event and duplicate the prefab until a new face appears, but I'm looking for a simpler way.

I found the easiest solution. It seems the SkinnedMeshRenderer is disabled while the camera is missing the face.
Just force it on in Update(), and the face never disappears.

transform.GetComponent<SkinnedMeshRenderer>().enabled = true;
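Wrapped into a small component, the workaround could look like this sketch (assuming the SkinnedMeshRenderer lives on the face prefab or one of its children):

    using UnityEngine;

    // Sketch of the workaround above: keep the face visible even when tracking is lost
    // by re-enabling the SkinnedMeshRenderer every frame.
    public class KeepFaceVisible : MonoBehaviour
    {
        SkinnedMeshRenderer m_Renderer;

        void Awake()
        {
            m_Renderer = GetComponentInChildren<SkinnedMeshRenderer>();
        }

        void Update()
        {
            if (m_Renderer != null && !m_Renderer.enabled)
                m_Renderer.enabled = true;
        }
    }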


The docs say: "facesChanged: Raised for each new ARFace detected in the environment."
But in fact the event is dispatched like an Update function on Android; it keeps being dispatched for the same face while my face stays in front of the camera.

Is it a bug? @davidmo_unity @jimmya @mdurand

ARFoundation 3.0.1 with FaceMesh scene from samples.

P.S. So compared with iOS, Android can't remember the face. Is that normal?


I can run the remote with the Sloth on Unity 2019.2.9f1, but I'm trying to update to 2019.3.X with no success. Did anyone make it work on 2019.3.X?

Plus, my client app built on 2019.3.7 crashes on launch on my iPhone 11. The Xcode error says that my device does not have ARKit enabled, but built on 2019.2.9f1 it works great.

Weirdly, it works with my Mac (server scene on Unity 2019.3.7) and my iPhone 11 (built with Unity 2019.2.9f1), whereas with my PC (server scene on 2019.3.7) it does not. I thought the computer needed the same version used to build the client app, but on my Mac it works with two different versions.

Is there any plan for the facial ar remote project to be supported in the future? It’s a very interesting project!

@GeniusKoala - try out GitHub - yasirkula/UnityRuntimeInspector (a runtime Inspector and Hierarchy solution for Unity for debugging and runtime editing). It helps a lot for debugging and seeing what's going on in the Hierarchy and Inspector windows at runtime. I've heard that an AR remote solution from Unity is under development.