Hello,
I’ve been testing face tracking with MARS. Can someone guide me on how to apply occlusion to some of the points, for example the ear points, so that the ear points only show up when the head is rotated to the side (left or right)?
Also, how can I improve stability? Face tracking is stable on iOS devices, but on Android devices it is not as stable.
When you create the face mask (from the MARS Panel), you get a child GameObject of the Face Mask Proxy called “Depth Mask”. That GameObject is what occludes your content. If you have an artist, or you can do 3D modeling yourself, you can modify the depth mask’s mesh in the Mesh Filter and extend it to occlude the ears more if necessary.
Regarding face tracking on Android vs. iOS: unfortunately there is not much that can be done. Face tracking on iOS is significantly better than on Android, and that difference comes from the platforms themselves.
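As a rough alternative to modeling a larger depth mask, you could gate the ear content on head yaw relative to the camera. The sketch below is illustrative only: the field names, threshold, and sign conventions are assumptions, not a MARS API, and you may need to flip the comparison signs depending on how your proxy is oriented.

```csharp
using UnityEngine;

// Illustrative sketch: enable ear content only when the head is turned far
// enough to the side relative to the camera. Attach to the face proxy's root.
public class EarVisibilityByYaw : MonoBehaviour
{
    [SerializeField] Renderer m_LeftEarRenderer;   // assign your ear content
    [SerializeField] Renderer m_RightEarRenderer;
    [SerializeField] float m_YawThreshold = 25f;   // degrees of turn before an ear appears

    void Update()
    {
        var cam = Camera.main;
        if (cam == null)
            return;

        // Signed yaw between the face's forward vector and the direction to the
        // camera: ~0 when looking straight at the camera, grows as the head turns.
        var toCamera = cam.transform.position - transform.position;
        var yaw = Vector3.SignedAngle(transform.forward, toCamera, Vector3.up);

        // Flip these signs if your proxy's forward axis points the other way.
        if (m_LeftEarRenderer != null)
            m_LeftEarRenderer.enabled = yaw > m_YawThreshold;
        if (m_RightEarRenderer != null)
            m_RightEarRenderer.enabled = yaw < -m_YawThreshold;
    }
}
```

This keeps the default depth mask untouched and just hides the ear renderers while the face is near-frontal, which approximates the “ears only when turned” behavior asked about above.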
Hello,
The Depth Mask mesh does not adjust based on the face. Every person has a different face, and the content is occluded by the Depth Mask mesh. The default mesh in MARS is presumably sized for an average face, but for a person with a large face the depth mask will cover only 60% to 80% of it. We could modify the mesh in 3D modeling software to fit a large face, but then the same problem would occur for smaller faces. Can we use the face mesh returned by ARCore or ARKit? If so, can you guide me on how to achieve this?
Take a look at the ‘Face Mask’ template.
The FaceMask proxy has a ‘face action’ - I’ve included a screenshot.
The ‘Face Mesh’ mesh will be replaced by the face mesh provided by ARFoundation when running on device. This will give you the native ARKit face mesh, for example.
Android face tracking tends to be naturally shakier; we’re working on some good smoothing heuristics for that case.
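Until built-in smoothing lands, a generic low-pass filter over the tracked pose can take the edge off the jitter. This is a standard exponential-smoothing sketch, not a MARS feature; the component and field names are made up for illustration.

```csharp
using UnityEngine;

// Exponential smoothing sketch for reducing face-pose jitter on Android.
// Parent your content under this object and point m_Target at the raw
// tracked face transform; this object then follows it with a low-pass filter.
public class PoseSmoother : MonoBehaviour
{
    [SerializeField] Transform m_Target;            // raw tracked face transform
    [SerializeField] float m_PositionSharpness = 12f;
    [SerializeField] float m_RotationSharpness = 12f;

    void LateUpdate()
    {
        if (m_Target == null)
            return;

        // Frame-rate independent blend factor: 1 - exp(-sharpness * dt).
        // Higher sharpness tracks faster but smooths less.
        var t = 1f - Mathf.Exp(-m_PositionSharpness * Time.deltaTime);
        var r = 1f - Mathf.Exp(-m_RotationSharpness * Time.deltaTime);

        transform.position = Vector3.Lerp(transform.position, m_Target.position, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, m_Target.rotation, r);
    }
}
```

The trade-off is a small amount of lag behind the true pose; tune the sharpness values per device until the lag is acceptable.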
Hello sir,
I’ve used MARS in my project and currently need help. I want to stop the AR camera in MARS from rendering the camera feed without disabling the AR camera itself; I just want to stop showing the device camera’s view of the outside world in Unity.
One more issue I’m facing: when reloading the scene, face tracking stops working. This only happens on iOS; on Android it works well.
Consider this scenario:
I have two scenes: 1) a 3D scene and 2) a face tracking scene (MARS).
When I change from the 1st scene to the 2nd, face tracking works well, but after reloading the 2nd scene, face tracking no longer works. It also fails if I load the 1st scene and then load the 2nd scene again.
I’m using version 1.1.1 and will test again after upgrading to version 1.2.0. Can you suggest a workaround for this issue so we don’t have to wait for future updates?
Here’s a custom action I’ve used in the past to auto-fit head content to heads of different sizes:
/// <summary>
/// Stretches a transform to match the expected head bounds, using common landmarks as a basis
/// </summary>
[Unity.MARS.Attributes.MonoBehaviourComponentMenu(typeof(FitToHeadAction), "Action/Fit to Head")]
public class FitToHeadAction : Actions.TransformAction, IUsesCameraOffset, IUsesMARSTrackableData<IMRFace>, ISpawnable, IRequiresTraits
{
// How much to expand / offset the mesh by to make it fit the head properly
// Use content/transform parenting to refine this further
static readonly Vector3 k_Padding = new Vector3( 0.01f, 0.0f, 0.02f);
const float k_FaceConfidenceDecay = 0.05f;
#if !FI_AUTOFILL
IProvidesCameraOffset IFunctionalitySubscriber<IProvidesCameraOffset>.provider { get; set; }
#endif
static readonly TraitRequirement[] k_RequiredTraits = { TraitDefinitions.Face };
Vector3 m_DefaultScale = Vector3.one;
Vector3 m_DefaultCenter = Vector3.zero;
Vector3 m_CurrentScale = Vector3.one;
Vector3 m_CurrentCenter = Vector3.zero;
Transform m_CameraTransform;
float m_LastFaceConfidence = 0.0f;
public void OnMatchAcquire(QueryResult queryResult)
{
InitializeLandmarkDefaults();
UpdateScale(queryResult);
}
public void OnMatchUpdate(QueryResult queryResult)
{
UpdateScale(queryResult);
}
void InitializeLandmarkDefaults()
{
m_CameraTransform = MARSUtils.MarsRuntimeUtils.GetActiveCamera(true).transform;
// Ensures the minimum landmarks are here to use this function effectively
var fallbackFacelandmarks = Landmarks.MARSFallbackFaceLandmarks.instance.GetFallbackFaceLandmarkPoses();
var fallbackMissing = false;
if (!fallbackFacelandmarks.ContainsKey(MRFaceLandmark.LeftEar))
{
Debug.LogError("Missing the ear fallback landmark!");
fallbackMissing = true;
}
if (!fallbackFacelandmarks.ContainsKey(MRFaceLandmark.NoseTip))
{
Debug.LogError("Missing the nose tip fallback landmark!");
fallbackMissing = true;
}
if (!fallbackFacelandmarks.ContainsKey(MRFaceLandmark.LeftEye))
{
Debug.LogError("Missing the eye fallback landmark!");
fallbackMissing = true;
}
if (fallbackMissing)
{
m_DefaultScale = transform.localScale;
m_DefaultCenter = transform.localPosition;
return;
}
var earLandmark = fallbackFacelandmarks[MRFaceLandmark.LeftEar];
var noseLandmark = fallbackFacelandmarks[MRFaceLandmark.NoseTip];
var eyeLandmark = fallbackFacelandmarks[MRFaceLandmark.LeftEye];
// Based on the fallback values, we get what the 'standard' scale and center are
m_DefaultScale = CalculateScale(earLandmark.position, eyeLandmark.position, noseLandmark.position);
m_DefaultCenter = CalculateCenter(earLandmark.position, eyeLandmark.position);
m_CurrentScale = m_DefaultScale;
m_CurrentCenter = m_DefaultCenter;
}
Vector3 CalculateScale(Vector3 earPosition, Vector3 eyePosition, Vector3 nosePosition)
{
// Ears are on opposite sides of the head, so we can use them to get general head width
// The eyes and nose tend to be at set ratios of head size, so using the distance between them we determine a general head height multiple
// We assume the head is roughly square shaped on the X-Z plane
var calculatedScale = new Vector3(Mathf.Abs(earPosition.x) * 2.0f,
Mathf.Abs((eyePosition.y - nosePosition.y)) * 8.0f,
Mathf.Abs(earPosition.x) * 2.0f);
calculatedScale += k_Padding * 2.0f;
return calculatedScale;
}
Vector3 CalculateCenter(Vector3 earPosition, Vector3 eyePosition)
{
// Head is centered at eye-height. Ears tend to be in the middle of the head, while eyes are up front, so we can use that for depth adjustment
return new Vector3(0.0f, eyePosition.y, Mathf.Abs(earPosition.x - eyePosition.z) * 0.5f - (k_Padding.x - k_Padding.z));
}
void UpdateScale(QueryResult queryResult)
{
var assignedFace = queryResult.ResolveValue(this);
if (assignedFace == null)
return;
var headPose = assignedFace.pose;
var newScale = m_CurrentScale;
var newCenter = m_CurrentCenter;
// Pull up landmark poses and figure out new transformation values from them
if (assignedFace.LandmarkPoses != null) // assignedFace was already null-checked above
{
var resultLandmarks = assignedFace.LandmarkPoses;
if (resultLandmarks.ContainsKey(MRFaceLandmark.LeftEar) && resultLandmarks.ContainsKey(MRFaceLandmark.NoseTip) && resultLandmarks.ContainsKey(MRFaceLandmark.LeftEye))
{
var earLandmark = resultLandmarks[MRFaceLandmark.LeftEar];
var noseLandmark = resultLandmarks[MRFaceLandmark.NoseTip];
var eyeLandmark = resultLandmarks[MRFaceLandmark.LeftEye];
var earPosition = headPose.ApplyInverseOffsetTo(earLandmark.position);
var eyePosition = headPose.ApplyInverseOffsetTo(eyeLandmark.position);
var nosePosition = headPose.ApplyInverseOffsetTo(noseLandmark.position);
newScale = CalculateScale(earPosition, eyePosition, nosePosition);
newCenter = CalculateCenter(earPosition, eyePosition);
}
}
// Landmark estimation tends to be very inaccurate when the head is viewed sideways. Instead, keep track of historical 'best' values and use those
// Use the most confident value for headpose w/ mars session camera
// Confident : Width/height ratio close to average
// Confident : Face is facing camera (dot)
// Confident : Decay over time
var angleConfidence = 1.0f - (Mathf.Clamp(Vector3.Angle(headPose.forward, m_CameraTransform.forward), 0.0f, 45.0f) / 45.0f);
if (m_LastFaceConfidence > 0.825f)
m_LastFaceConfidence = Mathf.Max(m_LastFaceConfidence - Time.deltaTime * k_FaceConfidenceDecay, 0.75f);
if (angleConfidence > m_LastFaceConfidence)
{
var lerpPercent = angleConfidence / (angleConfidence + m_LastFaceConfidence);
m_CurrentScale = Vector3.Lerp(m_CurrentScale, newScale, lerpPercent);
m_CurrentCenter = Vector3.Lerp(m_CurrentCenter, newCenter, lerpPercent);
m_LastFaceConfidence = angleConfidence;
}
headPose = headPose.TranslateLocal(m_CurrentCenter);
transform.SetWorldPose(this.ApplyOffsetToPose(headPose));
transform.localScale = m_CurrentScale;
}
public TraitRequirement[] GetRequiredTraits()
{
return k_RequiredTraits;
}
}
Hi there, @IF_test . I just did a test with the older versions of AR Foundation and the ARKit plugin and found that the issues we were seeing with switching scenes and starting up face tracking weren’t present. These were the versions we originally used to verify MARS face tracking features. We’re still working on updates to fix issues with later versions, but if you roll back to the 2.x version of AR Foundation, you may be able to work around the issue on your end.
Hi there! Thanks for the reminder to follow up on this. We recently found the issue, and it might be quite a simple fix! The default face template has a head bust for the occlusion mask, which has a transform offset. If you zero out that offset, the head bust will not line up anymore in Scene View but in a device build the mesh should show up in the right spot. We will be fixing the template for our next release, but you should be able to work around the issue by zeroing out the transform in your scene. Just bear in mind that future face mask templates you make with MARS 1.2.0 will have the same issue.
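If you’d rather not touch the template asset in the editor, the same workaround can be applied at runtime, which also keeps the bust aligned in Scene View since the offset is only zeroed in Play Mode and device builds. This is a sketch assuming the default child name “Depth Mask” from the template; adjust the name if you’ve renamed it.

```csharp
using UnityEngine;

// One-off workaround sketch: zero the Depth Mask's local offset at startup so
// the occlusion mesh shows up in the right spot in device builds.
// Attach to the Face Mask Proxy created from the default template.
public class ZeroDepthMaskOffset : MonoBehaviour
{
    void Start()
    {
        var depthMask = transform.Find("Depth Mask");
        if (depthMask == null)
        {
            Debug.LogWarning("No child named 'Depth Mask' found on " + name);
            return;
        }

        depthMask.localPosition = Vector3.zero;
        depthMask.localRotation = Quaternion.identity;
    }
}
```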