[RELEASED] Dlib FaceLandmark Detector

Dlib FaceLandmark Detector
https://assetstore.unity.com/packages/tools/integration/dlib-facelandmark-detector-64314

https://www.youtube.com/watch?v=pwm66AC7lFk


Requires Unity2021.3.35f1 or higher.

Works with Unity Cloud Build
ChromeOS support
iOS & Android support
Windows10 UWP support
WebGL support
Win & Mac & Linux Standalone support
Preview support in the Editor

DlibFaceLandmarkDetector performs object detection and shape prediction using the Dlib 19.7 C++ library.

Official Site | ExampleCode | Android Demo | WebGL Demo | Tutorial & Demo Video | Forum | API Reference

Features:

  • You can detect frontal human faces and face landmarks (68 points, 17 points, 6 points) in a Texture2D, WebCamTexture, or image byte array. In addition, you can detect different objects by changing the trained data file. (A minimal usage sketch follows this feature list.)

  • The ObjectDetector is made using the now classic Histogram of Oriented Gradients (HOG) feature combined with a linear classifier, an image pyramid, and a sliding-window detection scheme. You can train your own detector in addition to the human face detector; if you want to train your own detector, please refer to this page.

  • The ShapePredictor is created using dlib’s implementation of the paper “One Millisecond Face Alignment with an Ensemble of Regression Trees” by Vahid Kazemi and Josephine Sullivan (CVPR 2014). You can train your own models in addition to the human face landmark model using dlib’s machine learning tools; if you want to train your own models, please refer to this page.

  • Advanced samples using “OpenCV for Unity” are included. (Running these samples requires “OpenCV for Unity”.)

  • By utilizing the VisualScripting With DlibFaceLandmarkDetector Example, you can leverage all the methods available in DlibFaceLandmarkDetector within Unity’s Visual Scripting development environment: VisualScripting With DlibFaceLandmarkDetector Example (GitHub)
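
For reference, here is a minimal usage sketch for detecting faces and landmarks in a Texture2D. It is only a sketch based on the API calls used in the example code further down this thread (FaceLandmarkDetector, Utils.getFilePath, SetImage, Detect, DetectLandmark, Dispose); exact file names and overloads may differ between versions.

using System.Collections.Generic;
using UnityEngine;
using DlibFaceLandmarkDetector;

public class MinimalFaceLandmarkSketch : MonoBehaviour
{
    // A readable Texture2D to run detection on (assign in the Inspector).
    public Texture2D imgTexture;

    void Start ()
    {
        // Load the 68-point shape predictor file from StreamingAssets.
        string predictorPath = DlibFaceLandmarkDetector.Utils.getFilePath ("shape_predictor_68_face_landmarks.dat");
        FaceLandmarkDetector detector = new FaceLandmarkDetector (predictorPath);

        // Set the image, detect face rectangles, then detect the landmarks of each face.
        detector.SetImage (imgTexture);
        List<UnityEngine.Rect> faces = detector.Detect ();
        foreach (var rect in faces) {
            List<Vector2> landmarks = detector.DetectLandmark (rect);
            Debug.Log ("face : " + rect + " landmark count : " + landmarks.Count);
        }

        detector.Dispose ();
    }
}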

Examples:

  • Texture2DExample
  • WebCamTextureExample
  • CatDetectionExample

Advanced Examples(require OpenCV for Unity):

  • Texture2DToMatExample
  • WebCamTextureToMatHelperExample
  • VideoCaptureExample
  • ARHeadExample
  • VideoCaptureARHeadExample
  • FrameOptimizationExample

ExampleCode using Dlib FaceLandmark Detector is available.

DlibFaceLandmarkDetector uses Dlib under the Boost Software License; see the Third-Party Notices.txt file in the package for details.
The Shape Predictor model files included with this asset are available for commercial use.
System Requirements:
Build Win Standalone & Preview Editor: Windows 8 or later
Build Mac Standalone & Preview Editor: macOS 10.13 or later
Build Linux Standalone & Preview Editor: Ubuntu 18.04 or later
Build Android: API level 21 or later
Build iOS: iOS 11.0 or later

More Info >>
Release Notes:
1.4.0
[Common]Changed the minimum supported version to Unity2021.3.35f1.
[Common]Separated the examples using the Built-in Render Pipeline and Scriptable Render Pipeline.

1.3.9
[iOS]Added separate plugin files for iOS for devices and simulators.
[WebGL]Added plugin files with only simd enabled.
1.3.8
[Common]Changed to use unsafe code by default.
[Common]Optimized the amount of memory allocation, in the FaceLandmarkDetector class.
1.3.7
[Common]Changed the minimum supported version to Unity2020.3.48f1.
[WebGL]Added support for “WebAssembly 2023”.
[iOS]Changed “Target minimum iOS Version” to 11.0.
1.3.6
[WebGL]Added a plugin file with threads and simd enabled for the WebGL platform. This update removes support for the WebGL platform in Unity 2021.1 and below. (Select MenuItem[Tools/Dlib FaceLandmark Detector/Open Setup Tools/WebGL Settings])
1.3.5
[Windows]Added Support for ARM64.
[WebGL]Added Unity2023.2 or later support.
[Lumin]Removed Lumin platform support (for MagicLeapOne).
[Common]Added a button to SetupTools to automatically add scenes under the “Examples” folder to “Scenes In Build”.
1.3.4
[Common]Changed the setup procedure to use the SetupToolsWindow.
[Common]Changed the namespace under the “DlibFaceLandmarkDetector/Editor” folder from “DlibFaceLandmarkDetector” to “DlibFaceLandmarkDetector.Editor”.
[Common]Added “DlibFaceLandmarkDetector” folder under “StreamingAssets” folder.
[Common]Added function to automatically move the StreamingAssets folder.
[WebGL]Added Unity2022.2 or later support.
1.3.3
[Android]Added Support for ChromeOS (x86 and x86_64 architectures).
1.3.2
[Common]Added Assembly Definitions.
1.3.1
[Common]Fixed a small issue.
1.3.0
[UWP]Added ARM64 Architecture.
1.2.9
[Common]Added optimization code using the NativeArray class. (Requires the PlayerSettings.allowUnsafeCode flag, the “DLIB_USE_UNSAFE_CODE” ScriptingDefineSymbol, and Unity2018.2 or later.)
[Common]Added support for Unicode file path ( objectDetectorFilePath and shapePredictorFilePath ).
[Common]Added ImageOptimizationHelper to ARHeadWebCamTextureExample.
[Common]Added some converter methods to OpenCVForUnityUtils.cs.
1.2.8
[Lumin]Added the code for MagicLeap.
1.2.7
[WebGL]Added Unity2019.1 or later support.
1.2.6
[Common]Added “sp_human_face_17.dat”, “sp_human_face_17_mobile.dat” and “sp_human_face_6.dat”.
[Common]Changed the training dataset of Shape Predictor model. Since the training dataset consists of Flickr CC0 licensed images, the Shape Predictor model files are available for commercial use.
[Common]Added BenchmarkExample.
1.2.5
[Common]Re-assigned namespace.
[Common]Support for OpenCVforUnity2.3.3 or later.
1.2.4
[macOS]Removed 32bit architecture (i386) from dlibfacelandmarkdetector.bundle.
1.2.3
[Android,UWP]Fixed Utils.setDebugMode() method on the IL2CPP backend.
1.2.2
[iOS]Added a function to automatically remove the simulator architecture(i386,x86_64) at build time.
[Common] Improved DlibFaceLandmarkDetectorMenuItem.setPluginImportSettings() method.
[Common]Updated to WebCamTextureToMatHelper.cs v1.0.9.
[Common]Added support for Utils.setDebugMode() method on all platforms.
1.2.1
[Common]Updated to WebCamTextureToMatHelper.cs v1.0.8.
[Common]Updated to LowPassPointsFilter v1.0.1. Updated to KFPointsFilter v1.0.2. Updated to OFPointsFilter v1.0.2.
[Common] Added updateMipmaps and makeNoLongerReadable flag to DrawDetectResult () and DrawDetectLandmarkResult() method.
[Common]Fixed Utils.getFilePathAsync() method.(Changed #if UNITY_2017 && UNITY_2017_1_OR_NEWER to #if UNITY_2017_1_OR_NEWER.)
1.2.0
[Common]Updated to WebCamTextureToMatHelper.cs v1.0.7.
[Common]Fixed WebCamTextureExample and OpenCVForUnityUtils.cs.
[Common]Added NoiseFilterVideoCaptureExample and NoiseFilterWebCamTextureExample.
[Common]Added useLowPassFilter option to ARHeadVideoCaptureExample and ARHeadWebCamTextureExample.
[Common]Added throwException flag to Utils.setDebugMode() method.
[Common]Added drawIndexNumbers flag to DrawFaceLandmark() method.
1.1.9
[Android]Added arm64-v8a Architecture.
1.1.8
[Common]Updated WebCamTextureExample.(support Portrait ScreenOrientation)
[Common]Updated to WebCamTextureToMatHelper.cs v1.0.4.
1.1.7
[Common]Updated “human_face_68_sp.dat” and “human_face_68_sp_for_mobile.dat”.
1.1.6
[Common]Updated to dlib19.7.
[Common]Updated to WebCamTextureToMatHelper.cs v1.0.3.
[Common]Updated “human_face_68_sp.dat” and “human_face_68_sp_for_mobile.dat”.
1.1.5
[Common]Switched to the shape predictor file trained using new datasets.
1.1.4
[Common]Updated WebCamTextureToMatHelper.cs v1.0.2
[Common]Improved Utils.getFilePathAsync().
1.1.3
[Common]Fixed to improve the pose estimation performance.
[Common] Changed DetectLandmarkArray (int left, int top, int width, int height) to DetectLandmarkArray (double left, double top, double width, double height).
[WebGL]Fixed Utils.getFilePathAsync() method.
1.1.2
[Common]Updated WebCamTextureToMatHelper.cs and OptimizationWebCamTextureToMatHelper.cs(Changed several method names.).
[Common]Changed the Example name.
1.1.1
[Common]Improved Utils.getFilePath() and Utils.getFilePathAsync().
1.1.0
[Win][Mac][Linux][UWP]Added native plugin files built with the SSE4 or AVX compiler option enabled.
1.0.9
[WebGL]Added WebGL Plugin for Unity5.6.
1.0.8
[Common]Changed the name of asset project.(“Sample” to “Example”)
[Common]Fixed VideoCaptureARExample and WebCamTextureARExample.
1.0.7
[Common]Fixed WebCamTextureToMatHelper.cs.(flipVertical and flipHorizontal flag)
1.0.6
[Common]Fixed OpenCVForUnityMenuItem.cs.(No valid name for platform: 11 Error)
[Common]Added OptimizationWebCamTextureToMatHelper.cs.
1.0.5
[Common]Fixed WebCamTextureToMatHelper class.
[Common]Added Utils.getVersion().
[Common]Fixed Utils.getFilePathAsync().
1.0.4
[Common]Updated shape_predictor_68_face_landmarks_for_mobile.dat.
1.0.3
[WebGL]Added WebGL(beta) support.(Unity5.3 or later)
[Common]Fixed missing script error.(WebCamTextureToMatHelper.cs)
[Common]Added shape_predictor_68_face_landmarks_for_mobile.dat.
1.0.2
[Common]Improved WebCamTextureHelper class.
1.0.1
[Common]Added OptimizationSample.
[Common]Added DetectRectDetection() method.
1.0.0
[Common]Initial Commit


Hi,

Hope you are doing well !

Is it possible to get it before the review? I already purchased OpenCV Modules from Enox.

Cheers,
L.


Hi EnoxSoftware,

I bought your “OpenCV for Unity” and I’m very impressed with it. I would love to get “Dlib FaceLandmark Detector” too. Any chance that I can get a sneak preview ahead of the pending review?

Cheers!

Unfortunately, ‘Dlib FaceLandmark Detector’ was declined by the Asset Store.
I have submitted a fixed version to the asset store again. Please wait a while.

Too bad. Good luck with the update!

Dlib FaceLandmark Detector v1.0.0 is now available.

Version changes
1.0.0
Initial version


Hello guys. I’ve got the newest version of both OpenCV and Dlib. Great work. How can I improve head tracking so the 3D model moves more smoothly, and increase the framerate?
I don’t have experience working with head tracking and augmentation.
I would appreciate it if you could help me with that :)

Hello, I was experimenting with the AR sample that uses OpenCV + Dlib, but I noticed that there seems to be a bug with the input Mat.

When passing the image to the faceLandmarkDetector:

Mat rgbaMat = extractSubMat (webCamTextureToMatHelper.GetMat ());

OpenCVForUnityUtils.SetImage (faceLandmarkDetector, rgbaMat);

List<UnityEngine.Rect> detectResult = faceLandmarkDetector.Detect ();

The detection doesn’t work if I reduce the number of cols, but it does if I reduce the number of rows. This is quite strange, since all I’m doing in the “extractSubMat” method is:

return originalMat.submat(new Range(0, originalMat.rows()), new Range(0, originalMat.cols() - 100));

The previous AR sample version that doesn’t use Dlib used to work with this code. I need to be able to arbitrarily resize the input Mat before detection so that I can optimize the time it takes to search for a face while still showing the user a high-definition video input.

The steps on the next page might be effective.
Resize Frame
Skip frame
http://www.learnopencv.com/speeding-up-dlib-facial-landmark-detector/

Pixel data passed to the “FaceLandmarkDetector.SetImage()” method must be continuous; in other words, “Mat.isContinuous()” must return true. I plan to add documentation about this in the next version. For example, you can copy the submat into a new (continuous) Mat before passing it to the detector:

Mat imgMat = new Mat (imgTexture.height, imgTexture.width, CvType.CV_8UC4);

OpenCVForUnity.Utils.texture2DToMat (imgTexture, imgMat);
Debug.Log ("imgMat.ToString " + imgMat.ToString ());

Mat subMat = imgMat.submat (new Range (0, imgMat.rows ()), new Range (0, imgMat.cols () - 100));
Debug.Log ("subMat.isContinuous() " + subMat.isContinuous ());

Mat copyMat = new Mat (subMat.rows (), subMat.cols (), CvType.CV_8UC4);
Debug.Log ("copyMat.isContinuous() " + copyMat.isContinuous ());
subMat.copyTo (copyMat);

FaceLandmarkDetector faceLandmarkDetector = new FaceLandmarkDetector (DlibFaceLandmarkDetector.Utils.getFilePath ("shape_predictor_68_face_landmarks.dat"));

OpenCVForUnityUtils.SetImage (faceLandmarkDetector, copyMat);

List<UnityEngine.Rect> detectResult = faceLandmarkDetector.Detect ();

foreach (var rect in detectResult) {
    Debug.Log ("face : " + rect);

    OpenCVForUnityUtils.DrawFaceRect (copyMat, rect, new Scalar (255, 0, 0, 255), 2);

    List<Vector2> points = faceLandmarkDetector.DetectLandmark (rect);

    Debug.Log ("face points count : " + points.Count);
    if (points.Count > 0) {
        OpenCVForUnityUtils.DrawFaceLandmark (copyMat, points, new Scalar (0, 255, 0, 255), 2);
    }
}

faceLandmarkDetector.Dispose ();

Texture2D texture = new Texture2D (copyMat.cols (), copyMat.rows (), TextureFormat.RGBA32, false);

OpenCVForUnity.Utils.matToTexture2D (copyMat, texture);

gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
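
As a shorter alternative, “Mat.clone()” should also copy the submat into newly allocated, continuous memory, so the explicit copy above could likely be written as:

// clone () copies the submat into newly allocated (continuous) memory,
// equivalent to the "new Mat" + copyTo pair above.
Mat copyMat = subMat.clone ();
Debug.Log ("copyMat.isContinuous() " + copyMat.isContinuous ());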

OptimizationSample (Resize Frame+Skip frame)

using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using System;
using System.Runtime.InteropServices;

#if UNITY_5_3 || UNITY_5_3_OR_NEWER
using UnityEngine.SceneManagement;
#endif
using OpenCVForUnity;
using DlibFaceLandmarkDetector;

namespace DlibFaceLandmarkDetectorSample
{
    /// <summary>
    /// Face Landmark Detection from WebCamTextureToMat Sample.
    /// </summary>
    [RequireComponent(typeof(WebCamTextureToMatHelper))]
    public class OptimizationSample : MonoBehaviour
    {
   
        /// <summary>
        /// The colors.
        /// </summary>
        Color32[] colors;

        /// <summary>
        /// The texture.
        /// </summary>
        Texture2D texture;

        /// <summary>
        /// The web cam texture to mat helper.
        /// </summary>
        WebCamTextureToMatHelper webCamTextureToMatHelper;

        /// <summary>
        /// The face landmark detector.
        /// </summary>
        FaceLandmarkDetector faceLandmarkDetector;

        /// <summary>
        /// The face detection downsample ratio (FACE_DOWNSAMPLE_RATIO).
        /// </summary>
        public int FACE_DOWNSAMPLE_RATIO = 2;

        /// <summary>
        /// The number of frames to skip between face detections (SKIP_FRAMES).
        /// </summary>
        public int SKIP_FRAMES = 2;

        /// <summary>
        /// The count.
        /// </summary>
        int count;

        /// <summary>
        /// The rgba mat_small.
        /// </summary>
        Mat rgbaMat_small;

        /// <summary>
        /// The detect result.
        /// </summary>
        List<UnityEngine.Rect> detectResult;

        // Use this for initialization
        void Start ()
        {
            faceLandmarkDetector = new FaceLandmarkDetector (DlibFaceLandmarkDetector.Utils.getFilePath ("shape_predictor_68_face_landmarks.dat"));

            webCamTextureToMatHelper = gameObject.GetComponent<WebCamTextureToMatHelper> ();
            webCamTextureToMatHelper.Init ();
        }

        /// <summary>
        /// Raises the web cam texture to mat helper inited event.
        /// </summary>
        public void OnWebCamTextureToMatHelperInited ()
        {
            Debug.Log ("OnWebCamTextureToMatHelperInited");

            Mat webCamTextureMat = webCamTextureToMatHelper.GetMat ();

            colors = new Color32[webCamTextureMat.cols () * webCamTextureMat.rows ()];
            texture = new Texture2D (webCamTextureMat.cols (), webCamTextureMat.rows (), TextureFormat.RGBA32, false);

            rgbaMat_small = new Mat ();
            detectResult = new List<UnityEngine.Rect> ();
            count = 0;

            gameObject.transform.localScale = new Vector3 (webCamTextureMat.cols (), webCamTextureMat.rows (), 1);
            Debug.Log ("Screen.width " + Screen.width + " Screen.height " + Screen.height + " Screen.orientation " + Screen.orientation);
                                   
            float width = gameObject.transform.localScale.x;
            float height = gameObject.transform.localScale.y;
                                   
            float widthScale = (float)Screen.width / width;
            float heightScale = (float)Screen.height / height;
            if (widthScale < heightScale) {
                Camera.main.orthographicSize = (width * (float)Screen.height / (float)Screen.width) / 2;
            } else {
                Camera.main.orthographicSize = height / 2;
            }

            gameObject.GetComponent<Renderer> ().material.mainTexture = texture;

        }

        /// <summary>
        /// Raises the web cam texture to mat helper disposed event.
        /// </summary>
        public void OnWebCamTextureToMatHelperDisposed ()
        {
            Debug.Log ("OnWebCamTextureToMatHelperDisposed");

        }

        // Update is called once per frame
        void Update ()
        {

            if (webCamTextureToMatHelper.isPlaying () && webCamTextureToMatHelper.didUpdateThisFrame ()) {

                Mat rgbaMat = webCamTextureToMatHelper.GetMat ();

                // Resize image for face detection
                Imgproc.resize (rgbaMat, rgbaMat_small, new Size (), 1.0 / FACE_DOWNSAMPLE_RATIO, 1.0 / FACE_DOWNSAMPLE_RATIO, Imgproc.INTER_LINEAR);


                OpenCVForUnityUtils.SetImage (faceLandmarkDetector, rgbaMat_small);


                // Detect faces on resize image
                if (count % SKIP_FRAMES == 0) {
                    detectResult = faceLandmarkDetector.Detect ();
                }
               
                foreach (var rect in detectResult) {

                    List<Vector2> points = faceLandmarkDetector.DetectLandmark (rect);

                    if (points.Count > 0) {
                        List<Vector2> originalPoints = new List<Vector2> (points.Count);
                        foreach (var point in points) {
                            originalPoints.Add(new Vector2(point.x * FACE_DOWNSAMPLE_RATIO, point.y * FACE_DOWNSAMPLE_RATIO));
                        }

                        OpenCVForUnityUtils.DrawFaceLandmark (rgbaMat, originalPoints, new Scalar (0, 255, 0, 255), 2);
                    }

                    UnityEngine.Rect originalRect = new UnityEngine.Rect(rect.x * FACE_DOWNSAMPLE_RATIO, rect.y * FACE_DOWNSAMPLE_RATIO, rect.width * FACE_DOWNSAMPLE_RATIO, rect.height * FACE_DOWNSAMPLE_RATIO);
                    OpenCVForUnityUtils.DrawFaceRect (rgbaMat, originalRect, new Scalar (255, 0, 0, 255), 2);
                }

                Imgproc.putText (rgbaMat, "Original: (" + rgbaMat.width () + "," + rgbaMat.height () + ") DownScale; (" + rgbaMat_small.width () + "," + rgbaMat_small.height () + ") SkipFrames: " + SKIP_FRAMES, new Point (5, rgbaMat.rows () - 10), Core.FONT_HERSHEY_SIMPLEX, 1.0, new Scalar (255, 255, 255, 255), 2, Imgproc.LINE_AA, false);

                OpenCVForUnity.Utils.matToTexture2D (rgbaMat, texture, colors);

                count++;
            }

        }
   
        /// <summary>
        /// Raises the disable event.
        /// </summary>
        void OnDisable ()
        {
            webCamTextureToMatHelper.Dispose ();

            faceLandmarkDetector.Dispose ();
        }

        /// <summary>
        /// Raises the back button event.
        /// </summary>
        public void OnBackButton ()
        {
            #if UNITY_5_3 || UNITY_5_3_OR_NEWER
            SceneManager.LoadScene ("DlibFaceLandmarkDetectorSample");
            #else
            Application.LoadLevel ("DlibFaceLandmarkDetectorSample");
            #endif
        }

        /// <summary>
        /// Raises the play button event.
        /// </summary>
        public void OnPlayButton ()
        {
            webCamTextureToMatHelper.Play ();
        }

        /// <summary>
        /// Raises the pause button event.
        /// </summary>
        public void OnPauseButton ()
        {
            webCamTextureToMatHelper.Pause ();
        }

        /// <summary>
        /// Raises the stop button event.
        /// </summary>
        public void OnStopButton ()
        {
            webCamTextureToMatHelper.Stop ();
        }

        /// <summary>
        /// Raises the change camera button event.
        /// </summary>
        public void OnChangeCameraButton ()
        {
            webCamTextureToMatHelper.Init (null, webCamTextureToMatHelper.requestWidth, webCamTextureToMatHelper.requestHeight, !webCamTextureToMatHelper.requestIsFrontFacing);
        }
       
    }
}

Hopefully someone could clarify a couple of things for me.
First of all, it’s not very clear whether this is a standalone asset or whether OpenCV for Unity is also required; if so, it’s going to be too expensive for me!
Second, I see the demo app is 152M, which would make it about 120M too big for me! What would you say is the minimum number of megabytes this asset would add to a build if all the examples were stripped out and it just ran face detection, a bit like the one in the post above?

First:
You can use all the features of “Dlib FaceLandmark Detector” without “OpenCV for Unity”.

Second:
Because Dlib’s default trained file (shape_predictor_68_face_landmarks.dat) is 100M, the demo app size is over 150M.
You can reduce the app size if you use smaller files.
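
For example, a minimal sketch (the smaller predictor file name below is hypothetical; you would need to train or obtain such a file and place it under Assets/StreamingAssets yourself):

// "my_small_shape_predictor.dat" is a hypothetical smaller predictor file
// placed under Assets/StreamingAssets; only the loaded file changes,
// the rest of the detection code stays the same.
string smallPredictorPath = DlibFaceLandmarkDetector.Utils.getFilePath ("my_small_shape_predictor.dat");
FaceLandmarkDetector faceLandmarkDetector = new FaceLandmarkDetector (smallPredictorPath);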

Thanks for the info there… Do you know if the .dat file is basically in XML format? When I did my thing, I was able to make the files massively smaller by stripping out all the XML stuff and just storing arrays of numbers, which could be reconstructed later when they were read… Maybe yours does something similar, i.e. reads the file into an array, and that array itself could just be saved and be much smaller?

Also, do you know if there are any trained files available that have a lot fewer landmarks, and if not, how difficult is it to ‘train’ one?

Hello, I just wanted to thank you for creating such a useful tool; it has saved me a lot of time on my current project. I would like to know what changes were made in the newest update, Version 1.0.1 (Aug 02, 2016). Is there a link you can direct me to for details about the plugin updates you’ve made? Thanks again!

Unfortunately, I couldn’t find another trained file.

Also, “frontal_cat_face.svm” and “shape_predictor_68_cat_face_landmarks.dat” are files that I trained.
Please refer to ReadMe.pdf (https://github.com/EnoxSoftware/DlibFaceLandmarkDetector/blob/master/ReadMe.pdf) for details.

Dlib FaceLandmark Detector v1.0.1 is now available.

Version changes
1.0.1
[Common]Added OptimizationSample.
[Common]Added DetectRectDetection() method. You can obtain detailed data for the detected face (detection_confidence, weight_index).

Don’t suppose ‘OptimizationSample’ is a sample that optimizes file size?! :slight_smile:
I can’t afford to buy it just to look, and there is no point in me buying it unless the file size is MUCH smaller LOL…

Let me know if you have done something on that front :slight_smile:

I just tried the Android demo. What is the cat face detection example in the demo supposed to do? All you show is some cat image with lines on it. That could also be a static image. Is there a live demo available? From what I’ve learned cat detection isn’t easy and the OpenCV demos don’t work. That’s why I’d rather try on a real example before I buy your asset.

Hi, I was wondering, can you post code on how to change the webcam size? I’m working with the dlib + opencv webcam AR example. I’d like the webcam view to be smaller, and it would really help if there was an easy way to resize it. Is this possible? Thanks
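
A sketch of one possible approach, based on the WebCamTextureToMatHelper.Init overload used in OnChangeCameraButton() in the OptimizationSample above (the parameter order is assumed from that code, and the helper may fall back to the nearest resolution the device actually supports):

// Request a smaller camera resolution from the helper (deviceName, width, height, isFrontFacing).
WebCamTextureToMatHelper helper = gameObject.GetComponent<WebCamTextureToMatHelper> ();
helper.Init (null, 320, 240, helper.requestIsFrontFacing);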