[RELEASED] Dlib FaceLandmark Detector

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Jun 4, 2016.

  1. C0deCat


    Joined:
    May 18, 2013
    Posts:
    27
    Can you please write more about this?
     
  2. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
    https://stackoverflow.com/questions/12949793/rotate-videocapture-in-opencv-on-android
    Code (CSharp):
public static Mat rotate(Mat src, double angle)
{
    Mat dst = new Mat();
    if (angle == 180 || angle == -180) {
        // flipping around both axes rotates by 180 degrees
        Core.flip(src, dst, -1);
    } else if (angle == 90 || angle == -270) {
        // transpose + horizontal flip rotates 90 degrees clockwise
        Core.flip(src.t(), dst, 1);
    } else if (angle == 270 || angle == -90) {
        // transpose + vertical flip rotates 90 degrees counterclockwise
        Core.flip(src.t(), dst, 0);
    } else {
        // unsupported angle: return an unrotated copy instead of an empty Mat
        src.copyTo(dst);
    }

    return dst;
}
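(A usage sketch for the snippet above; webCamTextureMat and the 90-degree angle are illustrative assumptions, not part of the original post.)
Code (CSharp):
// hypothetical: rotate a sideways frame upright before face detection
Mat upright = rotate (webCamTextureMat, 90);
OpenCVForUnityUtils.SetImage (faceLandmarkDetector, upright);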
     
  3. ina


    Joined:
    Nov 15, 2010
    Posts:
    1,085
Some issues with Unity 2017.2.0f3 and the latest Dlib + OpenCV Unity assets:


    Assets/DlibFaceLandmarkDetectorWithOpenCVExample/ARHeadVideoCaptureExample/ARHeadVideoCaptureExample.cs(262,23): error CS0143: The class `UnityEngine.XR.WSA.WebCam.VideoCapture' has no constructors defined

    Which version of Unity has this been tested to work with?
     
  4. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
Thank you very much for reporting.
I have run a build test with Unity 2017.2.0f3 using Unity Cloud Build.

Does this error occur on the UWP platform?
     
  5. ina


    Joined:
    Nov 15, 2010
    Posts:
    1,085
    This occurs on Mac OS X Sierra 16G1036 with build settings set to iOS
     
  6. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
    I could not reproduce the problem.
    Unity 2017.2.0f3
    OpenCV for Unity 2.2.3
    DlibFaceLandmarkDetector 1.1.7

    Code (CSharp):
Assets/DlibFaceLandmarkDetectorWithOpenCVExample/ARHeadVideoCaptureExample/ARHeadVideoCaptureExample.cs(262,23): error CS0143: The class `UnityEngine.XR.WSA.WebCam.VideoCapture' has no constructors defined
It seems that Unity's VideoCapture class and OpenCV's VideoCapture class may be conflicting.
Could you add the namespace?
    Code (CSharp):
capture = new OpenCVForUnity.VideoCapture ();
capture.open (dance_avi_filepath);
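(Alternatively, a minimal sketch assuming the clash is between UnityEngine.XR.WSA.WebCam.VideoCapture and OpenCVForUnity.VideoCapture: a using alias at the top of the file resolves the ambiguity for the whole script.)
Code (CSharp):
// alias the unqualified name to OpenCV's class for this file
using VideoCapture = OpenCVForUnity.VideoCapture;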
     
  7. ina


    Joined:
    Nov 15, 2010
    Posts:
    1,085
It looks like it was a Unity Asset Store update error. The error no longer occurs now that I have deleted the old version of the asset from ~/Library and re-downloaded the entire asset from scratch. https://answers.unity.com/questions/690729/how-do-you-remove-packages-already-downloaded-from.html
     
  8. ina


    Joined:
    Nov 15, 2010
    Posts:
    1,085
Some issues with submitting apps built with the latest Dlib + OpenCV to the iOS App Store (Unity 2017.2.0f3).



I had thought that setting the import settings to iOS would keep the other architectures from being built, but the errors above still occur.

     
    Last edited: Dec 14, 2017
  9. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Have you already tried this solution?
    https://forum.unity.com/threads/released-opencv-for-unity.277080/page-29#post-3309944

I plan to add this procedure to the documentation in the next version.
     
  10. tcmeric


    Joined:
    Dec 21, 2016
    Posts:
    190
  11. tcmeric


    Joined:
    Dec 21, 2016
    Posts:
    190
The webcam always seems to be upside down on my local system (PC, Win7, Unity 2017.3, newest version of the asset from the store) when using the example scene.

If I physically flip my webcam upside down, the orientation in Unity is correct and the face can then be tracked.

However, if I do this:

Code (CSharp):
foreach (var rect in detectResult) {
    points = faceLandmarkDetector.DetectLandmark (rect).ToArray ();
}

Code (CSharp):
trackedGameObject.transform.position = new Vector3 (points [10].x, points [10].y, 0);
Debug.Log ("XY " + points [10].x + points [10].y);

When I move my head left or right, the tracked game object moves left or right, but vertically it moves inversely (down is up, and up is down).
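(A possible fix, sketched here as an assumption about the surrounding code: Dlib's landmark coordinates are in image space, where Y increases downward, while Unity's world Y increases upward, so the Y value needs to be inverted against the image height.)

Code (CSharp):
// hypothetical mapping: flip image-space Y into Unity's upward Y axis
float worldY = webCamTexture.height - points [10].y;
trackedGameObject.transform.position = new Vector3 (points [10].x, worldY, 0);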
     
  12. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
    I’ll consider it.
     
  13. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Please flip the pixel array manually.
    Code (CSharp):
webCamTexture.GetPixels32 (colors);

//flip vertically
colors = Flip (colors, webCamTexture.width, webCamTexture.height);

faceLandmarkDetector.SetImage<Color32> (colors, webCamTexture.width, webCamTexture.height, 4, flip);
    Code (CSharp):
public Color32[] Flip (Color32[] input, int width, int height)
{
    Color32[] result = new Color32[input.Length];

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            result [x + ((height - 1) - y) * width] = input [x + y * width];
        }
    }
    return result;
}
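(A small design note, not from the original post: allocating a new Color32 array every frame generates garbage; caching one reusable buffer avoids per-frame GC spikes. A sketch:)
Code (CSharp):
// hypothetical variant that reuses a cached buffer between frames
Color32[] flipBuffer;

public Color32[] Flip (Color32[] input, int width, int height)
{
    if (flipBuffer == null || flipBuffer.Length != input.Length)
        flipBuffer = new Color32[input.Length];

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // copy row y of the source into row (height - 1 - y) of the result
            flipBuffer [x + ((height - 1) - y) * width] = input [x + y * width];
        }
    }
    return flipBuffer;
}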
     
  14. tcmeric


    Joined:
    Dec 21, 2016
    Posts:
    190
    Consider sending me a refund, thanks.
     
    ina likes this.
  15. ina


    Joined:
    Nov 15, 2010
    Posts:
    1,085
Have you implemented filters to make this less jittery?
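(For reference, a minimal smoothing sketch, not part of the asset: an exponential moving average over the landmark points trades a little latency for stability. The alpha value and the smoothedPoints buffer are illustrative assumptions.)

Code (CSharp):
// hypothetical exponential moving average over the 68 landmark points
Vector2[] smoothedPoints; // persists between frames
const float alpha = 0.5f; // lower = smoother but laggier

void SmoothLandmarks (List<Vector2> points)
{
    if (smoothedPoints == null)
        smoothedPoints = points.ToArray ();
    for (int i = 0; i < points.Count; i++)
        smoothedPoints [i] = Vector2.Lerp (smoothedPoints [i], points [i], alpha);
}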
     
  16. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
WebCamTextureExample supports portrait mode as of the latest version of DlibFaceLandmarkDetector.

    1.1.8
    [Common]Updated WebCamTextureExample.(support Portrait ScreenOrientation)
    [Common]Updated to WebCamTextureToMatHelper.cs v1.0.4.
     
  17. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
  18. ina


    Joined:
    Nov 15, 2010
    Posts:
    1,085
The results are much more stable, but they no longer give high accuracy around the eyes.
     
  19. ShortorKiss


    Joined:
    Jan 24, 2018
    Posts:
    1
I want to change the resolution in the WebCamTextureToMatHelper script, but CPU usage is very high and the FPS is very low. How can I maintain a stable FPS at 640x480 resolution?
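(One common workaround, sketched as an assumption rather than as the asset's official answer: run the detector on a downscaled copy of the frame and scale the detected rects back up; Imgproc.resize is OpenCV for Unity's resize.)

Code (CSharp):
// hypothetical: detect on a half-resolution copy to cut CPU cost
Mat smallMat = new Mat ();
Imgproc.resize (rgbaMat, smallMat, new Size (), 0.5, 0.5, Imgproc.INTER_LINEAR);
OpenCVForUnityUtils.SetImage (faceLandmarkDetector, smallMat);
List<UnityEngine.Rect> detectResult = faceLandmarkDetector.Detect ();
// scale the rects back up to the full-resolution frame
for (int i = 0; i < detectResult.Count; i++) {
    var r = detectResult [i];
    detectResult [i] = new UnityEngine.Rect (r.x * 2f, r.y * 2f, r.width * 2f, r.height * 2f);
}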
     
  20. gftrotta


    Joined:
    Sep 19, 2014
    Posts:
    5
Dear all,
I'm trying to run ARHeadExample, but I have an issue with the camera: it only works when flipped vertically; otherwise a black Quad is displayed.
What is happening?

WebCamTextureToMatHelperExample runs fine for me.

Can somebody help me, please?
     
  21. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Could you tell me about your test environment?
    OpenCV for Unity version :
    DlibFaceLandmarkDetector version :
    Unity version :
    Build Platform :
     
    gftrotta likes this.
  22. gftrotta


    Joined:
    Sep 19, 2014
    Posts:
    5
    OpenCV for Unity version : 2.2.6
    DlibFaceLandmarkDetector version : 1.1.9
    Unity version : 2017.1.1f1
    Build Platform : Windows 10 Pro x86_64

    Thanks a lot
     
    Last edited: Feb 7, 2018
  23. gftrotta


    Joined:
    Sep 19, 2014
    Posts:
    5
When I don't check "flip vertically", an error occurs:


    ArgumentOutOfRangeException: Argument is out of range.
    Parameter name: index
    System.Collections.Generic.List`1[UnityEngine.Vector2].get_Item (Int32 index) (at /Users/builduser/buildslave/mono/build/mcs/class/corlib/System.Collections.Generic/List.cs:633)
    DlibFaceLandmarkDetectorExample.ARHeadExample.Update () (at Assets/DlibFaceLandmarkDetectorWithOpenCVExample/ARHeadExample/ARHeadExample.cs:382)

When I check "flip vertically", this error does not occur, but nothing works, maybe because the head is flipped?

    Thanks.
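(For what it's worth, the stack trace points at an index into the landmark list; a defensive guard, sketched here as an assumption about the code around ARHeadExample.cs line 382, avoids the crash when fewer than the expected 68 points come back:)

Code (CSharp):
List<Vector2> points = faceLandmarkDetector.DetectLandmark (detectResult [0]);
// only run the pose estimation when the full 68-point shape was returned
if (points != null && points.Count == 68) {
    // ... code that indexes points [0] .. points [67]
}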
     
  24. gftrotta


    Joined:
    Sep 19, 2014
    Posts:
    5
The path of sp_human_face_68.dat resolved in Utils.cs is incorrect: string destPath = Path.Combine (Application.streamingAssetsPath, filepath);

    From the Unity doc: https://docs.unity3d.com/Manual/StreamingAssets.html

    And so the solution is: string destPath = Application.dataPath + "/DLibFaceLandmarkDetector/StreamingAssets/" + filepath;

    Bye
     
  25. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Please move the “DlibFaceLandmarkDetector/StreamingAssets/” folder to the “Assets/” folder.
    Setup.PNG
     
  26. SpiderJones


    Joined:
    Mar 29, 2014
    Posts:
    244
Hi, at work we have purchased OpenCV for Unity and Dlib FaceLandmark Detector. We want to take a Texture2D and place a 3D model over the user's face, the same thing that is done in your ARHeadExample, but without the camera. I've tried to isolate the transform-placement code in the ARHeadExample script, but it seems tightly coupled to a WebCamTextureToMatHelper object. Can you share an example showing specifically how to get the position, scale and rotation from the landmark points in the context of a Texture2D? Thanks!
     
  27. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Please replace ARHeadVideoCaptureExample.cs with this code.
    Code (CSharp):
using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using System;
using UnityEngine.UI;

#if UNITY_5_3 || UNITY_5_3_OR_NEWER
using UnityEngine.SceneManagement;
#endif
using OpenCVForUnity;
using DlibFaceLandmarkDetector;

namespace DlibFaceLandmarkDetectorExample
{
    /// <summary>
    /// AR Head VideoCapture Example
    /// This example was referring to http://www.morethantechnical.com/2012/10/17/head-pose-estimation-with-opencv-opengl-revisited-w-code/
    /// and use effect asset from http://ktk-kumamoto.hatenablog.com/entry/2014/09/14/092400.
    /// </summary>
    public class ARHeadVideoCaptureExample : MonoBehaviour
    {
        /// <summary>
        /// Determines if displays face points.
        /// </summary>
        public bool displayFacePoints;

        /// <summary>
        /// The display face points toggle.
        /// </summary>
        public Toggle displayFacePointsToggle;

        /// <summary>
        /// Determines if displays display axes.
        /// </summary>
        public bool displayAxes;

        /// <summary>
        /// The display axes toggle.
        /// </summary>
        public Toggle displayAxesToggle;

        /// <summary>
        /// Determines if displays head.
        /// </summary>
        public bool displayHead;

        /// <summary>
        /// The display head toggle.
        /// </summary>
        public Toggle displayHeadToggle;

        /// <summary>
        /// Determines if displays effects.
        /// </summary>
        public bool displayEffects;

        /// <summary>
        /// The display effects toggle.
        /// </summary>
        public Toggle displayEffectsToggle;

        /// <summary>
        /// The axes.
        /// </summary>
        public GameObject axes;

        /// <summary>
        /// The head.
        /// </summary>
        public GameObject head;

        /// <summary>
        /// The right eye.
        /// </summary>
        public GameObject rightEye;

        /// <summary>
        /// The left eye.
        /// </summary>
        public GameObject leftEye;

        /// <summary>
        /// The mouth.
        /// </summary>
        public GameObject mouth;

        /// <summary>
        /// The AR camera.
        /// </summary>
        public Camera ARCamera;

        /// <summary>
        /// The AR game object.
        /// </summary>
        public GameObject ARGameObject;

        /// <summary>
        /// Determines if request the AR camera moving.
        /// </summary>
        public bool shouldMoveARCamera;

        /// <summary>
        /// The mouth particle system.
        /// </summary>
        ParticleSystem[] mouthParticleSystem;

        /// <summary>
        /// The colors.
        /// </summary>
        Color32[] colors;

        /// <summary>
        /// The cameraparam matrix.
        /// </summary>
        Mat camMatrix;

        /// <summary>
        /// The distortion coeffs.
        /// </summary>
        MatOfDouble distCoeffs;

        /// <summary>
        /// The matrix that inverts the Y axis.
        /// </summary>
        Matrix4x4 invertYM;

        /// <summary>
        /// The matrix that inverts the Z axis.
        /// </summary>
        Matrix4x4 invertZM;

        /// <summary>
        /// The transformation matrix.
        /// </summary>
        Matrix4x4 transformationM = new Matrix4x4 ();

        /// <summary>
        /// The transformation matrix for AR.
        /// </summary>
        Matrix4x4 ARM;

        /// <summary>
        /// The 3d face object points.
        /// </summary>
        MatOfPoint3f objectPoints;

        /// <summary>
        /// The image points.
        /// </summary>
        MatOfPoint2f imagePoints;

        /// <summary>
        /// The rvec.
        /// </summary>
        Mat rvec;

        /// <summary>
        /// The tvec.
        /// </summary>
        Mat tvec;

        /// <summary>
        /// The rot mat.
        /// </summary>
        Mat rotMat;

        //        /// <summary>
        //        /// The video capture.
        //        /// </summary>
        //        VideoCapture capture;

        /// <summary>
        /// The rgb mat.
        /// </summary>
        Mat rgbMat;

        /// <summary>
        /// The texture.
        /// </summary>
        Texture2D texture;

        /// <summary>
        /// The face landmark detector.
        /// </summary>
        FaceLandmarkDetector faceLandmarkDetector;

        /// <summary>
        /// The sp_human_face_68_dat_filepath.
        /// </summary>
        string sp_human_face_68_dat_filepath;

        /// <summary>
        /// The dance_avi_filepath.
        /// </summary>
        string dance_avi_filepath;

        #if UNITY_WEBGL && !UNITY_EDITOR
        Stack<IEnumerator> coroutines = new Stack<IEnumerator> ();
        #endif

        // Use this for initialization
        void Start ()
        {
            displayFacePointsToggle.isOn = displayFacePoints;
            displayAxesToggle.isOn = displayAxes;
            displayHeadToggle.isOn = displayHead;
            displayEffectsToggle.isOn = displayEffects;


            #if UNITY_WEBGL && !UNITY_EDITOR
            var getFilePath_Coroutine = GetFilePath ();
            coroutines.Push (getFilePath_Coroutine);
            StartCoroutine (getFilePath_Coroutine);
            #else
            sp_human_face_68_dat_filepath = DlibFaceLandmarkDetector.Utils.getFilePath ("sp_human_face_68.dat");
            dance_avi_filepath = OpenCVForUnity.Utils.getFilePath ("dance.avi");
            Run ();
            #endif
        }

        #if UNITY_WEBGL && !UNITY_EDITOR
        private IEnumerator GetFilePath ()
        {
            var getFilePathAsync_sp_human_face_68_dat_filepath_Coroutine = DlibFaceLandmarkDetector.Utils.getFilePathAsync ("sp_human_face_68.dat", (result) => {
                sp_human_face_68_dat_filepath = result;
            });
            coroutines.Push (getFilePathAsync_sp_human_face_68_dat_filepath_Coroutine);
            yield return StartCoroutine (getFilePathAsync_sp_human_face_68_dat_filepath_Coroutine);

            var getFilePathAsync_dance_avi_filepath_Coroutine = OpenCVForUnity.Utils.getFilePathAsync ("dance.avi", (result) => {
                dance_avi_filepath = result;
            });
            coroutines.Push (getFilePathAsync_dance_avi_filepath_Coroutine);
            yield return StartCoroutine (getFilePathAsync_dance_avi_filepath_Coroutine);

            coroutines.Clear ();

            Run ();
        }
        #endif

        private void Run ()
        {
            //set 3d face object points.
            objectPoints = new MatOfPoint3f (
                new Point3 (-34, 90, 83),//l eye (Interpupillary breadth)
                new Point3 (34, 90, 83),//r eye (Interpupillary breadth)
                new Point3 (0.0, 50, 120),//nose (Nose top)
                new Point3 (-26, 15, 83),//l mouse (Mouth breadth)
                new Point3 (26, 15, 83),//r mouse (Mouth breadth)
                new Point3 (-79, 90, 0.0),//l ear (Bitragion breadth)
                new Point3 (79, 90, 0.0)//r ear (Bitragion breadth)
            );
            imagePoints = new MatOfPoint2f ();
            rotMat = new Mat (3, 3, CvType.CV_64FC1);

            faceLandmarkDetector = new FaceLandmarkDetector (sp_human_face_68_dat_filepath);

            rgbMat = new Mat ();

//            capture = new VideoCapture ();
//            capture.open (dance_avi_filepath);
//
//            if (capture.isOpened ()) {
//                Debug.Log ("capture.isOpened() true");
//            } else {
//                Debug.Log ("capture.isOpened() false");
//            }
//
//
//            Debug.Log ("CAP_PROP_FORMAT: " + capture.get (Videoio.CAP_PROP_FORMAT));
//            Debug.Log ("CV_CAP_PROP_PREVIEW_FORMAT: " + capture.get (Videoio.CV_CAP_PROP_PREVIEW_FORMAT));
//            Debug.Log ("CAP_PROP_POS_MSEC: " + capture.get (Videoio.CAP_PROP_POS_MSEC));
//            Debug.Log ("CAP_PROP_POS_FRAMES: " + capture.get (Videoio.CAP_PROP_POS_FRAMES));
//            Debug.Log ("CAP_PROP_POS_AVI_RATIO: " + capture.get (Videoio.CAP_PROP_POS_AVI_RATIO));
//            Debug.Log ("CAP_PROP_FRAME_COUNT: " + capture.get (Videoio.CAP_PROP_FRAME_COUNT));
//            Debug.Log ("CAP_PROP_FPS: " + capture.get (Videoio.CAP_PROP_FPS));
//            Debug.Log ("CAP_PROP_FRAME_WIDTH: " + capture.get (Videoio.CAP_PROP_FRAME_WIDTH));
//            Debug.Log ("CAP_PROP_FRAME_HEIGHT: " + capture.get (Videoio.CAP_PROP_FRAME_HEIGHT));

//            capture.grab ();
//            capture.retrieve (rgbMat, 0);
            Texture2D imgTexture = Resources.Load ("lena") as Texture2D;
            rgbMat = new Mat (imgTexture.height, imgTexture.width, CvType.CV_8UC4);
            OpenCVForUnity.Utils.texture2DToMat (imgTexture, rgbMat);

            int frameWidth = rgbMat.cols ();
            int frameHeight = rgbMat.rows ();
            colors = new Color32[frameWidth * frameHeight];
            texture = new Texture2D (frameWidth, frameHeight, TextureFormat.RGBA32, false);
            gameObject.transform.localScale = new Vector3 ((float)frameWidth, (float)frameHeight, 1);
//            capture.set (Videoio.CAP_PROP_POS_FRAMES, 0);

            gameObject.GetComponent<Renderer> ().material.mainTexture = texture;

            Debug.Log ("Screen.width " + Screen.width + " Screen.height " + Screen.height + " Screen.orientation " + Screen.orientation);


            float width = (float)frameWidth;
            float height = (float)frameHeight;

            float imageSizeScale = 1.0f;
            float widthScale = (float)Screen.width / width;
            float heightScale = (float)Screen.height / height;
            if (widthScale < heightScale) {
                Camera.main.orthographicSize = (width * (float)Screen.height / (float)Screen.width) / 2;
                imageSizeScale = (float)Screen.height / (float)Screen.width;
            } else {
                Camera.main.orthographicSize = height / 2;
            }


            //set cameraparam
            int max_d = (int)Mathf.Max (width, height);
            double fx = max_d;
            double fy = max_d;
            double cx = width / 2.0f;
            double cy = height / 2.0f;
            camMatrix = new Mat (3, 3, CvType.CV_64FC1);
            camMatrix.put (0, 0, fx);
            camMatrix.put (0, 1, 0);
            camMatrix.put (0, 2, cx);
            camMatrix.put (1, 0, 0);
            camMatrix.put (1, 1, fy);
            camMatrix.put (1, 2, cy);
            camMatrix.put (2, 0, 0);
            camMatrix.put (2, 1, 0);
            camMatrix.put (2, 2, 1.0f);
            Debug.Log ("camMatrix " + camMatrix.dump ());


            distCoeffs = new MatOfDouble (0, 0, 0, 0);
            Debug.Log ("distCoeffs " + distCoeffs.dump ());


            //calibration camera
            Size imageSize = new Size (width * imageSizeScale, height * imageSizeScale);
            double apertureWidth = 0;
            double apertureHeight = 0;
            double[] fovx = new double[1];
            double[] fovy = new double[1];
            double[] focalLength = new double[1];
            Point principalPoint = new Point (0, 0);
            double[] aspectratio = new double[1];

            Calib3d.calibrationMatrixValues (camMatrix, imageSize, apertureWidth, apertureHeight, fovx, fovy, focalLength, principalPoint, aspectratio);

            Debug.Log ("imageSize " + imageSize.ToString ());
            Debug.Log ("apertureWidth " + apertureWidth);
            Debug.Log ("apertureHeight " + apertureHeight);
            Debug.Log ("fovx " + fovx [0]);
            Debug.Log ("fovy " + fovy [0]);
            Debug.Log ("focalLength " + focalLength [0]);
            Debug.Log ("principalPoint " + principalPoint.ToString ());
            Debug.Log ("aspectratio " + aspectratio [0]);


            //To convert the difference of the FOV value of the OpenCV and Unity.
            double fovXScale = (2.0 * Mathf.Atan ((float)(imageSize.width / (2.0 * fx)))) / (Mathf.Atan2 ((float)cx, (float)fx) + Mathf.Atan2 ((float)(imageSize.width - cx), (float)fx));
            double fovYScale = (2.0 * Mathf.Atan ((float)(imageSize.height / (2.0 * fy)))) / (Mathf.Atan2 ((float)cy, (float)fy) + Mathf.Atan2 ((float)(imageSize.height - cy), (float)fy));

            Debug.Log ("fovXScale " + fovXScale);
            Debug.Log ("fovYScale " + fovYScale);


            //Adjust Unity Camera FOV https://github.com/opencv/opencv/commit/8ed1945ccd52501f5ab22bdec6aa1f91f1e2cfd4
            if (widthScale < heightScale) {
                ARCamera.fieldOfView = (float)(fovx [0] * fovXScale);
            } else {
                ARCamera.fieldOfView = (float)(fovy [0] * fovYScale);
            }


            invertYM = Matrix4x4.TRS (Vector3.zero, Quaternion.identity, new Vector3 (1, -1, 1));
            Debug.Log ("invertYM " + invertYM.ToString ());

            invertZM = Matrix4x4.TRS (Vector3.zero, Quaternion.identity, new Vector3 (1, 1, -1));
            Debug.Log ("invertZM " + invertZM.ToString ());


            axes.SetActive (false);
            head.SetActive (false);
            rightEye.SetActive (false);
            leftEye.SetActive (false);
            mouth.SetActive (false);

            mouthParticleSystem = mouth.GetComponentsInChildren<ParticleSystem> (true);



            OpenCVForUnityUtils.SetImage (faceLandmarkDetector, rgbMat);

            //detect face rects
            List<UnityEngine.Rect> detectResult = faceLandmarkDetector.Detect ();

            if (detectResult.Count > 0) {

                //detect landmark points
                List<Vector2> points = faceLandmarkDetector.DetectLandmark (detectResult [0]);

                if (displayFacePoints)
                    OpenCVForUnityUtils.DrawFaceLandmark (rgbMat, points, new Scalar (0, 255, 0), 2);

                imagePoints.fromArray (
                    new Point ((points [38].x + points [41].x) / 2, (points [38].y + points [41].y) / 2),//l eye (Interpupillary breadth)
                    new Point ((points [43].x + points [46].x) / 2, (points [43].y + points [46].y) / 2),//r eye (Interpupillary breadth)
                    new Point (points [30].x, points [30].y),//nose (Nose top)
                    new Point (points [48].x, points [48].y),//l mouth (Mouth breadth)
                    new Point (points [54].x, points [54].y), //r mouth (Mouth breadth)
                    new Point (points [0].x, points [0].y),//l ear (Bitragion breadth)
                    new Point (points [16].x, points [16].y)//r ear (Bitragion breadth)
                );

                // Estimate head pose.
                if (rvec == null || tvec == null) {
                    rvec = new Mat (3, 1, CvType.CV_64FC1);
                    tvec = new Mat (3, 1, CvType.CV_64FC1);
                    Calib3d.solvePnP (objectPoints, imagePoints, camMatrix, distCoeffs, rvec, tvec);
                }

                double tvec_z = tvec.get (2, 0) [0];

                if (double.IsNaN (tvec_z) || tvec_z < 0) { // if tvec is wrong data, do not use extrinsic guesses.
                    Calib3d.solvePnP (objectPoints, imagePoints, camMatrix, distCoeffs, rvec, tvec);
                } else {
                    Calib3d.solvePnP (objectPoints, imagePoints, camMatrix, distCoeffs, rvec, tvec, true, Calib3d.SOLVEPNP_ITERATIVE);
                }

                //                    Debug.Log (tvec.dump());

                if (!double.IsNaN (tvec_z)) {

                    if (Mathf.Abs ((float)(points [43].y - points [46].y)) > Mathf.Abs ((float)(points [42].x - points [45].x)) / 6.0) {
                        if (displayEffects)
                            rightEye.SetActive (true);
                    }

                    if (Mathf.Abs ((float)(points [38].y - points [41].y)) > Mathf.Abs ((float)(points [39].x - points [36].x)) / 6.0) {
                        if (displayEffects)
                            leftEye.SetActive (true);
                    }
                    if (displayHead)
                        head.SetActive (true);
                    if (displayAxes)
                        axes.SetActive (true);



                    float noseDistance = Mathf.Abs ((float)(points [27].y - points [33].y));
                    float mouseDistance = Mathf.Abs ((float)(points [62].y - points [66].y));
                    if (mouseDistance > noseDistance / 5.0) {
                        if (displayEffects) {
                            mouth.SetActive (true);
                            foreach (ParticleSystem ps in mouthParticleSystem) {
                                var em = ps.emission;
                                em.enabled = true;
                                #if UNITY_5_5_OR_NEWER
                                var main = ps.main;
                                main.startSizeMultiplier = 40 * (mouseDistance / noseDistance);
                                #else
                                ps.startSize = 40 * (mouseDistance / noseDistance);
                                #endif
                            }
                        }
                    } else {
                        if (displayEffects) {
                            foreach (ParticleSystem ps in mouthParticleSystem) {
                                var em = ps.emission;
                                em.enabled = false;
                            }
                        }
                    }

                    Calib3d.Rodrigues (rvec, rotMat);

                    transformationM.SetRow (0, new Vector4 ((float)rotMat.get (0, 0) [0], (float)rotMat.get (0, 1) [0], (float)rotMat.get (0, 2) [0], (float)tvec.get (0, 0) [0]));
                    transformationM.SetRow (1, new Vector4 ((float)rotMat.get (1, 0) [0], (float)rotMat.get (1, 1) [0], (float)rotMat.get (1, 2) [0], (float)tvec.get (1, 0) [0]));
                    transformationM.SetRow (2, new Vector4 ((float)rotMat.get (2, 0) [0], (float)rotMat.get (2, 1) [0], (float)rotMat.get (2, 2) [0], (float)tvec.get (2, 0) [0]));
                    transformationM.SetRow (3, new Vector4 (0, 0, 0, 1));

                    // right-handed coordinates system (OpenCV) to left-handed one (Unity)
                    ARM = invertYM * transformationM;

                    // Apply Z axis inverted matrix.
                    ARM = ARM * invertZM;

                    if (shouldMoveARCamera) {

                        ARM = ARGameObject.transform.localToWorldMatrix * ARM.inverse;

                        ARUtils.SetTransformFromMatrix (ARCamera.transform, ref ARM);
                    } else {

                        ARM = ARCamera.transform.localToWorldMatrix * ARM;

                        ARUtils.SetTransformFromMatrix (ARGameObject.transform, ref ARM);
                    }
                }
            } else {
                rightEye.SetActive (false);
                leftEye.SetActive (false);
                head.SetActive (false);
                mouth.SetActive (false);
                axes.SetActive (false);
            }

            Imgproc.putText (rgbMat, "W:" + rgbMat.width () + " H:" + rgbMat.height () + " SO:" + Screen.orientation, new Point (5, rgbMat.rows () - 10), Core.FONT_HERSHEY_SIMPLEX, 0.5, new Scalar (255, 255, 255), 1, Imgproc.LINE_AA, false);

            OpenCVForUnity.Utils.matToTexture2D (rgbMat, texture, colors);

        }

        //        // Update is called once per frame
        //        void Update ()
        //        {
        //            if (capture == null)
        //                return;
        //
        //            //Loop play
        //            if (capture.get (Videoio.CAP_PROP_POS_FRAMES) >= capture.get (Videoio.CAP_PROP_FRAME_COUNT))
        //                capture.set (Videoio.CAP_PROP_POS_FRAMES, 0);
        //
        //            if (capture.grab ()) {
        //
        //                capture.retrieve (rgbMat, 0);
        //
        //                Imgproc.cvtColor (rgbMat, rgbMat, Imgproc.COLOR_BGR2RGB);
        //                //Debug.Log ("Mat toString " + rgbMat.ToString ());
        //
        //
        //                OpenCVForUnityUtils.SetImage (faceLandmarkDetector, rgbMat);
        //
        //                //detect face rects
        //                List<UnityEngine.Rect> detectResult = faceLandmarkDetector.Detect ();
        //
        //                if (detectResult.Count > 0) {
        //
        //                    //detect landmark points
        //                    List<Vector2> points = faceLandmarkDetector.DetectLandmark (detectResult [0]);
        //
        //                    if (displayFacePoints)
        //                        OpenCVForUnityUtils.DrawFaceLandmark (rgbMat, points, new Scalar (0, 255, 0), 2);
        //
        //                    imagePoints.fromArray (
        //                        new Point ((points [38].x + points [41].x) / 2, (points [38].y + points [41].y) / 2),//l eye (Interpupillary breadth)
        //                        new Point ((points [43].x + points [46].x) / 2, (points [43].y + points [46].y) / 2),//r eye (Interpupillary breadth)
        //                        new Point (points [30].x, points [30].y),//nose (Nose top)
        //                        new Point (points [48].x, points [48].y),//l mouth (Mouth breadth)
        //                        new Point (points [54].x, points [54].y), //r mouth (Mouth breadth)
        //                        new Point (points [0].x, points [0].y),//l ear (Bitragion breadth)
        //                        new Point (points [16].x, points [16].y)//r ear (Bitragion breadth)
        //                    );
        //
        //                    // Estimate head pose.
        //                    if (rvec == null || tvec == null) {
        //                        rvec = new Mat (3, 1, CvType.CV_64FC1);
        //                        tvec = new Mat (3, 1, CvType.CV_64FC1);
        //                        Calib3d.solvePnP (objectPoints, imagePoints, camMatrix, distCoeffs, rvec, tvec);
        //                    }
        //
        //                    double tvec_z = tvec.get (2, 0) [0];
        //
        //                    if (double.IsNaN (tvec_z) || tvec_z < 0) { // if tvec is wrong data, do not use extrinsic guesses.
        //                        Calib3d.solvePnP (objectPoints, imagePoints, camMatrix, distCoeffs, rvec, tvec);
        //                    } else {
        //                        Calib3d.solvePnP (objectPoints, imagePoints, camMatrix, distCoeffs, rvec, tvec, true, Calib3d.SOLVEPNP_ITERATIVE);
        //                    }
        //
        ////                    Debug.Log (tvec.dump());
        //
        //                    if (!double.IsNaN (tvec_z)) {
        //
        //                        if (Mathf.Abs ((float)(points [43].y - points [46].y)) > Mathf.Abs ((float)(points [42].x - points [45].x)) / 6.0) {
        //                            if (displayEffects)
        //                                rightEye.SetActive (true);
        //                        }
        //
        //                        if (Mathf.Abs ((float)(points [38].y - points [41].y)) > Mathf.Abs ((float)(points [39].x - points [36].x)) / 6.0) {
        //                            if (displayEffects)
        //                                leftEye.SetActive (true);
        //                        }
        //                        if (displayHead)
        //                            head.SetActive (true);
        //                        if (displayAxes)
        //                            axes.SetActive (true);
        //
        //
        //
        //                        float noseDistance = Mathf.Abs ((float)(points [27].y - points [33].y));
        //                        float mouseDistance = Mathf.Abs ((float)(points [62].y - points [66].y));
        //                        if (mouseDistance > noseDistance / 5.0) {
        //                            if (displayEffects) {
        //                                mouth.SetActive (true);
        //                                foreach (ParticleSystem ps in mouthParticleSystem) {
        //                                    var em = ps.emission;
        //                                    em.enabled = true;
        //#if UNITY_5_5_OR_NEWER
        //                                    var main = ps.main;
        //                                    main.startSizeMultiplier = 40 * (mouseDistance / noseDistance);
        //#else
        //                                    ps.startSize = 40 * (mouseDistance / noseDistance);
        //#endif
        //                                }
        //                            }
        //                        } else {
        //                            if (displayEffects) {
        //                                foreach (ParticleSystem ps in mouthParticleSystem) {
        //                                    var em = ps.emission;
        //                                    em.enabled = false;
        //                                }
        //                            }
        //                        }
        //
        //                        Calib3d.Rodrigues (rvec, rotMat);
        //
        //                        transformationM.SetRow (0, new Vector4 ((float)rotMat.get (0, 0) [0], (float)rotMat.get (0, 1) [0], (float)rotMat.get (0, 2) [0], (float)tvec.get (0, 0) [0]));
        //                        transformationM.SetRow (1, new Vector4 ((float)rotMat.get (1, 0) [0], (float)rotMat.get (1, 1) [0], (float)rotMat.get (1, 2) [0], (float)tvec.get (1, 0) [0]));
        //                        transformationM.SetRow (2, new Vector4 ((float)rotMat.get (2, 0) [0], (float)rotMat.get (2, 1) [0], (float)rotMat.get (2, 2) [0], (float)tvec.get (2, 0) [0]));
        //                        transformationM.SetRow (3, new Vector4 (0, 0, 0, 1));
        //
        //                        // right-handed coordinates system (OpenCV) to left-handed one (Unity)
        //                        ARM = invertYM * transformationM;
        //
        //                        // Apply Z axis inverted matrix.
        //                        ARM = ARM * invertZM;
        //
        //                        if (shouldMoveARCamera) {
        //
        //                            ARM = ARGameObject.transform.localToWorldMatrix * ARM.inverse;
        //
        //                            ARUtils.SetTransformFromMatrix (ARCamera.transform, ref ARM);
        //                        } else {
        //
        //                            ARM = ARCamera.transform.localToWorldMatrix * ARM;
        //
        //                            ARUtils.SetTransformFromMatrix (ARGameObject.transform, ref ARM);
        //                        }
        //                    }
        //                } else {
        //                    rightEye.SetActive (false);
        //                    leftEye.SetActive (false);
        //                    head.SetActive (false);
        //                    mouth.SetActive (false);
        //                    axes.SetActive (false);
        //                }
        //
        //                Imgproc.putText (rgbMat, "W:" + rgbMat.width () + " H:" + rgbMat.height () + " SO:" + Screen.orientation, new Point (5, rgbMat.rows () - 10), Core.FONT_HERSHEY_SIMPLEX, 0.5, new Scalar (255, 255, 255), 1, Imgproc.LINE_AA, false);
        //
        //                OpenCVForUnity.Utils.matToTexture2D (rgbMat, texture, colors);
        //            }
        //        }

        /// <summary>
        /// Raises the destroy event.
        /// </summary>
        void OnDestroy ()
        {
            if (camMatrix != null)
                camMatrix.Dispose ();
            if (distCoeffs != null)
                distCoeffs.Dispose ();

            if (faceLandmarkDetector != null)
                faceLandmarkDetector.Dispose ();

            #if UNITY_WEBGL && !UNITY_EDITOR
            foreach (var coroutine in coroutines) {
                StopCoroutine (coroutine);
                ((IDisposable)coroutine).Dispose ();
            }
            #endif
        }

        /// <summary>
        /// Raises the back button click event.
        /// </summary>
        public void OnBackButtonClick ()
        {
            #if UNITY_5_3 || UNITY_5_3_OR_NEWER
            SceneManager.LoadScene ("DlibFaceLandmarkDetectorExample");
            #else
            Application.LoadLevel ("DlibFaceLandmarkDetectorExample");
            #endif
        }

        /// <summary>
        /// Raises the display face points toggle value changed event.
        /// </summary>
        public void OnDisplayFacePointsToggleValueChanged ()
        {
            if (displayFacePointsToggle.isOn) {
                displayFacePoints = true;
            } else {
                displayFacePoints = false;
            }
        }

        /// <summary>
        /// Raises the display axes toggle value changed event.
        /// </summary>
        public void OnDisplayAxesToggleValueChanged ()
        {
            if (displayAxesToggle.isOn) {
                displayAxes = true;
            } else {
                displayAxes = false;
                axes.SetActive (false);
            }
        }

        /// <summary>
        /// Raises the display head toggle value changed event.
        /// </summary>
        public void OnDisplayHeadToggleValueChanged ()
        {
            if (displayHeadToggle.isOn) {
                displayHead = true;
            } else {
                displayHead = false;
                head.SetActive (false);
            }
        }

        /// <summary>
        /// Raises the display effects toggle value changed event.
        /// </summary>
        public void OnDisplayEffectsToggleValueChanged ()
        {
            if (displayEffectsToggle.isOn) {
                displayEffects = true;
            } else {
                displayEffects = false;
                rightEye.SetActive (false);
                leftEye.SetActive (false);
                mouth.SetActive (false);
            }
        }
    }
}
    ARHeadTexture2DExample.PNG
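(Usage note: this modified example loads the "lena" texture via Resources.Load in Run (); to try your own image, swap in any Texture2D there. The texture must be marked Read/Write Enabled in its import settings so texture2DToMat can copy its pixels.)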
     
    SpiderJones likes this.
  28. SpiderJones


    Joined:
    Mar 29, 2014
    Posts:
    244
    Thanks! This worked perfectly.

    I see this variable -> float mouseDistance = Mathf.Abs ((float)(points [62].y - points [66].y));

Is that supposed to be the mouth distance? Should it be "float mouthDistance"? I'm confused as to why this is called "mouse".

    Thanks!
     
    EnoxSoftware likes this.
  29. foundway


    Joined:
    May 30, 2013
    Posts:
    14
I also have this issue. Has anyone else encountered it? We suspected Android's battery optimization, but it makes no difference even if we turn that off. We also noticed that touching the volume buttons increases the frame rate.
     
    Otto_Oliveira likes this.
  30. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Could you tell me the name of the devices where this problem occurred?
     
  31. ibompuis


    Joined:
    Sep 13, 2012
    Posts:
    100
    Hi EnoxSoftware,

Do you plan to add an OpticalFlow + KalmanFilter feature?

    thx
     
    ina likes this.
  32. ina


    Joined:
    Nov 15, 2010
    Posts:
    1,085
I feel like filtering is necessary because it is way too shaky right now.
     
  33. EnoxSoftware


    Joined:
    Oct 29, 2014
    Posts:
    1,564
    https://github.com/zhucebuliaolongc...master/face_landmark_detection_of_km_dlib.cpp
    You can try Dlib + OpticalFlow + KalmanFilter with this code.
However, I think it still needs a little more improvement.
    Code (CSharp):
using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using System;
using System.Runtime.InteropServices;
#if UNITY_5_3 || UNITY_5_3_OR_NEWER
using UnityEngine.SceneManagement;
#endif
using OpenCVForUnity;
using DlibFaceLandmarkDetector;
namespace DlibFaceLandmarkDetectorExample
{
    /// <summary>
    /// WebCamTextureToMatHelper Example
    /// </summary>
    [RequireComponent (typeof(WebCamTextureToMatHelper))]
    public class OfKmWebCamTextureToMatHelperExample : MonoBehaviour
    {
        /// <summary>
        /// The texture.
        /// </summary>
        Texture2D texture;
        /// <summary>
        /// The webcam texture to mat helper.
        /// </summary>
        WebCamTextureToMatHelper webCamTextureToMatHelper;
        /// <summary>
        /// The face landmark detector.
        /// </summary>
        FaceLandmarkDetector faceLandmarkDetector;
        /// <summary>
        /// The sp_human_face_68_dat_filepath.
        /// </summary>
        string sp_human_face_68_dat_filepath;
        #if UNITY_WEBGL && !UNITY_EDITOR
        Stack<IEnumerator> coroutines = new Stack<IEnumerator> ();
        #endif
        List<Point> kalman_points;
        List<Point> predict_points;
        // Kalman Filter Setup (68 Points Test)
        const int stateNum = 272;
        const int measureNum = 136;
        KalmanFilter KF;
        Mat state;
        Mat processNoise;
        Mat measurement;
        Mat prevgray, gray;
        List<Point> prevTrackPts;
        List<Point> nextTrackPts;
        MatOfPoint2f mOP2fPrevTrackPts;
        MatOfPoint2f mOP2fNextTrackPts;
        MatOfByte status;
        MatOfFloat err;
        // Use this for initialization
        void Start ()
        {
            webCamTextureToMatHelper = gameObject.GetComponent<WebCamTextureToMatHelper> ();
            #if UNITY_WEBGL && !UNITY_EDITOR
            var getFilePath_Coroutine = DlibFaceLandmarkDetector.Utils.getFilePathAsync ("sp_human_face_68.dat", (result) => {
                coroutines.Clear ();
                sp_human_face_68_dat_filepath = result;
                Run ();
            });
            coroutines.Push (getFilePath_Coroutine);
            StartCoroutine (getFilePath_Coroutine);
            #else
            sp_human_face_68_dat_filepath = DlibFaceLandmarkDetector.Utils.getFilePath ("sp_human_face_68.dat");
            Run ();
            #endif
        }
        private void Run ()
        {
            faceLandmarkDetector = new FaceLandmarkDetector (sp_human_face_68_dat_filepath);
            webCamTextureToMatHelper.Initialize ();
            // Initialize measurement points
            kalman_points = new List<Point> ();
            for (int i = 0; i < 68; i++) {
                kalman_points.Add (new Point (0.0, 0.0));
            }
            // Initialize prediction points
            predict_points = new List<Point> ();
            for (int i = 0; i < 68; i++) {
                predict_points.Add (new Point (0.0, 0.0));
            }
            KF = new KalmanFilter (stateNum, measureNum, 0, CvType.CV_32F);
            state = new Mat (stateNum, 1, CvType.CV_32FC1);
            processNoise = new Mat (stateNum, 1, CvType.CV_32F);
            measurement = Mat.zeros (measureNum, 1, CvType.CV_32F);
            //            Debug.Log ("measurement " + measurement.ToString ());
            // Generate a matrix randomly
            Core.randn (state, 0, 0.0);
            // Generate the Measurement Matrix
            KF.set_transitionMatrix (Mat.zeros (stateNum, stateNum, CvType.CV_32F));
            for (int i = 0; i < stateNum; i++) {
                for (int j = 0; j < stateNum; j++) {
                    if (i == j || (j - measureNum) == i) {
                        KF.get_transitionMatrix ().put (i, j, new float[]{ 1.0f });
                    } else {
                        KF.get_transitionMatrix ().put (i, j, new float[]{ 0.0f });
                    }
                }
            }
            //!< measurement matrix (H) (measurement model)
            Core.setIdentity (KF.get_measurementMatrix ());
            //!< process noise covariance matrix (Q)
            Core.setIdentity (KF.get_processNoiseCov (), Scalar.all (1e-5));
            //!< measurement noise covariance matrix (R)
            Core.setIdentity (KF.get_measurementNoiseCov (), Scalar.all (1e-1));
            //!< priori error estimate covariance matrix (P'(k)): P'(k)=A*P(k-1)*At + Q)*/  A corresponds to F: transitionMatrix
            Core.setIdentity (KF.get_errorCovPost (), Scalar.all (1));
            Core.randn (KF.get_statePost (), 0, 0.0);
            // Initialize Optical Flow
            prevgray = new Mat ();
            gray = new Mat ();
            prevTrackPts = new List<Point> ();
            nextTrackPts = new List<Point> ();
            for (int i = 0; i < 68; i++) {
                prevTrackPts.Add (new Point (0, 0));
            }
            //            for (int i = 0; i < 68; i++) {
            //                nextTrackPts.Add (new Point (0, 0));
            //            }
            mOP2fPrevTrackPts = new MatOfPoint2f ();
            mOP2fNextTrackPts = new MatOfPoint2f ();
            status = new MatOfByte ();
            err = new MatOfFloat ();
        }
        /// <summary>
        /// Raises the web cam texture to mat helper initialized event.
        /// </summary>
        public void OnWebCamTextureToMatHelperInitialized ()
        {
            Debug.Log ("OnWebCamTextureToMatHelperInitialized");
            Mat webCamTextureMat = webCamTextureToMatHelper.GetMat ();
            texture = new Texture2D (webCamTextureMat.cols (), webCamTextureMat.rows (), TextureFormat.RGBA32, false);
            gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
            gameObject.transform.localScale = new Vector3 (webCamTextureMat.cols (), webCamTextureMat.rows (), 1);
            Debug.Log ("Screen.width " + Screen.width + " Screen.height " + Screen.height + " Screen.orientation " + Screen.orientation);

            float width = webCamTextureMat.width ();
            float height = webCamTextureMat.height ();

            float widthScale = (float)Screen.width / width;
            float heightScale = (float)Screen.height / height;
            if (widthScale < heightScale) {
                Camera.main.orthographicSize = (width * (float)Screen.height / (float)Screen.width) / 2;
            } else {
                Camera.main.orthographicSize = height / 2;
            }
        }
        /// <summary>
        /// Raises the web cam texture to mat helper disposed event.
        /// </summary>
        public void OnWebCamTextureToMatHelperDisposed ()
        {
            Debug.Log ("OnWebCamTextureToMatHelperDisposed");
        }
        /// <summary>
        /// Raises the web cam texture to mat helper error occurred event.
        /// </summary>
        /// <param name="errorCode">Error code.</param>
        public void OnWebCamTextureToMatHelperErrorOccurred (WebCamTextureToMatHelper.ErrorCode errorCode)
        {
            Debug.Log ("OnWebCamTextureToMatHelperErrorOccurred " + errorCode);
        }
        // Update is called once per frame
        void Update ()
        {
            if (webCamTextureToMatHelper.IsPlaying () && webCamTextureToMatHelper.DidUpdateThisFrame ()) {
                Mat rgbaMat = webCamTextureToMatHelper.GetMat ();
                OpenCVForUnityUtils.SetImage (faceLandmarkDetector, rgbaMat);
                //detect face rects
                List<UnityEngine.Rect> detectResult = faceLandmarkDetector.Detect ();
                List<Vector2> points = null;
                if (detectResult.Count == 1) {
                    //detect landmark points
                    points = faceLandmarkDetector.DetectLandmark (detectResult [0]);
                }
                if (prevgray.total () == 0) {
                    Debug.Log ("prevgray:" + prevgray.ToString ());
                    Imgproc.cvtColor (rgbaMat, prevgray, Imgproc.COLOR_RGBA2GRAY);
                    if (points != null) {
                        for (int i = 0; i < points.Count; i++) {
                            prevTrackPts [i].x = points [i].x;
                            prevTrackPts [i].y = points [i].y;
                        }
                    }
                }
                // Update Kalman Filter Points
                if (points != null) {
                    for (int i = 0; i < points.Count; i++) {
                        kalman_points [i].x = points [i].x;
                        kalman_points [i].y = points [i].y;
                    }
                }
                // Kalman Prediction
                Mat prediction = KF.predict ();
                // std::vector<cv::Point2f> predict_points;
                //                Debug.Log ("prediction " + prediction.ToString ());
                float[] tmpPrediction = new float[prediction.total ()];
                prediction.get (0, 0, tmpPrediction);
                for (int i = 0; i < 68; i++) {
                    predict_points [i].x = tmpPrediction [i * 2];
                    predict_points [i].y = tmpPrediction [i * 2 + 1];
                }
                //                for (int i = 0; i < 68; i++) {
                //                    predict_points [i].x = (float)prediction.get (i * 2, 0) [0];
                //                    predict_points [i].y = (float)prediction.get (i * 2 + 1, 0) [0];
    209.                 //                }
    210.                 prediction.Dispose ();
    211.                 if (points != null) {
    212.                     Imgproc.cvtColor (rgbaMat, gray, Imgproc.COLOR_RGBA2GRAY);
    213.                     if (prevgray.total () > 0) {
    214.                         mOP2fPrevTrackPts.fromList (prevTrackPts);
    215.                         mOP2fNextTrackPts.fromList (nextTrackPts);
    216.                         Video.calcOpticalFlowPyrLK (prevgray, gray, mOP2fPrevTrackPts, mOP2fNextTrackPts, status, err);
    217.                         prevTrackPts = mOP2fPrevTrackPts.toList ();
    218.                         nextTrackPts = mOP2fNextTrackPts.toList ();
    219.                         // if the face is moving fast, use the dlib detection result directly
    220.                         double diff = calDistanceDiff (prevTrackPts, nextTrackPts);
    221.                         Debug.Log ("variance:" + diff);
    222.                         if (diff > 1.0) {
    223.                             Debug.Log ("DLIB");
    224.                             for (int i = 0; i < points.Count; i++) {
    225.                                 Imgproc.circle (rgbaMat, new Point (points [i].x, points [i].y), 2, new Scalar (255, 0, 0, 255), -1);
    226.                                 nextTrackPts [i].x = points [i].x;
    227.                                 nextTrackPts [i].y = points [i].y;
    228.                             }
    229.                         } else if (diff <= 1.0 && diff > 0.005) {
    230.                             // In this case, use Optical Flow
    231.                             Debug.Log ("Optical Flow");
    232.                             for (int i = 0; i < nextTrackPts.Count; i++) {
    233.                                 Imgproc.circle (rgbaMat, nextTrackPts [i], 2, new Scalar (0, 0, 255, 255), -1);
    234.                             }
    235.                         } else {
    236.                             // In this case, use Kalman Filter
    237.                             Debug.Log ("Kalman Filter");
    238.                             for (int i = 0; i < predict_points.Count; i++) {
    239.                                 Imgproc.circle (rgbaMat, predict_points [i], 2, new Scalar (0, 255, 0, 255), -1);
    240.                                 nextTrackPts [i].x = predict_points [i].x;
    241.                                 nextTrackPts [i].y = predict_points [i].y;
    242.                             }
    243.                         }
    244.                     }
    245.                     //                    std::swap(prevTrackPts, nextTrackPts);
    246.                     Swap (ref prevTrackPts, ref nextTrackPts);
    247.                     //                    std::swap(prevgray, gray);
    248.                     Swap (ref prevgray, ref gray);
    249.                 }
    250.                 // Update Measurement
    251.                 float[] tmpMeasurement = new float[measurement.total ()];
    252.                 for (int i = 0; i < 136; i++) {
    253.                     if (i % 2 == 0) {
    254.                         tmpMeasurement [i] = (float)kalman_points [i / 2].x;
    255.                     } else {
    256.                         tmpMeasurement [i] = (float)kalman_points [(i - 1) / 2].y;
    257.                     }
    258.                 }
    259.                 measurement.put (0, 0, tmpMeasurement);
    260.                 //                for (int i = 0; i < 136; i++) {
    261.                 //                    if (i % 2 == 0) {
    262.                 ////                        Debug.Log ("measurement " + measurement.ToString ());
    263.                 //                        measurement.put (i, 0, new float[]{ (float)(kalman_points [i / 2].x) });
    264.                 //                    } else {
    265.                 //                        measurement.put (i, 0, new float[]{ (float)(kalman_points [(i - 1) / 2].y) });
    266.                 //                    }
    267.                 //                }
    268.                 // Update the Measurement Matrix
    269.                 measurement += KF.get_measurementMatrix () * state;
    270.                 KF.correct (measurement);
    271.              
    272. //                foreach (var rect in detectResult) {
    273. //
    274. //                    //detect landmark points
    275. //                    List<Vector2> points = faceLandmarkDetector.DetectLandmark (rect);
    276. //
    277. //                    //draw landmark points
    278. //                    OpenCVForUnityUtils.DrawFaceLandmark (rgbaMat, points, new Scalar (0, 255, 0, 255), 2);
    279. //
    280. //                    //draw face rect
    281. //                    OpenCVForUnityUtils.DrawFaceRect (rgbaMat, rect, new Scalar (255, 0, 0, 255), 2);
    282. //                }
    283.                 Imgproc.putText (rgbaMat, "W:" + rgbaMat.width () + " H:" + rgbaMat.height () + " SO:" + Screen.orientation, new Point (5, rgbaMat.rows () - 10), Core.FONT_HERSHEY_SIMPLEX, 0.5, new Scalar (255, 255, 255, 255), 1, Imgproc.LINE_AA, false);
    284.                 OpenCVForUnity.Utils.matToTexture2D (rgbaMat, texture, webCamTextureToMatHelper.GetBufferColors ());
    285.             }
    286.         }
    287.         /// <summary>
    288.         /// Raises the destroy event.
    289.         /// </summary>
    290.         void OnDestroy ()
    291.         {
    292.             if (webCamTextureToMatHelper != null)
    293.                 webCamTextureToMatHelper.Dispose ();
    294.             if (faceLandmarkDetector != null)
    295.                 faceLandmarkDetector.Dispose ();
    296.             if (KF != null)
    297.                 KF.Dispose ();
    298.             if (state != null)
    299.                 state.Dispose ();
    300.             if (processNoise != null)
    301.                 processNoise.Dispose ();
    302.             if (measurement != null)
    303.                 measurement.Dispose ();
    304.             if (prevgray != null)
    305.                 prevgray.Dispose ();
    306.             if (gray != null)
    307.                 gray.Dispose ();
    308.             if (mOP2fPrevTrackPts != null)
    309.                 mOP2fPrevTrackPts.Dispose ();
    310.             if (mOP2fNextTrackPts != null)
    311.                 mOP2fNextTrackPts.Dispose ();
    312.             if (status != null)
    313.                 status.Dispose ();
    314.             if (err != null)
    315.                 err.Dispose ();
    316.          
    317.             #if UNITY_WEBGL && !UNITY_EDITOR
    318.             foreach (var coroutine in coroutines) {
    319.                 StopCoroutine (coroutine);
    320.                 ((IDisposable)coroutine).Dispose ();
    321.             }
    322.             #endif
    323.         }
    324.         /// <summary>
    325.         /// Raises the back button click event.
    326.         /// </summary>
    327.         public void OnBackButtonClick ()
    328.         {
    329.             #if UNITY_5_3 || UNITY_5_3_OR_NEWER
    330.             SceneManager.LoadScene ("DlibFaceLandmarkDetectorExample");
    331.             #else
    332.             Application.LoadLevel ("DlibFaceLandmarkDetectorExample");
    333.             #endif
    334.         }
    335.         /// <summary>
    336.         /// Raises the play button click event.
    337.         /// </summary>
    338.         public void OnPlayButtonClick ()
    339.         {
    340.             webCamTextureToMatHelper.Play ();
    341.         }
    342.         /// <summary>
    343.         /// Raises the pause button click event.
    344.         /// </summary>
    345.         public void OnPauseButtonClick ()
    346.         {
    347.             webCamTextureToMatHelper.Pause ();
    348.         }
    349.         /// <summary>
    350.         /// Raises the stop button click event.
    351.         /// </summary>
    352.         public void OnStopButtonClick ()
    353.         {
    354.             webCamTextureToMatHelper.Stop ();
    355.         }
    356.         /// <summary>
    357.         /// Raises the change camera button click event.
    358.         /// </summary>
    359.         public void OnChangeCameraButtonClick ()
    360.         {
    361.             webCamTextureToMatHelper.Initialize (null, webCamTextureToMatHelper.requestedWidth, webCamTextureToMatHelper.requestedHeight, !webCamTextureToMatHelper.requestedIsFrontFacing);
    362.         }
    363.         // Calculates the variance of the per-point displacement between the two landmark sets
    364.         double calDistanceDiff (List<Point> curPoints, List<Point> lastPoints)
    365.         {
    366.             double variance = 0.0;
    367.             double sum = 0.0;
    368.             List<double> diffs = new List<double> ();
    369.             if (curPoints.Count == lastPoints.Count) {
    370.                 for (int i = 0; i < curPoints.Count; i++) {
    371.                     double diff = Math.Sqrt (Math.Pow (curPoints [i].x - lastPoints [i].x, 2.0) + Math.Pow (curPoints [i].y - lastPoints [i].y, 2.0));
    372.                     sum += diff;
    373.                     diffs.Add (diff);
    374.                 }
    375.                 double mean = sum / diffs.Count;
    376.                 for (int i = 0; i < curPoints.Count; i++) {
    377.                     variance += Math.Pow (diffs [i] - mean, 2);
    378.                 }
    379.                 return variance / diffs.Count;
    380.             }
    381.             return variance;
    382.         }
    383.         static void Swap<T> (ref T a, ref T b)
    384.         {
    385.             var t = a;
    386.             a = b;
    387.             b = t;
    388.         }
    389.     }
    390. }
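    In short, the script switches between three point sources every frame, based on the variance returned by calDistanceDiff: above 1.0 it uses the raw dlib landmarks (drawn red), between 0.005 and 1.0 it uses the optical-flow points (blue), and at or below 0.005 it uses the Kalman-filter predictions (green).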
     
  34. ibompuis

    ibompuis

    Joined:
    Sep 13, 2012
    Posts:
    100
    Thanks. I tried to integrate it into your WebCamTextureExample and WebCamTextureFaceMaskExample, but I get this error:

    NullReferenceException: Object reference not set to an instance of an object
    FaceMaskExample.WebCamTextureFaceMaskExample.OnWebCamTextureToMatHelperInitialized () (at Assets/FaceMaskExample/WebCamTextureFaceMaskExample/WebCamTextureFaceMaskExample.cs:221)
    UnityEngine.Events.InvokableCall.Invoke () (at /Users/builduser/buildslave/unity/build/Runtime/Export/UnityEvent.cs:165)
    UnityEngine.Events.UnityEvent.Invoke () (at /Users/builduser/buildslave/unity/build/Runtime/Export/UnityEvent_0.cs:58)
    FaceMaskExample.WebCamTextureToMatHelper+<_Initialize>c__Iterator0.MoveNext () (at Assets/FaceMaskExample/Scripts/Utils/WebCamTextureToMatHelper.cs:294)
    UnityEngine.SetupCoroutine.InvokeMoveNext (IEnumerator enumerator, IntPtr returnValueAddress) (at /Users/builduser/buildslave/unity/build/Runtime/Export/Coroutines.cs:17)

    and

    ArgumentNullException: Argument cannot be null.
    Parameter name: texture2D
    OpenCVForUnity.Utils.matToTexture2D (OpenCVForUnity.Mat mat, UnityEngine.Texture2D texture2D, UnityEngine.Color32[] bufferColors) (at Assets/OpenCVForUnity/org/opencv/unity/Utils.cs:293)
    FaceMaskExample.OfKmWebCamTextureToMatHelperExample.Update () (at Assets/DlibFaceLandmarkDetector/Scripts/OfKmWebCamTextureToMatHelperExample.cs:285)

    I just added the script to a Quad GameObject.

    Thanks,
    Ilan
     
  35. Antidoto

    Antidoto

    Joined:
    Sep 12, 2013
    Posts:
    11
    Hi EnoxSoftware,
    I need to develop a simple makeup application that changes the lipstick color and the brush color between the eyes and eyebrows. What is the best way to achieve this: painting over the webcam image, or tracking a PNG over those areas? How can I get a stretched rectangle, like the mask, over these areas?

    Thank you for helping us!
     
  36. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    You need to add the OpticalFlow and KalmanFilter code from the example above into WebCamTextureFaceMaskExample.cs; a rough sketch of the hook point follows below.
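    A minimal sketch of the idea, not a drop-in patch: the field names below mirror the stabilization example posted above, and StabilizePoints is a hypothetical helper you would write around that code.
    Code (CSharp):
    1. // Inside WebCamTextureFaceMaskExample's Update (), after the raw dlib detection.
    2. // StabilizePoints is a hypothetical helper wrapping the KF.predict ()/KF.correct ()
    3. // and calcOpticalFlowPyrLK logic from the example above; it returns the dlib,
    4. // optical-flow, or Kalman points depending on calDistanceDiff ().
    5. List<Vector2> rawPoints = faceLandmarkDetector.DetectLandmark (detectResult [0]);
    6. List<Vector2> points = StabilizePoints (rawPoints, rgbaMat);
    7. // Hand the stabilized points to the mask drawing code instead of the raw ones:
    8. OpenCVForUnityUtils.DrawFaceLandmark (rgbaMat, points, new Scalar (0, 255, 0, 255), 2);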
     
  37. aRayon

    aRayon

    Joined:
    Oct 27, 2016
    Posts:
    4
    Hello!

    First of all, thanks for this great product, it's amazing!

    After several months of use and learning, I have some questions that I would like to resolve...

    For my project I need to detect facial expressions of a face in front of the webcam, always looking at the camera with a predefined distance.

    Since it will be used on mobile, I want to get the maximum performance. This is what I am doing so far:

    - I'm cropping the webcam matrix in order to only work with a reduced area.
    - I'm working in grayscale.
    - I reduce the webcam resolution before detecting the ROI of the face.
    - I run face detection (the object detector) only every X frames, or even only on the first frame (and if the face moves a little, I re-derive the ROI by following the position of the nose from the previous frame).
    - After all these steps, I apply the shape prediction with Dlib.

    I would like to try more optimizations, and I have read that in the C++ library you can do something like "scanner.set_max_pyramid_levels", which can improve performance by excluding certain face sizes, which I would not mind. Can this be done from Unity?

    On the other hand, I have some problems with detection of the mouth when it opens wide and/or when the user has a beard. I suppose that to improve this I should create my own dataset with more faces of this type. Is training a model easy to do? Can it be done with OpenCV + Dlib + Unity? For this case, which would be better: creating an object detector or a shape predictor?

    I would not mind if, before using the shape predictor, the user had to train their own model with something similar to your face recognition example. Could this let me avoid creating my own datasets and still improve recognition of this type of face?

    Thank you so much for everything!
     
  38. SpiderJones

    SpiderJones

    Joined:
    Mar 29, 2014
    Posts:
    244
    I'm using the ARHeadVideoCaptureExample.cs you created, the version that uses a static PNG. Like I said, it works perfectly. Do you have an example where the OpenCVCamera's Projection is Orthographic and not Perspective? I've tried setting it to Orthographic and then setting its Size to the same value as the Main Camera's; this gets the head model close, but its position and rotation are a little off. Do you have any tips? Thanks.
     
  39. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    I think the optimizations you added are effective for improving performance.

    DlibFaceLandmarkDetector has no such function. However, OpenCV's detectMultiScale method can set minSize, which has a similar effect.
    https://github.com/EnoxSoftware/Dli...ationExample/FrameOptimizationExample.cs#L237

    New model files can be trained using dlib's tools.
    Please refer to this page.
    http://stackoverflow.com/questions/36711905/dlib-train-shape-predictor-ex-cpp?rq=1
    http://stackoverflow.com/questions/...ape-predictor-for-194-landmarks-helen-dataset
    Also, it is recommended to use OpenCV's detectMultiScale method, which is faster for face detection; a rough sketch follows below.
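    A minimal sketch of limiting the minimum detected face size with minSize; the cascade file name, the grayscale frame grayMat, and the 1/4-frame-height threshold are illustrative assumptions, not values from the linked example.
    Code (CSharp):
    1. // Load a Haar cascade (bundled with OpenCV for Unity's examples) and detect
    2. // only faces at least 1/4 of the frame height; grayMat is the gray camera frame.
    3. CascadeClassifier cascade = new CascadeClassifier (Utils.getFilePath ("haarcascade_frontalface_alt.xml"));
    4. MatOfRect faces = new MatOfRect ();
    5. cascade.detectMultiScale (grayMat, faces, 1.1, 2, Objdetect.CASCADE_SCALE_IMAGE,
    6.     new Size (grayMat.rows () / 4, grayMat.rows () / 4), // minSize: smaller faces are skipped
    7.     new Size ());                                        // maxSize: no upper limit
    8. OpenCVForUnity.Rect[] rects = faces.toArray ();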
     
  40. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Unfortunately, I do not have an Orthographic Projection Example.
    I think you probably need to change camMatrix.
    https://github.com/EnoxSoftware/Dli...xample/ARHeadVideoCaptureExample.cs#L309-L325
     
  41. SpiderJones

    SpiderJones

    Joined:
    Mar 29, 2014
    Posts:
    244
  42. LoloTilak

    LoloTilak

    Joined:
    Jan 30, 2018
    Posts:
    3
  43. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    There are currently no plans to add the CNN-based face detector.
    An example of face detection using the DNN module is included in OpenCV for Unity.
    https://github.com/EnoxSoftware/Ope...CaffeExample/ResnetSSDFaceDetectionExample.cs
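    For reference, a minimal sketch of that DNN approach; the file names and mean values are assumptions based on the common res10 SSD setup, so check the linked example for the exact ones.
    Code (CSharp):
    1. // Load the Caffe model (files assumed to be resolvable via Utils.getFilePath).
    2. Net net = Dnn.readNetFromCaffe (Utils.getFilePath ("deploy.prototxt"), Utils.getFilePath ("res10_300x300_ssd_iter_140000.caffemodel"));
    3. // Preprocess a BGR frame (bgrMat) into a 300x300 blob with the model's mean subtracted.
    4. Mat blob = Dnn.blobFromImage (bgrMat, 1.0, new Size (300, 300), new Scalar (104, 177, 123), false, false);
    5. net.setInput (blob);
    6. Mat detections = net.forward (); // each detection row: (imageId, classId, confidence, x1, y1, x2, y2), coords normalized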

    DlibFaceLandmarkDetector supports the 5-point face landmarking model; loading it is sketched below.
    https://github.com/davisking/dlib-models/blob/master/shape_predictor_5_face_landmarks.dat.bz2
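    Loading the 5-point model is the same as loading the 68-point one; only the .dat path changes. A minimal sketch, assuming the .dat file has been extracted into StreamingAssets and Utils.getFilePath resolves it as in the bundled examples:
    Code (CSharp):
    1. // Same API as the 68-point examples; only the model file differs.
    2. FaceLandmarkDetector detector = new FaceLandmarkDetector (Utils.getFilePath ("shape_predictor_5_face_landmarks.dat"));
    3. OpenCVForUnityUtils.SetImage (detector, rgbaMat); // rgbaMat: the camera frame, as in the example above
    4. foreach (var rect in detector.Detect ()) {
    5.     // returns 5 points (the eye corners and the bottom of the nose) instead of 68
    6.     List<Vector2> points = detector.DetectLandmark (rect);
    7. }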
     
  44. VHMOliveira

    VHMOliveira

    Joined:
    May 19, 2017
    Posts:
    3
    Hello everyone. I am new to OpenCV and would like to know how I could get the positions of the face landmark points during RealTime FaceRecognition, and which training model would give the best precision while maintaining speed.
     
  45. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    So what you want to do is combine RealTimeFaceRecognitionExample and DlibFaceDetector?
     
  46. VHMOliveira

    VHMOliveira

    Joined:
    May 19, 2017
    Posts:
    3
    Yes, I want to combine the two.
     
  47. nandonandito

    nandonandito

    Joined:
    Nov 24, 2016
    Posts:
    43
    Hi @EnoxSoftware, how can I change the face mask using my own image? Is there any tutorial or complete documentation for that? Thank you.
     
  48. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    To add a new face, you need to add the face information you want to add to "Assets/FaceMaskExample/Scripts/Data/ExampleDataSet.cs". If you add a face that cannot be automatically detected with a face detector, such as a panda or an animated face, you need to manually add the "faceRects" and "landmarkPoints" data. You also need to add the image of the face you want to add to the "Resource" folder.
     
  49. nandonandito

    nandonandito

    Joined:
    Nov 24, 2016
    Posts:
    43
    Where can I get the landmark points? I'm using https://naturalintelligence.github.io/imglab/ to get the landmark points, but I get an error like this:

    "ArgumentException: Invalid face landmark points Parameter name: points FaceMaskExample.WebCamTextureFaceMaskAdditionalExample.AddForeheadPoints (System.Collections.Generic.List`1 points) (at Assets/FaceMaskExample/WebCamTextureFaceMaskAdditionalExample/WebCamTextureFaceMaskAdditionalExample.cs:829)"

    Also, my mask is not a human face but an animal one, and that is when the error occurs.
     
    Last edited: Apr 9, 2018
  50. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Face image addition procedure:
    1. Add a new face image ("new_face.png") to the "Resource" folder.
    2. Add the face information to "FaceMaskExample/Scripts/Data/ExampleDataSet.cs".
      • Add "new_face" to the filenames array.
      • Add the face Rect information (new Rect(x, y, w, h)) to the faceRects array.
      • Add the face landmark information (new Vector2(x, y)) to the landmarkPoints List. Please refer to the following image for the order of the face points.
    dlibfacepoint.png
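    As a rough illustration, the additions might look like this (a hypothetical sketch: the exact field names in ExampleDataSet.cs may differ by version, and every coordinate below is a placeholder):
    Code (CSharp):
    1. // Hypothetical sketch of the data to add; all values are placeholders.
    2. string[] filenames = new string[] { "new_face" };
    3. UnityEngine.Rect[] faceRects = new UnityEngine.Rect[] {
    4.     new UnityEngine.Rect (120f, 80f, 260f, 260f) // face bounding box in image pixels
    5. };
    6. List<Vector2> landmarkPoints = new List<Vector2> () {
    7.     new Vector2 (140f, 180f), // point 0 (start of the jaw line)
    8.     // ...67 more points, in the order shown in the image above
    9. };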