
[RELEASED] Dlib FaceLandmark Detector

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Jun 4, 2016.

  1. elhongo

    elhongo

    Joined:
    Aug 13, 2015
    Posts:
    35
    Hi,

    I am using Unity's Universal Render Pipeline in Unity 2019.3.0f1, and I am unable to see the ARHeadWebCamTextureExample camera video when running in the Editor or on Android.

    Is Dlib compatible with URP? Is there something I can do to make this example work?
     
  2. RyanNguyen

    RyanNguyen

    Joined:
    Jul 20, 2016
    Posts:
    8
    Hi!
    How can I save the result of the training?
    What do I need to do to train the model to be more accurate?
    How do I create a *.dat file like "sp_human_face_68.dat"?
    Thanks.
     
  3. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    For details of how to train custom model files, please refer to "DlibFaceLandmarkDetector/DlibFaceLandmarkDetectorTrainingDataset.txt".
     
  4. bihi10

    bihi10

    Joined:
    Jul 5, 2018
    Posts:
    5
    I moved the StreamingAssets folder, but I am still getting an "Invalid face landmark points" error.
     
  5. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    Could you tell me which test environment you tried?
    DlibFaceLandmarkDetector version :
    Unity version :
     
  6. bihi10

    bihi10

    Joined:
    Jul 5, 2018
    Posts:
    5
    Thank you, it's solved.
    I am looking for a way to add effects to face parts, such as glasses and so on.
    Do you have any ideas?
     
  7. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    I think it is possible by replacing the ARHead GameObject in ARHeadWebCamTextureExample with a 3D glasses object.
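    As a rough illustration of that idea (a sketch only, not code from the asset; the "arHead" and "glassesPrefab" references are hypothetical and would be assigned in the Inspector), a glasses model could also be parented under the transform that the example drives, so it inherits the head pose:

    using UnityEngine;

    // Hypothetical helper, not part of DlibFaceLandmarkDetector: parent a glasses model
    // under the AR head transform so it follows the estimated head pose.
    public class AttachGlasses : MonoBehaviour
    {
        public Transform arHead;         // the ARHead object that the example moves
        public GameObject glassesPrefab; // your own 3D glasses model

        void Start()
        {
            GameObject glasses = Instantiate(glassesPrefab, arHead);
            glasses.transform.localPosition = Vector3.zero;          // tweak offsets to fit the model
            glasses.transform.localRotation = Quaternion.identity;
        }
    }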
     
  8. bihi10

    bihi10

    Joined:
    Jul 5, 2018
    Posts:
    5
    Thank you so much
    Now I have been stuck for 4 days trying to take a screenshot of the game scene, because it only captures the UI or my face.
     
    Last edited: Jan 5, 2020
  9. wcchoe

    wcchoe

    Joined:
    Nov 22, 2013
    Posts:
    5
    Hello, thank you for making a great tool. Unfortunately, I could not load the face data (StreamingAssets/sp_human_face_68.dat).
    This happens when there is a non-English folder in the path.
    e.g. D:/한글폴더/Assets/StreamingAssets/sp_human_face_68.dat
    Please help me. I'm on the verge of jumping into the river.
     
  10. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    You can set a file path relative to the project folder instead:
    // Instead of resolving the path with Utils.getFilePath, point directly at the file
    // (a relative path like this typically only resolves in the Editor, where the working directory is the project folder):
    //dlibShapePredictorFilePath = Utils.getFilePath (dlibShapePredictorFileName);
    dlibShapePredictorFilePath = "./Assets/StreamingAssets/sp_human_face_68.dat";

    https://github.com/EnoxSoftware/DlibFaceLandmarkDetector/issues/8
     
  11. PabloLaMont_1

    PabloLaMont_1

    Joined:
    May 17, 2015
    Posts:
    3
    Hi! I'm using the ARHeadWebCamTextureExample and I have a question: could I make the example run on multiple faces? I was looking at the part where the points are read, and you pass detectionResults[0], meaning you use only the first face. I tried changing this to a loop, but it didn't work (mainly because of the local variables further down in the code). The face landmark detection returns the number of faces correctly; it's just the 3D mask object that isn't working. Thanks!
    upload_2020-2-19_15-46-28.png
     
  12. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    Could you try the attached example scene?
    It is a very simple example of how to support multiple AR heads.
     

    Attached Files:

  13. PabloLaMont_1

    PabloLaMont_1

    Joined:
    May 17, 2015
    Posts:
    3

    :eek:! I didn't see that one in the packages, thanks a lot!
     
  14. luisanton

    luisanton

    Joined:
    Aug 25, 2009
    Posts:
    325
    I just upgraded to version 1.3.0 and the plugin throws an error when loading the .dat file on iOS, but it works on Android. I tried with the WebCamTextureExample and I get the same error:

    Failed to load (removed path, ignore)/Assets/StreamingAssets/sp_human_face_68.dat
    UnityEngine.Debug:LogError(Object)
    DlibFaceLandmarkDetector.FaceLandmarkDetector:.ctor(String) (at Assets/DlibFaceLandmarkDetector/Scripts/FaceLandmarkDetector.cs:66)
    DlibFaceLandmarkDetectorExample.WebCamTextureExample:Run() (at Assets/DlibFaceLandmarkDetector/Examples/WebCamTextureExample/WebCamTextureExample.cs:170)
    DlibFaceLandmarkDetectorExample.WebCamTextureExample:Start() (at Assets/DlibFaceLandmarkDetector/Examples/WebCamTextureExample/WebCamTextureExample.cs:159)

    Am I missing something? I'll check the update notes just in case...
     
  15. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    Please move the “DlibFaceLandmarkDetector/StreamingAssets/” folder to the “Assets/” folder.
     
  16. luisanton

    luisanton

    Joined:
    Aug 25, 2009
    Posts:
    325
    No, sorry, I do have my StreamingAssets folder in the right place, and as I said, it works on PC and Android. It's on iOS where the model is not being properly loaded...
     
  17. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    Thank you very much for reporting.
    Could you tell me the environment you tried?
    Unity version :
    Xcode version :
    iOS version :
     
  18. luisanton

    luisanton

    Joined:
    Aug 25, 2009
    Posts:
    325
    I'm not even compiling for iOS yet. It's not working on a Mac when the Build Settings target is switched to iOS in Unity 2019.2.15, but it works when it is set to PC/Mac.
     
  19. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    I successfully built with Unity 2019.2.15f1 and Xcode 11.3.1. It also ran without problems on an iPhone 8 device.
    Which version of Xcode did you use?
    ios_cloudbuild.PNG
     
  20. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    I think the attachments will help you.
    ARFaceWithFaceMaskExample.PNG
     

    Attached Files:

  21. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    Thank you very much for reporting.
    Could you try the ARHeadWithFaceMaskExample2.3.8 for OpenCVForUnity 2.3.8?
    1. Set up FaceMaskExample1.0.8.
    2. Import ARHeadWithFaceMaskExample2.3.8.unitypackage.
     

    Attached Files:

  22. Pixel2015

    Pixel2015

    Joined:
    Mar 10, 2016
    Posts:
    35
    Hi, just a question that was probably asked already and I overlooked it: is it possible to implement gaze tracking as well?
     
  23. idunnuhow

    idunnuhow

    Joined:
    Apr 16, 2014
    Posts:
    23
    @EnoxSoftware

    Title : 【Use Only One Camera For ARHead And FaceMask】

    Hi EnoxSoftware,

    Is there any way to use only one camera for the AR head and face mask effects? The reason is that I'm using the camera's render texture to snapshot the image, so currently I can only snapshot the face mask effect without the AR head effect.

    I know that the current AR head algorithm only supports a perspective camera, to achieve the "bigger when near, smaller when far" effect. Can you advise on any way to use only one camera?

    Thanks for the support.
     
  24. idunnuhow

    idunnuhow

    Joined:
    Apr 16, 2014
    Posts:
    23
    Does anyone know what's going on?
    When using the laptop camera, the face mask runs out of position. (1.PNG)
    When using the Logitech webcam, the face mask works well. (2.PNG)

    It is the same version of the application.
     

    Attached Files:

    • 1.PNG
    • 2.PNG
  25. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    I don't have an example of gaze tracking.
    I think the following examples will help you.
    https://github.com/yuxiang-gao/gaze_tracker
     
  26. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    I recommend the Texture2D.ReadPixels method.
    You can use Texture2D.ReadPixels to capture the image that multiple cameras have rendered to the screen.
    https://docs.unity3d.com/ScriptReference/Texture2D.ReadPixels.html
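    A minimal sketch of that approach (generic Unity code, not taken from the asset; the class and file names are placeholders): read the screen back after WaitForEndOfFrame, so every camera, including the AR head camera, has already rendered.

    using System.Collections;
    using System.IO;
    using UnityEngine;

    public class ScreenshotGrabber : MonoBehaviour
    {
        // Call StartCoroutine(Capture()) from a button or key press.
        public IEnumerator Capture()
        {
            // Wait until all cameras and the UI have rendered this frame.
            yield return new WaitForEndOfFrame();

            Texture2D tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
            tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0); // copy the composited screen
            tex.Apply();

            File.WriteAllBytes(Path.Combine(Application.persistentDataPath, "capture.png"), tex.EncodeToPNG());
            Destroy(tex);
        }
    }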
     
  27. idunnuhow

    idunnuhow

    Joined:
    Apr 16, 2014
    Posts:
    23
    Thanks!!! Do you know why the face mask drifts off the face when using the laptop camera? Logically, the face mask vertex positions are based on the landmark points. I opened debug mode and saw that the landmark points are correct, but the 68 face mask vertices are not at the correct points.

    I have double-checked that the mesh overlay position is correct; that's why the face mask is at the correct points when using the webcam, but it's not working with the laptop camera.
     
  28. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    Thank you very much for reporting.
    Could you tell me the width and height values of the video image that you can get from the camera?
    laptop camera
    width :
    height :
    logitech webcam
    width :
    height :
     
  29. idunnuhow

    idunnuhow

    Joined:
    Apr 16, 2014
    Posts:
    23
    This is a GREAT tip. Okay, here is the setup:
    laptop camera: 1920 * 1080
    webcam: 1920 * 1080

    WebCamTextureToMatHelper
    request width: 800
    request height: 450

    Here is the problem I found when I checked the debug output, since you mentioned size.

    Webcam: correct size (the height is 2 pixels less, but that's okay)
    WebCamTextureToMatHelper:: devicename:c922 Pro Stream Webcam name: width:800 height:448 fps:30 videoRotationAngle:0 videoVerticallyMirrored:False isFrongFacing:True

    Laptop camera: wrong size
    WebCamTextureToMatHelper:: devicename:USB Video Device name: width:848 height:480 fps:30 videoRotationAngle:0 videoVerticallyMirrored:False isFrongFacing:True

    That's why it drifts off with the laptop camera: the width and height are not what I requested. I tried hard-setting them, but it still doesn't give me 800*450. What can I do about this?

    ============================================================
    LATEST UPDATE:
    If I request the size 640*480 in WebCamTextureToMatHelper:

    WebCamTextureToMatHelper:: devicename:c922 Pro Stream Webcam name: width:640 height:480 fps:30 videoRotationAngle:0 videoVerticallyMirrored:False isFrongFacing:True

    WebCamTextureToMatHelper:: devicename:USB Video Device name: width:640 height:480 fps:30 videoRotationAngle:0 videoVerticallyMirrored:False isFrongFacing:True

    It works perfectly with this size (640 * 480), but not with 800 * 450. Any suggestions?
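    For reference, this fallback behaviour can be reproduced with a plain WebCamTexture as well (a generic Unity sketch, not part of the helper): the driver is free to pick the closest mode it supports, so the delivered size has to be read back after the camera starts.

    using UnityEngine;

    public class CameraModeProbe : MonoBehaviour
    {
        void Start()
        {
            if (WebCamTexture.devices.Length == 0) return;

            // Request 800x450; the device may fall back to the nearest supported mode (e.g. 848x480).
            WebCamTexture cam = new WebCamTexture(WebCamTexture.devices[0].name, 800, 450, 30);
            cam.Play();

            // Note: width/height may only report the real values once the first frame has arrived.
            Debug.Log("requested 800x450, got " + cam.width + "x" + cam.height);
        }
    }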
     
    Last edited: Apr 19, 2020
  30. Marek_Bakalarczuk

    Marek_Bakalarczuk

    Joined:
    Dec 28, 2012
    Posts:
    101
    @EnoxSoftware can you point me to how to detect the brows, lips, and eyes using Dlib?
     
  31. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    Does this issue occur with WebCamTextureFaceMaskExample in FaceMaskExample 1.0.8?
     
  32. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    DlibFaceLandmarkDetector can detect facial landmarks (68-, 17-, or 6-point models).
    dlib1.2.6_features.png
     
  33. Marek_Bakalarczuk

    Marek_Bakalarczuk

    Joined:
    Dec 28, 2012
    Posts:
    101
    Yes. But can I detect them as separate "objects"?
    For example, I want to change only the lip color. It would be nice to detect only the lips and paint them. Right now I need to prepare a whole face texture with only the lips at the bottom of it. Is there an easier way?

    I bought both OpenCVforUnity and Dlib.
     
  34. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    I think you'll find this article helpful.
    https://www.pyimagesearch.com/2017/04/10/detect-eyes-nose-lips-jaw-dlib-opencv-python/
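    For reference, the 68-point layout described in that article groups the landmarks into fixed index ranges, so each facial part can be sliced straight out of the landmark list. A minimal sketch, assuming "points" is the 68-point List<Vector2> returned by the detector (the helper class itself is hypothetical):

    using System.Collections.Generic;
    using UnityEngine;

    public static class FaceRegions
    {
        // Standard 68-point layout; index ranges are inclusive.
        static List<Vector2> Slice(List<Vector2> points, int start, int end)
        {
            return points.GetRange(start, end - start + 1);
        }

        public static List<Vector2> Jaw(List<Vector2> p)          { return Slice(p, 0, 16);  }
        public static List<Vector2> RightEyebrow(List<Vector2> p) { return Slice(p, 17, 21); }
        public static List<Vector2> LeftEyebrow(List<Vector2> p)  { return Slice(p, 22, 26); }
        public static List<Vector2> Nose(List<Vector2> p)         { return Slice(p, 27, 35); }
        public static List<Vector2> RightEye(List<Vector2> p)     { return Slice(p, 36, 41); }
        public static List<Vector2> LeftEye(List<Vector2> p)      { return Slice(p, 42, 47); }
        public static List<Vector2> Mouth(List<Vector2> p)        { return Slice(p, 48, 67); }
    }

    For the lip-color use case, the Mouth range (48-67) gives the outline to mask and recolor with OpenCV.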
     
  35. oukaitou

    oukaitou

    Joined:
    Jan 10, 2014
    Posts:
    12
    Hello

    Can I use another dataset for training, such as the 300-VW video dataset?

    Thanks
     
    Last edited: Apr 29, 2020
  36. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    For details of how to train custom model files, please refer to "DlibFaceLandmarkDetector/DlibFaceLandmarkDetectorTrainingDataset.txt".
     
  37. oukaitou

    oukaitou

    Joined:
    Jan 10, 2014
    Posts:
    12
    I can only train with the dataset you provided. How can I train with other image files?
     
  38. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
  39. wxxhrt

    wxxhrt

    Joined:
    Mar 18, 2014
    Posts:
    153
    Hi, the documentation says:

    The size of the “sp_human_face_68.dat” is too large.
    Please use the “sp_human_face_68_for_mobile.dat”. (the “sp_human_face_68_for_mobile.dat” is less accurate than the “sp_human_face_68.dat”, but it is a smaller size.)

    How do I do this, please? Deleting sp_human_face_68.dat from the StreamingAssets folder gives me an error. Is there somewhere else I have to exclude things?


    EDIT: Figured it out :)

    But the next problem is that my build is not detecting the webcam in Chrome or Firefox. I have the WebGL plugins set up correctly. I'm using the latest Dlib, OpenCV for Unity 2.3.8, and Unity 2019.3.9f1 on a Mac; any help would be greatly appreciated :) The error in the browsers' consoles is:

    OnWebCamTextureToMatHelperErrorOccurred CAMERA_DEVICE_NOT_EXIST
    (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)

    and I've tried both leaving the RequestedDeviceName blank and setting it to 0 in WebCamTextureToMatHelper.
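    One generic thing worth checking (a plain Unity sketch, not specific to the asset): whether the browser is exposing a camera to Unity at all. On WebGL, WebCamTexture.devices stays empty until the page has camera permission, which can produce CAMERA_DEVICE_NOT_EXIST.

    using UnityEngine;

    public class ListCameras : MonoBehaviour
    {
        void Start()
        {
            Debug.Log("camera devices: " + WebCamTexture.devices.Length);
            foreach (WebCamDevice d in WebCamTexture.devices)
                Debug.Log("camera: " + d.name + (d.isFrontFacing ? " (front)" : ""));
        }
    }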
     
    Last edited: May 6, 2020
  40. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    Does the following example work fine?
    https://enoxsoftware.github.io/DlibFaceLandmarkDetector/webgl_example/index.html
     
  41. wxxhrt

    wxxhrt

    Joined:
    Mar 18, 2014
    Posts:
    153
  42. shacharoz

    shacharoz

    Joined:
    Jul 11, 2013
    Posts:
    49
    Hey @EnoxSoftware,
    assuming the face always looks at the camera, can I get its rotation on the z axis (tilt left or right)?
    If it is not something that is already calculated in the Dlib face tracker, do you know of a way to calculate it?
    The face angle can probably be calculated from the eyes, nose, and mouth, but I guess you have already done something like this.
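    A minimal sketch of that idea (my own illustration, not from the asset; it assumes "points" is the 68-point landmark list and takes the roll from the line between the two eye centers):

    using System.Collections.Generic;
    using UnityEngine;

    public static class HeadRoll
    {
        // Roll (rotation around the camera's z axis) from the tilt of the eye line.
        public static float RollDegrees(List<Vector2> points)
        {
            Vector2 eyeA = (points[36] + points[39]) * 0.5f; // corners of one eye
            Vector2 eyeB = (points[42] + points[45]) * 0.5f; // corners of the other eye
            Vector2 d = eyeB - eyeA;
            // In image coordinates y grows downward, so flip the sign if you need a screen-space angle.
            return Mathf.Atan2(d.y, d.x) * Mathf.Rad2Deg;
        }
    }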
     
  43. unity_0tp_g_1vnq2Ymg

    unity_0tp_g_1vnq2Ymg

    Joined:
    May 21, 2020
    Posts:
    1
    Hi! I am trying to build the AR head texture example scene from the Dlib face tracker, using OpenCV for Unity and targeting the WebGL platform, and we encounter 3 errors.

    The build completes with a success result, but when we open the HTML file in Firefox we get the following error (see the attached screenshot):

    Thank you!
    WuQing Screenshot (434).png
     
  44. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    Could you tell me which test environment you tried?
    OpenCV for Unity version :
    DlibFaceLandmarkDetector version :
    Unity version :
    Firefox version :
     
  45. m4kvrstudios

    m4kvrstudios

    Joined:
    Jan 21, 2020
    Posts:
    10
    Hello Enox,

    I bought the asset and I want to use it in WebGL for face tracking. However, the camera doesn't start on iPhone.
    I have tried asking for permission in JavaScript, and permission is granted; however, the camera doesn't start and the fpsMonitor doesn't update.

    Any tips?

    //M4k
     
    Last edited: Nov 19, 2020
  46. xhkong

    xhkong

    Joined:
    Jul 2, 2020
    Posts:
    26
    Can you provide MARS support? Basically, feeding the face-tracking landmarks into MARS and hooking it up? Thanks!
     
  47. navinimha

    navinimha

    Joined:
    Nov 16, 2019
    Posts:
    1
    I am using 'ARHeadWebCamTextureExample'. There is a problem with excessive jitter when a person is still or only moves slightly while it is running. Can I fix it? Also, when I open my mouth, particles don't come out; I want to make particles come out when I'm smiling instead. What should I do? And how can I get code corresponding to various expressions?
     
  48. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    The NoiseFilterExample is a good reference for reducing the jitter.
    Also, the following code is used to determine whether the mouth is open. By changing it, you can change the conditions under which particles are generated.
    https://github.com/EnoxSoftware/Dli...ample/ARHeadWebCamTextureExample.cs#L602-L611
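    As a rough sketch of a smile condition in the same spirit (my own illustration, not the example's code; it assumes "points" is the 68-point landmark list, and the 0.45 threshold is an arbitrary starting value that needs tuning), the mouth-corner distance can be compared against the jaw width:

    using System.Collections.Generic;
    using UnityEngine;

    public static class ExpressionHeuristics
    {
        // Rough smile test on the 68-point layout: smiling widens the mouth relative to the jaw.
        public static bool IsSmiling(List<Vector2> points)
        {
            float mouthWidth = Vector2.Distance(points[48], points[54]); // mouth corners
            float jawWidth   = Vector2.Distance(points[0],  points[16]); // jaw endpoints
            return mouthWidth / jawWidth > 0.45f; // tune per camera and face
        }
    }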
     
  49. EricVenn

    EricVenn

    Joined:
    Nov 10, 2016
    Posts:
    19
    Hi. I'm trying to use my own avatar in the Video Capture example scene. It works fine for head rotation, but the Face Blend Shape Controller is of little use to me, as my avatar has different blend shapes and a different structure. I don't see any tool or way to 'connect' the controller's blend shape references to the ones on my avatar.
     
  50. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,301
    You will need to change the code to match your avatar.
    https://github.com/EnoxSoftware/CVV.../CVVTuber/Scripts/FaceBlendShapeController.cs
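    As an illustration of the kind of change that means (a generic Unity sketch, not the controller itself; the class and method names are placeholders), the detected values would be mapped onto your avatar's own blend shapes, looked up by name on its SkinnedMeshRenderer:

    using UnityEngine;

    public class AvatarBlendShapeBridge : MonoBehaviour
    {
        public SkinnedMeshRenderer face;

        // Map a detected value (0..1) onto whatever your avatar calls the shape.
        public void SetShape(string shapeName, float value01)
        {
            int index = face.sharedMesh.GetBlendShapeIndex(shapeName); // -1 if the mesh has no such shape
            if (index >= 0)
                face.SetBlendShapeWeight(index, Mathf.Clamp01(value01) * 100f); // Unity blend shape weights run 0..100
        }
    }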
     