Azure Kinect Examples for Unity

Discussion in 'Assets and Asset Store' started by roumenf, Jul 24, 2019.

  1. caseyfarina

    caseyfarina

    Joined:
    Dec 22, 2016
    Posts:
    6
    Thank you so much for creating a great tool! I've tried running multiple RealSense cameras with the Cubemos tracking add-on. It looks like only the first sensor is recognized by Cubemos. Is there any way to create multiple skeletal tracks using RealSense and Cubemos? @rfilkov
     
  2. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    70
    Replied by e-mail.
     
  3. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    70
    I think this may be an issue of the Azure Kinect Body Tracking SDK when tracking some people. Unfortunately I can't reproduce the issue here, and would need a recording made with the k4arecorder tool (e.g. "k4arecorder -l 10 output.mkv" records 10 seconds) to take a closer look at it. If you'd like to provide a recording, please e-mail me for instructions on how to make it and how to send it over to me.
     
  4. novakova

    novakova

    Joined:
    Aug 27, 2016
    Posts:
    8
    Hello,

    I have a few questions (skeleton tracking), just to make sure I understand it correctly before I dive deep into it:

    1. Do I need a CUDA-capable GPU? Are AMD APUs with integrated GPUs supported, e.g. the desktop 5600G / 5700G or the notebook 4800U / 5800U CPUs?

    2. Is the RealSense D455 supported, also in dual or triple setups? Would you recommend it over the D435 for skeleton tracking?


    Thank You!
     
  5. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    70
    Hi. To your questions:
    1. I would recommend a CUDA capable GPU for better performance. CUDA is the by-default BT processing mode in the K4A-asset. But Body Tracking SDK v1.1 allows selection of the processing mode (used by the underlying onnxruntime) to be DirectML or TensorRT, as well, so other GPUs are also supported. This is a setting of the Kinect4AzureInterface-component in the scene.

    2. I have not tested the D455, but it should be supported. Please note, though, that Intel does not provide a body tracking SDK for its RealSense sensors; instead, it recommends the Cubemos Skeleton Tracking SDK. This requires an update of the RealSenseInterface script in the K4A-asset. If you install the Cubemos SDK and need the RS-interface update, please e-mail me with your invoice or order number, and I'll send it over to you. Otherwise, I would recommend the D415.
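
    For reference, here is roughly what the processing-mode selection boils down to inside the Body Tracking SDK itself - a minimal sketch against the official Microsoft.Azure.Kinect.BodyTracking C# wrapper (v1.1), independent of the K4A-asset. Treat the TrackerProcessingMode member names as assumptions and verify them against your wrapper version:

    Code (CSharp):
    using Microsoft.Azure.Kinect.Sensor;
    using Microsoft.Azure.Kinect.BodyTracking;

    public static class TrackerSetup
    {
        public static Tracker CreateTracker(Device device)
        {
            // The calibration must match the modes the cameras were started with.
            Calibration calibration = device.GetCalibration(
                DepthMode.NFOV_Unbinned, ColorResolution.R1080p);

            // BT SDK v1.1 lets the underlying onnxruntime run on CUDA,
            // DirectML or TensorRT (or on the CPU) instead of CUDA only.
            var config = new TrackerConfiguration
            {
                ProcessingMode = TrackerProcessingMode.DirectML,  // assumed member name
                SensorOrientation = SensorOrientation.Default
            };
            return Tracker.Create(calibration, config);
        }
    }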
     
  6. illsaveus

    illsaveus

    Joined:
    Nov 19, 2016
    Posts:
    3
    Fantastic, I'll send over an email to you right away!
     
  7. pouria77

    pouria77

    Joined:
    Aug 13, 2014
    Posts:
    25
    Hello

    We're using the Azure Kinect for a dancing demo application, in which people dance in front of a wide screen and see their cutout against some background (up to 3 people at the same time).

    I'm using the KinectManager and background remover in my scene, and I have an issue: the Kinect only covers about 75% of the screen, meaning the tracked area does not extend far enough to the left or right.

    I thought the first solution would be to change the depth mode to WFOV (as is possible in the Azure Kinect Viewer), but I haven't found a way to change that from inside Unity, and none of the other options seem to have much effect (including the depth camera resolution in the inspector).

    Does anyone know how I can fix it?

    I have attached a photo of my settings on the KinectManager and background remover.
    Thanks a lot
     

    Attached Files:

    Last edited: Oct 6, 2021
  8. matmat_35

    matmat_35

    Joined:
    Apr 20, 2018
    Posts:
    4
    Hello,
    I used the "Azure Kinect Examples" with the Kinect v2 in HDRP, but it seems there are no "Smoothing" & "Velocity Smoothing" parameters in the KinectManager anymore, like in the "Kinect v2 Examples".
    These parameters were very useful...
    Best
    Mathieu
     
  9. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    70
    You can set the depth-camera mode (in terms of resolution and NFOV/WFOV) in the Kinect4AzureInterface component settings. See below.
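
    For reference, these modes map directly to the Sensor SDK's DepthMode values. A minimal sketch with the official Microsoft.Azure.Kinect.Sensor C# API - the K4A-asset wraps this internally, so in the asset you only change the inspector setting:

    Code (CSharp):
    using Microsoft.Azure.Kinect.Sensor;

    public static class WfovExample
    {
        public static Device OpenWideFov()
        {
            Device device = Device.Open(0);  // first attached Azure Kinect

            // WFOV 2x2-binned (512x512) runs at up to 30 FPS;
            // WFOV unbinned (1024x1024) is limited to 15 FPS.
            device.StartCameras(new DeviceConfiguration
            {
                DepthMode = DepthMode.WFOV_2x2Binned,      // 512x512, wide FOV
                ColorResolution = ColorResolution.R1080p,  // 16:9 color
                ColorFormat = ImageFormat.ColorBGRA32,
                CameraFPS = FPS.FPS30,
                SynchronizedImagesOnly = true
            });
            return device;
        }
    }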

     
  10. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    70
    Hi Mathieu,
    Yes, you are right! I have to bring these filters to the K4A-asset, too. Please e-mail me about this issue, so I don't forget. Until then, see the stopgap sketch below.
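
    A generic exponential filter over the joint positions you read from the KinectManager can stand in for the missing smoothing parameters. A minimal sketch - plain Unity code, not part of the asset:

    Code (CSharp):
    using UnityEngine;

    // Frame-rate independent exponential smoothing for a joint position.
    public class JointSmoother
    {
        private Vector3 smoothedPos;
        private bool initialized;

        // smoothTime ~0.1-0.5s: larger values are smoother but laggier.
        public Vector3 Smooth(Vector3 rawPos, float smoothTime, float deltaTime)
        {
            if (!initialized) { smoothedPos = rawPos; initialized = true; }
            float t = 1f - Mathf.Exp(-deltaTime / Mathf.Max(smoothTime, 1e-4f));
            smoothedPos = Vector3.Lerp(smoothedPos, rawPos, t);
            return smoothedPos;
        }
    }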
     
  11. pouria77

    pouria77

    Joined:
    Aug 13, 2014
    Posts:
    25

    I tried it, but not only did it not change the coverage, it also made the camera produce this weird output. I made a video to show what I mean:



    As you can see, the far left of the camera's range is still near the center of the image (our target output resolution is 4992x1080), and the output itself looks strange.

    Am I doing something wrong in the settings?
     
  12. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    70
    Just tried it in the demo scene:



    Please note though, the WFOV modes work well at close distances only - up to 3-3.5 m max.
     
    Last edited: Oct 8, 2021
  13. pouria77

    pouria77

    Joined:
    Aug 13, 2014
    Posts:
    25
    Thanks for getting back to me.

    I think I need to explain my issue more clearly.

    Based on the Kinect's color camera specs, if the user stands 9 feet from the camera, they should be able to move in an area about 15 feet wide that is covered by the Kinect. I uploaded an image to demonstrate.

    In my test, the area I can cover before going out of bounds is around 8 feet.
    I used the asset's BackgroundRemovalDemo2 scene for these tests.

    This is demonstrated better in these videos that I made :




    My screen resolution is 4992x1080, btw.

    I tried different settings for the color camera mode and the depth camera mode, trying different resolutions; none of them has much effect on the range.

    What am I missing?

    Thanks again
     

    Attached Files:

    Last edited: Oct 8, 2021
  14. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
    Hi, I am facing an issue with detection of some clothing colors. With blue and green the detection seems weak, and a black woolen kind of jacket is not detected at all. Are there any settings to improve that? Also, what light and environment settings would you suggest for better detection, with regard to where the Kinect is placed?
     
  15. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
    We are facing the same, mostly on light-colored faces and where the light is more intense.
     
  16. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
    One more thing, @rfilkov: is there any possibility to restrict the shoulder angle? I want to restrict it to the A-pose with the model, because in the rest pose the hands penetrate the body. I tried the bone angle, but it affects other parts. Are there any settings or another solution to control that bone, and likewise up to the T-pose, not allowing it to go beyond that? The muscle settings in the editor don't help, since the Kinect overwrites those values.
     
  17. pouria77

    pouria77

    Joined:
    Aug 13, 2014
    Posts:
    25
    A quick question: is there a way to use the background remover with only the color camera, or does it have to use both the depth and color cameras?
     
  18. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    70
    No, you can't remove the background without the depth camera. It's needed for the depth-image transformation to the color camera space, and for the body tracker as well.

    To your previous videos: let me explain a bit how the background remover and the Azure Kinect sensor work. The background remover uses the depth-image transformation to the color camera space. If you look closer, you will see the color camera resolutions have two aspect ratios (16:9 and 4:3). The depth camera modes also come in two types - NFOV and WFOV. The NFOV modes are narrower (hence the name), have a hexagonal shape and can detect farther - up to 5.4 meters for 320x288 and 3.8 meters for 640x576. The WFOV modes are wider, have an oval shape and cover the color camera image better, but their detection range is shorter - 2.9 meters for 512x512 and 2.2 meters for 1024x1024.

    When you transform the NFOV depth image (e.g. 640x576) to the color camera space, it doesn't cover it well, and you can see its hexagonal area of detection in the video. When you transform a WFOV depth image (e.g. 1024x1024), it covers the color camera image fully, but the user needs to stay really close to the sensor. That mode also works at 15 FPS only and is not recommended for body tracking. So, in your case, please use the 512x512 depth mode instead, as a compromise: good color-image coverage, decent body tracking and a longer max distance. This is the mode I used in my screenshot above (outlined there).
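
    This transformation is what the Sensor SDK exposes directly; a minimal sketch with the official Microsoft.Azure.Kinect.Sensor C# API (the K4A-asset does the equivalent internally). Depth pixels outside the depth camera's FOV come back as zero - that is the hexagonal (NFOV) or oval (WFOV) boundary you see in the videos:

    Code (CSharp):
    using Microsoft.Azure.Kinect.Sensor;

    public static class DepthCoverage
    {
        public static Image DepthInColorSpace(Device device, Capture capture,
                                              DeviceConfiguration config)
        {
            // The calibration must match the running depth/color modes.
            Calibration calibration = device.GetCalibration(
                config.DepthMode, config.ColorResolution);

            using (Transformation transformation = calibration.CreateTransformation())
            {
                // Re-projects the depth image into the color camera's geometry.
                return transformation.DepthImageToColorCamera(capture);
            }
        }
    }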

    Last, whatever your screen resolution is, the coverage will stay within the aspect ratio of the selected color camera resolution, i.e. 16:9 or 4:3. The rest of the on-screen picture (to the left and right) will not be covered.
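
    The difference is easy to quantify from the published Azure Kinect FOVs (color: 90°x59° in the 16:9 modes, 90°x74.3° in 4:3; depth: 75°x65° NFOV, 120°x120° WFOV). A quick sketch:

    Code (CSharp):
    using System;

    public static class FovMath
    {
        // Horizontal coverage at a given distance: width = 2 * d * tan(fov / 2)
        public static double CoverageWidth(double distanceM, double fovDeg)
            => 2.0 * distanceM * Math.Tan(fovDeg * Math.PI / 360.0);
    }

    // At 2.75 m (~9 ft):
    //   color, 90° HFOV:      ~5.5 m (~18 ft) wide
    //   depth NFOV, 75° HFOV: ~4.2 m (~13.8 ft) wide
    // The tracked area is always bounded by the depth camera's FOV,
    // not by the color camera specs.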

    Please look at this page, if you need more info regarding the Azure Kinect image transformations.
     
  19. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    70
    The muscle settings should work if you enable the 'Apply muscle limits' setting of the AvatarController component. See the sketch below.
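
    If you prefer to enable it from code, a minimal sketch - the namespace and field name here are assumed to match the inspector setting, so check AvatarController.cs in your copy of the asset:

    Code (CSharp):
    using UnityEngine;
    using com.rfilkov.kinect;  // assumed K4A-asset namespace

    public class EnableMuscleLimits : MonoBehaviour
    {
        void Start()
        {
            var avatar = GetComponent<AvatarController>();
            if (avatar != null)
            {
                avatar.applyMuscleLimits = true;  // assumed public field
            }
        }
    }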
     
  20. pouria77

    pouria77

    Joined:
    Aug 13, 2014
    Posts:
    25
    Got it. Thanks for the great explanations.

    Best
     