
Azure Kinect Examples for Unity

Discussion in 'Assets and Asset Store' started by roumenf, Jul 24, 2019.

  1. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Hi, the gesture detection is an integral part of the K4A-asset. There are demo scenes in this regard too, both for discrete and continuous gestures. Please look at this link: Demo Scenes (ratemt.com) or at the pdf-file here. And sorry, but I don't have time to make video tutorials.
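    In short, a gesture listener looks roughly like this (a minimal sketch — the interface below follows my Kinect-v2 asset, so the exact names and parameter types in the K4A-asset may differ slightly; the gesture demo scripts contain the authoritative version):

    Code (CSharp):
    using UnityEngine;

    // Sketch only — interface and enum names follow the Kinect-v2 asset,
    // and may differ slightly in the K4A-asset.
    public class SwipeGestureListener : MonoBehaviour, KinectGestures.GestureListenerInterface
    {
        public void UserDetected(ulong userId, int userIndex)
        {
            // register interest in a discrete gesture for this user
            KinectManager.Instance.DetectGesture(userId, KinectGestures.Gestures.SwipeLeft);
        }

        public void UserLost(ulong userId, int userIndex) { }

        public void GestureInProgress(ulong userId, int userIndex, KinectGestures.Gestures gesture,
            float progress, KinectInterop.JointType joint, Vector3 screenPos) { }

        public bool GestureCompleted(ulong userId, int userIndex, KinectGestures.Gestures gesture,
            KinectInterop.JointType joint, Vector3 screenPos)
        {
            Debug.Log("User " + userId + " completed " + gesture);
            return true;  // reset the gesture, so it can be detected again
        }

        public bool GestureCancelled(ulong userId, int userIndex, KinectGestures.Gestures gesture,
            KinectInterop.JointType joint)
        {
            return true;
        }
    }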
     
  2. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    No, it's not your laptop settings. The avatar demo scenes use body tracking, while the blob detection and point cloud demos don't. The body tracking uses CUDA/GPU as its inference engine, hence the GPU load. If you have the latest version of the K4A-asset, feel free to try the lite BT model instead. To do it, please open 'Kinect4AzureInterface.cs' in the 'AzureKinectExamples/KinectScripts/Interfaces'-folder and at its beginning replace 'BODY_TRACKING_MODEL_FILE = "dnn_model_2_0_op11.onnx";' with 'BODY_TRACKING_MODEL_FILE = "dnn_model_2_0_lite_op11.onnx";'.
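    For clarity, the change at the beginning of the file looks like this (the access modifiers shown here are illustrative — keep whatever the file already uses):

    Code (CSharp):
    // full BT model (default) — higher accuracy, higher GPU load:
    //public const string BODY_TRACKING_MODEL_FILE = "dnn_model_2_0_op11.onnx";

    // lite BT model — lower GPU load, at the cost of some accuracy:
    public const string BODY_TRACKING_MODEL_FILE = "dnn_model_2_0_lite_op11.onnx";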
     
    underkitten likes this.
  3. seldemirov

    seldemirov

    Joined:
    Nov 6, 2018
    Posts:
    48
    Thanks for the answer. I am very glad that the gestures have been implemented.
     
    rfilkov likes this.
  4. o2co2

    o2co2

    Joined:
    Aug 9, 2017
    Posts:
    45
    How can AvatarController.cs be set up to turn the 3D avatar movement into 2D?
    Thank you for helping me.


     
  5. klseah

    klseah

    Joined:
    May 8, 2017
    Posts:
    2
    Hi,

    I'm really loving all the example scenes you have provided. Quick question: in AvatarsDemo4, is it possible to have two different avatars for 2 different users detected?

    Thanks again for a great asset.
     
  6. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Please enable the 'Ignore Z-Coordinates'-setting of the KinectManager-component in the scene.
     
  7. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Not out of the box. You should modify the code of the UserAvatarMatcher-component: declare a 2nd avatarModel-variable (and assign your 2nd avatar object to it in the scene), and then instantiate one of the two models in the CreateUserAvatar()-method of the script, according to your requirements.
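    Here is a rough sketch of the idea (the class and method signatures are illustrative — keep the ones the script already has):

    Code (CSharp):
    using UnityEngine;

    // Sketch of a modified UserAvatarMatcher with two avatar models.
    public class TwoModelAvatarMatcher : MonoBehaviour
    {
        public GameObject avatarModel;   // the existing model field
        public GameObject avatarModel2;  // the 2nd model, assigned in the scene

        // mirrors CreateUserAvatar() — pick a model per user, e.g. by user index
        private GameObject CreateUserAvatar(ulong userId, int userIndex)
        {
            GameObject model = (userIndex % 2 == 0) ? avatarModel : avatarModel2;

            GameObject avatarObj = Instantiate(model, Vector3.zero, Quaternion.identity);
            avatarObj.name = "User-" + userId;

            return avatarObj;
        }
    }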
     
  8. o2co2

    o2co2

    Joined:
    Aug 9, 2017
    Posts:
    45
    Thank you for your reply. I set 'Ignore Z-Coordinates', but the rotation is wrong. I hope to get your help, thank you.
     
  9. klseah

    klseah

    Joined:
    May 8, 2017
    Posts:
    2
    Ok. Thank you!
     
  10. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Ah, sorry. Please open 'Kinect4AzureInterface.cs' in 'AzureKinectExamples/KinectScripts/Interfaces'-folder, find the CalcBodyJointOrients()-method and comment it out. Then enable the 'Ignore Z-coordinates'-setting of the KinectManager-component in the scene, and if needed disable the 'Bone orientation constraints'-setting, too.
     
  11. o2co2

    o2co2

    Joined:
    Aug 9, 2017
    Posts:
    45
    Thanks, but it still doesn't work properly.
     
  12. o2co2

    o2co2

    Joined:
    Aug 9, 2017
    Posts:
    45
    (attached screenshots: upload_2021-4-6_16-31-38.png, upload_2021-4-6_16-33-5.png)
     
  13. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
  14. o2co2

    o2co2

    Joined:
    Aug 9, 2017
    Posts:
    45
    The 3D rotation is correct. After the modification, the 2D rotation is wrong.
     
  15. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    In this scene the 'Ignore Z-coordinates'-setting was enabled. If you'd like to investigate the issue, please e-mail me and, if possible, provide a sample 2D scene, so I can reproduce your issue.
     
  16. mizan15

    mizan15

    Joined:
    Aug 2, 2020
    Posts:
    4
    Hello, I'm one of the people who bought the asset. If I have Azure Kinects 1 to 3, is it possible to synchronize sensors 1 and 2 to observe area A, and have sensor 3 observe area B separately? If possible, please let me know how.
     
  17. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    It should be possible, but you would have to set up the calibration of the sensors (i.e. their positions and rotations in the scene) manually. The MultiCameraSetup-scene, which calibrates the cameras automatically, needs an area where all cameras intersect in order to work.
     
  18. mizan15

    mizan15

    Joined:
    Aug 2, 2020
    Posts:
    4
    Thanks a lot! Then how can I set up the positions and rotations of the Azure Kinects? Do I need to edit the json, or edit the Kinect4Azure gameobject's transform?
     
  19. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    864
    Hi Rumen @rfilkov

    I wonder if you can shed some light on the purpose of SpaceTable. It is loaded from a binary file, uploaded to the GPU and used in the expansion from depth image to point cloud.

    1) Why not use intrinsic values to expand the vertex? As in ( uv - principalPoint ) * depth / focalLength. Does SpaceTable also contain lens distortion correction per pixel?

    2) Why does SpaceTable contain XYZ when Z does not seem to be used?

    EDIT: more questions.

    3) sensorData contains color2DepthExtr and depth2ColorExtr. However, those have zero translation and rotation. Do I have to flick a switch to have them updated?

    EDIT. Found a temporary workaround for question 3.

    Code (CSharp):
    Calibration coordMapperCalib = _sensorInterface.kinectSensor.GetCalibration( DepthMode.NFOV_Unbinned, ColorResolution.R720p );
    Extrinsics depth2ColorExtr = coordMapperCalib.ColorCameraCalibration.Extrinsics;
    float u2m = _sensorData.unitToMeterFactor;
    float[] r = depth2ColorExtr.Rotation;     // 3x3 rotation, row-major
    float[] t = depth2ColorExtr.Translation;  // in millimeters, hence the unit-to-meter factor

    // Matrix4x4 takes columns, so the row-major rotation is indexed accordingly
    Matrix4x4 depth2colorCamMat = new Matrix4x4
    (
        new Vector4( r[ 0 ], r[ 3 ], r[ 6 ], 0 ),
        new Vector4( r[ 1 ], r[ 4 ], r[ 7 ], 0 ),
        new Vector4( r[ 2 ], r[ 5 ], r[ 8 ], 0 ),
        new Vector4( t[ 0 ] * u2m, t[ 1 ] * u2m, t[ 2 ] * u2m, 1 )
    );

    _colorToDepthExtrinsics = depth2colorCamMat.inverse;
     
    Last edited: May 11, 2021
  20. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Both approaches would be correct. If you set the positions and rotations of the cameras in 'multicam_config.json' and then enable the 'Use multi-cam config'-setting of the KinectManager-component, you can reuse your camera poses and settings in any scene. If you set them in a scene instead, they will work only in that specific scene.

    If I were you, I would adjust them manually in one scene and, once I'm satisfied with the results there, copy the values to 'multicam_config.json'.
     
  21. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Hi @cecarlsen, to your questions:

    1. The space tables are just caches of the unprojected and undistorted space positions for each point in the respective depth or color camera images. Your formula would only unproject the pixels, but would not correct the distortion. To spare the time for all these computations on each frame, I prefer to do them once and then use the space-table caches for the point cloud calculations (see the sketch below).

    2. Z needs to be there, to spare the conversion between 2d and 3d coordinates at runtime.

    3. 'depth2ColorExtr' should have values when both the 'Get depth frames' & 'Get color frames'-settings are not set to 'None'. Unfortunately 'color2DepthExtr' is usually set to 0 by the SDK. That's why here is what I would recommend:

    Code (CSharp):
    Matrix4x4 depth2colorCamMat = _sensorInterface.GetDepthToColorCameraMatrix();
    Matrix4x4 color2depthCamMat = depth2colorCamMat.inverse;

    Hope this helps.
     
    cecarlsen likes this.
  22. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    864
    Thanks for the answers @rfilkov, much appreciated.

    Sounds like the way to go.

    Ah. Annoying o_O

    I see. Well that does not work for me. I only acquire the depth image. I've calibrated the physical world space extrinsics from the color image in a separate scene, so I need the color-to-depth extrinsics to calibrate the depth point cloud precisely with the physical space. The workaround I wrote above works for now.

    All the best
    Carl Emil
     
    Last edited: May 12, 2021
  23. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    864
    Hi @rfilkov

    Sorry to bother you with so many questions ... here is another one:

    Can I disable body tracking for a specific kinect sensor in a setup containing multiple kinect sensors?

    I have two Kinects connected to the same machine. Kinect A is viewing the physical scene top-down, and Kinect B is viewing the same physical scene from the front. Only Kinect B is meant to find users (skeleton tracking). Should I use one KinectManager, put both sensors inside it and somehow disable body tracking for Kinect A? Or should I use two KinectManagers, one for each kinect sensor?
     
  24. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    The KinectManager-component in the scene always controls all sensors. There should be only one, and the sensor interfaces should be components of the same object or (even better) of its child objects, because they can update the respective object's position and rotation.

    You can't disable the body tracking of one sensor interface with the 'Get body frames' KinectManager-setting. You could either leave it enabled (for both sensors, but 'Kinect A' should not detect any bodies), or modify the KM code. If I were you, I would look for the UpdateTrackedBodies()-method in KinectManager, modify this line accordingly: 'else if(sensorDatas.Count == 1 && sensorIndex == 0 && lastBodyFrameTime != sensorData.lastBodyFrameTime)' and comment out the block starting with: 'else if (sensorDatas.Count > 1 && sensorIndex == 0 && userBodyMerger != null)'.
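    Here is a much simplified, hypothetical rendition of that branch logic, only to illustrate the idea (all names besides the quoted ones are illustrative — the change itself belongs inside the real UpdateTrackedBodies()-method):

    Code (CSharp):
    // Sketch of the body-source selection after the change described above.
    public class BodySourceSelectionSketch
    {
        private const int BODY_SENSOR_INDEX = 1;  // the front-facing 'Kinect B'
        private ulong lastBodyFrameTime = 0;

        // returns true, when the body frame of the given sensor should be processed
        public bool ShouldProcessBodyFrame(int sensorIndex, ulong sensorBodyFrameTime)
        {
            // the single-sensor branch, relaxed to fire only for the chosen body sensor
            if (sensorIndex == BODY_SENSOR_INDEX && lastBodyFrameTime != sensorBodyFrameTime)
            {
                lastBodyFrameTime = sensorBodyFrameTime;
                return true;
            }

            // the multi-sensor body-merging branch
            // (sensorDatas.Count > 1 && sensorIndex == 0 && userBodyMerger != null)
            // gets commented out, i.e. skipped entirely
            return false;
        }
    }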
     
    cecarlsen likes this.
  25. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
    Hi, I am facing an issue: when I have a pants model attached to the body, the knee joint always has an angle and looks bent. I have forced its movement to stop, so while moving front and back it looks like it's dragging. How can I fix that issue?

    The lower body parts are not stable at all and jitter, so I'm not able to do cloth simulation with them.
     
  26. Vic070

    Vic070

    Joined:
    Aug 6, 2016
    Posts:
    9
    Hello,
    First of all, thank you for your work, this asset is great, though there are some issues I am having that might be down to settings I haven't adjusted correctly.

    I've been playing around with the AvatarDemo1 scene. I've added my own rig and model and attached the components necessary for it to work.

    The issue is that the tracking is a bit glitchy, when the person turns around or makes certain movements, the model (mine or the example model) has issues with tracking. The head glitches in place, the model suddenly turns around too fast and kinda adjusts itself in weird positions, etc.

    Is there a way to make body tracking a bit smoother or are there any settings I need to adjust to make it more accurate?

    Thanks for the support.
     
  27. coutlass-supreme

    coutlass-supreme

    Joined:
    Feb 21, 2014
    Posts:
    22

    Hi Rumen,

    I'm also having some trouble with body tracking lags/hiccups. The strange thing is that it works pretty well for some seconds before the hiccups start, and with no body tracking it works as it should.

    The K4A viewer has the same problem for me. I'm not sure what the default parameters are on that app.

    I checked and all the GPU usage comes only from within the Unity application.

    I'm using a GTX 1060 and an Intel i7 on Windows 10, with the Azure drivers you suggest on the asset store page: AK SDK (v1.4.1) and AKBT (v1.1.0), and the latest Nvidia drivers.

    I tried:
    Lower fps for the camera.
    All processing modes.
    Both onnx model files.

    LITE: (screenshot: upload_2021-6-9_19-52-0.png)

    Default: (screenshot: upload_2021-6-9_20-14-20.png)
     


    Last edited: Jun 10, 2021
  28. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Sorry, I can't reproduce your issue.
     
    BenrajSD likes this.
  29. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    First off, here is a tip on how to use your model with AvatarController: Kinect v2 Tips, Tricks and Examples | RF Solutions - Technology, Health and More (rfilkov.com)

    Unfortunately I can't reproduce your issue at the moment. Please e-mail me and attach some screenshots or a short video, so I can better understand what exactly you are doing to get these glitches.
     
  30. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Hi, I tried to reproduce your issue with the KinectAvatarsDemo4-scene, 30 fps, CUDA and the full BT model, and here is what the task manager is showing after ~10 minutes (I have NVidia GTX 1060, as well):



    Are you sure these hiccups are not caused by something else?
     
  31. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
  32. coutlass-supreme

    coutlass-supreme

    Joined:
    Feb 21, 2014
    Posts:
    22
    Thank you for the test, Rumen. They most likely are caused by something else; I'll try again on a fresh machine.
     
    rfilkov likes this.
  33. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
  34. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
  35. mchangxe

    mchangxe

    Joined:
    Jun 16, 2019
    Posts:
    69
    Hi Roumenf,

    Great asset, one question though. I am using the Depth Image component to display the raw depth image on my canvas. The image comes out great, and by changing the DepthHistImageShader I am able to turn it into the colors I want. However, my question is: is there a way to stop pixels from flickering between colors, without sacrificing the refresh rate of the data?
     
  36. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
  37. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Sorry for the late reply. I've been quite busy lately and don't have much time to answer questions on the forums.
    I tried your model with several people and actually can't reproduce the issue. See the picture below.
    I may need a recording to help me reproduce your issue. Please e-mail me, so I can tell you how to create and send me the recording.

     
    Last edited: Jul 11, 2021
  38. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Not out of the box, unfortunately. DepthHistImageShader uses the unfiltered depth data coming from the sensor/SDK. To remove the noise that bothers you, you would need to implement some kind of temporal filter between the frames.
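    For instance, a simple exponential moving average over the raw depth values would do (a minimal sketch — how you obtain the raw depth frames depends on your pipeline):

    Code (CSharp):
    using UnityEngine;

    // Temporal filter that blends each new depth frame into a running average,
    // to reduce the per-pixel flicker without skipping frames.
    public class DepthTemporalFilter
    {
        private readonly float[] smoothed;
        private readonly float alpha;  // 0..1 — lower means smoother, but more latency

        public DepthTemporalFilter(int pixelCount, float alpha = 0.3f)
        {
            this.smoothed = new float[pixelCount];
            this.alpha = alpha;
        }

        // call once per new depth frame
        public float[] Apply(ushort[] rawDepth)
        {
            for (int i = 0; i < rawDepth.Length; i++)
            {
                float d = rawDepth[i];

                // don't blend across invalid (zero) pixels — it would smear the edges
                smoothed[i] = (d == 0f || smoothed[i] == 0f) ? d : Mathf.Lerp(smoothed[i], d, alpha);
            }

            return smoothed;
        }
    }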
     
  39. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
    Sure will share the recording again
    https://drive.google.com/file/d/1qHz0BTAG_EtXbTKTHrfvbu_fJPhZ5CKn/view
     


  40. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Thank you, but I don't need the same video again. What I need is an Azure Kinect stream recording, created with the help of the k4arecorder-tool. It should reproduce the same behavior as demonstrated on the video, when played together with the pants-model in (let's say) the KinectFittingRoom2-scene of the K4A-asset.
     
  41. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
    Sure, will do and share
     
  42. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    864
    Hi @rfilkov

    I updated to the newest version (1.16.1) and noticed that the KinectManager generates 1.2 KB of garbage every update (Unity 2021.1.16f1). I didn't notice this before. The garbage is generated in the Capture class. It all adds up. Can it be reduced?

    All the best


    KinectGarbage.jpg
     
  43. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Hi @cecarlsen
    Thank you for reporting this issue! I don't remember changing anything regarding the captures in the latest releases, but I'll take a look at it as soon as I get back from vacation. Please remind me in a week or two, in case I forget.
     
    cecarlsen likes this.
  44. louis2009

    louis2009

    Joined:
    Jun 4, 2013
    Posts:
    9
    Hello,

    I'm a Kinect v2 user, but I'm looking at the Azure Kinect. Has the quality of the color camera improved at all?
    I found the RGB camera resolution goes up to 3840x2160, but I'm not sure how good it is.

    Thank you very much.
     
  45. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Azure Kinect provides multiple color resolutions, up to 4096x3072. As far as I know, though, the quality of the color camera is not better than the one in the Kinect-v2 sensor.
     
    louis2009 likes this.
  46. mchangxe

    mchangxe

    Joined:
    Jun 16, 2019
    Posts:
    69
    I am using the K4A Azure Kinect Examples sdk for Unity to create a virtual fitting room type experience.

    What I am trying to accomplish is: when a user is detected, spawn virtual objects on his/her body and have them stick to the user mesh (so they continue to stick to the user's body). I have achieved this mostly by taking the positions of the vertices from the baked mesh result of SkinnedMeshRenderer. To ensure that the fitting is exact for a person of any height and weight, I am using a script in the K4A sdk called AvatarScaler.cs. This script scales the skinnedMeshRenderer mesh to the exact shape of the user's body. I took a peek into the script and saw that the avatar is scaled by scaling each bone (I think, not 100% sure).

    My problem is: my objects are following the user's body just fine, but when the avatar scales, it seems like the mesh baked by the SkinnedMeshRenderer does not take that scaling into account, so the vertex positions I'm using to place virtual objects are off by a constant scale factor.

    My question is: how do I get the vertex positions of a SkinnedMeshRenderer after the bones are scaled?
     
  47. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
    Hi, I am getting a "can't create body tracker" error. Tracking works in the editor, but the issue occurs when taking a build and testing it (tested with a dev build too). What could be the issue? Tried with Unity 2020 and 2021 versions.

    Another issue: the project that worked on one device is not working on another. There, even the editor shows that the body tracker creation failed, although all dependencies are installed, the build works on that device, and one of the old projects works too. This happens when I create a new project and import the sdk through the package manager. Unity versions tried are 2020 and 2021.

    Please help me with this; I'm not able to proceed further with development, build testing and deployment.

    Thanks in advance.
     
  48. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    My first suggestion would be to disable the 'Continuous scaling'-setting of the AvatarScaler-component. If this doesn't help, please e-mail me and provide me a scene that I could use to reproduce the issue, so I can take a closer look.
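    Also, in case the baked mesh misses the scaling: as far as I remember, since Unity 2020.1 SkinnedMeshRenderer.BakeMesh() has an overload that bakes the transform's scale into the vertices. A minimal sketch (class and field names are illustrative):

    Code (CSharp):
    using UnityEngine;

    public class ScaledVertexSampler : MonoBehaviour
    {
        public SkinnedMeshRenderer smr;
        private Mesh bakedMesh;

        void Awake()
        {
            bakedMesh = new Mesh();
        }

        public Vector3 GetWorldVertex(int vertexIndex)
        {
            smr.BakeMesh(bakedMesh, true);  // 'true' bakes with the current transform scale

            // the scale is already baked in, so only rotate and translate here
            // (TransformPoint would apply the scale a second time)
            Vector3 v = bakedMesh.vertices[vertexIndex];
            return smr.transform.position + smr.transform.rotation * v;
        }
    }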
     
  49. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    I've tested the K4A-asset with almost all Unity releases (up to 2021.1.x) so far, and it worked fine. Please look at the Player's or at the Editor's log-file after you run (and close) the not-working scene, to see what exactly went wrong. This may give you a hint. Look here where to find the Unity log-files, and here for issues regarding the body tracker creation.
     
  50. illsaveus

    illsaveus

    Joined:
    Nov 19, 2016
    Posts:
    3
    I love this asset but I'm having some trouble with the body flipping 180 degrees from time to time. It seems to be caused by lack of face-tracking. We have some users with issues that the avatar faces backwards and flips back and forth. 80% of the time, the avatar tracks the user just fine but for some people, it flips between facing forward then backward. I was hoping that getting a second camera would help with the tracking issue, but so far I haven't had any luck. Do you have any suggestions on how I could fix this issue?