Azure Kinect Examples for Unity

Discussion in 'Assets and Asset Store' started by roumenf, Jul 24, 2019.

  1. freekpixels

    freekpixels

    Joined:
    Oct 13, 2017
    Posts:
    4
    Hello @roumenf,

First of all, I have to say the asset is pretty amazing when it comes to getting full access to the Azure Kinect and Kinect v2 in Unity. Top choice!

I'm still learning many aspects of it and have now arrived at the Server-Client examples. I'm trying to figure out whether the server is meant to support multiple clients connected and receiving data from the same server.

Based on the tests done so far, it seems I might be doing something wrong, as the clients appear to be fighting over the data when connected to the same server: I get intermittent flashes of a few frames playing on one client, which then disappear for a few frames and show up on the next device (client). Could this mean my setup is incomplete? (fingers crossed :)

    Thank you!
     
  2. Lukiioii

    Lukiioii

    Joined:
    Mar 12, 2020
    Posts:
    4
Hi @roumenf, thanks again. I'm trying to control the camera with the gyroscope data from my Azure Kinect. Is it possible to access the gyro in Unity3D directly? It looks like
Code (CSharp):
Input.gyro;
does not work for me.
     
  3. freekpixels

    freekpixels

    Joined:
    Oct 13, 2017
    Posts:
    4
While getting two sensors to successfully stream over the network to one device at the same time (so far it's very slow), I realised it might be a port conflict?
     
  4. freekpixels

    freekpixels

    Joined:
    Oct 13, 2017
    Posts:
    4
On the two-sensor setup, one of the needed bits is of course to calibrate them so the meshes from the two sensors align together as one object. The wall I'm facing here is how to get both sensors to render in the same coordinate system. I could not find access to
k4a_calibration_3d_to_3d()

and I still haven't figured out which of the functions implemented in the asset would handle that.

I have gone through the excellent example from the asset that calibrates multiple Kinects using skeletal tracking, but it seems there is a coordinate-system issue there as well: when the person moves away from the original calibration position, the model starts to spread apart.

Thank you, and sorry for so many questions. I'm working on understanding the asset better and resolving these needs.

    Cheers,
    FP
     
    Last edited: Sep 2, 2020
  5. davidhan

    davidhan

    Joined:
    Oct 8, 2013
    Posts:
    2
    Hi @roumenf

    Thanks for these great examples. I'm trying to write a simple script that exposes the joint position and orientations from a recorded file (txt) so that I can use them to individually animate game objects.

    I'm looking through the KinectInterop.cs and can see that the SetBodyFrameFromCsv method uses a reference to the matrix coming from the sensor (sensorToWorld) to set the position of each joint read from the text file.

    Ideally, I'd like to create an application that can read the text file independent of the sensor (so that the user doesn't have to have a Kinect). How do I set the position and orientation of these game objects if I don't have a matrix coming from the sensor?
     
    Last edited: Sep 3, 2020
    freekpixels likes this.
  6. INGTONY

    INGTONY

    Joined:
    Oct 13, 2014
    Posts:
    24
Everything works perfectly. Just one question: does the Azure Kinect support the K2 face tracking? It's not in the current examples, or am I missing something? Thank you.
     
  7. vwloch

    vwloch

    Joined:
    Dec 13, 2018
    Posts:
    2
    Hello!

    If I would like to upgrade to the newest release, what is the best way to do so? I love the demos that I have from the first release, but I am looking specifically for the fix to the body collider in portrait mode that you described in your response to this post. What is the best way to get that?

    Thank you!


     
    freekpixels likes this.
  8. freekpixels

    freekpixels

    Joined:
    Oct 13, 2017
    Posts:
    4
    Hi vwloch,

thanks for posting that quote; it actually helps me out a bit with my coordinate-system issue of getting two Kinects to align correctly to form one human body :)

No clue on the body collider issue you asked about, though. It's been a while since roumenf posted anything; hope all is groovy on that side and we'll all get the needed info.

    Cheers,
    FP
     
    vwloch likes this.
  9. Spirals0

    Spirals0

    Joined:
    Aug 19, 2020
    Posts:
    5
    Hi

Does anyone know if it's possible to use multiple Azure Kinects to track the skeleton better? E.g. having one Azure Kinect at the front and another at the back of the person, so that the person's hands can be detected when they are occluded by the body?
     
  10. crogers

    crogers

    Joined:
    Jul 14, 2012
    Posts:
    8
    Hello,
Thanks for sharing. The Visual Effect Graph demo was really useful to me.

Currently none of my RealSense cameras (435, 455, 515) work with this asset, but the Azures and KV2 work fine.

My RealSense 515 LiDAR did not work with Unity until the 2.38 release.
I think you are using the RealSense 2.36 release. I tried dropping in the new DLL, but it still didn't work; it didn't throw an error, but it didn't work either.

    thanks!
     
  11. crogers

    crogers

    Joined:
    Jul 14, 2012
    Posts:
    8
I would recommend using something like the Oculus skeleton or FinalIK and having both cameras attempt to control that skeleton, as opposed to getting the Azure skeleton to deal with two cameras.
     
  12. Jolandhat

    Jolandhat

    Joined:
    Oct 7, 2020
    Posts:
    3
Have any of you used a four-Azure-Kinect setup with EF EVE volumetric capture? https://ef-eve.com/
Thoughts?
     
  13. misc

    misc

    Joined:
    Jul 11, 2012
    Posts:
    8
    Hi @roumenf,
could you please check the Cubeman controller? I have issues when I uncheck "mirrored movement": the lines between the joints are no longer correct.
    thanks!
     
  14. Breathing

    Breathing

    Joined:
    Apr 16, 2016
    Posts:
    7
Hey!
Has anyone succeeded in running a demo scene with Intel RealSense?
For me, it works only in one scene (background removal), and very badly.
I'm most interested in gestures.

    thank you!
     
  15. julesradu

    julesradu

    Joined:
    Nov 9, 2017
    Posts:
    6
Hi, I purchased your Azure Kinect Examples for Unity asset, and it's great!

I want to use the PointCloud / SceneMeshDemo to see the 3D environment and then have a mesh collider working. Is that possible?

I actually want this: a non-moving Kinect camera captures the 3D mesh, the Unity user moves the Unity camera around and clicks with the mouse at things on the mesh, and I want the pointer location to spawn objects where the raycast intersects the mesh. For this I probably have to use a mesh collider. (The mesh will be changing all the time, so I don't want fixed colliders.)

Is it possible to use a MeshCollider in this example, or something similar? I couldn't get it to work in the example scene.
Thanks!
PS: Because the mesh collider wasn't working, I actually created a ComputeShader to recalculate the mesh vertices/triangles and pass that to Unity. It works OK (in the editor I can now click on parts of the mesh and the Unity editor selects the game object), but the mesh collider raycast still doesn't work even with this data.
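For anyone hitting the same wall: a MeshCollider does not re-bake automatically when the mesh vertices change, so raycasts keep hitting the stale physics bake. A minimal sketch of forcing the re-bake is below; the field names are illustrative and not part of the asset's API, and re-cooking every frame is expensive, so it is throttled here.
Code (CSharp):
using UnityEngine;

// Hypothetical helper: re-bakes the MeshCollider whenever the scene mesh has been rebuilt.
public class SceneMeshColliderUpdater : MonoBehaviour
{
    public MeshFilter sceneMeshFilter;      // MeshFilter updated by the scene-mesh script (assumed)
    public MeshCollider sceneMeshCollider;  // collider on the same object
    public float rebakeInterval = 0.5f;     // cooking a MeshCollider is costly, so don't do it every frame

    private float nextRebakeTime;

    void Update()
    {
        if (Time.time < nextRebakeTime || sceneMeshFilter == null || sceneMeshCollider == null)
            return;

        nextRebakeTime = Time.time + rebakeInterval;

        // Assigning null and then the current mesh forces the physics engine to re-cook
        // the collider, so Physics.Raycast hits the updated geometry.
        sceneMeshCollider.sharedMesh = null;
        sceneMeshCollider.sharedMesh = sceneMeshFilter.sharedMesh;
    }
}
With the collider re-baked, a standard Physics.Raycast from the mouse position should return hit points on the live mesh for spawning objects.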
     
    rfilkov likes this.
  16. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Hello roumenf. Would you know from your experience or point me to a link that gives recommended Kinect 4 settings for RGB, Depth, Body tracking, IR and audio recording? In other words, optimal frame rate, FOV setting, resolution, etc. I would like to configure six kinects all the same. Eventually importing the fused body tracking into Unity, but I also want RGB, Depth, IR and Audio data. I will appreciate any recommendations.
     
  17. Retrokat

    Retrokat

    Joined:
    Apr 23, 2015
    Posts:
    12
    Hi there Rumen, I'm using a Kinect v2 and since the Azure update I can't find the field to set the height of the Kinect v2 - am I just missing it somewhere or has that field disappeared for v2? I don't have enough Azure Kinects for my 3 Xmas installations so had to pull out an old v2 and was very happy it worked with the same manager.
     
  18. Lando9000

    Lando9000

    Joined:
    Apr 14, 2013
    Posts:
    36
    Anything using body tracking seems to crash instantly in Unity 2020.2.1, everything works fine in previous versions. Anyone having this issue?
     
    gaelleBe likes this.
  19. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
Ah, sorry, folks! I haven't gotten any notifications about the posts in this forum for months. I'll try to answer as many as possible, starting from the latest ones, but if I miss anything and the issue is still a stopper, please post again.
     
  20. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    I just tried some scenes with body tracking in Unity 2020.2.1f1, but didn't get any issues or crashes. Please e-mail me, if the issue persists, so we can try to debug it together.
     
  21. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    In the K4A-asset you can use the Y-position of the sensor interface transform to set the sensor height. Please look at this tip for more info: Azure Kinect Tips & Tricks | RF Solutions - Technology, Health and More (rfilkov.com)
     
  22. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
Six sensors connected to one PC would be quite a challenge, but it's not impossible. So far I've had customers who told me they used up to 5 sensors at the same time. In the case of multiple sensors, it's preferable to set the frame rate to 15 instead of 30 (according to the more experienced customers). The RGB resolution should be no more than 1920x1080. The depth mode depends on the required distance (640x576 NFOV for larger distances, 512x512 WFOV for near distances). There is no SDK API for the audio; the Kinect team said one should use the standard Windows audio API for that.
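If you configure the sensors directly through the Sensor SDK (outside the K4A-asset's manager settings), those recommendations roughly translate to a device configuration like the sketch below. This assumes the Microsoft.Azure.Kinect.Sensor C# wrapper; double-check the enum names against your SDK version.
Code (CSharp):
using Microsoft.Azure.Kinect.Sensor;

// Rough configuration matching the multi-sensor recommendations above:
// 15 FPS, color at 1080p or below, depth mode chosen by working distance.
var config = new DeviceConfiguration
{
    CameraFPS = FPS.FPS15,                     // 15 instead of 30 for multi-sensor setups
    ColorFormat = ImageFormat.ColorBGRA32,
    ColorResolution = ColorResolution.R1080p,  // no more than 1920x1080
    DepthMode = DepthMode.NFOV_Unbinned,       // 640x576, better for larger distances
    // DepthMode = DepthMode.WFOV_2x2Binned,   // 512x512, better for near distances
    SynchronizedImagesOnly = true
};

using (Device device = Device.Open(0))
{
    device.StartCameras(config);
    // ... capture / recording loop ...
    device.StopCameras();
}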
     
  23. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
  24. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    You need to change the Cubeman's Y-rotation from 180 to 0 degrees, as well.
     
  25. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    I think it's updated now.
     
  26. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
  27. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
You only need to download and import the latest release from the Unity Asset Store (now via the Package Manager in Unity). All updates are free of charge, and I always try to make them backward compatible.
     
  28. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    The K4A-asset does not support face tracking. It doesn't currently provide any components or demo scenes in this regard, too. If you need K2 face tracking, please consider using the K2-asset: https://assetstore.unity.com/packages/slug/18708
     
  29. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Instead, please look at the 'JointPositionView' & 'JointOrientationView' script components in 'AzureKinectExamples/KinectScripts/Samples'. Just use them in your scene or customize their code, according to your needs.

    If you need to make it work regardless of the presence of the sensor, you can add a sensor interface with DummyK4AInterface-component on it (or duplicate an existing sensor object and replace the sensor interface component with DummyK4AInterface).

    Finally, if you need to process the body recording in an external app, please look at this tip: Azure Kinect Tips & Tricks | RF Solutions - Technology, Health and More (rfilkov.com)
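For a rough idea of what such a component does, here is a stripped-down, hypothetical sketch. The namespace and the KinectManager method names used below (GetUserIdByIndex, IsJointTracked, GetJointPosition) are assumed and may differ slightly between asset versions; the actual JointPositionView script in KinectScripts/Samples is the authoritative reference.
Code (CSharp):
using UnityEngine;
using com.rfilkov.kinect;  // K4A-asset namespace (assumed)

// Hypothetical, simplified joint follower: moves this object to one tracked joint each frame.
public class SimpleJointFollower : MonoBehaviour
{
    public int playerIndex = 0;
    public KinectInterop.JointType joint = KinectInterop.JointType.HandRight;

    void Update()
    {
        KinectManager kinectManager = KinectManager.Instance;
        if (kinectManager == null || !kinectManager.IsInitialized())
            return;

        ulong userId = kinectManager.GetUserIdByIndex(playerIndex);
        if (userId != 0 && kinectManager.IsJointTracked(userId, joint))
        {
            // joint position as provided by the manager (sensor- or world-space, depending on setup)
            transform.position = kinectManager.GetJointPosition(userId, joint);
        }
    }
}
The same pattern works with a body recording played back through a dummy sensor interface, since the data still flows through KinectManager.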
     
  30. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Please look at this tip for the network-sensor setups: Azure Kinect Tips & Tricks | RF Solutions - Technology, Health and More (rfilkov.com) Please e-mail me about the issues you have in this regard, so we can try to debug & work around them together.

    And at this tip on how to calibrate and use multiple cameras in your scene(s): Azure Kinect Tips & Tricks | RF Solutions - Technology, Health and More (rfilkov.com)
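Regarding the earlier question about rendering two sensors in one coordinate system: once the multi-camera calibration has estimated the second sensor's pose relative to the first, aligning the meshes is just a matter of applying that pose to the second sensor's mesh root. A minimal Unity-side sketch, assuming you already have the sensor-to-sensor matrix from the calibration step (the component and field names here are illustrative, not the asset's API):
Code (CSharp):
using UnityEngine;

// Hypothetical helper: places the second sensor's mesh root so that both sensors
// render in the first sensor's coordinate system.
public class ApplySensorPose : MonoBehaviour
{
    public Transform secondSensorRoot;                        // parent of the mesh generated from sensor #2
    public Matrix4x4 sensor2ToSensor1 = Matrix4x4.identity;   // pose from your calibration step (assumed input)

    void Start()
    {
        // Decompose the 4x4 calibration matrix into a position and a rotation.
        Vector3 position = sensor2ToSensor1.GetColumn(3);
        Quaternion rotation = Quaternion.LookRotation(
            sensor2ToSensor1.GetColumn(2),    // forward axis
            sensor2ToSensor1.GetColumn(1));   // up axis

        secondSensorRoot.SetPositionAndRotation(position, rotation);
    }
}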
     
  31. Shisagi

    Shisagi

    Joined:
    Apr 25, 2018
    Posts:
    1
    This is an incredibly helpful asset. Thank you.

I'm trying to mirror my movements with the avatar facing the same direction as me (away from the camera). As I need to use gestures, I need the Kinect facing my front.
If I use camera-relative positioning, then when I walk towards or away from the Kinect, the avatar moves opposite to my intention.
If I do not use camera-relative positioning, I get the desired movement. The issue with not using camera-relative positioning is that the avatar gets instantiated at the AvatarMatcher's position, so walking in and out of the Kinect's view creates an offset with the avatar.
Could you give me some pointers on either flipping the front-back movement in mirrored mode, or instantiating relative to the camera?
    Thanks!
     
  32. MassModulesDesigner

    MassModulesDesigner

    Joined:
    Dec 27, 2017
    Posts:
    1
Hello roumenf,
I used two Azure Kinect cameras and the MultiCameraSetup sample scene for the setup.
I followed the Azure Kinect Tips & Tricks step by step.
But every time I enabled the 'Sync multi-cam frames' setting, nothing was displayed.
I checked, and it is because the body count is 0.
If I disable the 'Sync multi-cam frames' setting, it can get my body.
I would like to ask how I can make the cameras detect my body when the 'Sync multi-cam frames' setting is enabled.
     

Attached Files: 0.jpg, 1.jpg, 2.jpg
  33. WayneVenter

    WayneVenter

    Joined:
    May 8, 2019
    Posts:
    56
@rfilkov I'm struggling to get the FollowTransform to work, or to read the IMU data from the K4A. There is no "Detect floor for pose estimation" option as per your Tips page on positioning the sensor, and for the life of me I can't find a place to get the sensor's transform position while it is moving. In short, what I'm trying to do is keep the SceneMesh0 in one position and, as the K4A sensor moves, update the point cloud to cover multiple angles. Can you give me some guidance on how to retrieve the IMU data from the sensor?
     
  34. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Please open AvatarController.cs and look for 'posRelInvertedZ'. I commented out this setting in the latest release to avoid the clutter of settings with similar names, but obviously it is still needed. You need to uncomment two blocks of code - 1. where the setting is declared and 2. where it is used. If you have difficulties, please contact me and I'll send you my script.

    Then you need to set 'Pos Relative to Camera' and enable 'Pos Rel Inverted Z' at the same time, to get the avatar movement in the opposite Z-direction.
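Just for orientation, the effect of that flag boils down to a sign flip on the camera-relative Z movement. A hypothetical illustration (not the asset's actual AvatarController code):
Code (CSharp):
using UnityEngine;

// Hypothetical illustration of the inverted-Z behavior described above:
// a sign flip on the camera-relative depth axis of the avatar position.
public static class AvatarPositionUtil
{
    public static Vector3 ApplyInvertedZ(Vector3 cameraRelativePos, bool posRelInvertedZ)
    {
        if (posRelInvertedZ)
        {
            // walking towards the sensor now moves the avatar away from the camera, and vice versa
            cameraRelativePos.z = -cameraRelativePos.z;
        }
        return cameraRelativePos;
    }
}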
     
  35. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Are your cameras wired with a sync cable, as described here? If there are no frames coming, when 'Sync multi-cam frames' is enabled, this usually means there is significant difference between the timestamps of the camera streams. But I need to debug it a bit with the help of some synchronized recordings, in order to tell you what exactly is wrong in this case. If you'd like to provide me synched recordings, please e-mail me and I'll tell you how exactly to make them.
     
  36. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
Here is what you need to do; I just tried it:

Check the "missing" 'Detect floor for pose estimation' setting (it is enabled by default). Then I ran the scene and got the sensor interface transform updated right away.

Then I checked the 'SceneMeshS0' object and saw a very similar transform position & rotation (because of the FollowSensorTransform component). Please note the sensor index in FollowSensorTransform, as well.
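For reference, the core of such a follow component is just copying the estimated sensor pose every frame. A stripped-down, hypothetical sketch (the asset's real FollowSensorTransform additionally handles the sensor index and the reference pose):
Code (CSharp):
using UnityEngine;

// Hypothetical, simplified stand-in for a follow-transform component:
// copies another transform's pose (e.g. the estimated sensor pose) every frame.
public class SimpleFollowTransform : MonoBehaviour
{
    public Transform source;  // e.g. the sensor interface object whose pose gets estimated at startup

    void LateUpdate()
    {
        if (source != null)
        {
            transform.SetPositionAndRotation(source.position, source.rotation);
        }
    }
}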
     
  37. WayneVenter

    WayneVenter

    Joined:
    May 8, 2019
    Posts:
    56
Hi, I found the "Detect floor" setting, my mistake.
upload_2021-2-25_18-31-54.png
But here is the next problem: I don't want the SceneMesh0 to move and "jump" like this
upload_2021-2-25_18-33-1.png
See the animated GIF.
I want the SceneMesh0 to stay in a "static" location and the camera to move. Also, the X and Z position values do not change.
     

    Last edited: Feb 25, 2021
  38. WayneVenter

    WayneVenter

    Joined:
    May 8, 2019
    Posts:
    56
OK, I disabled "Follow Sensor Transform" on the SceneMesh0 and added it as a component to the Main Camera, but it is not behaving as I want. I want to see the Main Camera move as the sensor moves.

Edit: Further update. I tested Display Information and Update Transform. The X and Z position values never change from 0, only the Y value; and on rotation, only the X and Z values update, not Y. This does not look right.

More info: when changing the Reference Pose to "Color Camera Pose", it behaves differently.
     
    Last edited: Feb 25, 2021
  39. joycon3353

    joycon3353

    Joined:
    Mar 15, 2019
    Posts:
    2
    Hello.

I am testing the BackgroundRemovalDemo3 example.
I changed (Background Removal Manager -> Player Index = 0).

I want to use background removal by body index.

Problems arise when two or more people are recognized:

sometimes it draws another PlayerImage,
sometimes it gets another bodyIndex.

There is no problem when using Kinect v2.
The problems arise when using Azure.

// example log (getting the bodyIndex in the background removal by body index class)
    playerIndex : 0 userId : 1 bodyIndex : 0.
    playerIndex : 0 userId : 1 bodyIndex : 0.
    playerIndex : 0 userId : 1 bodyIndex : 0.
    playerIndex : 0 userId : 1 bodyIndex : 1.
    playerIndex : 0 userId : 1 bodyIndex : 0.
    playerIndex : 0 userId : 1 bodyIndex : 0.
    playerIndex : 0 userId : 1 bodyIndex : 0.
    playerIndex : 0 userId : 1 bodyIndex : 0.
    playerIndex : 0 userId : 1 bodyIndex : 1.
    playerIndex : 0 userId : 1 bodyIndex : 0.
    playerIndex : 0 userId : 1 bodyIndex : 0.
     
  40. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
OK. Let me explain. The current Kinect pose detection (the floor detector) assumes the sensor is static, not moving. In this regard, what is important for the first static sensor is its height above the ground (i.e. the Y-position) and its rotation. The rotation around the Y-axis is not really needed, because it's meaningless in this context. That's why you get only the Y-component in the transform position and 0 in the Y-rotation component. If there are other static sensors connected, their positions and rotations can be estimated automatically by the MultiCameraSetup scene, given the position and rotation of the first camera.

    If you'd like to create your own algorithm for pose estimation of moving camera, I can tell you how to get the IMU data or where to add your code in the Azure Kinect sensor interface.
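To illustrate the kind of pose estimation described above: with a static sensor, the gravity direction from the IMU accelerometer is enough to recover the sensor's pitch and roll, while the heading around the gravity axis is unobservable (which is why the Y-rotation stays 0). A minimal sketch of that math in Unity terms, assuming you already obtain an averaged accelerometer vector in the sensor's frame from your own IMU-reading code:
Code (CSharp):
using UnityEngine;

// Hypothetical: derive a static sensor's tilt from a gravity (accelerometer) sample.
// 'accelSample' is assumed to be the averaged accelerometer vector, already converted
// to Unity's left-handed, Y-up convention by your own IMU-reading code.
public static class SensorTiltEstimator
{
    public static Quaternion EstimateTilt(Vector3 accelSample)
    {
        // When the sensor is static, the accelerometer measures only gravity,
        // so the measured vector tells us which way "down" is in sensor space.
        Vector3 sensorDown = accelSample.normalized;

        // Rotation that maps the measured down-direction onto the world down-direction.
        // Heading (rotation around the world Y-axis) cannot be recovered from gravity alone.
        return Quaternion.FromToRotation(sensorDown, Vector3.down);
    }
}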
     
    Yunitea likes this.
  41. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
Hi, the body indices may change. They are changed internally by the body tracking SDK when users get detected. The body tracking of the Azure Kinect is different from the body tracking of Kinect-v2. Also, when users are close to each other, this can confuse the body tracking SDK and change the detected user for the respective player index. In all cases the player index and the user ID should remain the same.
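The practical takeaway is to key any per-person state on the user ID (or player index), never on the body index, since the latter can be reshuffled by the body tracking SDK. A small, hypothetical sketch of that pattern:
Code (CSharp):
using System.Collections.Generic;
using UnityEngine;

// Hypothetical pattern: keep per-person state keyed by the stable user ID,
// so it survives the body-index reshuffling done by the body tracking SDK.
public class PerUserState : MonoBehaviour
{
    private readonly Dictionary<ulong, Color> userColors = new Dictionary<ulong, Color>();

    public Color GetColorForUser(ulong userId)
    {
        if (!userColors.TryGetValue(userId, out Color color))
        {
            color = Random.ColorHSV();   // assigned once per user ID, not per body index
            userColors[userId] = color;
        }
        return color;
    }
}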
     
  42. deweyfang

    deweyfang

    Joined:
    Mar 8, 2021
    Posts:
    1
Hello, I'm new to this plugin and have encountered a problem. When using the built-in render pipeline, everything works fine. But when I made a project with HDRP, the preview still works just fine; however, when I build and run the project, it doesn't turn on my Azure Kinect (the IR light isn't lit any more). Does anyone have ideas on how to solve this?

Build target platform: Windows x86_64
     
  43. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Please find the Player's log file, after you run the EXE, and check in it what exactly went wrong. Here is where to find Unity log-files: Unity - Manual: Log Files (unity3d.com) If you can't understand what's wrong, please e-mail it to me, so I can take a look.
     
  44. lauchsuppe

    lauchsuppe

    Joined:
    Dec 6, 2014
    Posts:
    39
Are there any updates on Linux compatibility? I am currently trying to make the Azure Kinect SDK work with Unity on Linux (Ubuntu 18.04), but I'm getting a "k4abt" error when creating a Tracker.
     
  45. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Unfortunately I'm not a Linux guy and cannot do this alone. If you'd like to help in this regard, please e-mail me. k4abt-error would mean that the k4abt native library could not be found (or loaded for some reason).
     
  46. mariusz_str

    mariusz_str

    Joined:
    Feb 10, 2021
    Posts:
    2
@rfilkov First of all, thank you for a great package.

Just to let you know, a new version of the Body Tracking SDK has finally been released.
Any chance of the Unity package being updated to use it?
     
    koichikasai likes this.
  47. Lordmin

    Lordmin

    Joined:
    Mar 9, 2017
    Posts:
    62
    hello!



Can I control the 'Keystone' function using Azure Kinect, like in the link above?
     
  48. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Yes, the update will be out soon.
     
  49. seldemirov

    seldemirov

    Joined:
    Nov 6, 2018
    Posts:
    48
Hi, everyone! I am looking for an asset that has a gesture-control implementation. There is a mention of this in the description of this asset, but I did not find any demo videos, documentation, or user guides on the matter.
Has anyone already come across this, and can you tell me how it works?
     
  50. underkitten

    underkitten

    Joined:
    Nov 1, 2018
    Posts:
    30
For some reason performance is chugging in scenes like AvatarDemo. The laptop has a 2080 GTX, and other Unity projects work just fine. The FPS is not stable and drops every second. The laptop's GPU load is ~55-60% and the CPU is at 35%.
PointCloudDemo and BlobDetection work perfectly, though.
Followed the instructions: x86_64 build target;
Auto Graphics for Windows is OFF (set to Direct3D11).

AzureKinectViewer works perfectly.

Also tried on my desktop (3080) and did not notice frame drops.

EDIT: I guess it is something with my laptop settings (battery optimization in particular)
     
    Last edited: Mar 29, 2021