Azure Kinect Examples for Unity

Discussion in 'Assets and Asset Store' started by roumenf, Jul 24, 2019.

  1. josip-sarlija

    josip-sarlija

    Joined:
    Nov 5, 2014
    Posts:
    1
    Hi,
    Is there a way to set the Kinect height and angle, or to enable automatic angle detection like in the Kinect-v2 versions?
     
  2. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    josip-sarlija likes this.
  3. mKaddour

    mKaddour

    Joined:
    Sep 10, 2018
    Posts:
    3
    Hi!
    I'm trying to use the mocap scene with my own model.
    The problem is that the 3D model isn't animated.

    I attached the same scripts to it as on the provided model.
    Thank you.
     
  4. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    The idea of the mocap-animator scene is to provide a simple tool for creating humanoid animations in Unity. The animation is recorded by default to KinectDemos/MocapAnimatorDemo/Animations/MocapRecording, and is used by the RobotAnimated-animation controller in the same folder. This animation controller is used by the RobotAnimated-game object in the scene.

    You can use either the animation controller directly, or include the recorded animation in your own animation controller. This means you can retarget the same animation to your own humanoid model in your own scene. This is a feature of the Mecanim animation system in Unity. You don't need to put your model into the mocap-animator scene.
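    Purely as an illustration of the retargeting idea (not code from the asset; the field assignments and the clip-slot name are assumptions), a runtime sketch could look like this:

    using UnityEngine;

    // Illustration only: play the recorded mocap clip on another humanoid model
    // by overriding a clip slot of its existing animator controller at run time.
    public class RetargetRecordedMocap : MonoBehaviour
    {
        public Animator humanoidAnimator;        // Animator of your own humanoid model
        public AnimationClip recordedMocapClip;  // e.g. the recorded MocapRecording clip, assigned in the Inspector
        public string clipSlotToReplace = "MocapRecording";  // assumed clip-slot name in the controller

        void Start()
        {
            // Wrap the model's current controller and swap in the recorded clip.
            var overrides = new AnimatorOverrideController(humanoidAnimator.runtimeAnimatorController);
            overrides[clipSlotToReplace] = recordedMocapClip;
            humanoidAnimator.runtimeAnimatorController = overrides;
        }
    }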
     
  5. mKaddour

    mKaddour

    Joined:
    Sep 10, 2018
    Posts:
    3
    Hi,
    thank you for your answer, I'll try to put everything in my own scene like you said.
    By the way, your asset is awesome; it makes developing on Azure Kinect much faster!
    Keep it going like this, you're awesome!
     
    roumenf likes this.
  6. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Thank you!
     
  7. lomaikabini

    lomaikabini

    Joined:
    Feb 28, 2014
    Posts:
    16
    Hi All,

    Is it possible to use BackgroundRemovalDemo1 in portrait mode?
     
  8. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
  9. lomaikabini

    lomaikabini

    Joined:
    Feb 28, 2014
    Posts:
    16
  10. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hm. I just tested all background removal scenes with 9:16 aspect ratio in the Editor, and all seem to work as expected. What does not work for you?
     
  11. lomaikabini

    lomaikabini

    Joined:
    Feb 28, 2014
    Posts:
    16
    It seems that the min/max positions from ApplyForegroundFilterByBody() don't fit in portrait mode, and the image gets cut off in unexpected places.
     
  12. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    OK. Please contact me to share more details about your issue.
     
  13. fooldome

    fooldome

    Joined:
    May 29, 2012
    Posts:
    9
    Hi,
    I have a scene using Microsoft's Kinect v2. It works great in the Editor, but when I build, the Kinect doesn't turn on. I'm sure I must be missing something. Can you help?
    AMAZING ASSET BTW
     
  14. AJMaceikaBrightLine

    AJMaceikaBrightLine

    Joined:
    Dec 13, 2019
    Posts:
    4
    Hey Rumen,
    I am using the background removal with the camera oriented clockwise and am getting a weird offset of the masking. It looks like the mask and the RGB feed are just slightly off in one direction. Have you tested the background removal with the camera oriented this way? If so, did you need to change anything? I am also combining the functionality of the fitting room and the background removal.
    Any help would be greatly appreciated.
     
  15. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    First please check that you are building for Windows. Then find the player's log-file and look at its contents, to find out what exactly went wrong. Here is where to find the Unity log-files: https://docs.unity3d.com/Manual/LogFiles.html
     
  16. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Yesterday I got a recording from a customer with a setup similar to yours (the camera was turned clockwise). In this case you don't need to change the screen aspect ratio or resolution; just turn the screen clockwise too, to keep the maximum possible resolution. The body tracking needs to be configured to work with a clockwise camera, as well.

    As far as I can see, background removal demo 3 (filter by body index) works correctly. Demo 1 (filter by body joints) needs some adjustment of the 'Offset to floor'-setting, because the left part of the body gets filtered out. When the sensor is turned clockwise, the user's left is actually the sensor's bottom. Also, I don't see any noticeable discrepancies between the depth and RGB streams.
     
  17. AJMaceikaBrightLine

    AJMaceikaBrightLine

    Joined:
    Dec 13, 2019
    Posts:
    4
    Thank you very much. I will test the body-index removal and let you know.
     
  18. mKaddour

    mKaddour

    Joined:
    Sep 10, 2018
    Posts:
    3
    Hello!
    I'm trying to use the "Grounded Feet" option of the AvatarController script, but I have a little problem: my 3D model keeps going up and down and doesn't stop.
    Do you have an idea of what is causing this?
    I'm using it in a scene where I want to do live mocap.
    Thank you!
     
  19. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi, it looks like a physics issue. If your 3d model has a Rigidbody-component, check that its 'Use gravity'-setting is disabled and its 'Is kinematic'-setting is enabled. The other option (as far as I remember) is to disable the 'Vertical movement'-setting of the AvatarController.
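    If you prefer to set this from code instead of the Inspector, a minimal sketch (just standard Unity API, nothing specific to the asset) would be:

    using UnityEngine;

    // Attach to the avatar model: keeps physics from fighting the Kinect-driven motion.
    public class DisableAvatarGravity : MonoBehaviour
    {
        void Awake()
        {
            var body = GetComponent<Rigidbody>();
            if (body != null)
            {
                body.useGravity = false;  // no gravity, so the model doesn't keep sinking
                body.isKinematic = true;  // let the AvatarController drive the transform directly
            }
        }
    }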
     
  20. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    858
    Hi, thanks for the great asset!

    How do I loop playback of the recording set in Kinect4AzureInterface.recordedFile? It plays the file once and then stops.
     
  21. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi, please see the screenshot.
     

    Attached Files:

    cecarlsen likes this.
  22. jarmohh

    jarmohh

    Joined:
    Dec 30, 2017
    Posts:
    12
    Hi, nice work!

    One question: in my use case there is only one avatar. I need the avatar's z to be the real distance from the Kinect camera. It seems the place where the person is detected has z = 0, and moving backward and forward gives negative and positive z. Is there a setting for that?
     
  23. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I'm not quite sure I understand your question, but there is a 'Pos relative to camera'-setting of the AvatarController-component. For reference, see how U_CharacterFront-avatar moves in KinectAvatarsDemo1-scene.
     
    jarmohamalainen likes this.
  24. Invent4

    Invent4

    Joined:
    Aug 20, 2012
    Posts:
    15
    Hello,

    Has anyone managed to detect the HandState (Open/Closed), given that the official SDK doesn't include it?

    I'm trying to solve it with the distances between the joints (thumb, hand and hand tip), but it's not good enough.

    I have some projects based on HandState from the old K2, so I need to find a solution for this.

    Any help would be welcome.

    Thanks,
     
  25. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I have also tried to work around this issue (the same way you did), but without major success. The best solution would be to have a classifier for the hand states in the Body Tracking SDK. Microsoft has all the resources to do it. That's why I posted this feature request: https://feedback.azure.com/forums/9...-articulated-hand-tracking-and-classifiers-fo and this issue on the SDK issues page: https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1053 Feel free to upvote or comment there, if you like.

    A partial solution, suggested by another customer, would be to generate hand grips when the hand stays in place, but I prefer to wait for a more complete solution.
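    For reference, the joint-distance heuristic discussed above could be sketched roughly like this; it takes the wrist, hand-tip and thumb positions as plain Vector3 values (obtained however your setup provides them), and the threshold factor is an arbitrary assumption that needs tuning:

    using UnityEngine;

    // Rough open/closed heuristic: when the fingers curl in, the hand tip moves towards
    // the wrist, so the tip-to-wrist distance shrinks relative to the wrist-to-thumb
    // distance (used here as a crude hand-size reference).
    public static class HandStateHeuristic
    {
        public static bool IsHandOpen(Vector3 wrist, Vector3 handTip, Vector3 thumb,
                                      float openFactor = 1.5f)  // tuning value, pure assumption
        {
            float handSize = Vector3.Distance(wrist, thumb);
            if (handSize < 0.001f)
                return false;  // degenerate data - treat as closed/unknown

            float tipDistance = Vector3.Distance(wrist, handTip);
            return tipDistance > handSize * openFactor;
        }
    }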
     
  26. Invent4

    Invent4

    Joined:
    Aug 20, 2012
    Posts:
    15
    Yes, I've already upvoted the feature request at the link.
    But while waiting for an official solution, I'm trying other approaches.

    I think I'm close to some conclusions with the joints, but what about the size of the hand blob?

    I'm not familiar with the blob code yet, but I will give it a try. Suggestions are welcome.
     
    roumenf likes this.
  27. jarmohamalainen

    jarmohamalainen

    Joined:
    Apr 7, 2019
    Posts:
    6
    Thanks. I got it now.
     
  28. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    858
    @roumenf I keep finding neat optimization tricks in your code, very inspiring. I came across your SetComputeBufferData method inside the KinectInterop class that uses reflection to access InternalSetNativeData. I haven't seen anyone use this before. I assume that SetData copies the data, while InternalSetNativeData just sets a pointer. Are there any downsides to this, apart from minor garbage?

    EDIT: On the GPU side there may be room for further optimization. I see that the UserMeshShader does 38 buffer reads per vertex. Are you aware of this?
     
    Last edited: Feb 1, 2020
  29. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Actually, I like to read code and have some C background as well. I saw InternalSetNativeData() used by someone, and then researched it a bit in the Unity library. It is used by the NativeArray-related methods, which means it works faster than going through managed data structures. And when it comes to copying arrays of data, I always prefer the good old C way, because it is usually optimized to use the fastest processor instructions. I'm not aware of any downsides so far.

    Thank you for profiling UserMeshShader! I'll take a look. This shader has its downsides and my intention is to replace it with something else soon.
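    For readers curious about the pattern itself, here is a rough sketch (not the asset's actual code): it looks up the internal method via reflection and falls back to the public SetData if the lookup fails or the assumed parameter list doesn't match your Unity version. It also requires 'Allow unsafe code' to be enabled in the project:

    using System;
    using System.Reflection;
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using UnityEngine;

    // Sketch: push a NativeArray<float> into a ComputeBuffer, preferring the internal
    // pointer-based InternalSetNativeData and falling back to the public SetData.
    public static class ComputeBufferFastSet
    {
        static readonly MethodInfo internalSetNativeData =
            typeof(ComputeBuffer).GetMethod("InternalSetNativeData",
                BindingFlags.Instance | BindingFlags.NonPublic);

        public static unsafe void SetFloatData(ComputeBuffer buffer, NativeArray<float> data)
        {
            if (internalSetNativeData != null)
            {
                try
                {
                    IntPtr ptr = (IntPtr)NativeArrayUnsafeUtility.GetUnsafeReadOnlyPtr(data);

                    // Assumed parameter order: (data, nativeBufferStartIndex, computeBufferStartIndex,
                    // count, elemSize). Verify against the ComputeBuffer source of your Unity version.
                    internalSetNativeData.Invoke(buffer,
                        new object[] { ptr, 0, 0, data.Length, sizeof(float) });
                    return;
                }
                catch (Exception)
                {
                    // Signature mismatch or invocation failure - fall through to the public API.
                }
            }

            buffer.SetData(data);  // public API path
        }
    }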
     
    cecarlsen likes this.
  30. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    858
    Thanks for the info @roumenf, I'm not doubting your C skills =)

    I have a couple of proposals / feature requests.

    1) The console messages from KinectManager and Kinect4AzureInterface are crowding my console. It would be wonderful to have an option like consoleMessagesEnabled (a blunt interim workaround is sketched below).

    2) Some names may be misleading. KinectManager also manages RealSense, so perhaps it should be named "DepthSensorManager". And the package itself is named AzureKinectExamples; I think it deserves a more general name, like "DepthSensorTools", "DepthSensors" or "DepthSensorFoundation". Perhaps it started out as examples, but I think it offers more convenience than that at this point.

    EDIT, removed a topic.
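    For what it's worth, a blunt interim workaround for 1) (plain Unity API, nothing specific to the asset) is to raise the global log filter so informational Debug.Log messages are dropped; note this also hides your own Debug.Log calls:

    using UnityEngine;

    // Workaround sketch: drop informational Debug.Log messages globally,
    // while keeping warnings, errors and exceptions in the console.
    public class QuietConsole : MonoBehaviour
    {
        void Awake()
        {
            Debug.unityLogger.filterLogType = LogType.Warning;
        }
    }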
     
  31. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Thank you for the suggestions! Feel free to e-mail me directly, if you have more or need to discuss anything.

    Regarding the naming: The name, structure and API of the KinectManager and KinectInterop come from the previous Kinect-related assets (i.e. Kinect and Kinect-v2). And I'd like to maintain this historical consistency. This way it may be easier for the users of the previous assets to upgrade to this one (or at least I hope so).

    AzureKinectExamples is named after the Azure Kinect (the youngest child in the Kinect family), but my repo is still called DepthSensorExamples. That name is more general, but I'm not sure it would be as understandable to prospective users. And I wouldn't dare to call my humble efforts to make depth cameras more accessible a Foundation :)
     
    cecarlsen likes this.
  32. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    I have been using the MultiCameraSetup to create a calibration file, and the cameras appear to be working correctly after going through the calibration process. If I want to use the multi-camera configuration in the RecorderDemo, do I need to create multiple children for the cameras I am using (i.e. Kinect4Azure0, 1, etc.) under the KinectController, or will this be taken care of by checking the 'Use Multi Cam Config' box?
     
  33. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Just enable the 'Use Multi Cam Config'-option of KM. When you run the scene, the KinectManager will load the config file, and then create & position the configured sensor interfaces accordingly.
     
  34. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    In the Kinect4Azure interface (script), do I need to change the standalone settings to master and subordinate accordingly, or leave them as 'standalone'? It appears to be working with the 'standalone' setting, but I am not sure if this is going to cause problems.
     
  35. pacheco_unity371

    pacheco_unity371

    Joined:
    Aug 21, 2019
    Posts:
    2
    Hi,

    The BackgroundRemovalDemo, FittingRoomDemo and OverlayDemo scenes start, but they are not working, and the log does not show any error. Can you help me?

    [Screenshot: KinectFittingRoom1 log]

    Installed versions:
    Azure Kinect Sensor SDK (v1.3.0)
    Azure Kinect Body Tracking SDK (v1.0.0)
    Unity 2019.2.13f1 Personal
     
  36. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    The sensors can always work as standalone, i.e. each one works as an independent sensor. If you configure them as master/subordinate in the K4A-interface, you need to wire them accordingly, too. In this case the subordinate sensor will only start working when it receives a signal from the master, and will stay synced to the master afterwards.
     
  37. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I don't see user detection messages in the console. All these scenes require a user in front of the camera. Make sure the users are properly detected in the Azure Kinect Body Tracking Viewer.

    If the issue persists, please e-mail me, mention your invoice number and send the Editor's log file, so I can take a look. Here is where to find Unity log files: https://docs.unity3d.com/Manual/LogFiles.html
     
  38. pacheco_unity371

    pacheco_unity371

    Joined:
    Aug 21, 2019
    Posts:
    2
    In Azure Kinect Viewer, the camera was not loaded correctly. Solved!
     
    roumenf likes this.
  39. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Would you recommend one method over the other?

    Also, I am using four Kinect V3 cameras and going through the calibration process. I have a new workstation with two GPUs, a pretty fast processor and plenty of RAM. How long would you estimate the calibration process should take for four cameras? It appears to be taking over 10 minutes, or maybe longer. I had to stop the process because I needed the computer for something else. Thanks for any recommendations.
     
  40. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    It depends on whether you need synchronization between the cameras or not.

    I'm not sure what calibration process you mean. If you mean the MultiCameraSetup-scene, the process should take less than a minute, apart from the manual adjustments at the end. If you have issues with it, please e-mail me and send me the Editor's log, so I can take a look. Here is where to find Unity log-files: https://docs.unity3d.com/Manual/LogFiles.html

    By the way, it's not Kinect V3, but V4 instead (aka Azure Kinect, Kinect 4 Azure, K4A). V3 was the depth tracking device used in HoloLens 1.
     
  41. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Thank you. The calibration appears to be working, and I got it to complete 100% by walking back and forth in front of the cameras. It took about 2 minutes max. I have all four cameras working now.

    I'd like to re-calibrate the cameras and sync them. I have them synced outside Unity using the Microsoft utility, connected in a daisy-chain configuration. For Unity, in the MultiCameraSetup, I created Kinect4Azure0, 1, 2, 3 and assigned the device indexes 0, 1, 2 and 3. Does the program automatically assign the cameras to the sensor-interface objects based on the indexes I give to each? In other words, is the master camera automatically assigned to device index 0, the first subordinate to device index 1, and so on for the other subordinates?
     
  42. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Ah, yes. I forgot to mention that one person should be visible to all cameras until the calibration completes. Here is the tip: https://rfilkov.com/2019/08/26/azure-kinect-tips-tricks/#t10

    To your question: The device indexes should be the same as detected by the Azure Kinect SDK. The simplest way to find these indexes is to open 'Azure Kinect Viewer' and look at the dropdown box. The 1st (topmost) item is device #0, the 2nd one is device #1, the 3rd one is device #2, etc. These same indexes should be set as 'Device index' in the respective AzureKinectInterface-components in MultiCameraSetup.
     
  43. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    I did read the instructions on the website, but I noticed that if I do not move in front of the cameras, the calibration progress bar does not update. This makes sense if I am trying to get the cameras to track me. Thanks!
     
  44. TSStefan

    TSStefan

    Joined:
    Feb 18, 2019
    Posts:
    2
    Hey!
    First off: great asset. I'm really impressed and super happy with it.
    I'm just stuck on rotating the Azure Kinect and displaying its output on a portrait-mode display.
    I followed 'Setting up position and rotation': https://rfilkov.com/2019/08/26/azure-kinect-tips-tricks/#t9
    It looks like everything is oriented correctly in relation to everything else (floor "below" the rig, etc.), but I can't rotate the camera output.

    Could you give me some pointers?
    Thanks a lot!
     

    Attached Files:

  45. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    I created a very simple C++ program to obtain the body joint positions, similar to what I did with Unity. I am using the BodyDataRecorder in the Unity recorder demo, but my position-vs-time data is scaled very differently. Do you have any recommendations for properly scaling the data extracted from the cameras, so that when it is brought in as the BodyRecording.TXT data for the BodyDataRecorder and the avatar's motion is played back, it fits the scale of the demos?
     
  46. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    If the problem is that the camera output is also rotated, please disable the FollowSensorTransform-component of the MainCamera in the scene. It makes the camera "see" the world the same way the sensor sees it. Instead, in this case you can set the camera's position and rotation manually.
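    If you'd rather do that from code than in the Inspector, here is a tiny sketch; the component is looked up by name so the snippet doesn't depend on the asset's namespace, and the position/rotation values are placeholders to replace with your own:

    using UnityEngine;

    // Sketch: stop the main camera from following the sensor pose and place it manually.
    public class ManualCameraPose : MonoBehaviour
    {
        void Start()
        {
            // Look the component up by type name, so this compiles without referencing the asset's namespace.
            var follow = Camera.main.GetComponent("FollowSensorTransform") as Behaviour;
            if (follow != null)
                follow.enabled = false;

            // Placeholder pose - replace with the position/rotation that fits your portrait setup.
            Camera.main.transform.SetPositionAndRotation(
                new Vector3(0f, 1.5f, 0f),
                Quaternion.identity);
        }
    }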
     
  47. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I'm not sure I understand your issue. The BodyDataRecorder saves the data of all body joints of all currently detected bodies. The positions are in meters and the rotations are in degrees. If this doesn't answer your question, please feel free to e-mail me with more details about the scaling issue you see when replaying the recorded data.
     
  48. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Thanks, that helps. Let me do some extra checking on my end first.
     
  49. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    While I am researching the scaling factors, I have another question: I am trying to set up two cameras in one room and two other cameras in another room, all connected to the same computer. Is this possible? If so, I will have two people standing in front of the pairs of cameras in each room for the calibration. Do you have any recommendations?
     
  50. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Is there an area visible to all four cameras, or do they track two separate rooms in pairs? If they share a common area, the person should stay/move there and be visible to all cameras. Only one person should be visible to the cameras for the calibration to complete.