
Azure Kinect Examples for Unity

Discussion in 'Assets and Asset Store' started by roumenf, Jul 24, 2019.

  1. trifox

    trifox

    Joined:
    Jul 21, 2012
    Posts:
    5
    Thanks for the tip, but it doesn't work in an easy way. Using the depth from the color in a shader is still off in the same way. I'll leave it for now, but the color texture still seems a bit off, or maybe I'm just not getting it. Thanks for the fast reply.
     
  2. Kempfan

    Kempfan

    Joined:
    Apr 26, 2022
    Posts:
    1
    Hello, I have a question regarding the body tracking for the RealSense D400 series:
    Unfortunately I read too late that the standard Intel RealSense SDK doesn't include skeleton tracking, and that we need an additional SDK from Cubemos.
    While trying to purchase a license for that, I discovered that Intel doesn't sell it anymore and I'm not able to get a license key.

    Can someone tell me how to get a key for the Cubemos Skeleton Tracking SDK now, to use it with this Azure Kinect Examples package in Unity?

    In another forum there was advice to use Nuitrack instead of the Cubemos SDK for RealSense devices.
    Is it possible to integrate Nuitrack into this examples package in Unity instead?

    Thanks in advance
     
  3. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    862
    @rfilkov It seems that KinectManager.GetInfraredImageTex() returns null. Do I need to activate something to make it work? I can get the IR image data directly with KinectManager.GetRawInfraredMap() and upload to a texture without problems - but it would be convenient if the method worked. Perhaps I missed something.
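    For reference, this is roughly the workaround I'm using in the meantime. It's only a sketch - the exact GetRawInfraredMap signature and the width/height accessors may differ per asset version, so treat those names as assumptions to verify:
    Code (CSharp):
    using UnityEngine;
    using com.rfilkov.kinect;

    // Workaround: read the raw infrared map and upload it to a texture manually.
    public class InfraredToTexture : MonoBehaviour
    {
        public int sensorIndex = 0;
        private Texture2D irTex;

        void Update()
        {
            KinectManager km = KinectManager.Instance;
            if (km == null || !km.IsInitialized())
                return;

            // may take no argument in some versions of the asset
            ushort[] irMap = km.GetRawInfraredMap(sensorIndex);
            if (irMap == null)
                return;

            int w = km.GetDepthImageWidth(sensorIndex);   // assumed accessor
            int h = km.GetDepthImageHeight(sensorIndex);  // assumed accessor

            if (irTex == null)
                irTex = new Texture2D(w, h, TextureFormat.R16, false);

            // ushort[] -> byte[] (R16 = 2 bytes per pixel), then upload
            byte[] bytes = new byte[irMap.Length * sizeof(ushort)];
            System.Buffer.BlockCopy(irMap, 0, bytes, 0, bytes.Length);
            irTex.LoadRawTextureData(bytes);
            irTex.Apply();
        }
    }
    (The raw 16-bit IR values are small relative to the R16 range, so the texture looks nearly black unless it's normalised in a shader or on upload.)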
     
  4. matmat_35

    matmat_35

    Joined:
    Apr 20, 2018
    Posts:
    9
    Hello,
    I'm working with the "VfxPointCloudDemo" example. In this example the "PointCloudVertexMapK4A" texture is the full depth camera view; is there a trick to get only the users' depth texture inside the VFX graph?
    Thank you very much for the great package!
    Mathieu
     
  5. mchangxe

    mchangxe

    Joined:
    Jun 16, 2019
    Posts:
    69
    Hello, I am having trouble setting up a multi-Azure-Kinect setup. When I run the MultiCameraSetup scene, my configured interfaces are not used; instead, a new Kinect4Azure instance is created, activating only my first Azure Kinect. In the Azure Kinect Viewer software from Microsoft, I can see and start the second Azure Kinect with no problems.

    EDIT: Never mind, I figured it out - both interfaces' device sync mode needs to be set to Standalone instead of Master and Subordinate.
     
  6. kosowski

    kosowski

    Joined:
    Jun 19, 2014
    Posts:
    15
    Hi @rfilkov, I've been unable to play back any recorded .bag file taken with the RealSense D435. Is there any particular configuration of cameras and resolutions needed at record time in the RealSense Viewer app to make it work with the plugin?
    Thanks!
     
  7. mruce

    mruce

    Joined:
    Jul 23, 2012
    Posts:
    28
    Hey @rfilkov, I'm working on a virtual-fitting-room-like app, where the user sees real-time video from the Azure Kinect with clothes overlaid on top.
    The issue is - as far as I understand it - that the avatar is controlled by joint rotations and does not apply the user's joint positions (other than the root movement). Because of that, the joints are not always properly aligned with the camera stream.
    Is there a way to use joint positions when animating the avatar?
     
  8. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Yes, you can do that. First, please set the 'Get body frames'-setting of the KinectManager-component to 'Body and body index data', then set the 'Point cloud player list'-setting of the PointCloudTarget-component to '-1' (all users) or to a comma-separated list of the user indices you need.
     
  9. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Not really. All you need to do is:
     
  10. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Using joint positions will distort the model. I tried it when I started working on the fitting-room demo scenes, but didn't like the result at all. That's why I abandoned this approach back then.

    When the joints are not aligned with the RGB camera stream, I would recommend using the scaling factors of the ModelSelector-component (or AvatarScaler-component) in the scene to align the joints as well as possible with the color camera stream. Please start with 'Body scale factor': set it to 1 at the beginning and adjust it to get the best alignment. Then, if the arms or legs are still not properly aligned, adjust the 'Arm scale factor' or 'Leg scale factor' appropriately. Again, start from 1 and adjust to get the best alignment.
     
  11. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Did you set the 'Get infrared frames'-setting of the KinectManager-component to 'Infrared texture'?
     
  12. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Sorry, but I don't have plans to integrate Nuitrack into the K4A-asset. They have a weird licensing scheme. My advice would be not to use RealSense if you need body tracking.
     
  13. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Guys, sorry for the delayed replies. My work at the moment is very time demanding. I'll try to reply as soon as possible in the future, but let me say sorry now for any further delays.
     
    Last edited: Jun 30, 2022
  14. AllFatherGray

    AllFatherGray

    Joined:
    Nov 29, 2014
    Posts:
    17
    I noticed that this asset supports the Intel RealSense 400 series. The D455 camera supports 90 FPS. How can I unlock that frame rate? I see that there is an enum called DepthCameraMode that only goes to 30 FPS.
     
  15. lazyrobotboy

    lazyrobotboy

    Joined:
    Jul 1, 2020
    Posts:
    16
    Question regarding syncing two or more Azure Kinects:
    Has anyone tried to use the "MultiCameraSetup" scene provided with the package? The process works perfectly fine, but when checking the synced point clouds they are shifted relative to each other.
    I had to flip the point clouds in "Kinect4AzureInterface.cs" (x-axis, lines 447-450); could the shift be related to that change?
     
  16. ktymkntr

    ktymkntr

    Joined:
    Jun 28, 2022
    Posts:
    1
    I have a question about Kinect4AzureInterface.
    When it is set to "Device Streaming Mode: Save Recording", I can record correctly.
    However, when set to "Device Streaming Mode: Play Recording", the file I just recorded does not play correctly.
    The error below is displayed.
    ----
    AzureKinectException: result = K4A_RESULT_FAILED
    Microsoft.Azure.Kinect.Sensor.AzureKinectException.ThrowIfNotSuccess[T] (T result) (at <36c3c4c7c748482698d76ea90494f942>:0)
    Microsoft.Azure.Kinect.Sensor.Playback.OpenPlaybackFile (System.String filePath) (at <36c3c4c7c748482698d76ea90494f942>:0)
    Microsoft.Azure.Kinect.Sensor.Playback..ctor (System.String filePath) (at <36c3c4c7c748482698d76ea90494f942>:0)
    com.rfilkov.kinect.Kinect4AzureInterface.OpenSensor (com.rfilkov.kinect.KinectManager kinectManager, com.rfilkov.kinect.KinectInterop+FrameSource dwFlags, System.Boolean bSyncDepthAndColor, System.Boolean bSyncBodyAndDepth) (at Assets/AzureKinectExamples/KinectScripts/Interfaces/Kinect4AzureInterface.cs:298)
    com.rfilkov.kinect.KinectManager.TryOpenSensors (System.Collections.Generic.List`1[T] sensorInts, com.rfilkov.kinect.KinectInterop+FrameSource dwFlags) (at Assets/AzureKinectExamples/KinectScripts/KinectManager.cs:3412)
    UnityEngine.Debug:LogException(Exception)
    com.rfilkov.kinect.KinectManager:TryOpenSensors(List`1, FrameSource) (at Assets/AzureKinectExamples/KinectScripts/KinectManager.cs:3437)
    com.rfilkov.kinect.KinectManager:StartDepthSensors() (at Assets/AzureKinectExamples/KinectScripts/KinectManager.cs:3270)
    com.rfilkov.kinect.KinectManager:Awake() (at Assets/AzureKinectExamples/KinectScripts/KinectManager.cs:3064)
    --
    Failed opening Kinect4AzureInterface, device-index: 0
    UnityEngine.Debug:LogError (object)
    com.rfilkov.kinect.KinectManager:TryOpenSensors (System.Collections.Generic.List`1<com.rfilkov.kinect.DepthSensorBase>,com.rfilkov.kinect.KinectInterop/FrameSource) (at Assets/AzureKinectExamples/KinectScripts/KinectManager.cs:3438)
    com.rfilkov.kinect.KinectManager:StartDepthSensors () (at Assets/AzureKinectExamples/KinectScripts/KinectManager.cs:3270)
    com.rfilkov.kinect.KinectManager:Awake () (at Assets/AzureKinectExamples/KinectScripts/KinectManager.cs:3064)
    ----
    Incidentally, it appears to be reading the sensor even though it is in playback mode.
    I'm not sure what's going on.
     
  17. lazyrobotboy

    lazyrobotboy

    Joined:
    Jul 1, 2020
    Posts:
    16
    Another question, maybe someone besides @rfilkov can help?
    When displaying the camera image as a point cloud in Unity, I can't move my camera object up close without the whole point cloud fading away. Changing the coarse factor does not help, and I was not able to find another solution in the script. Has anyone encountered this before and found a solution?
     
  18. SarthakVANDAL

    SarthakVANDAL

    Joined:
    Sep 10, 2020
    Posts:
    2
    Hello all, not sure if anyone has tried this before, but this is regarding blob tracking. From my testing of the plugin, the blob IDs for the detected blobs jump from one to another: if a blob is detected at index 0 and another blob comes into the frame and gets detected, sometimes the later blob switches to index 0 and the first blob jumps to index 1. Any idea how to stop this from happening, so the blob IDs stay consistent until the blobs stop being detected?
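    In the meantime I'm thinking of stabilising the IDs myself, by matching this frame's blob centroids against last frame's and keeping my own ID map on top of whatever indices the plugin reports. A generic sketch (plain Vector3 centroids, nothing below uses the asset's actual blob API):
    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    // Generic nearest-neighbour ID stabiliser for blob centroids.
    public class BlobIdTracker
    {
        public struct TrackedBlob { public int id; public Vector3 centroid; }

        public float maxMatchDistance = 0.3f;  // metres - tune to your scene

        private readonly List<TrackedBlob> tracked = new List<TrackedBlob>();
        private int nextId = 0;

        public List<TrackedBlob> UpdateBlobs(List<Vector3> centroidsThisFrame)
        {
            var result = new List<TrackedBlob>();
            var unmatched = new List<TrackedBlob>(tracked);

            foreach (Vector3 c in centroidsThisFrame)
            {
                // find the closest blob from the previous frame
                int best = -1;
                float bestDist = maxMatchDistance;
                for (int i = 0; i < unmatched.Count; i++)
                {
                    float d = Vector3.Distance(c, unmatched[i].centroid);
                    if (d < bestDist) { bestDist = d; best = i; }
                }

                int id = (best >= 0) ? unmatched[best].id : nextId++;
                if (best >= 0) unmatched.RemoveAt(best);

                result.Add(new TrackedBlob { id = id, centroid = c });
            }

            // blobs that were not matched this frame are simply dropped
            tracked.Clear();
            tracked.AddRange(result);
            return result;
        }
    }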
     
  19. SarthakVANDAL

    SarthakVANDAL

    Joined:
    Sep 10, 2020
    Posts:
    2
    Another question I had is about making blob detection work with a multi-camera config. I want to try a configuration where two Azure Kinects are placed in series, so as to give us a bigger canvas to play with. Is there a way the multi-camera config can work with this kind of setup? Failing that, I was wondering if there is a way of programmatically flattening the fisheye distortion of the Kinect in WFOV mode, as WFOV mode would give me enough coverage and I wouldn't have to worry about making two Kinects work in NFOV mode. Thanks
     
  20. adielfernandez

    adielfernandez

    Joined:
    Apr 26, 2021
    Posts:
    4
    Hello @rfilkov ! I'm using your asset for a kinect azure project. I'm noticing a big performance dip when running the SceneMeshDemo scene. The example runs smoothly at 60 fps when visualizing the point cloud, but if we initialize the Kinect to use the _3840x_2160_30fps setting, the framerate drops significantly to about 30 fps. Can you help me understand what's happening here and how to boost that performance?
     
  21. GoldenChief

    GoldenChief

    Joined:
    Mar 26, 2020
    Posts:
    4
    Guys...I need your help.

    I just installed the Azure Kinect Examples for Unity package in Unity. Everything is working fine, but I've got two problems.

    1) I've set up two Kinects with this package and followed all the instructions in the documentation. In the body-tracking part there is a user body image in the bottom-right corner that acts as a debug view, to check whether data is being captured, but what I noticed is that the data only shows for one sensor.

    My question is: how can I show the user data for both sensors, so I can check whether data is being captured on the second Kinect sensor, and at what range the second sensor reads?

    2) By enabling the MultiCam Config I can see my body getting tracked on both sensors, but I still see the warning "Resources not found: multicam-config.json" in the console. How can I fix this warning? Also, when building the application, where should I place these JSON files in the build so that they get used?

    Can someone please help me with this...?
     
    Last edited: Oct 7, 2022
  22. xuan_celestial

    xuan_celestial

    Joined:
    Jul 17, 2018
    Posts:
    18
    Suggestions needed.

    Functions used:
    - Background removal / user mesh

    What I want to achieve:
    - Wrap the texture generated by the above two methods onto a model/mannequin as a GameObject

    What can be compromised:
    - It doesn't need to be very accurate, but the colors should at least match, e.g. cloth/skin colors

    Currently the main issue is that I couldn't apply the texture, because the generated texture is not meant to be used as a texture for a model.

    I'd really appreciate some suggestions - or is there a function included in the plugin that I missed?
     
  23. sabint

    sabint

    Joined:
    Apr 19, 2014
    Posts:
    26
    Hi, I'm trying to understand the "multi cam config" a bit more. Does it apply to just the point cloud data, or also to skeletal tracking? Is it possible to use this asset with multiple Azure Kinects to get better mocap (e.g., when hands get occluded from one camera's view)?
     
  24. starlitetw

    starlitetw

    Joined:
    Jul 10, 2017
    Posts:
    1
    Hi,
    I want to generate a collider from the user image so that falling objects can bounce off when they hit the edge of the character. I wanted to start by getting the image pixels, but after I use kinectManager.GetUsersImageTex() to get the texture and then Color[] pixels = texture.GetPixels();, the colors obtained are all 408. What should I do? Or is there a better way?
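    For context, this is roughly what I'm planning to build once I have usable pixels - just a rough sketch of the idea, not working code tied to the plugin (it assumes a CPU-readable Texture2D where user pixels have alpha > 0; if the manager returns a RenderTexture, it would need ReadPixels into a Texture2D first):
    Code (CSharp):
    using UnityEngine;

    // Sample the user-silhouette texture on a coarse grid and drop a small
    // BoxCollider2D wherever a user pixel is found, so falling objects can bounce off.
    public class SilhouetteColliderGrid : MonoBehaviour
    {
        public int gridStep = 16;      // sample every Nth pixel
        public float worldWidth = 4f;  // width of the silhouette in world units

        public void Rebuild(Texture2D userTex)
        {
            // remove the colliders from the previous update (a collider pool would be nicer)
            foreach (var old in GetComponents<BoxCollider2D>())
                Destroy(old);

            Color32[] pixels = userTex.GetPixels32();
            int w = userTex.width, h = userTex.height;
            float cellSize = worldWidth * gridStep / w;
            float worldHeight = worldWidth * h / (float)w;

            for (int y = 0; y < h; y += gridStep)
            {
                for (int x = 0; x < w; x += gridStep)
                {
                    if (pixels[y * w + x].a < 32)
                        continue;  // background pixel

                    var box = gameObject.AddComponent<BoxCollider2D>();
                    box.size = new Vector2(cellSize, cellSize);
                    box.offset = new Vector2(
                        (x / (float)w - 0.5f) * worldWidth,
                        (y / (float)h - 0.5f) * worldHeight);
                }
            }
        }
    }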
     
  25. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    862
    When I set KinectManager.getColorFrames, getDepthFrames, and getInfraredFrames to their respective TextureType enum versions of None, all textures are still being copied. This happens at line 823 in Kinect4AzureInterface. It's quite annoying, because it generates a lot of garbage per frame, according to the Profiler.

    1) Shouldn't there be a check so that textures are only copied when they are requested? Like so:
    Code (CSharp):
    if (kinectManager.getColorFrames != KinectManager.ColorTextureType.None && sensorCapture.Color != null && sensorData.lastColorFrameTime != currentColorTimestamp && !isPlayMode)
    2) Is there a more efficient way to copy the textures that avoid generating garbage?

     
  26. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    862
    It would be great to have an option to switch off everything related to BodyData when it's not needed in the project. For example, in my current project I'm only using depth, but body data is still being computed, generating unnecessary GC garbage.

     
  27. scimusmn

    scimusmn

    Joined:
    Jan 18, 2024
    Posts:
    1
    Hello @rfilkov and others here! Thank you for the wonderful asset!

    We are using BodyDataRecorderPlayer.cs to record skeletal tracking to text files for later playback. Everything works great, but the playback only works when a Kinect device is attached to the computer.

    The playback of each CSV line still works in BodyDataRecorderPlayer.cs without a Kinect, but the corresponding playback code in KinectManager.cs that should react to those updates doesn't function, because it relies on a live device.

    Do you know of a way to play back the .txt files without a Kinect device attached to the computer?

    Thank you for any guidance you can offer.
     
  28. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    862
    Please make KinectManager.GetSensorData public by default (it's internal now). There is a lot of useful data in KinectInterop.SensorData; for example, I'm using colorCamIntr and depthCamIntr.

    Also, in Unity 2023 there are a lot of deprecation warnings for FindObjectOfType<T>().
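    For anyone hitting the same warnings: the drop-in replacements in newer Unity versions (2023.1, backported to recent 2021.3/2022.2 patch releases) are FindFirstObjectByType and FindAnyObjectByType.
    Code (CSharp):
    // Deprecated in Unity 2023:
    KinectManager kmOld = Object.FindObjectOfType<KinectManager>();

    // Replacements - FindFirstObjectByType gives a deterministic result,
    // FindAnyObjectByType is faster when any instance will do:
    KinectManager km1 = Object.FindFirstObjectByType<KinectManager>();
    KinectManager km2 = Object.FindAnyObjectByType<KinectManager>();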