Azure Kinect Examples for Unity

Discussion in 'Assets and Asset Store' started by roumenf, Jul 24, 2019.

  1. caseyfarina

    caseyfarina

    Joined:
    Dec 22, 2016
    Posts:
    8
    Thank you so much for creating a great tool! I've tried running multiple RealSense cameras with the Cubemos tracking add-on. It looks like only the first sensor is recognized by Cubemos. Is there any way to create multiple skeletal tracks using RealSense and Cubemos? @rfilkov
     
  2. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Replied by e-mail.
     
  3. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    I think this may be an issue of the Azure Kinect Body Tracking SDK when tracking certain people. Unfortunately I can't reproduce the issue here, and would need a recording made with the k4arecorder tool to take a closer look at it. If you'd like to provide a recording, please e-mail me for instructions on how to make it and how to send it over to me.
     
  4. alexbofori

    alexbofori

    Joined:
    Aug 27, 2016
    Posts:
    8
    Hello,

    I have a few questions (about skeleton tracking), just to make sure I understand it correctly before I dive deep into it:

    1. Do I need a CUDA-capable GPU? Are AMD APUs with integrated graphics supported, e.g. the desktop 5600G / 5700G or the notebook 4800U / 5800U?

    2. Is the RealSense D455 supported, including dual or triple setups? Would you recommend it over the D435 for skeleton tracking?


    Thank You!
     
  5. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Hi. To your questions:
    1. I would recommend a CUDA-capable GPU for better performance. CUDA is the default body-tracking processing mode in the K4A-asset. But Body Tracking SDK v1.1 also allows the processing mode (used by the underlying onnxruntime) to be set to DirectML or TensorRT, so other GPUs are supported as well. This is a setting of the Kinect4AzureInterface-component in the scene (see the sketch after this list).

    2. I have not tested the D455, but it should be supported. Please note though, Intel does not provide a body tracking SDK for its RealSense sensors; instead, it recommends using the Cubemos skeleton tracking SDK. This requires an update of the RealSenseInterface-script in the K4A-asset. If you install the Cubemos SDK and need the RS-interface update, please e-mail me, tell me your invoice or order number, and I'll send it over to you. Otherwise, I would recommend the D415.
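
    A minimal sketch of switching the processing mode from a script (not the asset's exact API - the 'bodyTrackingProcessingMode' field is mentioned later in this thread, but the namespace and the enum value name here are assumptions; check the Kinect4AzureInterface source for the real ones):

    Code (CSharp):
    // Hedged sketch: selecting the body-tracking processing mode at startup.
    // 'bodyTrackingProcessingMode' is a field name referenced later in this thread;
    // 'BodyTrackingProcessingMode.GpuDirectML' is a hypothetical enum value.
    using UnityEngine;
    using com.rfilkov.kinect;  // assumed namespace of the K4A-asset scripts

    public class SetBtProcessingMode : MonoBehaviour
    {
        void Awake()
        {
            Kinect4AzureInterface k4aInterface = FindObjectOfType<Kinect4AzureInterface>();
            if (k4aInterface != null)
            {
                // e.g. use DirectML on non-NVIDIA GPUs
                k4aInterface.bodyTrackingProcessingMode = BodyTrackingProcessingMode.GpuDirectML;
            }
        }
    }

    The same setting is normally changed in the Inspector; the script form just shows where it lives.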
     
  6. illsaveus

    illsaveus

    Joined:
    Nov 19, 2016
    Posts:
    3
    Fantastic, I'll send over an email to you right away!
     
  7. pouria77

    pouria77

    Joined:
    Aug 13, 2014
    Posts:
    36
    Hello

    We're using the Kinect Azure for a dancing demo application, in which people dance in front of a wide screen and then we'll see their cutout against some background (maximum 3 people at the same time).

    I'm using the KinectManager and background remover in my scene, and I have an issue: the Kinect only covers about 75% of the screen, meaning the tracked area doesn't extend far enough towards the left and right edges for the user.

    I thought the first solution would be to change the Depth Mode to WFOV (as is possible in the Azure Kinect Viewer), but I haven't found a way to change that from inside Unity, and none of the other options seem to have much effect (including the depth camera resolution in the inspector).

    Does anyone know how I can fix it?

    I have attached a photo of my settings on the Kinect manager and background remover.
    Thanks a lot
     

    Attached Files:

    Last edited: Oct 6, 2021
  8. matmat_35

    matmat_35

    Joined:
    Apr 20, 2018
    Posts:
    9
    Hello,
    i used the "Azure Kinect Examples" with the kinect v2 in Hdrp but it seems that there are no more "Smoothing" & "Velocity Smoothing" parameters in the Kinect manager like in the "Kinect v2 examples".
    These parameters were very useful...
    Best
    Mathieu
     
  9. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    You can set the depth-camera mode (in terms of resolution and NFOV/WFOV) in the Kinect4AzureInterface-component settings. See below.

     
  10. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Hi Mathieu,
    Yes, you are right! I have to bring these filters to the K4A-asset, too. Please e-mail me about this issue, so I don't forget.
     
  11. pouria77

    pouria77

    Joined:
    Aug 13, 2014
    Posts:
    36

    I tried it, but not only did it not change the width, it also made the camera produce this weird output. I made a video to show what I mean:



    As you can see, the far left of the camera coverage still only reaches the center of the image (our target output resolution is 4992x1080), but the strange thing is that output.

    Am I doing something wrong in the settings?
     
  12. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Just tried it in the demo scene:



    Please note though, the WFOV modes work well at close distances only - up to 3-3.5 m max.
     
    Last edited: Oct 8, 2021
  13. pouria77

    pouria77

    Joined:
    Aug 13, 2014
    Posts:
    36
    Thanks for getting back to me.

    I think I need to explain my issue more clearly.

    Based on the Kinect's color camera specs, if the user stands 9 feet from the camera, they should be able to move within an area about 15 feet wide that is covered by the Kinect. I uploaded an image to demonstrate.
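
    (As a rough sanity check - my own back-of-the-envelope numbers, not the figures from the spec image: for a horizontal field of view θ at distance d, the covered width is w = 2·d·tan(θ/2). The color camera's nominal 90° horizontal FOV at 9 ft gives about 18 ft, but the NFOV depth camera's 75° horizontal FOV gives only about 2·9·tan(37.5°) ≈ 13.8 ft, and body tracking needs depth - so the usable width is set by the depth camera, not by the color camera specs.)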

    In my test, the area I can cover before going out of bounds is only around 8 feet.
    I used the BackgroundRemovalDemo2 scene from the asset for these tests.

    This is demonstrated better in these videos that I made :




    My screen resolution is 4992x1080 btw.

    I tried different settings for the color camera mode and the depth camera mode, with different resolutions, but none of them have much effect on the range.

    What am I missing?

    Thanks again
     

    Attached Files:

    Last edited: Oct 8, 2021
  14. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
    Hi, I am facing an issue with detection of some clothing colors. With blue and green the detection seems weak, and with a black woolen kind of jacket it is not detected at all. Are there any settings that would improve this? Also, what light and environment settings would you suggest for better detection, with regard to where the Kinect is placed?
     
  15. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
    We are facing the same issue, mostly with light-colored faces and when the light is more intense.
     
  16. BenrajSD

    BenrajSD

    Joined:
    Dec 17, 2020
    Posts:
    25
    One more thing, @rfilkov: is there any possibility to restrict the shoulder angle? I want to restrict it to the A-pose on the model; in the rest pose the hands penetrate the body. I tried the bone angle, but it affects other parts. Are there any settings or another solution to control that bone, and likewise for the T-pose, so it is not allowed to go beyond it? The muscle settings in the editor don't help, since the Kinect overwrites those values.
     
  17. pouria77

    pouria77

    Joined:
    Aug 13, 2014
    Posts:
    36
    A quick question: Is there a way to use the background remover with only the color camera, or does it have to use both the depth and color cameras?
     
  18. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    No, you can't remove the background without the depth camera. It's needed for the depth image transformation to the color camera space, and for the body tracker as well.

    To your previous videos: Let me explain a bit how the background remover and the Azure Kinect sensor work. The background remover uses the depth image transformation to the color-camera space. If you look closer, you will see the color camera resolutions have 2 aspect ratios (16:9 and 4:3). The depth camera modes are of two types, too - NFOV and WFOV. NFOV modes are narrower (hence the name), have a hexagonal shape and can detect farther - up to 5.4 meters for 320x288, and 3.8 meters for 640x576. The WFOV modes are wider, have an oval shape and cover the color camera image better, but the range of detection is shorter - 2.9 meters for 512x512 and 2.2 meters for 1024x1024.

    When you transform the NFOV depth image (e.g. 640x576) to the color camera, it doesn't cover it well, and you can see its hexagonal area of detection in the video. When you transform a WFOV depth image (e.g. 1024x1024), it covers the color camera image fully, but the user needs to stay really close to the sensor. This mode works at 15 FPS only, and it's not recommended for body tracking either. So, as I suggested above: in your case, please use the 512x512 depth mode instead, to get a compromise - good color image coverage, decent body tracking and a farther max distance. This is the mode I used in my screenshot above (outlined there).

    Last, whatever your screen resolution is, the coverage will be within the aspect ratio of the selected color camera resolution, i.e. 16:9 or 4:3. The rest of the on-screen picture (left and right) will not be covered.

    Please look at this page, if you need more info regarding the Azure Kinect image transformations.
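
    For reference, here is a minimal sketch of that transformation using the plain Microsoft.Azure.Kinect.Sensor C# wrapper (not the K4A-asset API - the asset handles this internally); the 512x512 mode corresponds to DepthMode.WFOV_2x2Binned:

    Code (CSharp):
    // Sketch: re-project the depth image into the color camera's viewpoint,
    // which is what the background remover relies on.
    using Microsoft.Azure.Kinect.Sensor;

    public static class DepthToColorExample
    {
        public static void Run()
        {
            using Device device = Device.Open(0);
            device.StartCameras(new DeviceConfiguration
            {
                ColorFormat = ImageFormat.ColorBGRA32,
                ColorResolution = ColorResolution.R1080p,  // 16:9 color aspect
                DepthMode = DepthMode.WFOV_2x2Binned,      // 512x512 WFOV depth
                SynchronizedImagesOnly = true,
                CameraFPS = FPS.FPS30
            });

            Calibration calibration = device.GetCalibration(DepthMode.WFOV_2x2Binned, ColorResolution.R1080p);
            using Transformation transformation = calibration.CreateTransformation();

            using Capture capture = device.GetCapture();
            // The result has the color image resolution (1920x1080 here),
            // with a depth value per color pixel where depth data exists.
            using Image depthInColorSpace = transformation.DepthImageToColorCamera(capture);
        }
    }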
     
  19. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    The muscle settings should work, if you enable the 'Apply muscle limits'-setting of the AvatarController-component.
     
  20. pouria77

    pouria77

    Joined:
    Aug 13, 2014
    Posts:
    36
    Got it. Thanks for the great explanations.

    Best
     
  21. emmanuelbonnet

    emmanuelbonnet

    Joined:
    Sep 30, 2015
    Posts:
    1
    Hello,


    First of all, thanks for your project, it's a very good starter for me.


    My goal is to remove the background around a person, detect a movement and take a picture with a custom image behind them. Using the BackgroundRemoval and PoseDetection demo I was able to make it work but I still have some problems.


    1. Is there a way for me to improve the matting around the head? We tried different hair styles and kept getting cropped or missing parts; increasing the "Head offset" doesn't fix it.

    2. There is a "ghosting" or a "shadow" around my hand when I put it in front of my chest. I read it is necessary to use at least 2 Kinects to circumvent this, but do you have any other ideas?

    Specs:

    i7 10750H

    gtx 1660ti

    Kinect V2 and Kinect V4 (azure), tried both but not simultaneously


    Thanks for your time
     
  22. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Hi, to your questions:

    1. Regarding the Kinect-v2, I know from several customers that putting a light source (e.g. an extra lamp) in front of and above the user's head helps with the hair issues. I've heard that this does not apply to the Azure Kinect, but have never tested it.
    2. Try increasing the number of 'Dilate iterations' (and optionally enabling the 'Apply median filter') settings of the BackgroundRemovalManager-component in the scene (see the sketch below).
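
    A minimal sketch of doing the same from a script (the field names 'dilateIterations' and 'applyMedianFilter' are guesses derived from the inspector labels, and the namespace is assumed too - check the BackgroundRemovalManager source for the exact names):

    Code (CSharp):
    // Hedged sketch: tuning the background-removal mask at runtime.
    using UnityEngine;
    using com.rfilkov.components;  // assumed namespace of the demo components

    public class TuneBackgroundRemoval : MonoBehaviour
    {
        void Start()
        {
            BackgroundRemovalManager brManager = FindObjectOfType<BackgroundRemovalManager>();
            if (brManager != null)
            {
                brManager.dilateIterations = 3;      // hypothetical field: more dilation fills holes/ghosting
                brManager.applyMedianFilter = true;  // hypothetical field: smooths the body-index mask
            }
        }
    }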
     
  23. matmat_35

    matmat_35

    Joined:
    Apr 20, 2018
    Posts:
    9
    Hi all,
    Is there an example somewhere that explains how to retrieve the color and depth textures in Shader Graph, please?
    Thanks
    Mathieu
     
  24. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Hi Mathieu, what do you mean?
     
  25. matmat_35

    matmat_35

    Joined:
    Apr 20, 2018
    Posts:
    9
    Hello rfilkov,
    I would like to be able to do some video processing with Shader Graph, but I can't seem to retrieve the Kinect color texture and depth texture in Shader Graph. It would be very useful to have some examples of the technique for sending the Kinect color and depth textures through a Shader Graph.
    Best
     
  26. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Please look at the VfxPointCloudDemo-scene in 'AzureKinectExamples/KinectDemos/PointCloudDemo'-folder and at the info regarding this scene setup here. It uses the color and vertex textures (PointCloudColorMapK4A and PointCloudVertexMapK4A-textures in the Textures-subfolder) to generate the point cloud there. These textures are used by the PointCloudTarget-component (and hence the sensor interface component) in the scene.
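
    If you simply need to feed the Kinect textures into your own Shader Graph, a minimal sketch could look like this ('_ColorMap' and '_VertexMap' are hypothetical exposed texture properties you would add to your Shader Graph; the two render textures are the ones from the PointCloudDemo/Textures folder mentioned above):

    Code (CSharp):
    // Sketch: bind the point-cloud render textures to a Shader Graph material.
    using UnityEngine;

    public class BindKinectTexturesToShaderGraph : MonoBehaviour
    {
        public RenderTexture pointCloudColorMap;   // assign PointCloudColorMapK4A
        public RenderTexture pointCloudVertexMap;  // assign PointCloudVertexMapK4A
        public Material shaderGraphMaterial;       // material created from your Shader Graph

        void Start()
        {
            // The render textures are updated by the sensor interface every frame,
            // so binding them once is enough.
            shaderGraphMaterial.SetTexture("_ColorMap", pointCloudColorMap);
            shaderGraphMaterial.SetTexture("_VertexMap", pointCloudVertexMap);
        }
    }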
     
  27. Jelvand

    Jelvand

    Joined:
    Oct 4, 2019
    Posts:
    4
    Hello rfilkov,

    Thanks for a great package!
    One thing I'm trying to achieve is to make projects buildable with IL2CPP with your package. I know that it fails due to the missing MonoPInvokeCallback in the Microsoft.Azure.Kinect.Sensor.dll, and that the real fix should come from Microsoft along the lines of the issue 1033 in their github (https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1033).
    But since we need this ASAP, I downloaded the Microsoft repo, built the project, and subsequently built the managed dll in order to implement the patch mentioned in the issue myself.
    Everything went (seemingly?) well until I tried replacing your version of the .Sensor.dll with my custom built version.
    The problem is that suddenly a lot of data-types are missing, specifically the BodyTracking class, and all the k4abt_* enums and structures.

    So why am I writing you about this?
    Well, since the code for these body tracking data-types was nowhere to be seen in Microsoft's tree, and after building several versions myself (1.4.0.0, 1.4.1.0, release/v1.4.x tip, develop tip etc.) without finding these data-types, I started disassembling the different .Sensor.dll versions using ILSpy, and found these strange results:
    * In your version (from AzureKinectExamples v16) under ../Kinect4AzureSDK/Plugins, the data-types were of course present, as referenced by your code.
    * The official 1.4.1 release of the Microsoft SDK did not contain these data-types in the .Sensor.dll.
    * And as previously stated, any version I build myself fails to have the needed data-types in it.

    So before I start asking Microsoft about where these data-types have gone, and possibly making a fool of myself as I cannot see them at all in their source tree, I must ask you: are you already using a custom-built version of the Microsoft.Azure.Kinect.Sensor.dll? Are you the one adding these body-tracking data-types to the dll, or have you found a forked version somewhere else?
    If not, do you maybe have a clue what's going on and why these data-types are not present in the official Sensor SDK (I couldn't find them in the Body Tracking SDK either, tbh)?

    Further, if you are forking the dll yourself, the patch described in the issue above would be a welcome addition to your fork, to enable IL2CPP :).
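
    For reference, a minimal sketch of the pattern along the lines of the fix discussed in that issue (illustrative delegate/method names, not the actual SDK signatures): native callbacks passed to the Sensor SDK have to be static methods marked with [MonoPInvokeCallback], otherwise IL2CPP builds fail.

    Code (CSharp):
    using AOT;

    public static class Il2CppCallbackExample
    {
        // Delegate type matching the native callback signature (illustrative).
        public delegate void NativeLogCallback(System.IntPtr context, string message);

        // IL2CPP can only marshal static methods carrying this attribute
        // as native function pointers.
        [MonoPInvokeCallback(typeof(NativeLogCallback))]
        public static void OnNativeLog(System.IntPtr context, string message)
        {
            UnityEngine.Debug.Log(message);
        }
    }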
     
    Last edited: Nov 16, 2021
  28. matmat_35

    matmat_35

    Joined:
    Apr 20, 2018
    Posts:
    9
    Hello rfilkov,
    Thank you very much!
    It works perfectly.
     
  29. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Hi Jelvand,

    Sorry for the delayed response. I don't get notifications from the Unity forums and obviously have overlooked your post :(

    You're right. The 'Microsoft.Azure.Kinect.Sensor.dll' library in the K4A-asset is my custom build. I've added the classes, methods and structures that were needed by the K4A-asset back then, long before Microsoft introduced C# wrappers to some of them.

    Thank you for the IL2CPP suggestion. Please feel free to e-mail me, if you still need the fix.
     
  30. ModLunar

    ModLunar

    Joined:
    Oct 16, 2016
    Posts:
    374
    Because many of my projects strictly use .asmdefs (Assembly Definition assets) in Unity,
    I can't use this plugin in certain projects without adding an asmdef (Assembly Definition) asset that covers all of the AzureKinectExamples scripts (demos included; 1 assembly for now, to keep it simple).

    I put one at the top-level under Assets/AzureKinectExamples and called it AzureKinectExamples.asmdef.
    I didn't modify the asmdef at all, and it compiled in Unity.
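
    For reference, a minimal version of such an asmdef (a sketch, assuming no extra references are needed; the overrideReferences/precompiledReferences fields are where a specific precompiled dll like System.Numerics.Vectors.dll could be pinned, though I haven't verified that as a fix for the Visual Studio issue below):

    Code (JSON):
    {
        "name": "AzureKinectExamples",
        "references": [],
        "overrideReferences": false,
        "precompiledReferences": [],
        "autoReferenced": true
    }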

    upload_2021-12-8_0-26-58.png


    However, I'm wondering why I'm getting compilation errors only in Visual Studio 2019.

    It seems to be referencing 2 different versions of a System.Numerics.Vectors assembly (v4.0.0.0 vs. v4.1.3.0), and it gets confused.

    Is there any way to fix this so I can use the AzureKinectExamples scripts in a project that uses Assembly Definitions?

    Note:
    I use them because, without defining assemblies, you can't support unit tests with the Unity Test Framework, the project is generally less neat at separating large bodies of scripts from other unrelated scripts, and the project requires full recompilation of all your scripts (instead of recompiling just 1 assembly) any time you make small changes.

    Even though I can enter playmode because Unity compiles, I can't enter debug mode anymore in my project because Visual Studio 2019 doesn't compile!

    upload_2021-12-8_0-24-7.png
     
  31. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    As far as I can see, the Kinect4AzureInterface references the System.Numerics.Vectors assembly v4.1.3.0.
    Please e-mail me, and tell me how exactly to reproduce your issue, and I may be able to give you some hints in this regard.
     
    ModLunar likes this.
  32. o2co2

    o2co2

    Joined:
    Aug 9, 2017
    Posts:
    45
    upload_2021-12-20_18-37-32.png
    2D skeleton - I don't want it to rotate in 3D; I'd like it to always face the camera. Thank you for your help.
     
  33. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    Please try to enable the 'Ignore Z-coordinates'-setting of the KinectManager-component in the scene.
     
  34. o2co2

    o2co2

    Joined:
    Aug 9, 2017
    Posts:
    45
    Thank you
     
  35. Sound-Master

    Sound-Master

    Joined:
    Aug 1, 2017
    Posts:
    48
    Hello @rfilkov
    I am exploring using Ray Tracing in my Unity HDRP project, which requires Direct3D12. Would the Azure Kinect asset work if I selected Direct3D12 as the graphics API? Or can I include both Direct3D11 and Direct3D12?

    Many thanks

    Michele
     
  36. lazyrobotboy

    lazyrobotboy

    Joined:
    Jul 1, 2020
    Posts:
    16
    Awesome package, @rfilkov !
    I saw an unused "CreateRecording" value within the DeviceStreamingMode. Are there any plans to enable recording .mkv files from within Unity, or does it somehow work already?
    Thank you in advance!
     
  37. Kurino_kairtou

    Kurino_kairtou

    Joined:
    Aug 26, 2021
    Posts:
    1
    Hello,
    Thanks for creating this great tool for Azure Kinect!
    Last month I purchased the latest SDK and used it to build my project. The project ran perfectly on Nvidia GPUs without problems, but after switching the project to an AMD GPU laptop, I got an error stopping the initialization process:
    Can't create body tracker for Kinect4AzureInterface0!

    It seems like this line:
    Code (CSharp):
    bodyTracker = new BodyTracking(calibration, btConfig);
    fails to create the bodyTracker and throws an error. I tried switching the bodyTrackingProcessingMode to DirectML, but it didn't work.
    My AMD GPU is an integrated Radeon Vega 8 Graphics; AMD claims it supports DirectML. How can I fix this bug?
     
  38. Sound-Master

    Sound-Master

    Joined:
    Aug 1, 2017
    Posts:
    48
    The solution for me on an AMD laptop was to switch to the GPU mode (not GPU with CUDA) and then it worked.

    M
     
    rfilkov likes this.
  39. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    No, there is no 'create recording' in the K4A-asset. There is only 'play recording'. For performance reasons I decided not to include the saving of sensor streams in the code.
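
    If you need .mkv recordings anyway, a minimal sketch is to launch the Sensor SDK's k4arecorder tool from Unity and play the result back with the asset (the executable path below is the SDK's default install location - an assumption, adjust it to your setup):

    Code (CSharp):
    // Sketch: record a 10-second .mkv with the external k4arecorder tool.
    // Note: stop/close the sensor in Unity first - the device can't be opened
    // by two processes at the same time.
    using System.Diagnostics;
    using UnityEngine;

    public class RecordWithK4ARecorder : MonoBehaviour
    {
        public void RecordTenSeconds(string outputPath)
        {
            var psi = new ProcessStartInfo
            {
                FileName = @"C:\Program Files\Azure Kinect SDK v1.4.1\tools\k4arecorder.exe",
                Arguments = $"-l 10 \"{outputPath}\"",  // -l: recording length in seconds
                UseShellExecute = false
            };
            Process.Start(psi);
        }
    }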
     
  40. dawnr23

    dawnr23

    Joined:
    Dec 5, 2016
    Posts:
    41
    Hello

    https://dnsoftcokr.sharepoint.com/:...lNhzJdCWI7ViwBvXDPWV6cK0oPUS-6uKNaxg?e=Eyd4pG

    In the BackgroundRemovalDemo2 scene, black hair and a black mask on the face are not recognized.

    Is there a way to significantly increase the depth camera value or the color-key value area?

    Is there any way to mitigate this a bit through the scripting of this asset?

    Even if other areas end up looking messier, the black hair must be visible.
     
  41. rfilkov

    rfilkov

    Joined:
    Sep 6, 2018
    Posts:
    87
    This is a problem of the Azure Kinect. Black hair, masks and clothing are not detected correctly. A workaround used for the Kinect-v2 was to put more light (an extra light source) in front of and above the person, to increase the reflectivity of the black material. Please try this in your setup as well.
     
  42. jnoering

    jnoering

    Joined:
    Apr 24, 2018
    Posts:
    3
    Hello, I am using the Azure Kinect together with a projector to make an interactive projection.

    My Setup:

    - The Azure Kinect and the projector are facing in the same direction (towards the wall)
    - I am standing in front of the wall (i.e. the projected image)

    My Problem:
    When I start the demos with this setup, my silhouette's/avatar's position is mirrored relative to mine. When I stand on the left, it stands on the right, and everything I do is mirrored by the silhouette/avatar.
    How can I match the position of the silhouette/avatar to my own position?

    Thank you!,
    JMN
     
  43. mchangxe

    mchangxe

    Joined:
    Jun 16, 2019
    Posts:
    69
    Hey team, thanks for creating this wonderful tool. I have a shader-related question about an effect I want to build on top of the background removal feature. Here is the reference picture:
    Screenshot 2022-04-04 144802.png

    As you can see, it's a cat image with the background partially removed, with square/jagged edges instead of a smooth edge precisely around the cat's outline. Obviously, in our situation it would be a human figure instead of the cat. Is there any way to achieve this effect by toying with the background removal component or the relevant display shaders? How would you go about this?

    Thanks in advance.
     
  44. Sound-Master

    Sound-Master

    Joined:
    Aug 1, 2017
    Posts:
    48
    Hello,

    I have three questions:

    1) I have an Azure Kinect camera mounted above a large screen, about 2 meters above the floor. I am working with the avatar demo 1 scene. As the camera is tilted down, the scene is also tilted from the point of view of the camera. To compensate, I have tried the offset transform in the avatar controller component, which kind of works, but the avatar doesn't move closer to or farther away from the camera anymore. Is there a better way of compensating for the camera tilt?

    2) When updating to Body Tracking SDK v1.1.1 and the latest version of the Azure Kinect Unity plugin, I experience very bad lag. The avatar stutters badly and progressively more. I am using the Azure SDK v1.4.1 and I have updated the camera firmware. The issue gets a lot better if I revert to SDK version 1.1.0 and an older version of the plugin.

    3) I am also experiencing an issue where, as the user is calibrated and removed a few times (entering and leaving the camera view), the avatar doesn't always get calibrated in the same position relative to the camera, and is sometimes even out of shot. It seems to matter whether I enter from the left or the right side of the camera view. I have an offset transform applied to the avatar controller as well as the main camera, and I have disabled the camera-follow component on the main camera. This also happens with Body Tracking SDK v1.1.0 and an older version of the Azure Kinect Unity plugin.

    Am I doing anything wrong? Could it be an issue with the hardware I am using? I have set the body tracking processing mode to GPU.

    I am using Unity 2020.3.24 on a MacBook Pro running Windows 10 Pro, with an Intel i9 CPU and an AMD Radeon Pro 5500M. I am using HDRP 10.6.0.

    Any advice would be greatly appreciated!

    Many thanks

    Michele
     
    vhm_kattegat likes this.
  45. Arthur-Delacroix

    Arthur-Delacroix

    Joined:
    May 7, 2014
    Posts:
    1
    Hi @roumenf!
    I am trying to replay the point cloud data in a different way. I used the Unity Recorder to record the PointCloudColorMapK4A and PointCloudVertexMapK4A textures to PNG image sequences / H.264 MP4 / VP9 WebM / QuickTime Apple ProRes 4444 XQ (ap4x), but when I use this data as a render texture in the ColorPointCloud particle effect, it doesn't work - the particles are a mess. Is there any way to record the point cloud data and replay it?
     
  46. MostlyAR

    MostlyAR

    Joined:
    Sep 9, 2021
    Posts:
    1
    Make sure your bits per color match the ones used in the VFX example. It seems to be really sensitive in this regard.

    @rfilkov, thanks for this great asset, it helped a bunch! I'm trying to do colored point cloud matching (after OpenCV ArUco calibration). However, I couldn't find any option to match the brightness (exposure settings). Do you have any suggestions?
     
  47. jnoering

    jnoering

    Joined:
    Apr 24, 2018
    Posts:
    3
    Hi,

    I am currently working on a floor-projection game. However, I am struggling to match my feet with the depth image and the DepthCollider.

    I tried to fit the displayed depth image not to the height of the image but to the width, so that the projected image covers the area my Azure Kinect covers. Hence, the depth image should be the size of my body. It actually fits more or less: the feet are a bit bigger than my actual feet, and there is still an offset between my feet and the depth image. If you know how I can automatically calibrate this offset, I would be more than thankful for an answer! :)


    For the generated DepthCollider I tried the same. I went into the DepthSpriteViewer script from the DepthColliderDemo2D scene. There, in the SetupJointColliders() method, I changed the following line of code from this:
    foregroundImgRect = kinectManager.GetForegroundRectDepth(sensorIndex, foregroundCamera);

    to this:

    foregroundImgRect = kinectManager.GetForegroundRectDepth(sensorIndex, foregroundCamera, foregroundCamera.aspect, foregroundCamera.aspect);

    where the last two parameters scale the image.

    However, here I had to tweak the collider size quite a lot, and it won't fit perfectly.

    Is there a better way to do all of this? I have the feeling that I did a huge workaround for something that is probably already included as functionality in the asset.

    Thank you in advance!


    JMN
     
  48. jnoering

    jnoering

    Joined:
    Apr 24, 2018
    Posts:
    3
    Hi,

    I have another problem, which I wanted to put in a separate post for a better overview.

    In my floor-projection game, the user gets removed every couple of seconds. Sometimes they are not even recognized. I think it depends to a certain degree on where the player is and how they move. However, even when I perform slow movements in the center of the camera's viewport, the user detection "flickers", meaning it adds and removes the user constantly.
    How can I solve this?

    I can provide more information if you need them.

    Thanks in advance! :)

    JMN
     
  49. trifox

    trifox

    Joined:
    Jul 21, 2012
    Posts:
    5
    Hello, I am struggling to map the color image to the depth image,
    following this documentation
    https://docs.microsoft.com/en-us/azure/kinect-dk/use-image-transformation

    I found one place where the method k4a_transformation_color_image_to_depth_camera is called in the scripts,
    which is in the Kinect4AzureInterface.cs file:
    coordMapperTransform.ColorImageToDepthCamera(capture.Depth, capture.Color, d2cColorData);

    Here we see that the result is written to the d2cColorData variable; how would one display the mapped image?

    ---

    A second approach was to utilize the provided BackgroundColorCamDepthImage component, following the basic setup using
    sensorData.sensorInterface.EnableDepthCameraColorFrame(sensorData, true);
    and then obtaining the texture using
    sensorData.depthCamColorImageTexture
    This leads to an image, but it is only a single color and not the expected colorization from the color camera mapped onto the depth image.

    What am I misunderstanding/misusing here? Generally, I believe the scripts provided have a simple approach to obtaining such a mapped image, but I would need a small hint to implement it - perhaps include it in the examples?
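
    A hedged sketch built only from the calls quoted above (the KinectManager accessor used to get the sensorData is an assumption - check the KinectManager source for the exact method): enable the color-in-depth-camera frame and show the resulting texture on a UI RawImage.

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.UI;
    using com.rfilkov.kinect;  // assumed K4A-asset namespace

    public class ShowColorInDepthSpace : MonoBehaviour
    {
        public RawImage targetImage;  // UI element that displays the mapped texture

        void Start()
        {
            // Enable the color frame transformed into the depth camera space.
            var sensorData = KinectManager.Instance.GetSensorData(0);  // assumed accessor for sensor 0
            sensorData.sensorInterface.EnableDepthCameraColorFrame(sensorData, true);
        }

        void Update()
        {
            var sensorData = KinectManager.Instance.GetSensorData(0);
            if (sensorData != null && sensorData.depthCamColorImageTexture != null)
            {
                targetImage.texture = sensorData.depthCamColorImageTexture;
            }
        }
    }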
     
  50. trifox

    trifox

    Joined:
    Jul 21, 2012
    Posts:
    5
    I see - if I understand correctly, the depth image conversion provides the depth for the color image. Okay, I think I understand that it is the other way round, but I think I can work with that, because I just want the depth and color to be aligned in the rendering. Thanks!