
Azure Kinect Examples for Unity

Discussion in 'Assets and Asset Store' started by roumenf, Jul 24, 2019.

  1. TSStefan

    TSStefan

    Joined:
    Feb 18, 2019
    Posts:
    2
    Hi,
    Unfortunately, rotating the camera only seems to rotate the world; the body itself stays sideways. I found the "K4ABT_SENSOR_ORIENTATION_CLOCKWISE90" setting you mentioned earlier, but setting this instead of the default doesn't seem to do anything.

    I've hit another issue I hope you can help me with. I'm trying to get the DepthColliderDemo2D to work in portrait mode (the Azure Kinect physically stays in its default orientation). But when I change the Display from 16:9 to 9:16, the scale that gets calculated in DepthSpriteViewer
    Code (CSharp):
    float worldScreenHeight = foregroundCamera.orthographicSize * 2f;
    float spriteHeight = depthImage.sprite.bounds.size.y;

    float scale = worldScreenHeight / spriteHeight;
    seems to be off, and the colliders and the yellow depth-sprite no longer align. I'm not quite sure why.
    Additionally, is there a way to properly sync the color image with the depth image? I found
    Code (CSharp):
    sensorData.sensorInterface.GetDepthCameraColorFrameTexture()
    and I thought I could get the depth-texture from there instead of
    Code (CSharp):
    Texture texDepth = kinectManager.GetUsersImageTex();
    but I couldn't get it to work quite yet. Is this the way to go?

    I added some screenshots, so maybe they illustrate my problem. Sorry to bring up so many issues. Have a nice evening/weekend!
     

    Attached Files:

    Last edited: Feb 21, 2020
  2. Emmetropia

    Emmetropia

    Joined:
    Feb 21, 2020
    Posts:
    1
    Hi,

    I bought this today to use with Kinect v2, but when trying to run any demos, I just get "No suitable depth sensor found. Please check the connected devices and installed SDKs"

    Kinect SDK is definitely installed and the Microsoft configuration tool connects and works properly.

    Unity version is 2019.3.2f1 running on Windows 10.

    -- Edit - Ok, I didn't realise you need to specify the sensor in the setup script. All working now!
     
    Last edited: Feb 21, 2020
    roumenf likes this.
  3. TheoTowers

    TheoTowers

    Joined:
    May 8, 2013
    Posts:
    19
    Hello!

    I'm having trouble integrating the code from the Azure Kinect Examples for Unity with other Azure Kinect code. The issue seems to be that they are using the Body Tracking DLL and its new API.

    On the Asset Store page for the Examples, it says: "(v1.9.1) Updated to Azure Kinect Body Tracking SDK v1.0.0", however, I am not seeing any evidence of this in the codebase, nor am I seeing the new body tracking DLL in your Plugins folder (called "Microsoft.Azure.Kinect.BodyTracking.dll").

    Is there by chance some sort of mistake here? Do you have plans to support this new DLL and API?
     
  4. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    The way I am trying the cameras is with two cameras in completely separate rooms. Can I calibrate the cameras in pairs with only one person in the view of each pair? I'm guessing not, but I figured I'd check.
     
  5. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    To your issues:
    1. Yes, the bodies and frames are always in the sensor's coordinate system. If you turn the sensor 90 degrees, the "world" will be turned 90 degrees too. The BT hint only helps the Body Tracking SDK in this case; it will not turn the coordinate system or the output texture. I think you need to turn the output texture 90 degrees (or turn the monitor/screen 90 degrees). This is a Unity issue, not related to the Kinect functionality, but here is something that may help you: https://forum.unity.com/threads/rotate-texture.19018/

    2. Regarding DepthColliderDemo2D: Yes, there is an issue with the body collider in portrait mode. Thank you for reporting it. It should be fixed in the next release.

    3. To sync the depth image with the color image, just enable 'Sync depth and color'-setting of the KinectManager-component in the scene.

    4. To get GetDepthCameraColorFrameTexture() working, you need to invoke the EnableDepthCameraColorFrame()-method of the sensor interface first.
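    For instance, something along these lines should work - a rough sketch, not tested here, and the exact parameters of EnableDepthCameraColorFrame() may differ in your package version:
    Code (CSharp):
    using UnityEngine;
    using com.rfilkov.kinect;

    public class DepthCameraColorExample : MonoBehaviour
    {
        private bool frameEnabled = false;
        private Texture texDepthColor;

        void Update()
        {
            KinectManager kinectManager = KinectManager.Instance;
            if (kinectManager && kinectManager.IsInitialized())
            {
                KinectInterop.SensorData sensorData = kinectManager.GetSensorData(0);

                // enable the depth-camera color frames once, before requesting the texture
                if (!frameEnabled)
                {
                    sensorData.sensorInterface.EnableDepthCameraColorFrame(sensorData, true);  // parameters assumed
                    frameEnabled = true;
                }

                // the returned color texture is aligned to (and sized like) the depth image
                texDepthColor = sensorData.sensorInterface.GetDepthCameraColorFrameTexture();
            }
        }
    }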

    Please also note, I don't work at weekends.
     
    freekpixels likes this.
  6. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    What exactly do you mean by "integrating the code from the Azure Kinect Examples for Unity with other Azure Kinect code"? Why would you need to integrate two Azure Kinect packages together?

    Regarding the body tracking: Yes, the K4A-asset works with Body Tracking SDK 1.0.0. The SDK is already supported. You can compare the BT DLLs in the root folder of your Unity project with the DLLs in the Body Tracking SDK folder, if you have any doubts.

    'Microsoft.Azure.Kinect.BodyTracking.dll' and its respective API are not part of the K4A-asset though, because they introduce additional library dependencies. My target is the opposite - to eliminate the unneeded dependencies.
     
    freekpixels likes this.
  7. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    No, you can't auto-calibrate the cameras without any common area of view. You can only do this manually.
     
    freekpixels likes this.
  8. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Thank you. Would you know where I can find any literature on calibrating the cameras manually? Or, can I just measure their locations by hand and insert these coordinates in the X,Y,Z position and rotations for each camera?
     
  9. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Yes, you need to measure the XYZ-distances between the cameras in the 1st camera's coordinate system (in meters), as well as the orientation of the 2nd camera relative to the 1st one. Then set the measured values as the 2nd camera's sensor interface position and rotation manually.
     
    freekpixels likes this.
  10. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Great. Thanks!
     
  11. vwloch

    vwloch

    Joined:
    Dec 13, 2018
    Posts:
    2
    Hi-

    This is a fantastic asset! I am trying to use the Avatar demos (specifically Avatar Demo 4) in a vertical setup. I seem to have everything working except for one issue: once everything is set up vertically, I lose the ability to track my motion to the left or right of the screen (though moving towards and away from the Kinect works great). No matter where I am in relation to the camera, the avatar stays in the center of the vertical screen. Interestingly, once I turn off the grounded-feet setting, it allows me to move a bit right and left; however, it also moves the avatar to the far right of the vertical screen, so that it is not fully in frame. Any ideas on how to fix it so that in vertical mode it will track right and left movement instead of keeping the avatar in the center of the frame?

    Thank you!
     
  12. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Hi Roumenf. I have two Kinect4Azure cameras configured and have gone through the calibration process several times. Below are the configurations for each camera. Both cameras work well in the multi-camera calibration setup; however, when I go to the Recorder demo and click multi-camera setup, only ONE camera works. Is there anything else I need to do other than clicking multi-camera setup to get both cameras to work after the calibration is completed and saved to the .json file? I had this working before with four cameras and now I can only get one camera to work. I would appreciate any suggestions.
    (screenshots attached: upload_2020-2-27_16-43-21.png, upload_2020-2-27_16-46-17.png)
     
  13. Hoib17

    Hoib17

    Joined:
    Feb 28, 2020
    Posts:
    1
    I have three questions:

    1. Do I need to use the multi-camera calibration demo to use two synchronized cameras with the Recorder demo, or can I create Kinect4Azure0 and Kinect4Azure1, configure them as master and subordinate with the correct indexes, and enter their positions/rotations manually? I would physically wire the cameras in a synced configuration, as Microsoft instructs.

    2. When I use one camera, I cannot figure out why the avatar moves to the right when I move to the left and vice versa; the front-back motion seems OK. I have the mirrored motion turned off. I read an old post where you commented: “If the problem is that the camera output is also rotated, please disable the FollowSensorTransform-component of the MainCamera in the scene. It makes the camera "see" the world the same way the sensor sees it. Instead, in this case you can set the camera's position and rotation manually.” I cannot find the “FollowSensorTransform-component of the MainCamera.”

    3. I’ve also created two game windows for eventually using two cameras. Obviously, this would be so one camera mostly shows in one game window and the other camera in the other game window. I’d like the cameras in the game windows to mimic what the Kinect cameras are seeing. This appears to be related to the “FollowSensorTransform-component,” but I am not sure.
     
  14. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi, this sounds a bit odd. May I ask you to e-mail me and tell me more details about your vertical setup, the settings you have changed in the scene and, if possible, attach a short video depicting the issue.
     
  15. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi, as far as I see, you have entered the positions and rotations of both cameras manually in MultiCameraSetup. Am I right? This would mean they never got saved to the json-config file, hence they cannot be loaded in the other scenes. Please look at the root folder of the project, to check whether the config file is there or not.
     
  16. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    To your questions:

    1. The multi-camera calibration scene only simplifies the process. It saves the settings, positions and rotations of the cameras in a config file that can be later loaded and recreated by the KinectManager in the other scenes. Of course you can set them up manually, as well. It's up to your preference.

    2. I'm not sure how exactly your cameras and monitor are setup at the moment. Please e-mail me a picture of your physical setup, your scene components and settings and, if possible, a short video depicting the issue.

    3. I would use custom layers here, if I were you - one set of UI or 3D components in one layer, rendered by the 1st camera only, and the other set of components in another layer, rendered by the 2nd camera only.
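    Here is a generic Unity sketch of what I mean (the layer names are placeholders you'd need to create under Tags & Layers; this is plain Unity, not part of the K4A-asset):
    Code (CSharp):
    using UnityEngine;

    public class PerCameraLayers : MonoBehaviour
    {
        public Camera camera1;
        public Camera camera2;

        void Start()
        {
            // each camera renders the shared geometry plus its own layer only
            camera1.cullingMask = LayerMask.GetMask("Default", "Screen1Layer");
            camera2.cullingMask = LayerMask.GetMask("Default", "Screen2Layer");
        }
    }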
     
    GZMRD17 likes this.
  17. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Thank you, Roumen. I may have followed the instructions incorrectly. I understood that when I calibrate the cameras using the automatic calibration, I still need to enter the rotation and Y values as part of the process; those are the values you see in the screen grabs. I do have a .json file in the main folder of my project. I will try recalibrating from scratch today and check that the .json file has the same information. Will this work?
     
  18. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    For automatic calibration, you need to enter the position and rotation of the 1st camera only. The idea is that at the end of the process the settings, positions and rotations of all cameras get saved in the config file. This way the KinectManager can recreate them later in the other scenes, when you enable the 'Use multi-cam config'-setting.
     
    GZMRD17 likes this.
  19. TheoTowers

    TheoTowers

    Joined:
    May 8, 2013
    Posts:
    19
    Thanks for explaining this! I was able to implement something that meets my needs :)
     
    roumenf likes this.
  20. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    I am trying to understand the time columns created in the “BodyRecording.txt” file generated with the Recorder demo. It appears that the first column is the actual time from the camera (which has a varying delta-t), then "k4b", then "liRelTime" (which also has a varying delta-t). What are these time columns and what is the difference between them (the time column before “k4b” and the time column after)? Also, what are the units of time for both columns? I would appreciate any assistance.
     
  21. underkitten

    underkitten

    Joined:
    Nov 1, 2018
    Posts:
    30
    Hello roumenf,

    I am trying to troubleshoot the errors that I get:

    Can't create body tracker for Kinect4AzureInterface0!
    AzureKinectException: result = K4A_RESULT_FAILED

    I tried to run the app on multiple machines. Three work and three don't (the three that don't work are identical machines).
    I installed the drivers in the default folders. With Body Tracking 1.0.0 I get these errors, with 0.9.5 the same errors, and anything below crashes the app.
    The only difference I can see is that the working machines run an Nvidia card, and the ones that don't work are AMD.

    EDIT: Tried on one more (clean) machine with both SDKs installed. Same issue, NVIDIA card.
     
    Last edited: Mar 6, 2020
  22. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Please look at this tip: https://rfilkov.com/2019/08/26/azure-kinect-tips-tricks/#t15
     
    GZMRD17 likes this.
  23. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    The latest version (v1.9.2) of the K4A-asset works with Body Tracking SDK v1.0.0 (and v0.9.5). The previous SDK versions have slight changes in the SDK API that can cause the crashes you have experienced.

    Before trying the demo scenes of the K4A-asset, please check whether the Azure Kinect Body Tracking Viewer works as expected. There are special hardware requirements for the current Body Tracking SDK. Please look here and here.
     
    underkitten likes this.
  24. underkitten

    underkitten

    Joined:
    Nov 1, 2018
    Posts:
    30

    Yes, I figured that Azure Kinect Body Tracking is only supported by NVIDIA cards.
     
  25. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    When I have multiple cameras and body trackers, the Recorder demo generates one "bodytracking.txt" file. How are the body tracking data from multiple cameras combined into one file? Or are these data only coming from device 0, i.e. the first camera?
     
  26. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    When you are using multiple cameras, the bodies detected by all cameras are combined into a single set of bodies. The KinectUserBodyMerger-class takes care of that. All bodies from this "combined" set are saved into the "bodytracking.txt"-file by the BodyDataRecorderPlayer-component.
     
  27. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    That is really nice! Would you have any links to where the algorithms/math used for this process are discussed? I would like to familiarize myself with the method for my own edification.
     
  28. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Sorry, I don't have any links, but the source code is in there.
     
  29. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    No worries. Thanks.
     
  30. MartinAdcada

    MartinAdcada

    Joined:
    Jan 24, 2020
    Posts:
    6
    Hey folks, I'd be very curious what I'm missing here...

    I tested both my own scene AND the Fitting Room Demo, in the Editor and in the built Player.
    I'm currently using Unity 2019.3.0f6 (gonna upgrade it to the latest at the end of the week).

    In the Unity Editor everything works as intended, but in both cases the built Player gives the following error:


    Fallback handler could not load library C:/Users/Martin Gebske/Documents/UnityProjects/Builds/KinectOpeningTest_Data/Mono/k4a
    Fallback handler could not load library C:/Users/Martin Gebske/Documents/UnityProjects/Builds/KinectOpeningTest_Data/Mono/k4a.dll
    Fallback handler could not load library C:/Users/Martin Gebske/Documents/UnityProjects/Builds/KinectOpeningTest_Data/Mono/k4a
    Fallback handler could not load library C:/Users/Martin Gebske/Documents/UnityProjects/Builds/KinectOpeningTest_Data/Mono/libk4a
    Fallback handler could not load library C:/Users/Martin Gebske/Documents/UnityProjects/Builds/KinectOpeningTest_Data/Mono/libk4a.dll
    Fallback handler could not load library C:/Users/Martin Gebske/Documents/UnityProjects/Builds/KinectOpeningTest_Data/Mono/libk4a
    Fallback handler could not load library C:/Users/Martin Gebske/Documents/UnityProjects/Builds/KinectOpeningTest_Data/Mono/k4a
    Fallback handler could not load library C:/Users/Martin Gebske/Documents/UnityProjects/Builds/KinectOpeningTest_Data/Mono/k4a.dll
    Fallback handler could not load library C:/Users/Martin Gebske/Documents/UnityProjects/Builds/KinectOpeningTest_Data/Mono/k4a
    Fallback handler could not load library C:/Users/Martin Gebske/Documents/UnityProjects/Builds/KinectOpeningTest_Data/Mono/libk4a
    Fallback handler could not load library C:/Users/Martin Gebske/Documents/UnityProjects/Builds/KinectOpeningTest_Data/Mono/libk4a.dll
    Fallback handler could not load library C:/Users/Martin Gebske/Documents/UnityProjects/Builds/KinectOpeningTest_Data/Mono/libk4a

    Fun stuff here:

    DllNotFoundException: k4a
    at (wrapper managed-to-native) Microsoft.Azure.Kinect.Sensor.NativeMethods.k4a_device_get_installed_count()
    at Microsoft.Azure.Kinect.Sensor.Device.GetInstalledCount () [0x00000] in <8989a27780664da9be56aa50f11a482d>:0
    at com.rfilkov.kinect.Kinect4AzureInterface.GetAvailableSensors () [0x00006] in <c05aa64343584858973c6e34e57d4ca0>:0
    at com.rfilkov.kinect.Kinect4AzureInterface.OpenSensor (com.rfilkov.kinect.KinectInterop+FrameSource dwFlags, System.Boolean bSyncDepthAndColor, System.Boolean bSyncBodyAndDepth) [0x000f8] in <c05aa64343584858973c6e34e57d4ca0>:0
    at com.rfilkov.kinect.KinectManager.StartDepthSensors () [0x00277] in <c05aa64343584858973c6e34e57d4ca0>:0
    UnityEngine.DebugLogHandler:Internal_LogException(Exception, Object)
    UnityEngine.DebugLogHandler:LogException(Exception, Object)
    UnityEngine.Logger:LogException(Exception, Object)
    UnityEngine.Debug:LogException(Exception)
    com.rfilkov.kinect.KinectManager:StartDepthSensors()
    com.rfilkov.kinect.KinectManager:Awake()
    (Filename: <8989a27780664da9be56aa50f11a482d> Line: 0)
    Failed opening Kinect4AzureInterface, device-index: 0

    I checked the Body Tracker (works as intended) and the installation destination of the SDK (everything is in the right place).

    Once again: for testing purposes I created a blank Unity project, just imported the Examples and built it.

    Hope this is an easy fix.

    Cheers
    Martin
     
  31. MartinAdcada

    MartinAdcada

    Joined:
    Jan 24, 2020
    Posts:
    6
    Stupid me. RTFM and select x64 Architecture in Build Settings -.-
     
    roumenf likes this.
  32. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    862
    Hi @roumenf

    I am trying to load the RawInfraredMap data into a Texture2D object. Why? Because I need to feed it to an API that only accepts Texture2D. The data don't need to touch the GPU (I could leave out the _texture.Apply() call). Perhaps you have a hunch why my image looks wrong? TextureFormat.R16 seems to be the only option. GraphicsFormat.R16_UInt is not supported on my machine (Windows+Nvidia). Is this possible at all then?

    EDIT: for later reference: the code below is working. My issue was later in the process (image was destroyed by wrong handling in OpenCV).

    Code (CSharp):
    void Update()
    {
      KinectManager kinectManager = KinectManager.Instance;
      if( kinectManager && kinectManager.IsInitialized() )
      {
        KinectInterop.SensorData sensorData = kinectManager.GetSensorData( 0 );
        if( sensorData.lastInfraredFrameTime != lastFrameTime )
        {
          // Create texture.
          if( !_texture ) {
            int width = sensorData.depthImageWidth;
            int height = sensorData.depthImageHeight;
            int pixelCount = width * height;
            _texture = new Texture2D( width, height, TextureFormat.R16, false, true );
            //_texture = new Texture2D( width, height, GraphicsFormat.R16_UInt, TextureCreationFlags.None ); // R16_UInt not supported?
            _texture.name = "KinectIR";
            _rawImageDataBytes = new byte[ pixelCount * 2 ];
          }

          // Get raw image data.
          ushort[] rawImageData = kinectManager.GetRawInfraredMap( 0 );

          // ushort[] to byte[].
          // https://stackoverflow.com/questions/37213819/convert-ushort-into-byte-and-back
          Buffer.BlockCopy( rawImageData, 0, _rawImageDataBytes, 0, rawImageData.Length * 2 );

          // Load into texture.
          _texture.LoadRawTextureData( _rawImageDataBytes );
          _texture.Apply();

          // Output.
          _textureEvent.Invoke( _texture );

          // Store time.
          lastFrameTime = sensorData.lastInfraredFrameTime;
        }
      }
    }
    (attached image: KinectIR2Texture2D.jpg)
     
    Last edited: Apr 2, 2020
  33. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    862
    Unrelated to the above, but are you aware of the DLL issues in the 2020.1.0b?

    Assembly 'Assets/AzureKinectExamples/SDK/Kinect4AzureSDK/Plugins/Microsoft.Azure.Kinect.Sensor.dll' will not be loaded due to errors:
    Microsoft.Azure.Kinect.Sensor references strong named System.Buffers, versions has to match. Assembly references: 4.0.2.0 Found in project: 4.0.3.0.​

    Assembly 'Assets/AzureKinectExamples/SDK/Kinect4AzureSDK/Plugins/System.Memory.dll' will not be loaded due to errors:
    System.Memory references strong named System.Buffers, versions has to match. Assembly references: 4.0.2.0 Found in project: 4.0.3.0.​
     
  34. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I just tried your code (with some slight modifications, because the IR values are very small) and I got a nice looking IR texture. Not sure why you get this output, but the reason is not in the code above. Feel free to e-mail me, if you want to get my modified script. I'll add it to the next package release in KinectScript/Samples, as well.

    On another note, you can get a nice looking IR texture (black-white instead of red) directly by setting 'Get infrared frames' to 'Infrared texture' and then calling 'KinectManager.Instance.GetInfraredImageTex()' in your script.
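    For example, a minimal sketch (assigning the texture to a UI RawImage is just an illustration; depending on the asset version, the method may also expect a sensor index, e.g. GetInfraredImageTex(0)):
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.UI;
    using com.rfilkov.kinect;

    public class InfraredImageExample : MonoBehaviour
    {
        public RawImage targetImage;  // example output target

        void Update()
        {
            KinectManager kinectManager = KinectManager.Instance;
            if (kinectManager && kinectManager.IsInitialized() && targetImage)
            {
                // requires 'Get infrared frames' to be set to 'Infrared texture' on the KinectManager
                targetImage.texture = kinectManager.GetInfraredImageTex();
            }
        }
    }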
     
    cecarlsen likes this.
  35. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Thank you for the info! I'll try to get rid of these dependencies in the next release. Again, feel free to e-mail me, if you want to try it out, as soon as I'm ready with it.
     
    cecarlsen likes this.
  36. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Hello roumenf. So far I've been using Unity on Windows 10, but I need to migrate, run your Unity body tracking demos, and run some additional C# scripts for the Kinect4Azure cameras on Ubuntu. Before I dive into this for hours, do you know if this is possible? If so, what version of Unity would you recommend - the latest one?
     
  37. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I don't know for sure, but it should be possible. Both Sensor SDK and Body Tracking SDK have Linux installations. Unfortunately, as I said before, I don't have Ubuntu here and I'm not a Linux expert either. But if you like, we can try to do it together. Feel free to e-mail me, if you want to do it privately, or we can do it publicly here.

    The 1st step would be to put 'libdepthengine.so', 'libk4a.so' and 'libk4arecord.so' into the 'AzureKinectExamples/SDK/Kinect4AzureSDK/Plugins'-folder of the K4A-asset and make sure they are correctly recognized by Unity as native libraries. After this step, some demo scenes that don't require body tracking should start working.

    The 2nd step would be to add the tools-subfolder of the Azure Kinect Body Tracking SDK to the system path, so the body tracking libraries could be found and loaded at runtime, as well.
     
  38. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Let me discuss with colleagues and try to get C# working in Ubuntu first. It will probably be the weekend before I have some information.
     
  39. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Hi roumenf. I have not had time to install Visual Studio Code for C# on Ubuntu yet. I will keep you posted. I have a separate question: do you know if there is any way in Windows to program the time and date in the cameras, or do they take the time from the computer?
     
  40. MartinAdcada

    MartinAdcada

    Joined:
    Jan 24, 2020
    Posts:
    6
    Another stupid question. I'm using the camera feed of the Kinect.
    When I'm using Particles (not VFX Graph... perhaps that's the issue?), I can only see the particles where they hit my player silhouette, but not in the surrounding area which isn't affected by my 3D model... can you understand my issue? ^^
     
  41. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I don't think there is a way to set the device timestamps.
     
  42. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I'm afraid I can't understand your issue. Could you please e-mail me next week with some more details and screenshots (or a short video) depicting the issue. Please mention your invoice number in the e-mail, as well.
     
  43. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Sounds good. That was my worry. I've looked into it a lot and may need to contact Microsoft.
     
  44. liping1

    liping1

    Joined:
    Apr 16, 2020
    Posts:
    1
    Hello - I have an Azure Kinect and have installed the Kinect Sensor SDK and the Kinect Body Tracking Viewer - both work - but I can't seem to get any of the demo scenes working in Unity. I'm on Unity 2019.3.9f1. It says "No suitable depth-sensor found. Please check the connected devices and installed SDKs" when I try to play or create a build. Any ideas? Not sure what I'm missing here; I have been trying to figure it out for 2 days! Thanks!


    AzureKinectOpenDeviceException: result = K4A_RESULT_FAILED
    Microsoft.Azure.Kinect.Sensor.AzureKinectOpenDeviceException.ThrowIfNotSuccess[T] (System.Func`1[TResult] function) (at <193cccb1651846988252858da21b6c95>:0)
    Microsoft.Azure.Kinect.Sensor.Device.Open (System.Int32 index) (at <193cccb1651846988252858da21b6c95>:0)
    com.rfilkov.kinect.Kinect4AzureInterface.OpenSensor (com.rfilkov.kinect.KinectInterop+FrameSource dwFlags, System.Boolean bSyncDepthAndColor, System.Boolean bSyncBodyAndDepth) (at Assets/AzureKinectExamples/KinectScripts/Interfaces/Kinect4AzureInterface.cs:219)
    com.rfilkov.kinect.KinectManager.StartDepthSensors () (at Assets/AzureKinectExamples/KinectScripts/KinectManager.cs:2503)
    UnityEngine.Debug:LogException(Exception)
    com.rfilkov.kinect.KinectManager:StartDepthSensors() (at Assets/AzureKinectExamples/KinectScripts/KinectManager.cs:2527)
    com.rfilkov.kinect.KinectManager:Awake() (at Assets/AzureKinectExamples/KinectScripts/KinectManager.cs:2304)
     
  45. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Please e-mail me the Editor's log-file, so I can take a closer look. Here is where to find the Unity log-files: https://docs.unity3d.com/Manual/LogFiles.html Please also make sure the path where the Unity project resides does not contain any non-English characters.
     
  46. UDN_31531f55-d82f-47e5-aeb6-20f4aaa3f82b

    UDN_31531f55-d82f-47e5-aeb6-20f4aaa3f82b

    Joined:
    Aug 5, 2016
    Posts:
    11
    Hi there - I am using two Azure Kinects to detect users in front of a curved wall and displaying their 'shadow' by instantiating a prefab which has the background removal manager, background removal by body index and raw image components. It's currently set up so that the raw image from sensor 0 and sensor 1 are side by side, accounting for the sensor position. Is there a way to perform one seamless background removal using two sensors, or obtain the sensor index upon user detection so that I can instantiate the prefabs accordingly? The reason why I need separate instances of each user is to perform individual effects on each one. Thank you!

    Update: I've just assigned each background removal instance to a sensor and player index instead of instantiating them - not sure why I did not think of this earlier! However if there's a better way to go about this it'd be great to know, thanks!
     
    Last edited: Apr 21, 2020
  47. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi, if you want to instantiate the BackgroundRemovalManager-component dynamically, you probably use AddComponent(). As far as I remember, this automatically calls the Start()-method of the component. Hence, after you change the player index and sensor index of the newly instantiated component, you should explicitly call its OnDestroy()-method to release the resources (please make OnDestroy() public in this case), and then call Start() again, to re-get the needed resources (render textures, compute buffers, etc.).
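    Roughly something like this (a sketch only - the playerIndex/sensorIndex field names are from memory, and both OnDestroy() and Start() would need to be made public for it to compile):
    Code (CSharp):
    using UnityEngine;
    using com.rfilkov.kinect;

    public class BackgroundRemovalSpawner : MonoBehaviour
    {
        public BackgroundRemovalManager AddRemovalManager(GameObject host, int sensorIndex, int playerIndex)
        {
            // AddComponent() lets the component initialize itself with the default indices first
            BackgroundRemovalManager bm = host.AddComponent<BackgroundRemovalManager>();

            // change the indices of the newly added component
            bm.sensorIndex = sensorIndex;   // field name from memory
            bm.playerIndex = playerIndex;   // field name from memory

            // release the resources acquired with the old indices, then re-acquire them
            bm.OnDestroy();  // made public
            bm.Start();      // made public

            return bm;
        }
    }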
     
  48. ruvidan2001

    ruvidan2001

    Joined:
    Apr 5, 2020
    Posts:
    6
    Hi there! I just bought this awesome asset but most of the demos are pink. So when I open something like "Mocap animator Demo", the floor and the people are all pink. I've loaded quite a few demos, and most are pink. I researched but couldn't figure out if there was a setting I missed out on.
     
  49. GZMRD17

    GZMRD17

    Joined:
    Jan 25, 2020
    Posts:
    33
    Hi roumenf. Do you know where I can find the KinectUserBodyMerger class - in which script/file?
     
  50. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    When you create the Unity project (where you are going to import the K4A-asset), please select the 3D template on the left. I suppose your current project is HDRP or URP, and this is causing the pink (i.e. invalid) shading of materials in most of the demo scenes. There is a VFX-demo scene in the package as well, but my personal advice would be to create separate projects for HDRP and for normal 3D.