
Kinect v2 with MS-SDK

Discussion in 'Assets and Asset Store' started by roumenf, Aug 1, 2014.

  1. Montecillos

    Montecillos

    Joined:
    Nov 6, 2017
    Posts:
    2
    Yeah, I want an object to move only along the X, Y, or Z axis. I'm using the first interaction demo to try it out, but so far it still behaves normally when I try it.
     
  2. DUDESG

    DUDESG

    Joined:
    Dec 27, 2013
    Posts:
    1
    Great asset, by the way!

    Is there a way to just blob-track everything within a certain depth range, maybe between 1 m and 1.5 m from the Kinect, without the need to recognize a whole user first? I would like to blob out objects or arms to interact with rigid bodies in the scene.
     
  3. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Well, let me explain a bit. You can freeze the positions and rotations of physical objects (objects with a Rigidbody) only if they are controlled by physics, i.e. by forces. Although the interactable objects in this scene have Rigidbody components, they are not really physical objects. They are controlled by the GrabDropScript component, which moves them by updating their transform's position according to the hand cursor's position. This means they are not treated as physical objects in the scene. The only time they become physical is when you drop an object: then its 'Use gravity' setting is enabled, the gravity force is applied and it falls. At that moment you should see the effect of your freezes.

    Of course, you are free to modify the script to apply forces to the objects hovered by the hand cursor, instead of moving them directly. My goal with this demo was different: to show a simple means of dragging and dropping virtual objects.
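As an illustration, a force-based drag could look something like this (a minimal sketch, not part of the K2-asset; the component and field names are made up):

```csharp
using UnityEngine;

// Hypothetical sketch: move a grabbed object with physics
// instead of setting its transform position directly, so that
// Rigidbody position/rotation constraints (freezes) are respected.
public class PhysicsDragSketch : MonoBehaviour
{
    public Rigidbody grabbedBody;   // the object currently grabbed (assumption)
    public Transform handCursor;    // target position from the hand cursor (assumption)
    public float followStrength = 10f;

    void FixedUpdate()
    {
        if (grabbedBody == null || handCursor == null)
            return;

        // Drive the body toward the cursor with a velocity;
        // frozen axes are then enforced by the physics engine.
        Vector3 toTarget = handCursor.position - grabbedBody.position;
        grabbedBody.velocity = toTarget * followStrength;
    }
}
```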
     
  4. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Thank you! See the demo scenes in the KinectDemos/VisualizerDemo-folder. Maybe this is what you're looking for.
     
  5. this-play

    this-play

    Joined:
    Sep 8, 2016
    Posts:
    5
    Hello Rumen!

    We are developing a kinect user interface that utilizes transparent *.webm videos in the canvas for user interaction feedback. So far it's working well, but we have difficulties implementing it into your fittingroom demo.

    The videos are written into 2D textures and get displayed on raw image elements within the canvas.
    These gameobjects use a material with an unlit/transparent shader. It works in our other projects, but not in the fittingroom.

    I suspect that it has something to do with the user bodyblender.
     
  6. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi, I suspect your suspicion is correct :) Disable the UserBodyBlender-component of the MainCamera in the scene, and you will find out.
     
  7. tomaskl

    tomaskl

    Joined:
    Jan 25, 2017
    Posts:
    4
  8. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    There is a Wave-gesture included - waving from the elbow, as far as I remember.
     
  9. ddsinteractive

    ddsinteractive

    Joined:
    May 1, 2013
    Posts:
    28
    Good day! How do I work with a sensor angle of -90º, if the sensor is suspended from the ceiling and looking at the floor? We want to do depth-blob tracking and link a simple gameobject to follow the motion across the floor. Thank you! This will be for an interactive projection project.
     
  10. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    This should be feasible, if you only need scanning of the surfaces and not body tracking. Look at KinectSceneVisualizer-demo in KinectDemos/VisualizerDemo-folder.
     
  11. jmv85

    jmv85

    Joined:
    Oct 6, 2017
    Posts:
    1
    Hello,


    I'm just starting to study projection matrix and wrap my head around how I can manipulate this inside of Unity. I'm currently trying to prototype a holographic scene, but as a beginner I'm having a hard time understanding what I'm reading in these forums and tying it back to my prototype. Do you have any tutorial suggestions for a beginner? Or possible readings? I really would like to understand the foundations of this before I feel I can successfully implement your script.

    Thank you!
     
  12. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Here is a good read, although I'm not sure if projections and projection matrices are good place for a beginner to start: https://en.wikibooks.org/wiki/Cg_Programming/Unity/Projection_for_Virtual_Reality

    If you have the K2-asset, see the KinectDemos/VariousDemos/KinectHolographicViewer-scene and its SimpleHolographicCamera-component. It is based on the Davy Loots script above.
     
  13. unity_6hBuye54xQI54w

    unity_6hBuye54xQI54w

    Joined:
    Dec 12, 2017
    Posts:
    1
    I got the same problem as well. I am not sure if I understand his solution correctly. I tried removing my kinect.dll files temporarily, with no success. Are there any other possible solutions or advice? (The Kinect samples are working fine.)
     
  14. danielvettorazi

    danielvettorazi

    Joined:
    Mar 8, 2016
    Posts:
    6
    Hey guys! I am working on a project that needs live mocap from an actor into the game scene.
    Can I achieve this with this plugin? (Live mocap from the actor's body to a character)
    Thanks!
     
  15. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Yep.
     
  16. thom72

    thom72

    Joined:
    Jan 3, 2018
    Posts:
    2
    Hi, I'm looking to map my skeleton onto the RGB video from the Kinect. I use the SDK's 'body source manager' script. The skeleton is too small compared to the user in the camera image, and when I launch the application the skeleton stays in the middle, even if the user is off to one side.

    Thank you for your answers
     
  17. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I don't remember a script called 'body source manager' in the K2-asset. If you have the package, just look at KinectOverlayDemo2-scene in KinectDemos/OverlayDemo-folder, and its SkeletonOverlayer-script component.
     
  18. Stephen_O

    Stephen_O

    Joined:
    Jun 8, 2013
    Posts:
    1,510
    Hello,

    I've heard the Kinect is being discontinued. Does anyone know what the best alternatives would be, and whether they might also make use of this SDK?
     
  19. artgarcia

    artgarcia

    Joined:
    Nov 17, 2017
    Posts:
    6
    Hi!
    I am using this amazing asset to animate an avatar using kinect v2 (adobe fuse avatar).

    I have seen that there is a demo for facial animation (open mouth, and so on). Is it possible to use this asset to animate the avatar face? I was not able to find doc about how the demo is created. There are a set of transforms for lips, eyes and jaw positions, but I don't know how to replicate that. I don't even know if it is possible to use the Adobe Fuse avatars rigged with Mixamo for this facial animation...

    Any help?

    Thanks in advance.
     
  20. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    The best alternatives so far (although inferior to Kinect-v2) are the Orbbec Astra & Astra-Pro sensors. There is currently an Orbbec-Astra interface in the K2-package, but it is not yet finished. The depth-color coordinate mapping is still missing. In the meantime, Orbbec released SDK 2.0, and the interface will probably be updated in future K2-releases.
     
    YuYoshioka, Stephen_O and vivalavida like this.
  21. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    There are two options for facial animation:
    1. Animate rigged face model with ModelFaceController, as shown in KinectFaceTrackingDemo4-scene. The component documentation is here: http://ratemt.com/k2docs/ModelFaceController.html You need to experiment with your model and the component settings (transforms, axes, max values).

    2. Animate the face with blend shapes (similar to the iPhone facial animations, after they discovered depth sensors). In this case the script component to use is BlendShapeFaceController.cs. You need to customize it a bit, with the names of your model's blend shapes. Please e-mail me if you need a demo model. Don't forget to mention your invoice number, as well.
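For orientation, driving blend shapes by name in Unity looks roughly like this (a minimal sketch using the standard SkinnedMeshRenderer API; the shape name "MouthOpen" is an assumption - use the names exported by your own model):

```csharp
using UnityEngine;

// Sketch: set a model's blend shape weights by name.
public class BlendShapeSketch : MonoBehaviour
{
    public SkinnedMeshRenderer faceRenderer;

    // Set a blend shape weight (0-100) by its name, if the mesh has it.
    public void SetShape(string shapeName, float weight)
    {
        int index = faceRenderer.sharedMesh.GetBlendShapeIndex(shapeName);
        if (index >= 0)
        {
            faceRenderer.SetBlendShapeWeight(index, weight);
        }
    }

    void Update()
    {
        // Example: open the jaw halfway ("MouthOpen" is a made-up name).
        SetShape("MouthOpen", 50f);
    }
}
```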
     
  22. artgarcia

    artgarcia

    Joined:
    Nov 17, 2017
    Posts:
    6
    Thanks for the fast response. After having played a bit with the names of the blend shapes, I think it is better to send you an email.
    Thanks again!
     
  23. artgarcia

    artgarcia

    Joined:
    Nov 17, 2017
    Posts:
    6
    Hello again,

    I have another question ^^'. Is the animation using blend shapes sent over the network in a multi-user setup? I managed to make something kind of right with the blend shape names, but when I tried it with two users, only the one connected to the Kinect v2 displays the animations :-s.

    Thanks in advance.
     
  24. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    The BlendShapeFaceController-component has a setting called 'Player index'. Use it to set the user index you want the component to track. The same applies to most of the Kinect-related components.
     
  25. artgarcia

    artgarcia

    Joined:
    Nov 17, 2017
    Posts:
    6
    Do you mean for networked multiuser? I tried changing the player index but it does not seem to work.

    Thanks for your email btw.
     
  26. YuYoshioka

    YuYoshioka

    Joined:
    Sep 7, 2016
    Posts:
    11
    I'm really happy to hear that. I can't wait for updates!
     
    roumenf likes this.
  27. wolfy2

    wolfy2

    Joined:
    Nov 2, 2015
    Posts:
    13
    Hi. I'm trying to make a 'Kinect Disconnected/Connected' text appear based on the state in real time. Where in the scripts can I find the appropriate state/bool/something?

    Edit:
    I finally found the K2SensorChecker script, but putting the function in Update() drops the frame rate to 1 frame per 5 seconds. Any help?
     
    Last edited: Jan 15, 2018
  28. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Sorry for the late response! I'm not sure why you would need this at all. Kinect is a sensor that should stay connected. It's not advisable to disconnect and reconnect it all the time while the app is running.

    K2SensorChecker's idea was to check for Kinect presence only once - at the scene start. Moving the code in Update() would connect and disconnect to/from the sensor on every frame.

    Here is a better way to do it. The following code should be added to the Awake()- or Start()-method of your script. Don't forget to unsubscribe from the event in OnDestroy(), as well. Sorry, I can't test the code at the moment, but I suppose it should work:
    Code (CSharp):

        // in Awake() or Start(): subscribe to the sensor's availability event
        KinectManager kinectManager = KinectManager.Instance;
        if (kinectManager != null)
        {
            KinectInterop.SensorData sensorData = kinectManager.GetSensorData();

            Kinect2Interface k2int = (Kinect2Interface)sensorData.sensorInterface;
            k2int.kinectSensor.IsAvailableChanged += K2int_kinectSensor_IsAvailableChanged;
        }

    ....

        // invoked when the sensor gets connected or disconnected
        void K2int_kinectSensor_IsAvailableChanged (object sender, Windows.Kinect.IsAvailableChangedEventArgs e)
        {
            // your code here, e.g. check e.IsAvailable to update the UI text
        }
     
    graworg likes this.
  29. snw

    snw

    Joined:
    Mar 13, 2014
    Posts:
    42
    Hi,

    I think there is a memory leak related to the FacetrackingManager and getting HD face data.
    If you open a scene in the editor using this feature you will see that memory consumption rises every time you hit play. The memory only gets released again if you close the editor.

    As I use the plugin currently in a project, is there a quick fix to solve this issue?


    Thanks
     
  30. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    You are right. This is a known issue. Unfortunately this leak is not in the FacetrackingManager, but in the underlying FT plugins from Microsoft. It was even reported without much success back then: https://social.msdn.microsoft.com/F...ace-tracking-leaking-memory?forum=kinectv2sdk

    It should not affect your project, because Kinect & face-tracking are usually initialized & closed only once - when the project is started and closed. For the editor, the workaround is to close and reopen it after ~10 runs. Before it crashes, the face tracking usually stops working first (as far as I remember), so you can't miss it.
     
    snw likes this.
  31. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    584
    I am using the Fitting Room demo to detect the gender of a user with the Kinect SDK 1.8. It works great when testing it myself, but I run into problems with multiple users and my code to re-check when a gender is not found.

    My setup is to detect by appearance, so I immediately check for gender when a new user is found. Then, if no gender was found, I check again every 5 seconds or so, as long as a user is detected. I am allowing up to 6 users, so I want to know the gender of all 6 if possible. In other words, if a male and a female are present, I want to know that. I don't track their bodies, just use the Cloud Face API to detect gender, age and expression.

    I've tracked down the issue and the script I am using is your CloudFaceDetector.cs, which is using the playerIndex variable, which is set to 0. Meaning, it will only detect the first player entered. How might I go about using more than one player, ignoring the person who has already been identified as male or female?

    What I like is how it works with one person, it returns the gender and as long as that user does not leave, I am aware of that user's gender. That prevents needless checks once I have both a Male and Female identified as an active user. It is an installation, so the main point is just the gender of the group in front of the installation.

    Some of the things I've tried result in too many calls to the API too quickly, so I need to delay checking, but somehow compare what user has been tracked and what user has not been tracked. I'm only interested in checking non-gender identified players.

    It's kind of a nightmare to check this, as I really need 3 or 4 people milling about and don't have people that I can grab as I try out my various solutions.
     
    Last edited: Jan 25, 2018
  32. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    OK, let me explain it a bit, as far as I remember the procedure. I created this code quite a long time ago... The whole gender & age detection is in the CloudFaceDetector-script, so you'd need to modify it according to your goals.

    The face detection is done by processing the color camera image (texImage) and detecting all available faces in it. Because in the fitting room we target one user, I tried to clip only this user's image (texClipped) from the color camera image. My goal back then was to send as small a portion of the image to the cloud service as possible.

    In your case it may be better to send the whole camera image instead, and then compare the positions of the found faces (faceManager.faces) to the color-overlay positions of the users' heads. Face detection is invoked in DoFaceDetection(), and the detected faces are processed there, as well.

    One other piece of advice would be to do it as rarely as possible. Once the genders of all currently tracked users are detected, you don't need to call the cloud API any more, until a new user comes in.
     
  33. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    584
    Thanks for your help. That is essentially what I did; it was as much a logic problem as anything. Rather than clip the image, I just send the image if a person is detected in front of the installation but neither a male nor a female gender has been found yet.

    I do iterate through all of the faces found in each image check. It rechecks every 5 seconds, depending on how many people there are. Once both genders are found, it stops checking. So far, it seems to be working great!
     
    roumenf likes this.
  34. TijsVdV

    TijsVdV

    Joined:
    Mar 7, 2017
    Posts:
    32
    Hey,

    First of all, great plugin - it makes my life much easier.
    We are making a game where two players need to be tracked and shown on screen. At the moment we are using the DepthSpriteViewer script to show both players. This works nicely and places the colliders properly.

    But we now need to be able to give both players a different color/material, so that we can manipulate each player separately when they pick something up. Currently I am adapting the material on the DepthImage, but this of course changes the full image. How would we solve this?
     
  35. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    If you change the 'Compute user map'-setting of the KinectManager-component from 'Cut-out texture' to 'User texture', you will get differently painted users on screen.

    For two players you would need to duplicate the DepthSpriteViewer and set up the player indices for each player you'd like to track.
     
  36. TijsVdV

    TijsVdV

    Joined:
    Mar 7, 2017
    Posts:
    32
    Thanks for the quick response! We are already doing what you say - we have two DepthSpriteViewer scripts, one for each player. The problem is, for example, we want to draw an outline around one player when he picks something up. We have a shader that applies this to the DepthImage component's material, but this changes both players.
     
  37. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    See the code of DepthShader.shader in the Resources-folder, to find out how to get the player index. Feel free to replace the whole shader or parts of its code with the code of your shader, to outline the players instead.
     
  38. gaye2

    gaye2

    Joined:
    Feb 6, 2016
    Posts:
    2
    To fix the rotational error that occurs when a person is not centered in front of the Kinect, is it possible to add the rotational fix from ModelHatController.cs to JointOverlayer.cs for head overlays?
     
  39. gineshidalgo99

    gineshidalgo99

    Joined:
    Apr 5, 2016
    Posts:
    1
    roumenf likes this.
  40. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hm, why not. Have you tried it? It should be after rotJoint estimation.
     
  41. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    No, not at the moment. There is no interface to CMU-OpenPose yet, but this looks like a good next challenge. Thank you!
     
  42. ojaxala

    ojaxala

    Joined:
    Apr 13, 2018
    Posts:
    1
    I have a question:
    How can I get KinectManager.MapDepthFrameToSpaceCoords to return not the whole depth frame, but only the user's part?
     
  43. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    This method always returns the space coordinates corresponding to all depth points. You can then use KinectManager.Instance.GetRawBodyIndexMap() to get the body-index array, which has the same size as the depth frame, and filter only the space coordinates that have the same body index as the user. You can get the user's body index with KinectManager.Instance.GetBodyIndexByUserId().
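Putting that together, a sketch could look like this (method names as mentioned above; the exact signatures and the wrapper class are assumptions, so check them against the K2-asset sources):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: collect the space coordinates belonging to a single user.
public class UserSpaceCoordsSketch : MonoBehaviour
{
    public List<Vector3> GetUserSpaceCoords(long userId)
    {
        List<Vector3> userCoords = new List<Vector3>();
        KinectManager manager = KinectManager.Instance;
        if (manager == null)
            return userCoords;

        // space coordinates for the whole depth frame
        Vector3[] spaceCoords = new Vector3[manager.GetDepthImageWidth() * manager.GetDepthImageHeight()];
        manager.MapDepthFrameToSpaceCoords(ref spaceCoords);

        // body-index map: one entry per depth pixel, 255 means 'no user'
        byte[] bodyIndexMap = manager.GetRawBodyIndexMap();
        int userBodyIndex = manager.GetBodyIndexByUserId(userId);
        if (userBodyIndex < 0)
            return userCoords;

        // keep only the points whose body index matches the user
        for (int i = 0; i < bodyIndexMap.Length && i < spaceCoords.Length; i++)
        {
            if (bodyIndexMap[i] == userBodyIndex)
                userCoords.Add(spaceCoords[i]);
        }

        return userCoords;
    }
}
```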
     
  44. kilroyone

    kilroyone

    Joined:
    Sep 18, 2014
    Posts:
    4
    I do not know whether anyone here has asked about measuring sizes for clothes (approximately).
    For example:
    - Waist circumference
    - Chest circumference
    - Hip girth
    - Leg length
    - Hand length
    Any ideas how to do that?
     
  45. tszchung_lai

    tszchung_lai

    Joined:
    Feb 25, 2016
    Posts:
    1
    Does this package support the Asus Xtion 2?
     
  46. ChilledCitizen

    ChilledCitizen

    Joined:
    May 11, 2016
    Posts:
    1
    Hi!

    My team and I are developing a rehabilitation game using Kinect, and I was wondering: is there a way to optimise the data streams? The game runs smoothly otherwise, but the character that has the AvatarController attached to it is laggy on the Intel NUC that's going to be running the game when it's piloted in hospitals. The main difference between the NUC and the laptop that I use for development is that the NUC only has integrated graphics, but a powerful one at that. When checking the NUC's performance, we noticed that it hardly utilised the iGPU at all, other than for rendering the world, putting all the load from the Kinect on the CPU. Whereas on the laptop, it uses the GPU more.

    Is there a way to optimise the data streams - by having them update less often, by threading them to different processes, by having them fully utilise the integrated graphics, or by limiting the type of data coming through, since we only need the bodyData and gestureData?

    Thanks in advance!
     
  47. Rolento

    Rolento

    Joined:
    Aug 26, 2013
    Posts:
    1
    I'm trying to get a character with the AvatarController component to stay in place, but also be able to crouch down. I set an offset node and have vertical movement enabled, but now my character can't crouch down.

    If I don't have an offset node I can crouch, but my avatar doesn't start where I want it to (a specific point every time, i.e. 0,0,0).

    What's the best solution to this, since I need crouching and potentially jumping, while staying in place?
     
  48. onechenxing

    onechenxing

    Joined:
    Feb 13, 2015
    Posts:
    1
    If UserMeshVisualizer and BackgroundRemovalManager are used together, UserMeshVisualizer displays an error.
     
  49. Voxelscanner

    Voxelscanner

    Joined:
    May 7, 2017
    Posts:
    4
    Hi,
    Thank you for a great asset! I just got started reading through the code, and was wondering why the joint orientations are calculated in the KinectManager script (CalculateJointOrients(ref KinectInterop.BodyData bodyData)), instead of using the values directly available from the Kinect skeleton stream.

    Thanks
     
    Last edited: Aug 1, 2018
  50. skuby

    skuby

    Joined:
    Oct 27, 2014
    Posts:
    2
    Hi, I just purchased this. I got this error when trying to run the sample scene.

    Assets/K2Examples/KinectScripts/Interfaces/NuitrackInterface.cs(529,23): error CS0246: The type or namespace name `nuitrack' could not be found. Are you missing an assembly reference?