Search Unity

Kinect v2 with MS-SDK

Discussion in 'Assets and Asset Store' started by roumenf, Aug 1, 2014.

  1. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Thanks for the advice. I checked it on my computer and the avatar is not scaled continuously anymore, just as expected. Keep in mind that when you raise your hands the sensor detects most of the body joints a bit higher than when you lower your hands. You can see this in OverlayDemo2 for instance, which has nothing to do with models, scaling, etc. If this is your problem, disable the 'Vertical movement'-setting of AvatarController-component of the model, too.
     
  2. Huy Thach

    Huy Thach

    Joined:
    Mar 27, 2013
    Posts:
    6
    Thanks for your help.
    When I disable "Vertical movement", the height of the 3D model is no longer scaled when I raise my hands. However, when I sit down, the height of the model doesn't change either. Anyway, I will try to find a solution for this problem.

    I see the 3D model is scaled based on the height of the user. Do you know how to scale the model by both the height and the width of the user? Thanks.
     
  3. ralf_b

    ralf_b

    Joined:
    Jul 9, 2013
    Posts:
    48
    roumenf and vivalavida like this.
  4. Pendrokar

    Pendrokar

    Joined:
    Mar 26, 2011
    Posts:
    95
  5. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I wish I had 4 hands and 25 hours per day, to make every wish possible :) But before deciding whether or not to add support for UWP and Xbox One, I would need to experiment a bit with the framework, the available tools and the computer-console interaction. This will take some time, for sure. And my Xbox at home is still a 360. @Pendrokar, thank you for providing the link to this forum. I'll take a look at the weekend. By the way, what exactly does "new developer mode" mean, and how does it differ from the old dev mode?
     
    Pendrokar likes this.
  6. Nico20003

    Nico20003

    Joined:
    Apr 4, 2014
    Posts:
    35
    Is it possible to do something like this with Kinect v1 and this asset?



    Thanks
     
  7. FarazKhalid

    FarazKhalid

    Joined:
    Jan 13, 2014
    Posts:
    4
    Hi,
    Kinect v2 examples are great. I am experiencing problem with face examples.
    All the other demos are working fine but face demos are not tracking face.
    Can you please tell what can be the issue?
    Thanks.
     
  8. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Why not? The data for the face vertices and triangles is available - see FacetrackingDemo1. The FT-generated mask is there, only not as a wireframe, because drawing single lines in Unity is not such a simple task.
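A rough sketch of how one might render such a face mesh as a wireframe, assuming you already have the vertex and triangle arrays from the face-tracking component (how you obtain them is up to your version of the asset; nothing here is the asset's official API). Unity's MeshTopology.Lines lets you draw index pairs as line segments, which avoids the single-line drawing problem:

```csharp
using UnityEngine;

// Builds a line-based mesh from face vertices and triangle indices,
// so the face renders as a wireframe instead of a solid surface.
// The caller must supply the vertex/triangle data from the face tracker.
public class FaceWireframe : MonoBehaviour
{
    public MeshFilter meshFilter;

    public void BuildWireframe(Vector3[] vertices, int[] triangles)
    {
        // Each triangle (a, b, c) becomes three line segments: a-b, b-c, c-a.
        int[] lines = new int[triangles.Length * 2];
        for (int i = 0; i < triangles.Length; i += 3)
        {
            int a = triangles[i], b = triangles[i + 1], c = triangles[i + 2];
            int j = i * 2;
            lines[j] = a;     lines[j + 1] = b;
            lines[j + 2] = b; lines[j + 3] = c;
            lines[j + 4] = c; lines[j + 5] = a;
        }

        Mesh mesh = new Mesh();
        mesh.vertices = vertices;
        // MeshTopology.Lines draws consecutive index pairs as line segments.
        mesh.SetIndices(lines, MeshTopology.Lines, 0);
        meshFilter.mesh = mesh;
    }
}
```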
     
    Nico20003 likes this.
  9. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Didn't I already answer this on the website? Here it is: "It may seem counterintuitive, but update your Nvidia drivers (or video drivers) to their latest version. This helped when I got a problem like yours."
     
    FarazKhalid likes this.
  10. glardon

    glardon

    Joined:
    Apr 8, 2015
    Posts:
    14
    Hi

    I'm trying to get BodyIndex frames working in my project, but I don't understand what I'm doing wrong.

    Depending on how the multiSourceReader is initialized, the MultiSourceFrameArrived event is fired or not.

    This is my simple OnMultiSourceFrameArrived:

    private void OnMultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
    {
        Debug.Log("OnMultiSourceFrameArrived");
    }


    And this is a simple working initialization.
    In this case OnMultiSourceFrameArrived is called correctly:

    _reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Color);
    _reader.MultiSourceFrameArrived += OnMultiSourceFrameArrived;



    BUT, if I do this, OnMultiSourceFrameArrived is no longer called:
    _reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Depth | FrameSourceTypes.Color | FrameSourceTypes.Body | FrameSourceTypes.BodyIndex);

    This doesn't work either:

    _reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Depth);


    And I can't understand why.
    Can someone help me?
    Do you know of any working demo using BodyIndex somewhere on the web?

    Thanks!
     
    Last edited: May 10, 2016
  11. coshea

    coshea

    Joined:
    Dec 20, 2012
    Posts:
    319
    HI @roumenf

    Posting here for the sake of anyone else with the same issue. I don't use #define USE_SINGLE_KM_IN_MULTIPLE_SCENES; I have one KinectManager that starts once and isn't destroyed across scenes.

    However, the issue with this is that the gestureListeners list in KinectManager.cs only gets built during StartKinect, so I copied this code into a new method that can be called within my scene:

    Code (csharp):
    public void refreshGestureListeners()
    {
        gestureListeners.Clear();

        MonoBehaviour[] monoScripts = FindObjectsOfType(typeof(MonoBehaviour)) as MonoBehaviour[];

        foreach (MonoBehaviour monoScript in monoScripts)
        {
            if (typeof(KinectGestures.GestureListenerInterface).IsAssignableFrom(monoScript.GetType()) &&
                monoScript.enabled)
            {
                gestureListeners.Add(monoScript);
            }
        }
    }
    Would be great if this could be added in an update.

    Thanks
     
    roumenf likes this.
  12. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi @coshea,
    OK, I'll add this method to the KinectManager for the next update. Just for the sake of clarity, there is a special script (component) called LocateAvatarsAndGestureListeners, demonstrated in the scenes of the multi-scene demo. If added to an object in the scene, it does exactly the same.

    By the way, just finished the interaction listeners you wished some time ago ;) They'll be available with the next update, too.
     
  13. andriashardinata

    andriashardinata

    Joined:
    Oct 30, 2015
    Posts:
    1
    I already use this plugin and it's great.
    Sorry, I'm still a beginner - can someone point me to how to build a networked point cloud using multiple Kinects on multiple PCs? Each Kinect is connected to a PC running Unity3D.

    Thanks
     
  14. vivalavida

    vivalavida

    Joined:
    Feb 26, 2014
    Posts:
    85
    You can't connect multiple Kinects using the Kinect SDK. You might have some luck with the RoomAlive Toolkit.

    And if you do get things working, it'd be nice to see how it's done.
     
  15. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    This is not yet possible with this package. So far only the body-joint information is transmitted over the network, by the Kinect data-server. Unfortunately the K2-mobile-examples package (the client part, for info see here) is not yet published on the Unity Asset store. Face meshes and user meshes are in my plan for future update of these two packages.

    Regarding what you need, maybe you should look at Mimesys-VR: http://www.mimesysvr.com/
     
    vivalavida likes this.
  16. hope_xiao

    hope_xiao

    Joined:
    Jul 30, 2013
    Posts:
    1
    error with mobile vr :
    Assets/KinectScripts/Interfaces/DummyK2Interface.cs(4,14): error CS0101: The namespace `global::' already contains a definition for `DummyK2Interface'
    Assets/SolarSystem/Scripts/RotateAround.cs(4,14): error CS0101: The namespace `global::' already contains a definition for `RotateAround'
     
  17. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Did you import the MobileVR-package into a NEW Unity project, as required by the instructions here: https://rfilkov.com/2016/05/07/kinect-v2-mobile-vr-examples/ ?
     
  18. Jackin-Joe

    Jackin-Joe

    Joined:
    May 31, 2016
    Posts:
    2
    How can I get the head rotation? I need an example. If you know, please tell me. Thank you!
     
  19. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    First, make sure you have the FacetrackingManager-component along with the KinectManager-component in your scene. It is needed to detect the head orientation. Then, to get the head rotation in your script, use 'KinectManager.Instance.GetJointOrientation(userId, (int)KinectInterop.JointType.Head, !mirrored)' or 'FacetrackingManager.Instance.GetHeadRotation(mirrored)'. See the ModelFaceController-component in the FacetrackingManager4-scene, if you need an example.
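Putting the two calls above into context, here is a minimal sketch of a script that reads the head rotation each frame. The two orientation calls are quoted from the post above; helper methods like IsInitialized(), IsUserDetected() and GetPrimaryUserID() are as I recall them from the K2-asset and may differ slightly in your version:

```csharp
using UnityEngine;

// Applies the tracked user's head rotation to this object each frame.
// Assumes KinectManager (and optionally FacetrackingManager) are in the scene.
public class HeadRotationReader : MonoBehaviour
{
    public bool mirrored = false;

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized() || !manager.IsUserDetected())
            return;

        long userId = manager.GetPrimaryUserID();

        // Option 1: head orientation from the general joint-tracking subsystem.
        Quaternion headRot = manager.GetJointOrientation(
            userId, (int)KinectInterop.JointType.Head, !mirrored);

        // Option 2: finer-grained rotation from the face-tracking subsystem,
        // if a FacetrackingManager is present.
        if (FacetrackingManager.Instance != null)
            headRot = FacetrackingManager.Instance.GetHeadRotation(mirrored);

        transform.rotation = headRot;
    }
}
```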
     
  20. Pendrokar

    Pendrokar

    Joined:
    Mar 26, 2011
    Posts:
    95
    Didn't receive a notification about a reply. So only replying now.

    There was an old dev mode? From what I understood, before March it was only possible to deploy projects to a special Xbox One given to developers by Microsoft. Later, with the ID@Xbox program, Microsoft started handing out these special Xboxes to indie developers.
     
    roumenf likes this.
  21. JackStreicher

    JackStreicher

    Joined:
    Apr 13, 2016
    Posts:
    1
    I want my character to ride on a glider which is constantly moving (it doesn't work when I put him in a hierarchy for some reason... he copies the object's rotation, but his position is set off).

    To my main question: if I crouch, the legs are pulled toward the character's middle instead of the torso etc. moving toward the object.
    You can almost see it in the following picture. Instead of lifting his legs up, he is supposed to actually crouch. How can I fix this issue?
     
  22. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Sorry, I was busy the last two days with a very interesting conference. I think you should use the OffsetNode-setting of the AvatarController in your setup. There are some changes and a new avatar-demo scene in the current beta version regarding this offset node. Please e-mail me next week (and mention your invoice # too), and I can send it to you to check, if you like. If the legs are pulled up instead of the avatar going down, this means the avatar's body is not moving. I'd need to look closer at your scene to find the reason, if you can't find it on your own.
     
  23. Pendrokar

    Pendrokar

    Joined:
    Mar 26, 2011
    Posts:
    95
    I guess you never got around to trying it. I get a few similar errors when trying to build a project with just this asset for UWP (Universal 10). See the full list in the attached log.

    It doesn't seem too fatal, but it might mean that a UWP-compatible version of this asset would have to be posted as a standalone one. I've found a few fixes for errors in the log:

    error CS1061: 'Type' does not contain a definition for 'IsAssignableFrom' and no extension method 'IsAssignableFrom' accepting a first argument of type 'Type' could be found (are you missing a using directive or an assembly reference?)​

    This seems to be easily fixed by adding "using System.Reflection;".

    error CS0117: 'Thread' does not contain a definition for 'Sleep'
    error CS1061: 'MemoryStream' does not contain a definition for 'Close' and no extension method 'Close' accepting a first argument of type 'MemoryStream' could be found (are you missing a using directive or an assembly reference?)​

    A Unity 3D developer has pointed out steps in how to fix these errors here:
    http://forum.unity3d.com/threads/wi...ns-for-socket-and-thread.358107/#post-2317814

    [edit] Or perhaps we have to wait for fuller UWP support, as Microsoft has released two new driver updates for Kinect since May: https://blogs.msdn.microsoft.com/ki...options-in-the-windows-10-anniversary-update/
     

    Attached Files:

    Last edited: Jul 15, 2016
  24. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi Pendrokar, thank you for pushing on this issue :) I was quite busy lately. But believe it or not, I'm working on it this week, along with some other wishes. Please contact me again next week to check how it goes. I may also need you to test it externally, if you don't mind.
     
    Last edited: Jul 15, 2016
    Pendrokar likes this.
  25. zackyzack

    zackyzack

    Joined:
    Dec 28, 2012
    Posts:
    3
    Hey @roumenf, how can I detect multiple players with your plugin?

    thanks
     
  26. Pendrokar

    Pendrokar

    Joined:
    Mar 26, 2011
    Posts:
    95

    @zackyzack I'll assume that you don't have the asset. Roumenf should really add some screenshots, so I took one. In the Inspector you can see there is a "Max Tracked Users" setting. If no other person is around to test with, you can also test it by increasing the "Wait Time Before Remove" setting. This does not provide any recognition of a person who came or left, though.
     
  27. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I'm not sure what exactly you mean. Apart from the information @Pendrokar shared above: The Kinect-v2 sensor can detect up to 6 players. You don't need to do anything about it. The components of K2-asset, like AvatarController, InteractionManager and all other components that are player-aware, usually have 'Player index'-setting, which you can use to specify which player you need this component to track - 0 meaning the 1st player, 1 - the 2nd, 2 - the 3rd one, etc. Hope this information helps.
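To complement the 'Player index' setting described above, the tracked users can also be enumerated from a script. This is a hedged sketch: GetUsersCount() and GetUserIdByIndex() are the accessor names as I recall them from the K2-asset's KinectManager, so verify them against your version:

```csharp
using UnityEngine;

// Logs the user ID for each currently tracked player slot.
// Player index 0 is the 1st player, 1 the 2nd, etc., matching the
// 'Player index' setting of AvatarController, InteractionManager, etc.
public class PlayerLister : MonoBehaviour
{
    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        int count = manager.GetUsersCount();  // number of tracked users (up to 6)
        for (int i = 0; i < count; i++)
        {
            long userId = manager.GetUserIdByIndex(i);
            Debug.Log("Player " + i + " -> userId " + userId);
        }
    }
}
```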
     
  28. zackyzack

    zackyzack

    Joined:
    Dec 28, 2012
    Posts:
    3
    Sorry, I just read this post.

    I already have the asset, but when I try a fitting-room scene, I can't detect more than one user.
    Am I missing something?

    thanks
     
  29. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Yes, the fitting room demos are for one user - the one who tries the clothing. What exactly do you want to do?
     
  30. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    @Pendrokar please contact me by e-mail, if you want to check the Windows-store build at your end.
     
    Pendrokar likes this.
  31. Mudiaa

    Mudiaa

    Joined:
    Sep 4, 2013
    Posts:
    3
    Is there a way for the Kinect v2 to calculate the heart rate of the user?
     
  32. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
  33. zackyzack

    zackyzack

    Joined:
    Dec 28, 2012
    Posts:
    3
    A group fitting room - is it possible to do that?
     
  34. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    At the moment you would need to modify the ModelSelector-component, but it will be much easier to do a multi-user dressing room in the next K2-package update (v2.10.1).
     
  35. pao_olea

    pao_olea

    Joined:
    Dec 31, 2015
    Posts:
    16
    Hi

    I need to use the Kinect in a zenithal (top-down) position (reference image attached), and the only things I can see are heads, not bodies. So how can I detect movement?

    cenital.PNG

    Thanks.
     
  36. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I think in this case you could use image processing on the depth image or user texture, in order to detect movements. Look at OpenCV algorithms, to check what would work best.
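As a rough illustration of the image-processing idea, here is a simple frame-differencing sketch on the raw depth map, with no OpenCV involved. It assumes KinectManager exposes GetRawDepthMap() returning a ushort[] of depth values in millimeters (the exact element type may differ in your version); the thresholds are arbitrary starting points:

```csharp
using UnityEngine;

// Crude motion detector for a top-down camera: compares consecutive
// raw depth frames and counts pixels that changed more than a threshold.
public class DepthMotionDetector : MonoBehaviour
{
    public int depthThresholdMm = 50;  // per-pixel change counted as "movement"
    public int pixelThreshold = 500;   // changed pixels needed to report motion

    private ushort[] prevDepth;

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        ushort[] depth = manager.GetRawDepthMap();
        if (depth == null)
            return;

        if (prevDepth != null && prevDepth.Length == depth.Length)
        {
            int changed = 0;
            for (int i = 0; i < depth.Length; i++)
            {
                if (Mathf.Abs(depth[i] - prevDepth[i]) > depthThresholdMm)
                    changed++;
            }

            if (changed > pixelThreshold)
                Debug.Log("Movement detected: " + changed + " pixels changed");
        }

        prevDepth = (ushort[])depth.Clone();
    }
}
```

For anything beyond "is something moving", e.g. tracking individual people from above, OpenCV-style blob detection on the depth image would be the next step, as suggested above.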
     
  37. cstsangac

    cstsangac

    Joined:
    Jul 31, 2016
    Posts:
    1
    Thanks for your SDK. It's very helpful.

    In the IsBodyTurned() function in Kinect2Interface.cs -

    //face = On: Face (357.0/1.0)
    //face = Off
    //| Head_px <= -0.02
    //| | Neck_dx <= 0.08: Face (46.0/1.0)
    //| | Neck_dx > 0.08: Back (3.0)​
    //| Head_px > -0.02
    //| | SpineShoulder_px <= -0.02: Face (4.0)
    //| | SpineShoulder_px > -0.02: Back (64.0/1.0)​

    What do values like (357.0/1.0), (46.0/1.0) and (4.0) mean?

    Also, is this functionality (AllowTurnArounds in KinectManager) restricted to certain turning angles (around the Y axis)? I find it not responding when I turn my back completely to the Kinect camera.
     
  38. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    This was my attempt to use machine learning to estimate when the user has turned his back to the sensor. The numbers in parentheses are the case counts, as far as I remember, i.e. 357 cases match the result and 1 doesn't. This algorithm was not very successful in my tests though, and because of more urgent tasks I haven't had time to improve it so far. That's why I say it's still experimental and don't really recommend using the 'Allow turnarounds'-option yet.
     
  39. intentionperception

    intentionperception

    Joined:
    Jun 13, 2016
    Posts:
    13
    FYI, after installing to 5.4.0b24, these warnings are shown:
    1. Script 'AudioSource' has the same name as built-in Unity component. AddComponent and GetComponent will not work with this script.
    2. Script 'Joint' has the same name as built-in Unity component. AddComponent and GetComponent will not work with this script.
     
  40. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I know, but the sources that produce these warnings come from the Kinect team at Microsoft. I don't want to modify them. Don't worry, these warnings are harmless.
     
  41. intentionperception

    intentionperception

    Joined:
    Jun 13, 2016
    Posts:
    13
    This is a great tool, Rumen. I've been able to make some recordings, and I wonder if you could offer tips for some of the issues I've encountered:

    1) When I try to capture a nod "no" (rotation of head around Y axis), it appears that no movement is detected. I've heard that Kinect v2 doesn't capture bone rotation, which would explain this. Do you know?

    2) When I try to capture a nod "yes" (rotation of head around X axis), very little movement is visible in Kinect Studio or in your tool, even when I make a very exaggerated motion. Is this consistent with the precision of Kinect v2 in your experience?
     
  42. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    If you want to capture head rotations, I would recommend adding the FacetrackingManager-component to your scene. It tracks the head movements much better than the general joint-tracking subsystem.
     
  43. gino_pisello

    gino_pisello

    Joined:
    Jun 24, 2013
    Posts:
    33
    Hello!
    KinectFacetrackingDemo3 doesn't work anymore since I switched to Unity 5.4. I can see the texture applied to the material of the faceview GO in the Inspector, but I can't see it on the quad in the game window.
    Does anybody know why this is happening? Could it be related to the RenderTexture bugs in Unity 5.4?
     
    roumenf likes this.
  44. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi, the root of the problem is in the blur shader used by the BackgroundRemovalManager-component. This shader does not work properly in Unity 5.4. To quickly work around it, you can comment out the invocation of the ApplyImageBlur()-function in UpdateBackgroundRemoval() of KinectInterop.cs. This will make the user image a bit blocky, but will show the output anyway. There should be a better workaround in the next package release, I hope.
     
  45. eco_bach

    eco_bach

    Joined:
    Jul 8, 2013
    Posts:
    1,601
    If there were some sort of 'Asset Store developer award', Rumen would easily win it. The amount and quality of the support I have received has been phenomenal! If anyone is reading this and is considering purchasing, don't hesitate. Just get it. Best value in the entire asset store.
     
    vivalavida and roumenf like this.
  46. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Thank you, Jim! I just try to help, within some limits :) For sure, there are also other good supporters in the Asset store.
     
  47. pao_olea

    pao_olea

    Joined:
    Dec 31, 2015
    Posts:
    16
    Hi,

    How can I access a clean depth image? Right now I can only use GetUsersLblTex(), and it only works when a user is detected. I need to see everything, like the attached image.

    Thanks.
     

    Attached Files:

  48. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    To access the raw depth image, you can use the GetRawDepthMap()-function of KinectManager. It returns the short[] array that the Kinect provides on each frame. I think you can easily convert it to the texture you need. GetUsersLblTex() returns a texture containing the detected users only. It may differ, depending on the value of the 'Compute user map'-setting of the KinectManager-component.

    FYI: To get a scene similar to what you need, but in 3D and in color, see the KinectSceneVisualizer-scene in the VariousDemos-folder. The source is also there, so you can modify it to suit your needs. Hope this info helps.
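To sketch that conversion, here is one hedged way to turn the raw depth map into a grayscale Texture2D. The Kinect-v2 depth resolution of 512x424 is hard-coded (your KinectManager version may expose width/height getters instead), and I treat the array as ushort[] millimeter values; adjust the cast if your version really returns short[]:

```csharp
using UnityEngine;

// Renders the full raw depth frame as a grayscale texture each frame,
// so the whole scene is visible, not just the detected users.
public class RawDepthViewer : MonoBehaviour
{
    private const int Width = 512;          // Kinect-v2 depth frame width
    private const int Height = 424;         // Kinect-v2 depth frame height
    private const float MaxDepthMm = 4500f; // clamp range for visualization

    private Texture2D depthTex;
    private Color32[] pixels;

    void Start()
    {
        depthTex = new Texture2D(Width, Height, TextureFormat.RGBA32, false);
        pixels = new Color32[Width * Height];
        GetComponent<Renderer>().material.mainTexture = depthTex;
    }

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        ushort[] depth = manager.GetRawDepthMap();
        if (depth == null || depth.Length != pixels.Length)
            return;

        for (int i = 0; i < depth.Length; i++)
        {
            // Map depth in millimeters to a 0-255 gray value; 0 = no reading.
            byte v = (byte)(255f * Mathf.Clamp01(depth[i] / MaxDepthMm));
            pixels[i] = new Color32(v, v, v, 255);
        }

        depthTex.SetPixels32(pixels);
        depthTex.Apply();
    }
}
```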
     
    g__b likes this.
  49. DarknessPhoenix

    DarknessPhoenix

    Joined:
    Aug 17, 2016
    Posts:
    3

    Hi @vivalavida,

    Can you please explain how you achieved this?
    Or maybe share the script/project that you used?
     
  50. agramma

    agramma

    Joined:
    Jun 20, 2011
    Posts:
    13
    Hello,

    I am using the Kinect v2 mocap animator and I would like to load motion data from an XML file instead of using Kinect data directly. Could you give me some guidelines about which part of the code I should change? It would be very helpful!

    Thank you!
     
    Last edited: Aug 22, 2016