Kinect v2 with MS-SDK

Discussion in 'Assets and Asset Store' started by roumenf, Aug 1, 2014.

  1. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Have you tried the other asset (the free K1-asset) on the same machine? Just import it into a new Unity project. I suppose it will cause the usual errors - DeviceNotGenuine, etc. - as described here: https://rfilkov.com/2013/12/16/kinect-with-ms-sdk/#ki This will help me diagnose your current issue.
     
  2. caitsithware

    caitsithware

    Joined:
    Feb 28, 2014
    Posts:
    14
Yes, I tried kinect-with-ms-sdk, but the result was the same.
    Tracking stops after 90 seconds. There is no error.
     
  3. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Hm, this looks like some new version of the same issue. Feel free to ask for a refund, if Kinect-v1 is your only option.
     
  4. caitsithware

    caitsithware

    Joined:
    Feb 28, 2014
    Posts:
    14
I am not planning to request a refund.
    Is there anything I can do to help?
     
  5. caitsithware

    caitsithware

    Joined:
    Feb 28, 2014
    Posts:
    14
I have since tried the following:
    • Reinstalled the SDK and the driver, and rebooted the PC. (I have done this a few times, but the issue is not resolved.)
    • Logged the return value of kinect-with-ms-sdk's NuiSkeletonGetNextFrame method:
      • Added log output at line 701 of KinectWrapper.cs:
        Code (CSharp):
        Debug.Log(GetNuiErrorString(hr));
        • After 90 seconds I get the error "Device is not genuine."
     
  6. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Yes, this is what I thought, too. It is the same error, only this time it doesn't freeze the Editor. Unfortunately, we cannot do much about it. When I reported it to Unity staff a few years ago, they responded that it was caused by an internal Kinect SDK crash.

    My best advice would be to try the same on another, more powerful machine. I previously had this issue on one of my machines (an old one), but not on the others.
     
  7. rotorstudio

    rotorstudio

    Joined:
    Jan 9, 2019
    Posts:
    2
    I know, thanks for the reply ;)
     
    roumenf likes this.
  8. caitsithware

    caitsithware

    Joined:
    Feb 28, 2014
    Posts:
    14
I understand it is a problem with the Kinect SDK.
    I cannot buy a new machine immediately, so I will buy a RealSense instead.
    Thank you.
     
  9. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
OK. I would recommend the RealSense D415. But if there is still a chance to find a Kinect-v2 and the respective adapter, don't hesitate to get one instead.
     
  10. jackvob1

    jackvob1

    Joined:
    Mar 2, 2018
    Posts:
    38
Hello again :)

    I want to ask about the fitting room: when I turn around, the dress doesn't turn with me - it keeps facing forward instead of showing its back side.
     
  11. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Hi, this is an old problem, caused by the Kinect's body-tracking algorithm. It tracks users correctly only when they are facing the camera. My workaround suggestion is to add the FacetrackingManager as a component to the KinectController game object (don't change any settings) and enable the 'Allow turn-arounds'-setting of the KinectManager. Don't expect 100% success, but this is all I can offer at the moment.
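    For reference, here is a minimal sketch of the same setup done from code. The 'allowTurnArounds' field name is my assumption for the 'Allow turn-arounds' checkbox - please verify it in KinectManager.cs:

    Code (CSharp):
    using UnityEngine;

    // Sketch: enable the turn-around workaround at startup.
    // Attach this to the KinectController game object.
    public class TurnAroundSetup : MonoBehaviour
    {
        void Awake()
        {
            // The K2-asset uses face tracking internally to detect
            // turned-around users; the default settings are fine.
            if (GetComponent<FacetrackingManager>() == null)
            {
                gameObject.AddComponent<FacetrackingManager>();
            }

            KinectManager manager = GetComponent<KinectManager>();
            if (manager != null)
            {
                // ASSUMED field name behind the 'Allow turn-arounds' checkbox.
                manager.allowTurnArounds = true;
            }
        }
    }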
     
  12. jhuh3226

    jhuh3226

    Joined:
    Oct 24, 2018
    Posts:
    9
    "How can I map face motion on Fuse created Avatar?

Hello, I'm really happy with the asset, and I have questions on how to map face motion onto an avatar that I created in Fuse and rigged at Mixamo.
    I found a previous post and answer on the same issue, but couldn't get it to work. (Below are the two options given as the solution.)
    There are two options for facial animation:
    1. Animate rigged face model with ModelFaceController, as shown in KinectFaceTrackingDemo4-scene. The component documentation is here: http://ratemt.com/k2docs/ModelFaceController.html You need to experiment with your model and the component settings (transforms, axes, max values).

    2. Animate face with blend shapes (similar to iPhone facial animations, after they discovered the depth sensors). In this case, the script component to use is BlendShapeFaceController.cs. You need to customize it a bit, with the names of your model's blend shapes. Please e-mail me, if you need a demo model. Don't forget to mention your invoice number, as well.

I tried to understand and apply solution one, but I can't find the ModelFaceController script in the asset. Could you elaborate on how I can access the ModelFaceController script and what should be done to move the face of my avatar?
    For the second solution, I'm wondering how I should change the names of the blend shapes for my model created in Fuse. I opened slothHead.fbx in Maya and checked the names: they are eyeBlink_L, eyeLookin_R, etc., under the input called blendShape2. Should I modify my model in Maya and rename my blend shapes to match those names? Is there a way to name them in Blender, or any other simpler way of doing it? And could you send me the demo model?

    Thank you and hope to hear from you.
     
  13. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I answered these questions by e-mail, but I'm copying my answer here, in case someone else is interested:

    "
In the latest update of the K2-asset (v2.19) I replaced the KinectFaceTrackingDemo4-scene and removed its scripts and models. That's why you can't find the ModelFaceController-script anymore. It was difficult for customers to follow.

    Now this scene (KinectFaceTrackingDemo4) is replaced with a new scene, model and script that use blend shapes instead. The model is 'sloth_head' and its important script-component is 'BlendShapeFaceController'. This component is more general than the previous one, mentioned above. Take a look at it in the scene and I'll try to explain some details about it.

    The component has three important settings:
    - 'Head transform' - along with 'Mirrored head movement' it is used to move the head model in space, to overlay the user's face on screen.
    - 'Face anim units' - This is a list of facial animation units to use. The values of animation units are tracked and provided by the Kinect face-tracking subsystem, and used to animate the model's blend shapes.
    - 'Face blend shapes' - This is a list of model's designed blend shapes, corresponding to each of the selected animation units above. Each animation unit controls its respective blend shape. That's why the sizes of both lists should be the same.

Let me give you an example: the 1st animation unit in the demo, 'Jaw open', controls the 1st blend shape of the model, 'blendShape2.jawOpen'; the 2nd animation unit, 'Lip pucker', controls the 2nd blend shape, 'blendShape2.mouthPucker'. I designed the component this way because the blend shapes and their names almost always differ from one face model to another. You can see the actual blend shapes of 'sloth_head' if you unfold the object in Hierarchy, select 'Sloth_Head2' and then unfold the 'Blend shapes'-setting of its SkinnedMeshRenderer-component.

Please mind, I'm not a model designer and can't give you advice on how to add blend shapes to a face model in Maya or Blender. But I'm sure you can look at how it is done for 'sloth_head' and do something similar with your face model.
    "
     
  14. sarahkimys

    sarahkimys

    Joined:
    Feb 23, 2019
    Posts:
    1
    Hi there!

I'm using this to create a dodgeball game for a school project. I was previously working with just the Kinect plug-in for Unity, attaching the ball to the right hand of the Kinect "skeleton"; on a mouse click, the ball would be thrown.
    Now I'm trying to do something similar using the right hand of the avatar in the AvatarDemo1 Kinect demo, but I can't seem to find any reference in the scripts to the Kinect body joints.

    Would you be able to help me?

    Thank you!
     
  15. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi, I'm not sure what exactly you can't find. Look at this tip for a simple example: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t7 The K2-asset documentation is here: https://ratemt.com/k2docs and here https://ratemt.com/k2gpapi/annotated.html
     
  16. jackvob1

    jackvob1

    Joined:
    Mar 2, 2018
    Posts:
    38
Hello, I want to ask about the fitting-room categories and how to use them. I want to have clothes, pants and boots - do I just add new folders, like the clothing folder from the demo, or something else? I also want to ask about this: for example, there are 3 categories, and if I choose one model from the clothes category and then choose another one from a different category, the first one doesn't disappear. Thank you!
     
  17. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
In your case you need to add 3 folders under the Resources-folder - clothes, pants & boots. Then add 3 ModelSelector-components to the KinectController-game object. Set their respective 'Model category'-settings to 'clothes', 'pants' & 'boots', and the 'Number of models'-setting to the number of models in each respective category. For the other ModelSelector-settings, see the demo scene. More info is available here.
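    For reference, a rough sketch of the same setup done from code. The 'modelCategory' and 'numberOfModels' field names are my assumptions based on the inspector labels - verify them in ModelSelector.cs:

    Code (CSharp):
    using UnityEngine;

    // Sketch: configure one ModelSelector per clothing category at startup.
    // Attach this to the KinectController game object; the models are expected
    // under Resources/clothes, Resources/pants and Resources/boots.
    public class FittingRoomSetup : MonoBehaviour
    {
        void Awake()
        {
            AddSelector("clothes", 5);
            AddSelector("pants", 3);
            AddSelector("boots", 2);
        }

        void AddSelector(string category, int modelCount)
        {
            ModelSelector selector = gameObject.AddComponent<ModelSelector>();
            // ASSUMED field names behind the 'Model category' and
            // 'Number of models' inspector settings.
            selector.modelCategory = category;
            selector.numberOfModels = modelCount;
        }
    }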
     
  18. jhuh3226

    jhuh3226

    Joined:
    Oct 24, 2018
    Posts:
    9
An inquiry on how to stabilize shaky movement when 'apply muscle limit' is enabled.

Hello, I'm trying to use the 'apply muscle limit' function on the avatar under a certain condition.
    I created an avatar with an animation applied to it, and I am trying to mirror only the movement of the head according to the movement of the user in front of the Kinect. I used the mask function in the inspector and 'avatar control classic' to move the head only.
    The problem is that after setting values in the muscle settings and applying them with 'apply muscle limit', the avatar moves very noisily (a lot of unnecessary movement). When the limits are not applied, there is no such problem.

    Is there any way to solve this problem?

    Thank you
     
  19. jhuh3226

    jhuh3226

    Joined:
    Oct 24, 2018
    Posts:
    9
An additional question on modifying how strongly the user's movement is mirrored on the avatar.

    Hello, I'm working on a project for my thesis, and to conduct the research I need to modify (limit) the movement of the mirrored avatar.

    In your example the avatar is fully mirrored, synchronizing 100% with the movement of the user, and the muscle settings allow me to design an avatar with limited body movement.
    However, what I'm trying to do is not to set a hard limit, but to give a more natural feeling by restricting the bone movement of the arms and neck to 1/3 of how much the user actually moves. (The current muscle limits feel like the movement gets stuck at a certain point, so instead I wish the avatar to move less than the user, moderately, every frame.)

    I have looked through the AvatarController script, but I don't have a clear idea of where I should modify it to get the desired outcome.

    Is there any way, or a hint, on writing a script to do what I described?
    Thank you.
     
  20. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
I answered your questions per e-mail. Please use only one communication channel - either e-mail or the forum. That will save me the time of answering the same questions twice.
     
  21. jackvob1

    jackvob1

    Joined:
    Mar 2, 2018
    Posts:
    38
    Hello,

Is there a demo where, for example, when the Kinect detects a person walking by, it plays a video - but plays it only once per detected person, without repeating?
     
  22. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Please look at the KinectMovieDemo-scene in KinectDemos/MovieSequenceDemo.
     
  23. dttngan91

    dttngan91

    Joined:
    Nov 21, 2013
    Posts:
    80
Wow, it seems like old technology, but it is still alive today. I am a newbie; may I ask if the Kinect sensor for Windows only (not the Xbox one) is still available to buy at the moment? And may I know how common those devices are in the gaming industry?
     
  24. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Well, the "old technology" still works better than the most of the newest AR technologies and the depth sensors from other producers :)

    I'm not sure though, if the sensor is still available. Please look at Amazon's website for your region and the websites of other online resellers. In the worst case, just wait for the Azure Kinect-sensor later this year.
     
  25. Sproud346

    Sproud346

    Joined:
    Aug 3, 2017
    Posts:
    3
Hi, how can I find the volume level of the microphone input at run time with the Kinect v2 sensor?
     
  26. aditya_atthah

    aditya_atthah

    Joined:
    Dec 16, 2017
    Posts:
    2
    Hi roumenf,

Thanks for the amazing asset. I am currently using it in a 3D visualization project and selected KinectSceneVisualizer as a starting point. I was then able to carry the resulting mesh over to another scene for some post-processing, like subdividing it to make it smoother, and adding some other bells and whistles. But I got stuck at one particular point.

    I need to carve out a "hole" somewhere in the generated mesh and later fill it with another 3D model, so that it feels like part of the scene. Since the Unity camera is controlled by the player and is allowed to go inside the model, I need an actual hole in the generated mesh. All other factors, including the Kinect sensor's position, will remain the same at all times, making sure the model always aligns perfectly with the hole.

    I would greatly appreciate any help you could provide by pointing me to which part or method(s) of the SceneMeshVisualizer class need to be modified.
     
  27. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
I'm not sure there is an API for that in the Kinect SDK. But as far as I remember, the K2 sensor acts as a standard microphone device to the system. In this regard, please look at the Windows audio API.
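    Alternatively, since the sensor registers as a standard microphone, you could read its level with Unity's own Microphone class instead of the native Windows audio API. A minimal RMS-level sketch:

    Code (CSharp):
    using UnityEngine;

    // Sketch: sample the default recording device (which the Kinect v2
    // registers itself as) and compute its current RMS input level.
    public class MicLevelReader : MonoBehaviour
    {
        const int SampleWindow = 256;
        AudioClip micClip;

        void Start()
        {
            // null = default device; loop the 10-second clip indefinitely.
            micClip = Microphone.Start(null, true, 10, 44100);
        }

        public float GetCurrentLevel()
        {
            int micPos = Microphone.GetPosition(null) - SampleWindow;
            if (micPos < 0)
                return 0f;  // not enough samples recorded yet

            float[] samples = new float[SampleWindow];
            micClip.GetData(samples, micPos);

            float sum = 0f;
            foreach (float s in samples)
                sum += s * s;

            return Mathf.Sqrt(sum / SampleWindow);  // RMS level, roughly 0..1
        }
    }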
     
  28. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Hi, I'm not quite sure why you can't just place the model over the scene mesh. But anyway, if you need to modify the mesh, open Scripts/SceneMeshVisualizer.cs (a component of the SceneMesh object in Hierarchy) and modify its UpdateMesh() or EstimateSceneVertices() methods.
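    The carving itself could look something like the generic sketch below - skip every triangle that touches the hole region while rebuilding the mesh. This is not the actual SceneMeshVisualizer code; adapt the idea inside the methods mentioned above:

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    public static class MeshHoleUtil
    {
        // Rebuild the triangle list, dropping any triangle that has a vertex
        // inside 'hole' (the bounds of the region to carve out, in the same
        // space as the mesh vertices).
        public static int[] FilterTriangles(Vector3[] vertices, int[] triangles, Bounds hole)
        {
            List<int> kept = new List<int>(triangles.Length);

            for (int i = 0; i < triangles.Length; i += 3)
            {
                bool touchesHole = hole.Contains(vertices[triangles[i]]) ||
                                   hole.Contains(vertices[triangles[i + 1]]) ||
                                   hole.Contains(vertices[triangles[i + 2]]);
                if (!touchesHole)
                {
                    kept.Add(triangles[i]);
                    kept.Add(triangles[i + 1]);
                    kept.Add(triangles[i + 2]);
                }
            }

            return kept.ToArray();
        }
    }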
     
  29. King-Kwan

    King-Kwan

    Joined:
    Jan 26, 2013
    Posts:
    3
Hey, I am wondering why tracked users other than the first one can't interact with the spawned objects in the "DepthColliderDemo2d" scene. I have "Ignore Z coordinates" enabled on the KinectController.

    Thanks in advance.
     
  30. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Almost all Kinect-related components (like the DepthSpriteViewer-component of the KinectController in the scene) have a setting called 'Player index'. It determines which one of the detected users interacts with the component. A value of 0 means the 1st detected user, 1 - the 2nd one, 2 - the 3rd one, etc.

    'Ignore Z-Coordinates' is irrelevant to the user detection. It is meant to be used in 2D scenes, where the Z-coordinate of the detected body joints is redundant.
     
  31. King-Kwan

    King-Kwan

    Joined:
    Jan 26, 2013
    Posts:
    3
I see, thank you. So if I wanted all the detected users to interact with the spawned objects, I would have to loop through all the detected users and make each of them interact with the component?
     
  32. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Yes. The player indices vary between 0 and KinectManager.Instance.GetUsersCount() - 1. For more info, see how the player index is used - look at the references to 'playerIndex' in the code of DepthSpriteViewer.cs.
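    A rough sketch of that loop, spawning one viewer per detected user (the template-duplication approach is my assumption; 'playerIndex' is the field mentioned above):

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    // Sketch: keep one DepthSpriteViewer per detected user.
    public class MultiUserSetup : MonoBehaviour
    {
        public DepthSpriteViewer viewerTemplate;  // configured for player index 0
        readonly List<DepthSpriteViewer> viewers = new List<DepthSpriteViewer>();

        void Update()
        {
            KinectManager manager = KinectManager.Instance;
            if (manager == null)
                return;

            // Player indices run from 0 to GetUsersCount() - 1.
            int userCount = manager.GetUsersCount();
            while (viewers.Count < userCount)
            {
                DepthSpriteViewer viewer = Instantiate(viewerTemplate);
                viewer.playerIndex = viewers.Count;  // see DepthSpriteViewer.cs
                viewers.Add(viewer);
            }
        }
    }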
     
  33. jackvob1

    jackvob1

    Joined:
    Mar 2, 2018
    Posts:
    38
Hello, I want to ask a couple more questions about the fitting room:

    1. About the 3D models: I have some, but when I import them they don't look real. Is it because of the models themselves, or something else?
    2. About tracking when the user turns around: is there any other way than using face tracking?
    3. About realistic 3D (for example, a dress that moves as if affected by gravity): does that come from the 3D model, or does something need to be changed inside Unity?

    By the way, thanks for all the help before :)
     
  34. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,303
    Hi @roumenf ,
I've got a small feature request - after each update I keep adding a horizontalOffset to AvatarController ;~)
    I'm doing something like
    Code (CSharp):
    // let's try left and right upper arm
    // see boneIndex2MecanimMap
    if (horizontalOffset != 0f &&
        bones[5] != null && bones[11] != null)
    {
        // { 5, HumanBodyBones.LeftUpperArm},
        // { 11, HumanBodyBones.RightUpperArm},

        Vector3 dirSpine = bones[5].position - bones[11].position;
        targetPos += dirSpine.normalized * horizontalOffset;
    }
    in MoveAvatar

    (It makes sense for a not exactly perpendicular projection, and/or for correcting imperfections in models - it's not perfect, but supposedly better than nothing.)

Would it be possible to include this offset, too?

Secondly, I'm not sure if it's a bug elsewhere, but here's a quick fix for an NRE in GetBoneTransform (line ~165). I replaced
    Code (CSharp):
    if(index >= 0 && index < bones.Length)
    with
    Code (CSharp):
    if(index >= 0 && bones != null && index < bones.Length)
    Thank you!
     
    roumenf likes this.
  35. jackvob1

    jackvob1

    Joined:
    Mar 2, 2018
    Posts:
    38
Ah, I forgot one more question: are there any tips on adding sizes (S, M, L), like another category?
     
  36. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Thank you very much! I added your changes to AvatarController's code.
     
  37. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
To your questions:
    1. I don't really understand what you mean by 'the 3D doesn't look real'. These are just example models. Feel free to create more realistic ones.
    2. My approach uses the face tracking (without tracking the face model) to check for turned-around users. If you can think of some other approach, just go ahead and implement it.
    3. There is no gravity in the fitting-room scenes, as far as I remember.
     
  38. unity_Dxy1_lPEgaHIRQ

    unity_Dxy1_lPEgaHIRQ

    Joined:
    Mar 28, 2019
    Posts:
    1
@roumenf I would like to know about this too, because currently I create each size in a new scene, and that is not very good. Are there any tips for handling each size just like a category?
     
  39. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
I'm not sure how the sizes (S, M, L, etc.) are estimated in real life. But I think they could be implemented in the FR-scenes as fixed scales of the cloth models (instead of using the AvatarScaler that determines the scale automatically). A better approach, though, would be to determine the size that matches the user (and show it on screen), based on the current cloth scale and its initial model size. Please tell me if I'm wrong.
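    As a sketch of that second idea (names and thresholds are hypothetical - calibrate them against real garments):

    Code (CSharp):
    using UnityEngine;

    public static class SizeEstimator
    {
        // Derive an S/M/L label from the ratio between the automatically
        // fitted cloth scale (e.g. from AvatarScaler) and the scale the
        // model was designed at. Thresholds are made up for illustration.
        public static string EstimateSize(Vector3 fittedScale, Vector3 designScale)
        {
            float ratio = fittedScale.y / designScale.y;

            if (ratio < 0.9f) return "S";
            if (ratio < 1.1f) return "M";
            return "L";
        }
    }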
     
  40. King-Kwan

    King-Kwan

    Joined:
    Jan 26, 2013
    Posts:
    3
Hey. In all your Kinect demo scenes, the tracked area doesn't extend to the edges of the screen space. Is this a hardware problem? And do you have any tips to make the Kinect track based on the ratio/resolution of the screen?
     
  41. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
I suppose this is due to the depth camera's field of view being different from the color camera's field of view. I don't limit anything in my scripts and components. One possible workaround would be for the main camera to "see" only within the view of the depth camera.

    FYI: The depth camera has a resolution of 512 x 424 pixels and a FOV of 70.6 x 60 degrees, while the color camera has a resolution of 1920 x 1080 pixels and a FOV of 84.1 x 53.8 degrees.
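    A minimal sketch of that workaround, using the depth camera numbers above (assuming a perspective main camera; the aspect value is an approximation):

    Code (CSharp):
    using UnityEngine;

    // Sketch: constrain the main camera to the depth camera's field of view,
    // so the visible area matches the area where users can be tracked.
    [RequireComponent(typeof(Camera))]
    public class MatchDepthCameraFov : MonoBehaviour
    {
        void Start()
        {
            Camera cam = GetComponent<Camera>();
            cam.fieldOfView = 60f;     // depth camera vertical FOV, in degrees
            cam.aspect = 512f / 424f;  // approx. depth camera aspect ratio
        }
    }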
     
  42. yinansong

    yinansong

    Joined:
    May 24, 2017
    Posts:
    2
    Great asset! Thank you very much. I want to use this asset to animate 2D characters in real time. How should I do that?
     
  43. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
There is a setting of the KinectManager-component in the scene, called 'Ignore Z-coordinate'. Please enable it when you need to control 2D character models.
     
  44. Malveka

    Malveka

    Joined:
    Nov 6, 2009
    Posts:
    191
    Thanks for making and sharing this asset! It has enabled me to get started experimenting with a Kinect v2 sensor in very short order. I'm truly grateful for that.

    Now that I am experimenting, I've encountered a situation that has me puzzled. I've taken the example DemoOverlay2, which places a green ball prefab at each joint, and modified it to instead place a particle system prefab at each joint. I've set each particle system to emit a single long-lived particle that has a trail.

    All works as expected, except that the trails can be quite jaggy. I did lots of tweaking of the KinectManager smoothing settings, but regardless of what I tried it just didn't seem to have much effect. After verifying that the filter functions were actually being invoked with the expected smoothing parameters, I happened to run across these two lines in SkeletonOverlayer.cs.

    Code (CSharp):
    1. Vector3 posJoint = manager.GetJointPosColorOverlay(userId, joint, foregroundCamera, backgroundRect);
    2. //Vector3 posJoint = manager.GetJointPosition(userId, joint);
    When I comment line 1 and uncomment line 2, the smoothing works well. The particle trails become much less jaggy. Unfortunately, the drawn skeleton lines are now skewed. Presumably this is related to the joint positions being mapped/not mapped to the color image. The effect is that it appears that the smoothing is ineffective when GetJointPosColorOverlay is used.

    Is this expected behavior? I'd like to understand what is happening in this situation and would appreciate any insights you can offer.
     
  45. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi, GetJointPosition() returns the 3d-position of the joint in depth camera space, in meters. GetJointPosColorOverlay() converts the 3d joint position to the color camera space. By all means, jaggy is not a desired behavior.

    Please open SkeletonOverlayer.cs-script and try to replace:
    1. 'joints.transform.position = posJoint;' with 'joints.transform.position = Vector3.Lerp(joints.transform.position, posJoint, 5f * Time.deltaTime);' and
    2. 'joints.transform.rotation = rotJoint;' with 'joints.transform.rotation = Quaternion.Slerp(joints.transform.rotation, rotJoint, 5f * Time.deltaTime);'.
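    For clarity, the two patched lines as one block:

    Code (CSharp):
    // Interpolate toward the newly reported pose instead of snapping to it,
    // which smooths out the jagged jumps between frames.
    joints.transform.position = Vector3.Lerp(joints.transform.position, posJoint, 5f * Time.deltaTime);
    joints.transform.rotation = Quaternion.Slerp(joints.transform.rotation, rotJoint, 5f * Time.deltaTime);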

This should smooth the changes in positions and rotations of all joint prefabs (and I suppose it should work better for particle effects). If it doesn't help, please e-mail me some instructions on how to reproduce your issue. I'm not very good with particle systems and would need some assistance (or a representative project) to look for a workaround.
     
    Malveka likes this.
  46. Malveka

    Malveka

    Joined:
    Nov 6, 2009
    Posts:
    191
    Thanks for looking into this!

    Sorry for the delayed response. Very busy at the moment. I'll try this as soon as I get a chance and report my findings.
     
  47. Malveka

    Malveka

    Joined:
    Nov 6, 2009
    Posts:
    191
    Yes, your recommended changes are effective at smoothing the position/rotation changes so that the particle trails are no longer jagged. Thanks!
     
  48. Monstruo

    Monstruo

    Joined:
    Mar 27, 2019
    Posts:
    1
Hello, my interest at this time is in face recognition, but it does not work for me. I'm attaching the details below:

    System.NullReferenceException: Object reference not set to an instance of an object
    at Kinect2Interface.InitFaceTracking (System.Boolean bUseFaceModel, System.Boolean bDrawFaceRect) [0x00096] in C:\Users\edyca\Documents\Unity\Kinect-Core-V2\Assets\K2Examples\KinectScripts\Interfaces\Kinect2Interface.cs:1119
    at FacetrackingManager.Start () [0x00105] in C:\Users\edyca\Documents\Unity\Kinect-Core-V2\Assets\K2Examples\KinectScripts\FacetrackingManager.cs:629
    UnityEngine.Debug:LogError(Object)
    FacetrackingManager:Start() (at Assets/K2Examples/KinectScripts/FacetrackingManager.cs:652)


    Any recommendations to solve it?
     


  49. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
I think there is something wrong with the Kinect's face-tracking system on your machine. Please open 'Face Basics D2D' from the 'SDK Browser v2.0' and check if it displays a rectangle around your face on screen, along with extra info about it. The 'SDK Browser v2.0' is part of the Kinect SDK 2.0 installation. I think in your case there will be some kind of error instead.

    I'm not sure if this applies to your case, but when I first tried the Kinect face tracking a long time ago, I had similar issues with it. Updating the NVidia drivers helped me resolve them back then.
     
  50. phattanapon

    phattanapon

    Joined:
    Jul 21, 2014
    Posts:
    10
    Hi,
    I would like to make something like this (animated virtual dressing of armour)



Could you please guide me to the example that is closest to this one, which I could modify?
    I already have an FBX file of the armour I want to use.
    I can think of a few problems, like:
    - the arm part, for example, should connect between two joints (wrist and elbow, maybe), not just stick to one joint - am I right? (See the sketch at the end of this post.)
    - the different sizes of players: for example, if a small kid plays, how can each armour part shrink to the proper size for that player?

    I have both Kinect V2 and Orbbec Astra Pro.

Thank you very much for any suggestions you might have.
    Pat
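
    A generic sketch of the two-joint idea from the first bullet above (hypothetical, not taken from the K2-asset; the joint transforms could be, for example, joint-overlay objects):

    Code (CSharp):
    using UnityEngine;

    // Sketch: stretch an armour piece between two tracked joints, e.g. elbow
    // and wrist. Position it at the midpoint, aim it along the bone, and scale
    // its length axis to the joint distance - which also shrinks the part
    // automatically for smaller players (e.g. kids).
    public class TwoJointAttachment : MonoBehaviour
    {
        public Transform jointA;            // e.g. elbow overlay object
        public Transform jointB;            // e.g. wrist overlay object
        public float designLength = 0.25f;  // length the part was modeled at, in meters

        void LateUpdate()
        {
            Vector3 bone = jointB.position - jointA.position;
            if (bone.sqrMagnitude < 1e-6f)
                return;  // joints overlap or are not tracked yet

            transform.position = (jointA.position + jointB.position) * 0.5f;
            transform.rotation = Quaternion.LookRotation(bone.normalized);

            // Stretch only along the local Z axis (the look direction).
            Vector3 s = transform.localScale;
            s.z = bone.magnitude / designLength;
            transform.localScale = s;
        }
    }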