Kinect v2 with MS-SDK

Discussion in 'Assets and Asset Store' started by roumenf, Aug 1, 2014.

  1. vivalavida

    vivalavida

    Joined:
    Feb 26, 2014
    Posts:
    85
    Hi roumen,
    Would it be possible to access this data, at least for the Kinect V1?
    https://msdn.microsoft.com/en-us/library/jj663790.aspx

    The link says it's available, but if you still don't think it's accessible, I won't prod any further. Thanks.
     
  2. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Yes, it is available only for the K1-sensor. If you target the Kinect-v1 only, try to use the 'Kinect with MS-SDK'-asset and declare the NUI function in KinectWrapper.cs there. If it doesn't work, please e-mail me and I'll send you the source of the K1-library used in the K2-asset. But then you would need to declare and implement it in the sensor interface classes, too.
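    For reference, declaring a native NUI function in KinectWrapper.cs follows the usual P/Invoke pattern. Here is a minimal sketch, using NuiCameraElevationGetAngle purely as an illustration - substitute the function from the MSDN page above:
    Code (csharp):

    using System.Runtime.InteropServices;  // already imported in KinectWrapper.cs

    // example declaration - Kinect10.dll is the native Kinect-v1 runtime
    [DllImport("Kinect10.dll")]
    public static extern int NuiCameraElevationGetAngle(out int plAngleDegrees);

    // usage: a return value of 0 (S_OK) means success
    int angle;
    if (NuiCameraElevationGetAngle(out angle) == 0)
    {
        Debug.Log("Sensor elevation angle: " + angle);
    }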
     
    vivalavida likes this.
  3. vivalavida

    vivalavida

    Joined:
    Feb 26, 2014
    Posts:
    85
    Hi @roumenf ,
    I'm working on a project using Kinect v1.
    Is it possible to send the Kinect v1 data over the local network? (I need background subtraction of two users in different physical locations.)

    If not, as a fallback, would it be possible to access two different Kinect v1 sensors in two different instances of the app on the same computer?

    And by extension, for both of the above scenarios, is it possible to access the feeds of two Kinects in a single application?
    (Do correct me if my approach is wrong.)

    Thanks.
     
  4. JaviRM

    JaviRM

    Joined:
    Feb 22, 2017
    Posts:
    2
    Hello @roumenf ,

    I've found some problems with avatar detection. I'm using the rigged avatar provided with the package. It can replicate almost all my movements, but the problem happens when I try to sit down. When I try to sit, the legs of the avatar go up instead of staying on the floor. It's as if the root bone in the pelvis doesn't move down, and instead the leg bones move up to match the body position. As a result, the avatar looks like it is floating. Any idea how to solve it?

    Thanks in advance!
     
  5. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    The only server I've seen so far that implemented sending image data over the network is the RAT (Room Alive Toolkit) server. But I think it was Kinect-v2 only. So, you need to adapt the server and its Unity client to work with Kinect-v1. Otherwise, I would answer no to all questions. Sorry.
     
    vivalavida likes this.
  6. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    First off, make sure the 'Vertical movement'-setting of AvatarController is enabled. Then, if the issue persists, enable the 'Grounded feet'-setting too.
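    Both are checkboxes on the AvatarController-component in the Inspector. If you'd rather set them from code, a small sketch like this should do (assuming 'avatarObject' is the game object with the AvatarController; field names as in AvatarController.cs):
    Code (csharp):

    AvatarController avatarCtrl = avatarObject.GetComponent<AvatarController>();
    if (avatarCtrl != null)
    {
        avatarCtrl.verticalMovement = true;  // allow the avatar root to move up & down
        avatarCtrl.groundedFeet = true;      // correct the feet to stay on the ground
    }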
     
  7. JaviRM

    JaviRM

    Joined:
    Feb 22, 2017
    Posts:
    2
    Thank you! It seems the problem has been solved. I also found that root motion was enabled, which was probably causing a problem too.
     
  8. Pyxis-Belgique

    Pyxis-Belgique

    Joined:
    Jul 1, 2016
    Posts:
    14
    Hi Roumenf

    GUITexture is still present in 2017.2.0b8, but it seems to be deprecated in the latest 2017.2.0b9.
    The component is still available, but everything relying on the legacy GUI gets a nice pink shader error.

    If you ever plan to support that new version (I hope so), it would be very useful to add some small migration tips (in the comments, docs or FAQ), if you find a solution.
    I had to duplicate the Background Removal script to adapt it to my own needs, and I'm pretty sure it's going to be tough to follow :)
     
  9. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Thank you for the info! I'm starting to get tired of these never-ending traps Unity's staff sets for developers. As far as I read, "GUILayer components are no longer added to the camera by default". What happens if you add a GUILayer to the camera that has to render the GUITexture object?

    If it is still pink, there may really be an error in the GUITexture shader. If they really want to get rid of this cool component, I'll have to find a replacement or workaround. Any suggestions?
     
    Last edited: Sep 1, 2017
  10. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    fullscreen canvas + raw image / texture
    but removing GUITexture feels somehow strange to me - meaning I don't think they actually plan to remove it
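    e.g. a rough sketch (assuming a full-screen RawImage under a Canvas, and GetUsersClrTex() as the manager's color-camera texture getter):
    Code (csharp):

    using UnityEngine;
    using UnityEngine.UI;

    public class ColorImageBackground : MonoBehaviour
    {
        public RawImage rawImage;  // full-screen RawImage under a Canvas

        void Update()
        {
            KinectManager manager = KinectManager.Instance;

            if (manager && manager.IsInitialized() && rawImage.texture == null)
            {
                // show the Kinect color-camera texture on the UI raw image
                rawImage.texture = manager.GetUsersClrTex();
            }
        }
    }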
     
    roumenf likes this.
  11. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Thank you for the suggestion, @r618! Yes, I also think this may be a regression bug (as they call it), because they say the legacy GUI components are now deprecated, not deleted or non-functional.
     
  12. Pointcloud

    Pointcloud

    Joined:
    Nov 24, 2014
    Posts:
    37
    Hi, any chance there will be some client/server support for the HoloLens? I'd love to use this for some networked telepresence experiments. I tried publishing a HoloLens app using the client/server solution, without luck. Any suggestions? Thanks!
     
  13. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I cannot test this at the moment, because I don't have HoloLens, but doesn't the KinectDataClient-component work on HoloLens side? It uses the low-level Unity networking API only. Nothing special. If Unity networking works on HoloLens, the KinectDataClient should work as well, I think.
     
  14. Fairennuff

    Fairennuff

    Joined:
    Aug 19, 2016
    Posts:
    88
    Hello. I wanted to know if I could possibly use this package to record just hand animations for a VR game? My hands are rigged on a humanoid model, but the avatar mask is set to just the hands. I want to use the Leap Motion to record hand gestures and apply them to my character's hands.
     
  15. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I suppose you mean the 'Kinect Mocap Animator'. This package was designed to work mainly with Kinect, and the LeapMotion was thought of as an additional finger-tracking sensor. To make it record only the LeapMotion-tracked data, I think you can do as follows: open KinectFbxRecorder.cs and, in its Update()-function, comment out this line: 'if(userId != 0 && avatarCtrl && liFrameTime != liRelTime &&', then modify the next line like this: 'if(Time.time >= (fCurrentTime + fMinFrameTime))'. This should turn off the wait for valid Kinect body frames and record only the available (LeapMotion) data.

    I cannot test this at the moment (will do it later), but it should work.
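    In other words, the modified fragment of Update() should look roughly like this:
    Code (csharp):

    // in KinectFbxRecorder.cs, Update() - Kinect body-frame check commented out:
    //if(userId != 0 && avatarCtrl && liFrameTime != liRelTime &&
    //   Time.time >= (fCurrentTime + fMinFrameTime))
    if(Time.time >= (fCurrentTime + fMinFrameTime))
    {
        // the existing recording code stays unchanged
    }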
     
  16. Pyxis-Belgique

    Pyxis-Belgique

    Joined:
    Jul 1, 2016
    Posts:
    14
    Hi Roumenf
    Confirmed!
    Everything works fine again in Unity 2017.2.0f1 :)

    Is there any way to retrieve a vector field from a Kinect user?
    I would like to influence the forces of a flow field (a common particle FX in Processing).

    Using motion vectors from skinned-mesh sampling leads to unwanted behavior (imprecision, hiccups, etc.).

    Thank you.

     
  17. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Good! Anyway, I'm now replacing all GUITexture and GUIText objects in the demo scenes with UI components, as r618 suggested. It takes some time, but the update will be out soon.

    I'm not quite sure what you mean by 'vector field from a Kinect user'. If you mean the user orientation, call 'KinectManager.Instance.GetUserOrientation()'. The flip-parameter is the opposite of 'mirrored movement'. From the orientation you can also get the forward, up and right directions of the user. Or did you mean something else?
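    For example, a short sketch for the primary user:
    Code (csharp):

    KinectManager manager = KinectManager.Instance;

    if (manager && manager.IsUserDetected())
    {
        long userId = manager.GetPrimaryUserID();

        // flip = false corresponds to mirrored movement
        Quaternion userRot = manager.GetUserOrientation(userId, false);
        Vector3 userForward = userRot * Vector3.forward;  // user's forward direction
    }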
     
  18. Pyxis-Belgique

    Pyxis-Belgique

    Joined:
    Jul 1, 2016
    Posts:
    14
    Nice! Good luck :)

    What I mean by vector field is getting the motion vector of every point in the user's point cloud (or at the very least, from the sensor's perspective).

    Something like this:
    [embedded video]

    There are several ways to do this in 2D or 3D ( https://en.wikipedia.org/wiki/Motion_estimation ), but I was wondering if any helper is exposed by Kinect v2 or by the Kinect SDK itself.

    It would be perfect to access these values directly, rather than analyzing n images (which could be slow at this stage).
     
  19. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    If you enable the 'Estimate joint velocities'-setting of the KinectManager-component in the scene, you can then call 'KinectManager.Instance.GetJointVelocity()' to get the motion vectors of each body joint, between the last 2 body frames.

    If you'd prefer to do this for each 3d point of the detected body, start with KinectDemos/VisualizerDemo/KinectUserVisualizer-scene. The script that visualizes the body is called UserMeshVisualizer.cs. You can modify its UpdateMesh()-method to estimate the motion vectors between frames.

    And, not sure if this matches your needs, but there is also motion-blur effect in Unity. Take a look here: https://docs.unity3d.com/Manual/PostProcessing-MotionBlur.html If you combine it with the user visualizer, it may get close to what's in the video.
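    A short sketch of the joint-velocity option (with 'Estimate joint velocities' enabled; the right hand is just an example joint):
    Code (csharp):

    KinectManager manager = KinectManager.Instance;

    if (manager && manager.IsUserDetected())
    {
        long userId = manager.GetPrimaryUserID();
        int joint = (int)KinectInterop.JointType.HandRight;

        if (manager.IsJointTracked(userId, joint))
        {
            // motion vector of the joint between the last two body frames
            Vector3 jointVel = manager.GetJointVelocity(userId, joint);
            // feed 'jointVel' into the flow field here
        }
    }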
     
  20. Pyxis-Belgique

    Pyxis-Belgique

    Joined:
    Jul 1, 2016
    Posts:
    14
    Thanks for the tips Roumenf.

    I will look at the Mesh Visualizer option and look for a fast & reliable way to estimate each 3D point's motion vector.
     
    roumenf likes this.
  21. Pyxis-Belgique

    Pyxis-Belgique

    Joined:
    Jul 1, 2016
    Posts:
    14
    I've noticed more frequently over the last few updates (both Unity & asset) that the background removal fails randomly.
    Nearly the whole screen becomes visible. It's as if the background were detected as a user or something.

    It seems to happen much more frequently in edit mode, while simply navigating through the interface and checking data.
    It also happens rarely in builds. Flushing users isn't enough to fix it, and I generally need to restart the level.

    Is this a known bug, or have you never experienced it?
    If not, I will grab a screenshot next time I see it.
    Any info on what's causing this is welcome.
     
  22. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    No, it's not a known bug. Please look for suspicious messages in the console when this happens. I would appreciate it if you could send me (via WeTransfer.com) a representative project/scene + some instructions on how to reproduce this behavior, or at least some screenshots + the Unity log file (editor or player).
     
  23. alex_evp

    alex_evp

    Joined:
    Jan 29, 2017
    Posts:
    7
    Hi Roumen,

    I want to get some info on the HD face shape units and how I could possibly use them to drive blend shapes / morph targets in an existing 3D character's face. I have the face manager added, and it is giving good head tracking. I looked in that script and saw there are some lines relating to that. I am not a strong programmer, but if you can help me understand what would need to happen, I can try, and if I fail, find someone to help me with it. I have a script that lets me trigger blend shapes from a key press, so I am maybe halfway there. If I can get the shape unit values instead of the key press, then it could work. Any advice is really appreciated, I know you're busy! :)

    Thanks for such awesome work!

    Alex.
     
    Last edited: Sep 30, 2017
  24. 0113_0626

    0113_0626

    Joined:
    Sep 30, 2017
    Posts:
    2
    Hello Roumenf
    I wanted to track the face using Kinect, so I purchased this package.
    However, when I activate the face-tracking demo, the following error occurs and it does not operate normally.


    System.NullReferenceException: Object reference not set to an instance of an object
    at Kinect2Interface.InitFaceTracking (Boolean bUseFaceModel, Boolean bDrawFaceRect) [0x000a9] in C:\Users\open\Desktop\kenkyu\unity\Kinect_Test\Assets\KinectScripts\Interfaces\Kinect2Interface.cs:1117
    at FacetrackingManager.Start () [0x00113] in C:\Users\open\Desktop\kenkyu\unity\Kinect_Test\Assets\KinectScripts\FacetrackingManager.cs:574
    UnityEngine.Debug:LogError(Object)
    FacetrackingManager:Start() (at Assets/KinectScripts/FacetrackingManager.cs:597)


    Please advise me what to do to solve this problem.
     
  25. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi Alex, please e-mail me and I'll send you back a script that utilizes blend shapes and shape animation units. In this case, you should only customize the blend shape names in the script. Generally speaking, you could do it by yourself too, if you apply the tracked animation units to the respective blend shapes.
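    The core of that script is just mapping the animation-unit values to blend-shape weights, roughly like this (a sketch only; JawOpen and the blend-shape index are examples you'd customize for your model):
    Code (csharp):

    using UnityEngine;

    public class FaceBlendShapeDriver : MonoBehaviour
    {
        public SkinnedMeshRenderer faceMesh;  // the face mesh with the blend shapes
        public int jawOpenIndex = 0;          // blend-shape index, specific to your model

        void Update()
        {
            FacetrackingManager fm = FacetrackingManager.Instance;

            if (fm && fm.IsTrackingFace())
            {
                // AU values are roughly 0..1; blend-shape weights go 0..100
                float jawOpen = fm.GetAnimUnit(KinectInterop.FaceShapeAnimations.JawOpen);
                faceMesh.SetBlendShapeWeight(jawOpenIndex, jawOpen * 100f);
            }
        }
    }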
     
  26. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hm, this looks strange. Line 1117 in my Kinect2Interface.cs is empty. Are you sure you have not modified the class?
    Could you please e-mail me your version of KinectScripts/Interfaces/Kinect2Interface.cs, so I can take a closer look? And in the same e-mail, please mention the invoice number you got from the Unity Asset Store, as well.
     
  27. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I meant for you to send me an e-mail, not to publish your invoice # or the package scripts here. Please delete them from your post.

    Apart from some white space, the script is no different from mine, as far as I can see. The null-reference exception you get is due to a non-created face source, i.e. 'FaceFrameSource.Create(this.kinectSensor, 0, faceFrameFeatures)' returns null.

    Please run the 'SDK Browser v2.0' (part of Kinect SDK 2.0) and check if the 'Face Basics' & 'HD Face Basics' samples run normally and detect faces. I suppose there is an issue with your Kinect SDK, or some issue with the graphics card. If the SDK samples have issues as well, try to reinstall the SDK and/or update the graphics driver.
     
  28. 0113_0626

    0113_0626

    Joined:
    Sep 30, 2017
    Posts:
    2
    I apologize for posting the script and invoice number.
    The post has been deleted.
    I ran the 'Face Basics' & 'HD Face Basics' samples and found that they recognize my face normally.
     
  29. morph_dev

    morph_dev

    Joined:
    Sep 28, 2017
    Posts:
    1
    Hi roumenf,

    The KinectSceneVisualizer demo scene has some overlay issues when I add virtual content to the scene. Please check the images attached. How do I create a complete mesh (what's seen by the sensor) without any depth shadows and empty areas? Untitled1.png Untitled2.png
     
  30. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Well, I think this is "what's seen by the sensor". Regarding the empty areas, as far as I can see they are mainly black objects. Maybe if you put more light into the room (in front of and above the black objects), they will be more visible to the sensor.

    I personally would take a slightly different approach in this case: display the color camera image in the background, overlay it with the 3D reconstruction of the scene, and use the scene visualization mainly for collisions and occlusion of the virtual objects.
     
  31. chall3ng3r

    chall3ng3r

    Joined:
    May 27, 2013
    Posts:
    23
    Hello @roumenf,

    I am working on a game idea and trying to use your Kv2 asset with a Kv1 sensor. I am able to recognize gestures, but in particular I need to detect the player's Run gesture speed.

    I need to know how fast the player is running (step count / gesture repeat speed). I'm also trying to estimate the distance the player has run.

    Currently I am playing with the SimpleGestureScript, but so far I haven't been able to correctly get the player's speed. The "GestureInProgress" event's progress parameter gives values of 0.3, 0.7 & 0.8. I've been trying to use them, but without success so far.

    Any pointers on how I can easily get the step count or speed of the player's running gesture?

    And thank you for this awesome asset; I bought it some time ago and am happy with the updates you've made to it.

    Update:
    I finally managed to get the left and right steps with this simple logic in the "GestureInProgress" method:
    Code (csharp):
    if (gesture == KinectGestures.Gestures.Run && progress > 0.5f)
    {
        if (progress == 0.7f && stepCount == 0)
        {
            stepCount = 1;
            currentDistance++;
        }
        else if (progress == 0.8f && stepCount == 1)
        {
            stepCount = 0;
            currentDistance++;
        }
    }
    This logic makes sure currentDistance is incremented only once per step. With the same logic, the running speed was also calculated, as I can now get how many steps the player took per X seconds.
     
    Last edited: Oct 13, 2017
    roumenf likes this.
  32. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Well done! You're obviously faster than me at responding.
    Let me explain a bit: when running is detected, the progress reported for this gesture is 0.3. After that, every time the right knee is detected above the left knee, a progress of 0.7 is reported (consider it a right step), and when the left knee is above the right one, a progress of 0.8 is reported (consider it a left step). In this regard, your code is absolutely correct. Only the stepCount-variable is not really needed, it seems to me.
     
    chall3ng3r likes this.
  33. chall3ng3r

    chall3ng3r

    Joined:
    May 27, 2013
    Posts:
    23
    Thanks for your reply.

    Yeah, I had to show a demo to a potential client. The idea was approved, and the BTL activity ran from Friday to Sunday for a local brand. I managed to complete it in time :)

    The stepCount variable is named incorrectly; I guess I was in a real hurry. It makes sure the currentDistance variable doesn't get incremented multiple times per step: currentDistance is increased only once per step, when the leading joint changes from left to right and vice versa. Otherwise there were 7-8 calls per second before the 0.7 value changed to 0.8.
     
    roumenf likes this.
  34. hanhsl

    hanhsl

    Joined:
    Feb 2, 2015
    Posts:
    2
    How can I make the user's hands display in front of the 3D model in the fitting room (image below)? In the example, they display behind the 3D model. I used UserVisualizer and FittingRoom together, but the model displays at the wrong position on screen. Please help me.
    upload_2017-10-18_15-7-59.png
     

  35. harlan2156

    harlan2156

    Joined:
    Oct 9, 2013
    Posts:
    2
    I have the same issue as a previous comment, in that my scene must use orthographic cameras for both the Kinect camera and the scene camera. When changing from perspective to ortho, the skeleton/avatar scales in reverse: it grows larger as I move away from the camera and shrinks as I approach it. The only location where the scale is correct is the initial T-pose location I am using. Continuous scaling helps, but I get MAJOR arm scaling (almost to the size of the torso).
     
  36. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Enable UserBodyBlender and adjust its 'Depth threshold'-setting to fit your case. Here is the tip: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t24
     
  37. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    The Kinect's color camera is not orthographic anyway, so the overlay will always be incorrect. Why do you use orthographic cameras in your scene?!
     
  38. harlan2156

    harlan2156

    Joined:
    Oct 9, 2013
    Posts:
    2
    I am using a projector and an IR camera to detect interaction with the projection surface (screen) and generate a raycast from the same x,y coordinate in Unity to interact with the avatar (think zombie-shooting game). For some reason, with the perspective camera the location is correct only when the detection is dead center on the screen; as you move away from the center, the hits are detected (rays generated) at locations between the real-world x,y and the Unity x,y center. If I set both cameras to ortho, the hit detection is perfect: the x,y detected on the projector screen matches the avatar and the Unity x,y being projected.
     
  39. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    When you have some time, take a look at the KinectDemos/ProjectorDemo/KinectProjectorDemo-scene. Here is the tip regarding this scene: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t34 For demoing the avatar control in this scene, enable the U_Character-game object in Hierarchy.
     
  40. hanhsl

    hanhsl

    Joined:
    Feb 2, 2015
    Posts:
    2
  41. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    It is available for Kinect-v2 only.
     
  42. myselfshj

    myselfshj

    Joined:
    Mar 30, 2012
    Posts:
    31
    Hello @roumenf
    I am a PS fan, so I don't want to buy an Xbox One.
    My question is: if I have a "Kinect for Windows" only, with no Xbox One or Xbox 360, can I use this Kinect MoCap Animator asset for animation recording?
     
    roumenf likes this.
  43. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I don't have an Xbox One either, and no plans to buy one, ever.
    What you need is a Kinect-for-Xbox-One sensor, an adapter between the sensor and your machine (a separate device), and a Windows machine with a USB-3 port, where you will connect the sensor. No Xbox is needed.
     
  44. SystemDevel

    SystemDevel

    Joined:
    Oct 5, 2013
    Posts:
    36
    Hi @roumenf

    Can I rotate the Kinect sensor 90 degrees and still have it detect the skeleton?

    Thank you.
     
  45. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    No, you can't.
     
  46. Kesaria

    Kesaria

    Joined:
    Apr 1, 2017
    Posts:
    6
    Hi @roumenf

    I just added your Kinect v2 package to my Unity 5.6.4 project.
    I'm getting the following 3 errors:

    1) Assets/Standard Assets/Windows/Kinect/AudioBeamFrame.cs(25,4): error CS0103: The name `Dispose' does not exist in the current context

    2) Assets/Standard Assets/Windows/Kinect/AudioBeamFrame.cs(10,31): error CS0535: `Windows.Kinect.AudioBeamFrame' does not implement interface member `System.IDisposable.Dispose()'

    3) Assets/Standard Assets/Windows/Kinect/KinectSensor.cs(25,13): error CS0103: The name `Dispose' does not exist in the current context

    Any idea?
    I have used your asset many times, but I haven't seen this kind of error before.
    I am stuck, and I have to show something tomorrow using your asset. This is really urgent.
    Please reply ASAP.

    Thanks.
     
  47. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    No idea. Looks like an internal .Net bug.
    Try to reinstall Unity and then re-import the package into a new Unity project.
     
  48. tech_unity

    tech_unity

    Joined:
    Nov 3, 2017
    Posts:
    1
    I got the same issue. I found it happened because of a conflict with my existing Kinect DLLs. After I removed my Kinect DLLs, the face tracking worked normally. The SDK has its own DLLs for Kinect face tracking, and it will extract them if the required DLLs don't already exist.
     
    roumenf likes this.
  49. Montecillos

    Montecillos

    Joined:
    Nov 6, 2017
    Posts:
    2
    Hi, I'm kind of a beginner to Unity in general, so this is a complete newbie question. I've been trying to freeze interactive objects on a single plane, but the usual method of freezing X, Y or Z in the object's Rigidbody component doesn't seem to work. Is there any other way to freeze an interactive object on a plane?
     
  50. vivalavida

    vivalavida

    Joined:
    Feb 26, 2014
    Posts:
    85
    Could you give some more details?
    What do you mean by freeze?
    Any particular demo scene that you are trying out?