
Kinect v2 with MS-SDK

Discussion in 'Assets and Asset Store' started by roumenf, Aug 1, 2014.

  1. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi, sorry for the delayed response. I'm on the road till the end of the week. You are experiencing interesting issues... I looked again at the code of CubemanController.cs, but could not find anything suspicious yet. And the detected user is controlled by the 'Player index'-setting of CubemanController and the 'User detection order'-setting of KinectManager. If you leave it to 'Appearance' the detected user should keep its index until it gets lost, or until other user occludes it. Here is more info on the available detection orders (and you can add your own, too): https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t23

    Regarding the Cubeman issue: May I ask you to create a simplified project that illustrates the issue you have with the Cubeman's position, then zip it and send it over to me via WeTransfer.com? This could make locating the issue easier and faster. Otherwise I'll try to reproduce your issue myself, as soon as I get back home.
     
  2. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Usually the disconnect event tells the reason, too. So, what's the reason for the disconnection?
    Also, make sure you use the same version of Unity on the server and client machines. Otherwise a MISMATCH error is possible, probably due to differences in the protocol implementations.
     
    snomura likes this.
  3. HarishDamodaran

    HarishDamodaran

    Joined:
    Feb 14, 2014
    Posts:
    3

    Thank you, I have sent you the files with the instructions on what to do to replicate the error.

    On further troubleshooting, I realized that when I walk away from the sensor and my feet are lost, then walk back into the detection range, around 2 meters are added to or subtracted from my current feet position. This means I need to walk back further (towards the limits of the detection range) or towards the sensor to align my feet at the same spot in the game.

    harish
     
    roumenf likes this.
  4. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    How is it now, with the updated CubemanController script?
     
  5. snomura

    snomura

    Joined:
    Apr 28, 2013
    Posts:
    4
    Thanks for the reply.

    The error code at disconnection was 0.
    I found out that sending the keep-alive message from the client sometimes failed.
    When the following code is commented out, the disconnection does not occur.
    In KinectDataClient.cs:
    ```
    //if(connected && keepAliveIndex < keepAliveCount)
    //{
    //    if(sendKeepAlive[keepAliveIndex] && !string.IsNullOrEmpty(keepAliveData[keepAliveIndex]))
    //    {
    //        // send keep-alive to the server
    //        sendKeepAlive[keepAliveIndex] = false;
    //        byte[] btSendMessage = System.Text.Encoding.UTF8.GetBytes(keepAliveData[keepAliveIndex]);
    //        int compSize = 0;
    //        if(compressor != null && btSendMessage.Length >= 100)
    //        {
    //            compSize = compressor.Compress(btSendMessage, 0, btSendMessage.Length, compressBuffer, 0);
    //        }
    //        else
    //        {
    //            System.Buffer.BlockCopy(btSendMessage, 0, compressBuffer, 0, btSendMessage.Length);
    //            compSize = btSendMessage.Length;
    //        }
    //        NetworkTransport.Send(clientHostId, clientConnId, clientChannelId, compressBuffer, compSize, out error);
    //        //Debug.Log(clientConnId + "-keep: " + keepAliveData[keepAliveIndex]);
    //        if(error != (byte)NetworkError.Ok)
    //        {
    //            throw new UnityException("Keep-alive: " + (NetworkError)error);
    //        }
    //        // make sure sr-message is sent just once
    //        if(keepAliveIndex == 0 && keepAliveData[0].IndexOf(",sr") >= 0)
    //        {
    //            RemoveResponseMsg(",sr");
    //        }
    //    }
    //    keepAliveIndex++;
    //    if(keepAliveIndex >= keepAliveCount)
    //        keepAliveIndex = 0;
    //}
    ```

    In KinectDataServer.cs:
    ```
    //case NetworkEventType.DataEvent: //3
    //    if(recHostId == serverHostId && recChannelId == serverChannelId &&
    //        dictConnection.ContainsKey(connectionId))
    //    {
    //        HostConnection conn = dictConnection[connectionId];
    //        int decompSize = 0;
    //        if(decompressor != null && (recBuffer[0] > 127 || recBuffer[0] < 32))
    //        {
    //            decompSize = decompressor.Decompress(recBuffer, 0, compressBuffer, 0, dataSize);
    //        }
    //        else
    //        {
    //            System.Buffer.BlockCopy(recBuffer, 0, compressBuffer, 0, dataSize);
    //            decompSize = dataSize;
    //        }
    //        string sRecvMessage = System.Text.Encoding.UTF8.GetString(compressBuffer, 0, decompSize);
    //        if(sRecvMessage.StartsWith("ka"))
    //        {
    //            if(sRecvMessage == "ka") // vr-examples v1.0 keep-alive message
    //                sRecvMessage = "ka,kb,km,kh";
    //
    //            conn.keepAlive = true;
    //            conn.reqDataType = sRecvMessage;
    //            dictConnection[connectionId] = conn;
    //            //LogToConsole(connectionId + "-recv: " + conn.reqDataType);
    //
    //            // check for SR phrase-reset
    //            int iIndexSR = sRecvMessage.IndexOf(",sr");
    //            if(iIndexSR >= 0 && speechManager)
    //            {
    //                speechManager.ClearPhraseRecognized();
    //                //LogToConsole("phrase cleared");
    //            }
    //        }
    //    }
    //    break;
    ```

    ```
    conn.keepAlive = false; // L623
    ```

    For now it looks like it works properly, but please let me know if you have any concerns about commenting out this code.
     
  6. and_hor

    and_hor

    Joined:
    Jun 12, 2017
    Posts:
    1
    Bind 2D info plane to a vertex on the avatar mesh

    Hi!

    Sorry that my description is not that precise; I find it hard to put it into words. I made a sketch of the effect I want to achieve:


    So the point should always stay on the right vertex, even when the avatar is moving according to the Kinect v2 avatar controller.
    I tried picking a vertex, transforming it into world coordinates and using a separate render layer for the 2D elements, but it's problematic with skinned mesh renderers. I even tried SkinnedMeshRenderer.BakeMesh(), but no luck so far. I just get the vertex position of the original T-pose, not of the Kinect-controlled model.

    Any tips on how to proceed? Or is there an easier way to do this?

    Thanks!
     
  7. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Sorry, but I don't understand why it should be so complicated. I would parent a small sphere object to the avatar's hand node in the Hierarchy. It will serve as the reference point you need. I would even use the hand/wrist node of the avatar as such a point. You can get the world position of the hand (or sphere) transform at any time. Then project this position onto the screen - the Camera.WorldToScreenPoint() method can help you with that. And finally, display the 2D elements on screen, relative to the projected point. Or maybe I'm missing something...
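    A minimal sketch of that approach, assuming a hypothetical HandInfoOverlay script with an infoLabel (a UI element on a screen-space overlay canvas) and a handPoint transform parented to the avatar's hand node:

    ```csharp
    using UnityEngine;

    public class HandInfoOverlay : MonoBehaviour
    {
        public Transform handPoint;      // small sphere (or the hand/wrist node itself) parented to the avatar
        public RectTransform infoLabel;  // 2D element on a screen-space overlay canvas
        public Camera mainCamera;        // camera that renders the avatar

        void LateUpdate()
        {
            if (handPoint == null || infoLabel == null || mainCamera == null)
                return;

            // project the reference point's world position onto the screen
            Vector3 screenPos = mainCamera.WorldToScreenPoint(handPoint.position);

            // show the label only while the point is in front of the camera
            infoLabel.gameObject.SetActive(screenPos.z > 0f);
            infoLabel.position = screenPos;
        }
    }
    ```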
     
    Last edited: Jun 15, 2017
  8. admin_2SGamix

    admin_2SGamix

    Joined:
    Apr 27, 2013
    Posts:
    12
    Hi,
    I am using the fitting demo of your package. There were some bugs when using background removal and the fitting demo together. I resolved them.

    But I couldn't get the portrait mode working. It only works in the editor; when I build an exe, it does not work - the Kinect camera takes the full screen width. Can you please suggest how I can resolve this?
     
  9. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Integration of the 1st fitting-room demo with the background-removal manager will be simplified in the next release.
    The portrait mode has been used by many users so far, so I'm a little surprised. What exactly does not work? Could you please post a picture of the result you get, as well as of the configuration page shown when you start the exe? If you don't want to share them publicly, feel free to e-mail me.
     
  10. Shadeypete

    Shadeypete

    Joined:
    May 15, 2017
    Posts:
    2
    Hmm, just realised I posted this in the wrong thread:

    I'm trying to get the outline of the user's silhouette to generate and/or attract some particles.
    I have managed to do this using an avateered humanoid, but I am hoping to achieve it using the actual camera image.
    Anyone got any ideas how I might be able to achieve this?

    I'm currently trying to get the pixels from the alpha texture, so I can write a routine which finds the 'edge pixels' and stores them somewhere.

    Anyone know if this is a good way to go?
     
    Last edited: Jun 21, 2017
  11. Hertugweile

    Hertugweile

    Joined:
    Jun 21, 2017
    Posts:
    2
    Hello everybody, and thank you to the author for the great and impressive Kinect plugin. It works excellently - but I do have one question, if I may... I need to limit certain axes while rotating, because I have this LEGO figure, which is somewhat limited in its movement.
    The arms should only rotate around one axis, and the same goes for the hip and knees - but it seems Unity ignores the limitations set in 3D Studio MAX when exporting an FBX.
    I tried to make some modifications to the Kinect2AvatarRot method, but without success.
    Can any of you give me a push in the right direction, please?
     
  12. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    If you utilize the BackgroundRemovalManager in your scene, sensorData.alphaBodyTexture would be the right texture to process. It is the same size as the color texture (1920x1080), and its pixels are non-transparent (alpha != 0) where the user pixels are. You can get a reference to it by invoking 'BackgroundRemovalManager.Instance.GetAlphaBodyTex()'. I would recommend using a shader to process this texture and create the other one - with the edges. Otherwise the processing will be slow. This is a classical CV task, so I suppose there may be ready-made shaders to do it.

    If you don't use the background-removal functionality, sensorData.bodyIndexTexture is the texture to process. It is similar to the alphaBodyTexture above, but its dimensions are those of the depth image (512x424) instead. This means you should later map the pixels from this texture to the pixels of the color-camera texture, if you are looking for the color-camera pixels. Tell me if you need more info regarding this mapping.
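    A minimal sketch of the first option, assuming a hypothetical edge-detection shader wrapped in a material; the actual edge pass happens on the GPU, the script only feeds it the alpha body texture:

    ```csharp
    using UnityEngine;

    public class SilhouetteEdgeFeed : MonoBehaviour
    {
        public Material edgeMaterial;  // material using a (hypothetical) edge-detection shader
        public RenderTexture edgeTex;  // output texture that will contain the silhouette edges

        void Update()
        {
            BackgroundRemovalManager brManager = BackgroundRemovalManager.Instance;
            if (brManager == null || edgeMaterial == null || edgeTex == null)
                return;

            // color-camera-sized texture, non-transparent where the user pixels are
            Texture alphaBodyTex = brManager.GetAlphaBodyTex();
            if (alphaBodyTex != null)
            {
                // let the edge-detection shader process the silhouette on the GPU
                Graphics.Blit(alphaBodyTex, edgeTex, edgeMaterial);
            }
        }
    }
    ```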
     
  13. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Please try to set the limits in the 'Muscles & Settings'-part of the avatar definition. To do it, select your FBX in the assets, go to the Rig-tab, select 'Humanoid' as animation type, then press Apply and then Configure. There you will see the Muscles & Settings-tab. To apply the muscle limitations to the Kinect-controlled avatar, select its game object in the scene and enable the 'Apply muscle limits'-setting of its AvatarController-component. This setting is quite new and still experimental, so issues are possible. That's why it is disabled by default.

    If your idea was for the Lego figure to behave and rotate like in 2D, another option is to enable the 'Ignore Z-coordinates'-setting of the KinectManager. It is a component of the KinectController game object in all demo scenes.
     
  14. digitalfunfair

    digitalfunfair

    Joined:
    Oct 21, 2014
    Posts:
    10
    Hey, thanks! I actually found the GetRawBodyIndexMap() function yesterday and wrote a basic script to loop through the pixels and extract the edge ones. Not sure if this is the most efficient solution, but it seems hopeful at present!

    If I write a shader, I will have to either get the pixels back off the GPU, which seems counterproductive, or write a GPU-based particle system, which seems daunting at present!
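    For reference, a rough sketch of that kind of CPU-side edge pass, assuming GetRawBodyIndexMap() returns a byte array at the 512x424 depth resolution in which 255 marks pixels belonging to no user:

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;

    public class BodyIndexEdgeExtractor : MonoBehaviour
    {
        public List<Vector2> edgePixels = new List<Vector2>();

        void Update()
        {
            KinectManager manager = KinectManager.Instance;
            byte[] bodyIndexMap = manager != null ? manager.GetRawBodyIndexMap() : null;
            if (bodyIndexMap == null)
                return;

            const int width = 512, height = 424;  // Kinect v2 depth resolution
            edgePixels.Clear();

            for (int y = 1; y < height - 1; y++)
            {
                for (int x = 1; x < width - 1; x++)
                {
                    int i = y * width + x;
                    if (bodyIndexMap[i] == 255)
                        continue;  // not a user pixel

                    // a user pixel is an edge pixel if any of its 4 neighbours is background
                    bool isEdge = bodyIndexMap[i - 1] == 255 || bodyIndexMap[i + 1] == 255 ||
                                  bodyIndexMap[i - width] == 255 || bodyIndexMap[i + width] == 255;
                    if (isEdge)
                        edgePixels.Add(new Vector2(x, y));
                }
            }
        }
    }
    ```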
     
    roumenf likes this.
  15. KeithT

    KeithT

    Joined:
    Nov 23, 2011
    Posts:
    83
    Hi,

    Thanks for developing this asset, it remains amazing value for money.

    I'm trying to use a tracked user's rotation in a calculation while the user does a full 360-degree spin in the field of view of the sensor. I am trying to use the same method as in "FollowUserRotation", but am struggling to understand the results it outputs. Starting facing the sensor, turning one full rotation to the right and printing out the Euler angles of rotationShoulders, the following list shows what happens to the rotation around Y:

    "->" means nice gradual changes

    358
    359
    0 -> 45
    45.2
    45.8
    46.3
    47.4
    47.5
    46.9
    46.3
    45.1
    44.4
    43.5
    43.1
    42.7
    350.1
    321.9
    310.8
    118.6
    135
    127
    124
    122
    120
    117
    116 -> 102
    118
    126 -> 130
    115
    106
    102
    109
    153
    309
    307
    308
    310
    313
    315
    317 -> 358

    Am I missing something about how to interpret these results, or is it broken?
    If it is broken, any suggestion on how to track the rotation of a full spin?

    Thanks in advance for any assistance
    KeithT
     
  16. Hertugweile

    Hertugweile

    Joined:
    Jun 21, 2017
    Posts:
    2
    Sorry for my late reply - but thank you for your answer. I got it working. I use the 3D-version of the LEGO-dude, by the way.
     
  17. admin_2SGamix

    admin_2SGamix

    Joined:
    Apr 27, 2013
    Posts:
    12

    I solved both.

    1. For background removal, the problem was the GUITexture layer. When Fitting Room and Background Removal were both active, I was only able to see the clothes, not the user. So this is what I did:
    a. Commented out the OnGUI part of BackgroundRemovalManager
    b. Used 2 GUITextures, one for the background and one for the user. In OverlayController:
    Code (CSharp):
    if (backgroundImage)
    {
        if (BackgroundRemovalManager.Instance.removeBackground)
            backgroundImage.texture = BackgroundRemovalManager.Instance.GetSelectedBackGround();
        else
            backgroundImage.texture = manager.GetUsersClrTex();
    }

    if (backgroundUserImage)
    {
        if (BackgroundRemovalManager.Instance.removeBackground)
            backgroundUserImage.texture = BackgroundRemovalManager.Instance.GetForegroundTex();
        else
            backgroundUserImage.texture = null;
    }

    2. For portrait mode, it was not working in the build, only in the editor, because we were manually setting a 9:16 resolution in the editor, and that is not possible in the build. So I made these changes in PortraitBackground.cs:
    Code (CSharp):
    void Awake()
    {
        if (instance == null)
        {
            if (isPortrait)
            {
                // Setting 9:16 as aspect ratio
                int height = Screen.resolutions[Screen.resolutions.Length - 1].height;
                Screen.SetResolution((int)(height * 0.5627705f), height, false);
            }
            else
            {
                // Setting 16:9 as aspect ratio
                int height = Screen.resolutions[Screen.resolutions.Length - 1].height;
                Screen.SetResolution((int)(height * 1.77777777778f), height, false);
            }
        }

        if (isPortrait)
        {
            instance = this;
        }
    }
     
    Last edited: Jun 29, 2017
    this-play and roumenf like this.
  18. digitalfunfair

    digitalfunfair

    Joined:
    Oct 21, 2014
    Posts:
    10
    I've got some processing of a Kinect-controlled avatar and mesh going on in the Update part of a script.

    I'm thinking that I'm probably doing more processing than necessary, as Update runs more frequently than the frame rate of the Kinect.

    If this would save some CPU cycles, what's the best way to check whether a new frame has been acquired before doing the processing? I'm working with skeleton data. I think I saw a last-frame-time property in the API?
     
  19. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I cannot check your results now, because I'm far from any of my offices. But as far as I remember, these inconsistent results were due to Kinect mistakenly detecting one of your shoulders - probably the one that gets hidden from the sensor's view while you are rotating.
     
  20. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    KinectManager.Instance.GetBodyFrameTimestamp();
    Here is some more info about it: https://ratemt.com/k2gpapi/class_kinect_manager.html#a8f9bb858971c312aa2949e52805edca8
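    A minimal sketch of that check, assuming the body-frame timestamp is a long value that only changes when a new body frame has arrived:

    ```csharp
    using UnityEngine;

    public class SkeletonFrameProcessor : MonoBehaviour
    {
        private long lastBodyFrameTime = 0;

        void Update()
        {
            KinectManager manager = KinectManager.Instance;
            if (manager == null)
                return;

            long bodyFrameTime = manager.GetBodyFrameTimestamp();
            if (bodyFrameTime == lastBodyFrameTime)
                return;  // no new body frame since the last Update - skip the processing

            lastBodyFrameTime = bodyFrameTime;
            ProcessSkeletonData();  // hypothetical - the avatar/mesh processing goes here
        }

        void ProcessSkeletonData()
        {
            // ...
        }
    }
    ```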
     
  21. KeithT

    KeithT

    Joined:
    Nov 23, 2011
    Posts:
    83
    Thanks for the answer. If I attach an avatar, it distorts/twists in on itself at around the 50-degree point and does not straighten up until about the same place on the other side of the rotation, so indeed it is perhaps the shoulder tracking.

    Is this a known, unsolved issue with the Kinect? (If so, it would bin the whole approach to what we are trying to do.)
     
  22. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Well, I've said about a hundred times here on the forum, on the website, when answering questions, etc. that Kinect doesn't track users through 360 degrees. They must face the sensor in order to be tracked correctly. You can try to contact MS customer support with this issue, but in my experience they will be of little help, if they answer at all.

    In this regard, I plan to do some more research soon on improving the user tracking, at least a little. If it succeeds and I manage to achieve any improvement, it may become part of a future release of the K2-asset.
     
  23. digitalfunfair

    digitalfunfair

    Joined:
    Oct 21, 2014
    Posts:
    10
    Is there a way of faking a Kinect input stream so I can develop my project without standing up every 30 seconds?!? It's killing my workflow!

    ie a Kinect Recorder whose files are seen as a live Kinect on playback?

    Apologies if this is obvious and I've missed it.
     
  24. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    :)
    There are two options:
    1. You can fake a real Kinect with 'Kinect Studio 2.0'. It is part of Kinect SDK 2.0 and allows recording and replaying of the selected Kinect streams.
    2. You can utilize the KinectRecorderPlayer-component in your scene to replay recorded body movements (body frames only, no depth or color streams). To do it, first run the KinectDemos/RecorderDemo/KinectRecorderDemo-scene and record the needed body movements. Then add KinectRecorderPlayer as a component of the KinectController-object in your scene, and enable its 'Play at start'-setting.
     
  25. digitalfunfair

    digitalfunfair

    Joined:
    Oct 21, 2014
    Posts:
    10
    Ah I tried 2. but I needed some depth info.
    1. - guess it was obvious and I missed it!
    Sorry for wasting your time and thanks for the asset!
     
    roumenf likes this.
  26. Aziii

    Aziii

    Joined:
    Jul 7, 2017
    Posts:
    1
     
  27. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    What is your question?
     
  28. zs_3718

    zs_3718

    Joined:
    Apr 28, 2015
    Posts:
    1
    Hi,
    When I connect a Logitech webcam to the PC and use WebCamTexture with it to show the image (because I must put the camera far away), the Kinect starts dropping frames badly...
    How can I solve this problem?
    Thanks.
     
    Last edited: Jul 11, 2017
  29. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
  30. dennyroberts

    dennyroberts

    Joined:
    Feb 14, 2016
    Posts:
    7
    Hi! I've got a question. Gonna read through this long thread but thought I'd post first.

    I have a scene for multiple players with multiple avatars in a row. I have the players being detected from left to right, and avatars only appear when that number of players is detected (so if one person is there, only one avatar is on screen, and so on). The avatars are locked in place, so the attempt is to always have the order of the avatars on screen be the same as the order of the people who are controlling them.

    When a player leaves, I want the control of the avatars to also switch. Like if the middle player (P2) in a three player game steps out, P1 should stay on the same avatar, and P3 should become the new P2 and take control of P2's avatar, and then P3's avatar disappears.

    I can't get it to work this way... Instead, P1 and P3 keep their avatars. It seems that although the User ID list and Player Index list shrink when a player exits, and the Player Indexes do indeed change, control of the avatars stays as it was before anyone left.

    Furthermore, at runtime, if I manually change the Player Index of an avatar in the editor, it has no effect on which player is controlling that avatar.

    Please help!!
     
  31. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I answered this by e-mail last Friday, as far as I remember.
     
  32. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,302
    @roumenf ,
    I have a custom line skeleton drawn via GetJointPosDepthOverlay, which works OK.
    What can I do to ensure that a newly added AvatarController with a skinned model overlays it exactly?
     
  33. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Please open the KinectDemos/FittingRoomDemo/KinectFittingRoom2-scene and look at the components of the ModelMF-game object. The AvatarController is not enough for overlays; you would need the AvatarScaler-component, too. The major issue with skinned-model overlays is that you cannot overlay the body parts directly - this would distort the model. The better approach is to scale the model according to the user's bone lengths.
     
  34. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,302
    Yup, figured it out meanwhile. Thanks!

    I guess I didn't describe what I wanted exactly: I wanted to match existing 2D colliders - with positions obtained from GetJointPosDepthOverlay (let's say just the wrist joints) - with a model added afterwards (locked in Z). This works to some extent; the issue I was having was that the camera's Y had to be set to 1 for some reason, in order to get a meaningful overlap.
    Is there something I can look into? I didn't want to begin modifying e.g. AvatarController/Scaler if there's an easier way.
     
  35. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    If you want to lock the Z-coordinates, in order to use the components in 2D, enable the 'Ignore Z-Coordinates'-setting of the KinectManager-component. As far as I remember, it is also recommended to set the camera's Y-position to match the height of the Kinect camera above the ground.
     
  36. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,302
    I'll test this, thanks
     
  37. chokanche

    chokanche

    Joined:
    Jul 23, 2017
    Posts:
    2
    Hi,

    Is there a way to increase the height of the avatar's jump?
    Thanks,
    M
     
    Last edited: Jul 30, 2017
  38. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    The AvatarController-component (hence the avatar in the scene) reproduces the movements of the user. So, to increase the jump height, the user needs to jump higher. There is also a 'Move rate'-setting of the AvatarController, which you can use to scale the movements. This will scale not only the height of the jump though, but all movements of the avatar in all directions. The last option is to enable the 'External root motion'-setting of the AC and move the avatar with your own script.
     
    Last edited: Jul 31, 2017
  39. AugmentedSpaceAgency

    AugmentedSpaceAgency

    Joined:
    May 11, 2017
    Posts:
    12
    Hello, I'm building a scene with skinned-mesh avatar control and user body blend. In the FittingRoomDemo2 scene it works without problems, but in my scene there is just the camera-feed texture and the 3D model is invisible. I copied my model into the demo scene and there was no problem; then I made a new scene based on the demo scene, but after restarting Unity the problem was back. I've added the BackGroundLayer layer, as specified in the docs, but still can't use the user body blend outside of the demo scenes. I'm using Unity 2017.1 x64 - is this a bug, or do you know a fix?
     
  40. chokanche

    chokanche

    Joined:
    Jul 23, 2017
    Posts:
    2
    Thanks for the reply!
    M
     
  41. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi, pay attention to the outlined settings of the game objects in the attached screenshots. I just tested creating a new fitting-room scene in Unity 2017.1 (and copied some of the objects from FR-demo2 there, to save some time), and it works as expected. As far as I can tell, it's only a matter of adjusting the settings of the game objects. 'Mixed' in the MainCamera's settings means all layers except BackgroundLayer1, which is rendered by the BackgroundCamera.
     

    Attached Files:

  42. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,302
    @roumenf why is the avatar's Z-position changed wildly once the user is detected, with Z-ignore set on the manager?
    It gets pulled towards the camera to match the user's size; I would expect it rather to scale with respect to the avatar's initial Z (0) coordinate.

    I didn't understand the scaling when an orthographic camera was used, by the way: the avatar got shrunk significantly for some reason.

    Is there a reason for this?

    For matching the avatar with 2D colliders, I ended up finding the intersection with the Z=0 plane with respect to the perspective camera, which solved it in LateUpdate (Update couldn't be used, due to flickering); see the sketch below.

    Note: We needed to add a horizontal offset to the AvatarController too - it's a 2-player game and we needed to shift the players apart a bit. There was intensive flickering too with (any) non-zero offsets set; this went away when continuous scaling was turned off on the AvatarScaler, for some reason. This was really impossible to find out from the editor tooltip alone, without intensive experimenting with many combinations of settings.
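    For reference, a rough sketch of that Z=0 intersection (hypothetical names; done in LateUpdate, as described above):

    ```csharp
    using UnityEngine;

    public class JointToZPlane : MonoBehaviour
    {
        public Camera perspCamera;         // perspective camera rendering the avatar
        public Transform wristJoint;       // joint transform on the avatar (e.g. the wrist)
        public Transform wristCollider2D;  // object carrying the 2D collider to be matched

        void LateUpdate()
        {
            if (perspCamera == null || wristJoint == null || wristCollider2D == null)
                return;

            // cast a ray from the camera through the joint and intersect it with the Z = 0 plane
            Plane zPlane = new Plane(Vector3.forward, Vector3.zero);
            Ray ray = new Ray(perspCamera.transform.position,
                              wristJoint.position - perspCamera.transform.position);

            float enter;
            if (zPlane.Raycast(ray, out enter))
            {
                Vector3 onPlane = ray.GetPoint(enter);
                wristCollider2D.position = new Vector3(onPlane.x, onPlane.y, 0f);
            }
        }
    }
    ```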
     
  43. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    May I ask you to send me (via WeTransfer.com) a zipped project with a sample scene demonstrating the issues above? This way I could look closely at what may have caused them, instead of trying to reproduce them myself.

    The 'Continuous scale'-setting was meant to control how the avatar scaling should be applied - once when the user gets detected, or continuously on each update. It was not meant to mean 'flickering or not', hence this was not mentioned in the tooltip.
     
    Last edited: Aug 6, 2017
  44. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,302
    Yeah, the problem was that it was hard to find the culprit, since the tooltip is really about something completely different :)
    With a non-zero offset it seems to alternate the position quickly between the original Kinect user position and the offset one, resulting in flickering.
    There is nothing extra going on in the scene (I think). I will try to make a repro with all of the above as time permits, thanks!
     
    roumenf likes this.
  45. digitalfunfair

    digitalfunfair

    Joined:
    Oct 21, 2014
    Posts:
    10
    Anyone got any tips or experience with managing Kinect failures in long-term installations? My plan is below, but I'm looking for advice before implementation, thanks.

    The Kinect add-on checks to see if it is initialised, so I thought it would be good to run a timer while it isn't, and if that reaches a certain amount, say 2 minutes, then reboot the app. But I also need to set some flag so it doesn't end up in a boot loop. Then, if it boots a couple of times without the Kinect working, restart the computer. If that doesn't work, revert to a non-Kinect demo mode.

    At the same time, there are free downloadable apps that check and, if necessary, restart processes. Could I use one of these? The hard part is testing it, as it's hard to simulate the Kinect failing except by unplugging either the power or the USB, which might not be exactly the same.
     
  46. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I would only recommend storing that flag (or counter) in a file, to make it persistent across restarts.
    By the way, if your app has multiple scenes or utilizes the Kinect face-model mesh (i.e. has the FacetrackingManager-component with its 'Get face model data'-setting enabled), keep in mind that the face-tracking subsystem may cause memory leaks. Hence you need to restart the app from time to time, e.g. every few hours.
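    A minimal watchdog sketch along those lines - assuming KinectManager.Instance.IsInitialized() reports sensor availability; the file name, thresholds and EnterDemoMode() are hypothetical:

    ```csharp
    using System.IO;
    using UnityEngine;

    public class KinectWatchdog : MonoBehaviour
    {
        public float restartAfterSeconds = 120f;  // ~2 minutes without the sensor
        public int maxAppRestarts = 2;            // app restarts before falling back to demo mode

        private string counterFile;               // persists the restart counter across restarts
        private float notInitializedTimer = 0f;
        private bool counterCleared = false;

        void Start()
        {
            counterFile = Path.Combine(Application.persistentDataPath, "kinect_restarts.txt");
        }

        void Update()
        {
            KinectManager manager = KinectManager.Instance;
            bool kinectOk = manager != null && manager.IsInitialized();

            if (kinectOk)
            {
                notInitializedTimer = 0f;
                if (!counterCleared)
                {
                    File.WriteAllText(counterFile, "0");  // sensor works - reset the persistent counter
                    counterCleared = true;
                }
                return;
            }

            notInitializedTimer += Time.deltaTime;
            if (notInitializedTimer < restartAfterSeconds)
                return;

            int restarts = 0;
            if (File.Exists(counterFile))
                int.TryParse(File.ReadAllText(counterFile), out restarts);

            if (restarts < maxAppRestarts)
            {
                File.WriteAllText(counterFile, (restarts + 1).ToString());
                Application.Quit();  // an external launcher / process-watcher then restarts the app
            }
            else
            {
                EnterDemoMode();  // give up on the sensor and switch to the non-Kinect demo mode
            }
        }

        void EnterDemoMode()
        {
            // hypothetical - load the fallback scene / disable the Kinect-dependent features
        }
    }
    ```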
     
  47. vivalavida

    vivalavida

    Joined:
    Feb 26, 2014
    Posts:
    85
    Hi @roumenf ,
    I'm trying to use the colour collider demo with background removal.

    I've been able to get it working with the Kinect v2, but the colliders act weird when using the Kinect v1;
    this happens soon after adding the 'background removal manager'.

    If I use the 'simple background removal', then there are no problems.

    While I'm good for now, I'd prefer to use the 'background removal manager' if possible.
    Thanks.
     
  48. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi, please open KinectScripts/KinectInterop.cs, find the MapDepthPointToColorCoords()-method, and in the first 'if' after 'sensorData.depth2ColorCoords != null' add ' && sensorData.sensorIntPlatform == DepthSensorPlatform.KinectSDKv2'. Hope this will resolve your collider issue ;)
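    If it helps, the changed condition would presumably end up looking roughly like this (the surrounding code is paraphrased from the instructions above, not copied from KinectInterop.cs):

    ```csharp
    // KinectInterop.cs, MapDepthPointToColorCoords() - first 'if', with the added platform check
    if (sensorData.depth2ColorCoords != null &&
        sensorData.sensorIntPlatform == DepthSensorPlatform.KinectSDKv2)
    {
        // ... use the precalculated depth-to-color coordinates (Kinect v2 path only) ...
    }
    ```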
     
    vivalavida likes this.
  49. vivalavida

    vivalavida

    Joined:
    Feb 26, 2014
    Posts:
    85
    Hi again,
    is it possible to access the accelerometer data for the V1 and V2?
    Thanks.
     
  50. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    This data is not available, as far as I remember.