
UnityEngine.XR.InputDevice Feedback

Discussion in 'AR/VR (XR) Discussion' started by PanayotCankov, Jun 17, 2019.

  1. PanayotCankov

    PanayotCankov

    Joined:
    May 28, 2018
    Posts:
    16
    I've started moving some projects toward the UnityEngine.XR.InputDevice capabilities, and so far it looks pretty straightforward when targeting one platform, but certain things could have made my life easier. I hope you appreciate my feedback.

    The Oculus Quest Touch controllers return CommonUsages.triggerButton as true when the index finger merely touches the trigger; I expected it to become true only when the trigger is pressed completely. I remember the Vive controller's trigger clicking at some point, and given that CommonUsages.trigger is a float reporting how far the trigger is pressed, I would expect CommonUsages.triggerButton to correspond to the point where the trigger makes the 'click'.
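
    For illustration, here is roughly how I read both usages today (a minimal sketch; the 0.9 threshold and the class name are my own, not anything the API defines):

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.XR;

        public class TriggerSample : MonoBehaviour
        {
            void Update()
            {
                InputDevice rightHand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
                if (!rightHand.isValid)
                    return;

                // Bool usage: on Quest this currently reports the capacitive touch, not a full press.
                if (rightHand.TryGetFeatureValue(CommonUsages.triggerButton, out bool triggerTouched))
                    Debug.Log("triggerButton: " + triggerTouched);

                // Float usage: 0..1 'how much the trigger is pressed'.
                if (rightHand.TryGetFeatureValue(CommonUsages.trigger, out float triggerAmount))
                {
                    // My own workaround threshold for 'pressed completely'.
                    bool fullyPressed = triggerAmount > 0.9f;
                    Debug.Log("trigger: " + triggerAmount + ", fullyPressed: " + fullyPressed);
                }
            }
        }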

    CommonUsages.primary2DAxis - it is a joystick on the Oculus Quest Touch controllers and a touchpad on the Oculus Go controller. Interactions with a thumbstick and a touchpad are very different. Take scrolling, for example: with a joystick, if you push up and hold, you expect the page to keep scrolling, while with a touchpad you expect touch-and-drag to behave like a mouse wheel.
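
    To make the difference concrete, this is roughly the branching I end up writing (a sketch; the class name, the scroll speed, and the touchpad flag are my own, and the flag has to be set from outside because the API itself doesn't say which kind of axis it is):

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.XR;

        public class ScrollSample : MonoBehaviour
        {
            // Set from outside (device name check, platform define, etc.) because the API
            // itself doesn't say whether primary2DAxis is a stick or a touchpad.
            public bool treatAsTouchpad;

            public float joystickScrollSpeed = 2f; // my own tuning value
            Vector2 lastTouch;
            bool wasTouching;

            void Update()
            {
                InputDevice hand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);

                if (!hand.TryGetFeatureValue(CommonUsages.primary2DAxis, out Vector2 axis))
                    return;

                if (treatAsTouchpad)
                {
                    // Touchpad: scroll by the drag delta, like a mouse wheel.
                    hand.TryGetFeatureValue(CommonUsages.primary2DAxisTouch, out bool touching);
                    if (touching && wasTouching)
                        Scroll(axis.y - lastTouch.y);
                    lastTouch = axis;
                    wasTouching = touching;
                }
                else
                {
                    // Joystick: hold up/down to keep scrolling.
                    Scroll(axis.y * joystickScrollSpeed * Time.deltaTime);
                }
            }

            void Scroll(float delta) { /* apply delta to the scroll view */ }
        }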

    Oculus Avatars: when I add the avatars and let them render the controllers, their positions match the UnityEngine.XR.InputDevice positions for the hands on Quest, but on Oculus Go the avatar controller's movement is constrained to somewhat reasonable bounds and at some point diverges from the UnityEngine.XR.InputDevice position a lot. It would have been nice if avatar systems could feed back the hand positions after the constraints are applied, so I could, for example, attach a laser pointer to the right hand that works with Oculus Avatars, VRTK avatars, and so on. It sounds reasonable to me for these systems to provide new soft input devices: something like XRNode.RightHand returning the raw hardware device position, and an XRNode.RightAvatarHand driven in software by the avatar system plugin.

    With that said, I love UnityEngine.XR.InputDevice when I have to work with a specific controller, but it is not very friendly when I am trying to create a multi-platform experience or add third-party avatars. I can create a "MyInputDevice" that checks the InputDevice's name and derives additional information such as "is touch", "is stick", "is pressed", that also has fields pointing to the avatar and calls third-party APIs such as OvrAvatar's "this.avatar.GetHandTransform(this.hand, OvrAvatar.HandJoint.HandBase);", and abstract it in a way that works across platforms. But then I feel like it defeats the purpose of UnityEngine.XR.InputDevice.
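
    To make the idea concrete, this is the general shape of that wrapper (a rough sketch only; the class, the 0.9 threshold, and the device-name check are mine, the name string is a placeholder rather than a verified device name, and the OvrAvatar types follow my reading of the Oculus Avatar SDK):

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.XR;

        // Rough sketch of the per-app wrapper I end up writing around InputDevice.
        public class MyInputDevice
        {
            public InputDevice device;        // the raw XR device
            public OvrAvatar avatar;          // optional third-party avatar driving this hand
            public OvrAvatar.HandType hand;   // hand selector as exposed by the avatar SDK

            // Derived from the device name, which is exactly the special-casing I'd like to avoid.
            public bool HasTouchpad
            {
                get { return device.name.Contains("Oculus Go"); } // placeholder check, not a verified name
            }

            public bool IsTriggerPressed
            {
                get
                {
                    // Work around triggerButton reporting touch on Quest by thresholding the float usage.
                    return device.TryGetFeatureValue(CommonUsages.trigger, out float t) && t > 0.9f;
                }
            }

            public Transform HandTransform
            {
                get
                {
                    // Prefer the avatar-driven transform when an avatar system is present.
                    if (avatar != null)
                        return avatar.GetHandTransform(this.hand, OvrAvatar.HandJoint.HandBase);
                    return null; // otherwise the caller falls back to the raw device pose
                }
            }
        }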

    There is also the new InputSystem that has zero documentation on XR: https://github.com/Unity-Technologi...es/com.unity.inputsystem/Documentation~/XR.md Will it address some of these issues?

    What would be the currently recommended way to build reusable VR components for apps that work on WMR, Oculus Go, Quest, Rift, and Vive, with and without avatars?
     
    StayTalm_Unity likes this.
  2. StayTalm_Unity

    StayTalm_Unity

    Unity Technologies

    Joined:
    May 3, 2017
    Posts:
    182
    Hello!
    I've been working on changing, adding, and tuning these APIs, and feedback is greatly appreciated :)

    1) That's a good point, and one that @jackpr caught onto a while back. Both Vive and Oculus had a 'Trigger As Button' input, but the former was the 'click' at the 75% pressed state, and the latter was a capacitive touch. Starting with the new XR plugins architecture, Vive will still be set to the 'click' state, and Oculus will be set to a 75% threshold.

    2) This is something I'd like to fix, but I'm not sure of the best way. We want all devices with a touchpad or thumbstick to share the same 'primary' usage, but there is no good way yet to provide context on individual features. Having PrimaryJoystick and PrimaryTouchpad would break some of the 'just get the 2D axis' simplicity. I could also add more context to individual feature info, but I want to do that carefully; it could easily blow out into a ton of weird, one-off bits of information and specifics. For right now, you can use device names and manufacturers, but that doesn't work well when you want the game to work with platforms that may not be out yet, and on our side, if that is the way to go, I need to publish the best way to map names to devices. I need to make something official, but since the InputSystem uses the same info, my best guess at those name mappings can be found here.
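
    In the meantime, a rough sketch (not official sample code) of enumerating what is connected and keying your own mapping off the reported name and manufacturer; the mapping strings below are placeholders, not verified device names:

    Code (CSharp):
        using System.Collections.Generic;
        using UnityEngine;
        using UnityEngine.XR;

        public class DeviceMappingSample : MonoBehaviour
        {
            void OnEnable()
            {
                // Devices that are already connected.
                var devices = new List<InputDevice>();
                InputDevices.GetDevices(devices);
                foreach (var device in devices)
                    Register(device);

                // 2019.2+: also catch devices that connect later.
                InputDevices.deviceConnected += Register;
            }

            void OnDisable()
            {
                InputDevices.deviceConnected -= Register;
            }

            void Register(InputDevice device)
            {
                // Log what the runtime actually reports so you can build your own name -> behaviour table.
                Debug.Log("XR device: name='" + device.name + "' manufacturer='" + device.manufacturer + "'");

                // Placeholder mapping; real strings should come from the published name mappings.
                bool hasTouchpad = device.name.Contains("Oculus Go") || device.manufacturer.Contains("HTC");
                Debug.Log(device.name + " treated as touchpad: " + hasTouchpad);
            }
        }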

    3) It's hard for us to map to third-party toolkits. Communication on changes and updates between companies is not always the best, and it's hard for us to keep up with why our own APIs don't work well in conjunction with somebody else's. This means that we don't really target our stuff to match VRTK, MRTK, Oculus Avatars, etc., and instead try to get them on board with reading their data through our abstraction, so at least then we get the same source values.

    That said, the Oculus Go case specifically is interesting. The Go controller is 3DoF, with Oculus providing a simulated arm model, and that arm model position is just what we read natively from the Oculus SDK. If the Avatar package doesn't sync up to that, it means it must be doing additional work to simulate that device's position. I can't really predict what that could be, and I'm not familiar enough with the avatar package to know whether you can replace where it gets its controller position from.

    4) We are working on it :). We think it's head and shoulders better than the old UnityEngine.Input APIs. InputDevice showed up in 2019.1 as something simple, and we are slowly adding more features and making it more well-rounded. 2019.2 added a few smaller things like connected/disconnected callbacks, and 2019.3 has some adjustments to usages, replacing the concept of device roles with something better, plus tracking origin and boundary point APIs. The goal is one-stop cross-platform: you connect an unknown device, check as a developer that it has the features you require, and map to those. But we are not quite there. I really want to avoid adding device-specific things like 'isTouch', but I do agree that more feature-level context would be helpful, and it's something I'd like to get down properly.
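
    To illustrate the 'check that it has the features you require' part with what exists today, something along these lines works (a sketch, not official sample code; the class name is made up):

    Code (CSharp):
        using System.Collections.Generic;
        using UnityEngine;
        using UnityEngine.XR;

        public class FeatureCheckSample : MonoBehaviour
        {
            void Start()
            {
                InputDevice device = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
                if (!device.isValid)
                    return;

                // Ask the unknown device which feature usages it can actually report.
                var usages = new List<InputFeatureUsage>();
                if (device.TryGetFeatureUsages(usages))
                {
                    bool hasTrigger = false, has2DAxis = false;
                    foreach (var usage in usages)
                    {
                        if (usage.name == CommonUsages.trigger.name) hasTrigger = true;
                        if (usage.name == CommonUsages.primary2DAxis.name) has2DAxis = true;
                    }
                    Debug.Log(device.name + " trigger: " + hasTrigger + " primary2DAxis: " + has2DAxis);
                }
            }
        }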

    5) As for the InputSystem: it's on my own personal to-do list, but with delays in that package and its slower, more experimental rollout, we wanted to get the InputDevice APIs out and ready first. I will be going back to properly document how that works and what it can do. In the long run, the InputDevice data is going to be fed into the InputSystem, and the action/binding systems available there will provide a stronger abstraction than we can at the InputDevice level.

    6) Right now our suggestion is to use the InputDevice APIs, which in my opinion at least provide good, low-level access, but if you want to use an existing animated avatar system, we suggest targeting the one that suits your needs best. I understand this is difficult right now because each avatar system targets a specific SDK. This is a problem for us too, and while it's too early to give any details, we are looking into providing a proper cross-platform avatar system. If you don't need built-in hand or controller articulation, the TrackedPoseDriver can map a GameObject to a controller, so you can use custom models that way and animate them based on InputDevice features. I do agree this should be a built-in, supplied utility given how common a feature it is, and we are working towards that.
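
    For the custom-model route, here is a minimal sketch of driving a hand Transform directly from the pose usages, which is roughly what the TrackedPoseDriver does for you (the class name and the grip example are made up for illustration):

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.XR;

        // Drives a custom controller/hand model from the raw device pose.
        public class HandModelDriver : MonoBehaviour
        {
            public XRNode node = XRNode.RightHand;

            void Update()
            {
                InputDevice device = InputDevices.GetDeviceAtXRNode(node);
                if (!device.isValid)
                    return;

                // Pose is relative to the tracking origin; parent this object under the XR rig root.
                if (device.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 position))
                    transform.localPosition = position;
                if (device.TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion rotation))
                    transform.localRotation = rotation;

                // Example of animating the model from an InputDevice feature, e.g. grip amount.
                if (device.TryGetFeatureValue(CommonUsages.grip, out float gripAmount))
                {
                    // Feed gripAmount into an Animator parameter or blend shape here.
                }
            }
        }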

    Hope this helps, and we do appreciate the feedback; keep it comin'. I shared this with the rest of the team, and we are already talking about points 2 & 3, and I personally got reminded about point 5.
     
    Last edited: Jun 17, 2019
    ROBYER1 and PanayotCankov like this.
  3. ROBYER1

    ROBYER1

    Joined:
    Oct 9, 2015
    Posts:
    1,454
    I caused myself tons of confusion trying to use the new Input System with the XR Interaction package today. I might get round to reporting issues later, but they don't seem to work together at all; please look into migrating XRI to the new Input System.