Input System Update

Discussion in 'Input System' started by Rene-Damm, Dec 12, 2017.

Thread Status:
Not open for further replies.
  1. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Motors and other output controls can be freely added. For the Xbox controller, we'll probably add the trigger motors on top of the two basic ones it gets from Gamepad once output is fully hooked up. Unfortunately, on the "classic" desktop there isn't really a good way to talk to the trigger motors (at least that I know of). The HID driver for XInput controllers does make them available but is sort of unusable because it combines the two triggers into a single control (for the sake of DirectInput). And XInput only provides access to the two primary motors. On the Xbox, we'll hook them up. I assume on UWP it'll work fine, too.
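
    Once output is hooked up, I'd expect usage to look something along these lines -- purely a sketch at this point; SetMotorSpeeds and the low-/high-frequency split are assumptions about where the API is headed, not final:

    Code (CSharp):
    // Sketch only -- the output API isn't final; names here are assumptions.
    var gamepad = Gamepad.current;
    if (gamepad != null)
    {
        // Low-frequency (left) and high-frequency (right) motor, values 0..1.
        gamepad.SetMotorSpeeds(0.25f, 0.75f);
    }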
     
    FiveFingers, frarf and Peter77 like this.
  2. frarf

    frarf

    Joined:
    Nov 23, 2017
    Posts:
    27
    In the old new input system, there was the concept of "ActionMap"s, pretty much just a bunch of actions bundled up for convenience. Although the specifics weren't exactly shown, I also recall that you were able to make maps out of specific assets. I'm wondering if there's anything, at least on the drawing board, similar to that. The current InputAction system you showed off seems great for jamming, but a bit messy for large, complex input devices (like a controller).
     
  3. dlackey_bh

    dlackey_bh

    Joined:
    Jul 18, 2014
    Posts:
    11
    I work in simulation, in which there is the concept of hardware-in-the-loop (HWIL). This is where you take hardware parts of the actual system you are simulating and hook them up to the simulation. Consider taking a sensor off of some piece of equipment and using it as an input device. We also have things like unique control panels that may have variants depending on system configuration, and it would be nice to just have one sim with a set of templates that support the variants. The hardware interface is very unlikely to be HID. Think serial connections or otherwise, and probably multiple connections at once. Will the new input system support using literally anything as an input device?

    I had made a request for this on the feedback site years ago. In it, I mentioned Delta3D's extremely abstract concept of an input device being nothing more than a collection of floating-point values and binary states (buttons). Delta3D was developed as a simulation toolkit, so it supported the idea that anything can be an input device, I assume to support HWIL sims. Because of that, you could even do weird stuff like swap out your InputDevice-derived class that connected to actual hardware controls for another InputDevice subclass driven by an AI, or a socket connection, or a pigeon pecking on a touch-screen, or whatever you want. The simulation wouldn't know the difference. This made integration with custom, or new-to-market, hardware super easy. I once made an InputDevice subclass that handled the Vuzix VR290 HMD's 3-axis gyro, which took values directly from the driver. I also hooked up a SpaceMouse in a similar way.
     
  4. Roywise

    Roywise

    Joined:
    Jun 1, 2017
    Posts:
    68
    Does that mean we'll be able to get the mouse or touch position multiple times between frames? Right now we're having some difficulty drawing an acceptable-looking line when the FPS is low, because the mouse position only updates once per frame, which means only 15 points of the line get set per second if the FPS is 15.
     
  5. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    789
    The way I understood it, there's a separate thread that handles low-level input stuff (for example, polling devices that need polling and don't send events) and it runs at a high rate (250 Hz).
    That thread marks the exact timestamps of all messages that come in (either from device events or from normal polling).

    You can't react to anything right at that moment, but your code will run (as usual) in Update(), and that's when you get a flurry of input messages (all the events that happened since the last frame, which the input thread accumulated).

    You can then process all the input data while also having access to the timestamps.

    So let's say your game runs at a fixed 15 fps and you want to draw a line like in Photoshop. In the current Unity version you'd get really bad corners, of course, since the input only gets updated once per frame.

    Now, in this new upcoming input system, the input runs at 250 Hz and all input is recorded. When the next frame is rendered, you can either take the "current input" (aka the last known state of the devices), which gets you the same behaviour as in the current Unity version, or you can actually inspect every single message that came in between the last Update() and the Update() you are in right now.

    So up to 250 mouse input events for you to draw your line with -- and of course fewer or even none when nothing happened with the mouse.

    And then of course you have those "actions", which are basically just a nice, slightly-higher-level management API on top, where you can define groups of inputs (like holding keys and whatnot) as "actions"; every frame they get updated/evaluated and potentially triggered (i.e. invoking the corresponding handlers).

    At least that's how I understood it.
    Would be nice to get definite confirmation that this is indeed how it works.
     
  6. Roywise

    Roywise

    Joined:
    Jun 1, 2017
    Posts:
    68
    That's what I actually meant, and it would make me pretty happy. I don't need the points faster than once per frame, but I need all the points between the frames to make a smoother line.
     
  7. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Yup, agree, the standalone action thing in the video won't scale. I think it's good for something simple where you have something like 5 actions or so, but beyond that I think going with an asset is the better way.

    And yup, there's still an equivalent to the ActionMap assets from the old new system. There's still plenty that needs to be done around this, but the fundamentals are there already. You can create an .inputactions asset in the editor, populate it with sets of actions, and use it from your game code.
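
    To sketch what using such an asset could look like from script -- the names here ("gameplay", "fire", FindActionMap/FindAction, the namespace) are assumptions and still subject to change:

    Code (CSharp):
    // Sketch, not final API; map/action names are hypothetical.
    using UnityEngine;
    using UnityEngine.Experimental.Input; // preview namespace; may change

    public class PlayerController : MonoBehaviour
    {
        public InputActionAsset actions; // assign the .inputactions asset in the inspector

        private InputAction m_Fire;

        void OnEnable()
        {
            var gameplay = actions.FindActionMap("gameplay");
            m_Fire = gameplay.FindAction("fire");
            m_Fire.performed += ctx => Debug.Log("Fire!");
            gameplay.Enable();
        }

        void OnDisable()
        {
            actions.FindActionMap("gameplay").Disable();
        }
    }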

    Yup, the system itself does. You can't have reference values stored on devices (e.g. your device state can't have a string value on a control) but otherwise a device is really just a blob of data that you update with blobs of data. The controls are just frontends to read out the data but have no say in what gets stored.

    So yup... anything goes :)

    For the use case you outline, it sounds like the native backends in Unity itself may not cover your devices, but it's easy to write new device support that picks up data from other APIs. Right now, this has to be done in C# though we're working on supporting native plugins in the future. So if you want to pick up devices straight from USB (or whatever the transport), you will be able to do that.

    You can do stuff like this, with the one difference that in the system, devices are dumb. They have no inherent logic tying them to specific backends. So if, say, you want to create an artificial mouse with simulated input, you'd just create a stock Mouse and then feed it state from your simulation logic. The end result should be the same, though.
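
    For illustration, something like this -- QueueStateEvent and MouseState reflect how it looks in the repo right now; treat the exact names as indicative only:

    Code (CSharp):
    // Sketch: a stock Mouse driven entirely by simulation code. The device
    // looks like any other mouse to the rest of the system.
    var simulatedMouse = InputSystem.AddDevice<Mouse>();

    var state = new MouseState { position = new Vector2(123, 456) }
        .WithButton(MouseButton.Left); // pretend the left button is down
    InputSystem.QueueStateEvent(simulatedMouse, state);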

    As source data, yes. For example, if the system delivers 5 mouse move events during the frame, you'll see all 5 mouse movements surfacing on the managed side.

    However, the state system still aggregates. I.e. for a frame, we consume all those 5 events and write them into state. If your game logic only queries state, it'll only see one value (though for mouse deltas, that should still be an accumulation and not just the last value).

    Put another way, if you do need to observe every single state change regardless of how many there are in a frame, you either have to go directly to events or use actions.

    Correct. Devices coming in as events are generally picked up on the UI thread, but polling devices sit on their own thread running at a frequency you can control through the API. HIDs sit on their own thread which consumes input at the speed produced by the device/system.

    Correct. Though events are delivered in a separate callback *before* Update() and FixedUpdate(). By the time those run on your MonoBehaviours, input has been updated.

    Yup, if you tap into the event stream directly, you'll see everything that has accumulated along with a timestamp on each event.

    Yup correct. The actual sampling would be dependent on the specific type of device and how it is picked up in the backend. For mice on desktops, we rely on the sampling done by the OS and delivered as events. But yeah, that sampling is pretty much guaranteed to be significantly higher than your frame rate even if that's a steady 60fps.

    Yup, correct. On top of events there's the state system you mentioned above, with equivalent function to Unity's current input manager. And then there's actions, which internally are "state change monitors" (i.e. they grab controls based on bindings, set up monitors on them, and then, when the monitors fire, process the changes that have happened), which then trigger callbacks (and in the future will have another API without callbacks).
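
    So for the line-drawing case above, going directly to events would look roughly like this (a sketch; onEvent and ReadValueFromEvent per the current code, subject to change):

    Code (CSharp):
    // Sketch: observe every mouse position that arrived during the frame,
    // each with its own timestamp, instead of just the aggregated state.
    InputSystem.onEvent += (eventPtr, device) =>
    {
        if (!(device is Mouse mouse))
            return;
        if (!eventPtr.IsA<StateEvent>() && !eventPtr.IsA<DeltaStateEvent>())
            return;

        // Returns false if the event doesn't contain this control's value.
        if (mouse.position.ReadValueFromEvent(eventPtr, out Vector2 position))
            AddLinePoint(position, eventPtr.time); // AddLinePoint = your own code
    };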
     
    MechEthan and dadude123 like this.
  8. AlanMattano

    AlanMattano

    Joined:
    Aug 22, 2013
    Posts:
    1,501
    Super awesome! @Rene-Damm Will there be the possibility to use force feedback joysticks (for Windows or nerds)?

     
    Last edited: Dec 20, 2017
  9. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Possibility, definitely. Actuality, possibly :D

    Output support will allow sending data to any kind of backend API, but it's still unclear at this point how comprehensive out-of-the-box backend support will be. We will definitely have XInput rumble support and we will definitely have HID output support. That much is clear. We found HID output to be of limited use in practice, though. Vendors often just end up putting heaps of vendor-specific controls on their devices and keeping their meaning secret, or using other mechanisms that require leveraging device-specific SDKs.

    Anyway, I guess my point is: certain devices may require hooking up custom SDKs or other backends, but the system makes that easy, and the result looks like any other device already supported.
     
    laurentlavigne and AlanMattano like this.
  10. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    //UPDATE: This is outdated. The "develop" branch in the GitHub repo can be used with any Unity 2018.2.5+ build or any Unity 2018.3 beta. A special build is no longer required.

    //NOTE: The preview build only works with the "master" branch in the GitHub repo. We're working on getting the latest backends into your hands so that everyone can run the "develop" branch.


    Hey everyone,

    As promised, here are preview builds of the editor that allow running the project found at https://github.com/Unity-Technologies/InputSystem for whoever is interested in giving it a shot.

    Couple notes...

    1) No QA has been performed on these builds and they're based on trunk. There's a good chance there are issues.
    2) Only Windows and OSX support the new input system in the build.
    3) There are some known issues (plus probably unknown ones).

    The ambition here is to get the process rolling. We'll provide updated builds as things mature. ATM things are still somewhat rough.

    If you find issues not already reported, feel free to open an issue on GitHub. And, of course, feedback welcome.

    Sometime early next year I'd also like to publish a more detailed roadmap but alas, step by step :)

    Have a merry Christmas everyone. And thanks everyone for your engagement. Much appreciated.

    Q&A

    How can I try this in my own project?

    ATM you will have to copy the system into your own project manually. The easiest way is to copy the Assets/InputSystem and Assets/InputSystem.Extras folders.

    IMPORTANT: You also have to enable the native backends for the new system. Go to "Edit >> Project Settings >> Player" and set "Active Input Handling" to "Input System (Preview)" or "Both" and restart the editor.

    What version of Unity is this based on?

    It is based on the upcoming 2018.1 beta.

    I plugged in a joystick and... nothing. Is this thing working?

    ATM only HIDs specifically recognized by the system are working.

    It's possible to create product-specific templates for HIDs, but the fallback path that is meant to make sense of HIDs when there is no specific template is broken ATM (it computes incorrect offsets for individual controls on the device, as both the Windows and the OSX HID backends get the order of the elements on the device wrong; unfortunately, the HID APIs on both systems make that tricky to solve).

    I let go of the mouse and the position delta control doesn't go back to 0.0. Known?

    Yup, delta controls don't yet work correctly. They neither accumulate correctly (i.e. multiple mouse deltas occurring in the same frame overwrite each other rather than adding up) nor do they get reset (so they often get stuck). We're working on it. The new state system makes cases like this trickier than they were before.

    I ran the tests and some of them failed. Known?

    All the TODO_ tests are expected to fail.

    Additionally, the tests are susceptible to interference from native platform backends ATM. E.g. if you have a noisy HID plugged in or sometimes even if you move the mouse around, that may cause some tests to fail. We're working on an isolation mode that shuts the native platform backends out during tests.
     
    Last edited: Oct 25, 2018
  11. frarf

    frarf

    Joined:
    Nov 23, 2017
    Posts:
    27
    Thanks for the answer! And this is great, didn't expect early builds so soon. Great work!
    Oh, I also want to ask whether action sequences (like for fighting games) are ever going to be implemented.
    I see a double-tap modifier in the repository but nothing related to sequences.
     
    Last edited: Dec 21, 2017
  12. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Thanks :)

    One thing that's half in there already (OK, more like a quarter) is the concept of "chained bindings". With that, you could have stuff like "hold left trigger, then press the A button". So it'd give you sequences.

    Whether that in combination with action modifiers is going to be enough to go full Street Fighter, I'm not sure. My gut feeling is it wouldn't necessarily be the most elegant system for that kind of input, but it should be doable. And maybe there are further things that can be done around actions to make assembling more involved input patterns very easy.
     
  13. Ferazel

    Ferazel

    Joined:
    Apr 18, 2010
    Posts:
    517
    After watching your videos, I can tell that you have spent a lot of time on the system. Thanks for doing it. I'm sure many people will be glad about the many different ways they can now interact with the input system. However, I feel that it is going to leave a lot of more casual users scratching their heads as to what the best way is to approach input in their game.

    I love the idea of modifiers allowing you to more easily define overloads for input. I hope that those are also easily user-editable and definable, so we can create our own modifier triggers.

    I don't mind having the UI be abstracted out to an inspector. My feeling is that putting the InputActions objects onto actionmap scriptable objects, which we could then push and pop onto a stack, is the desired outcome here.

    A problem for me is the "usage" actionmap. When you set an action for the "primary action", that is not the same if you are in-game or in a menu. I understand that you want to provide an automatic actionmap of sorts, but I feel that is really the wrong course of action. I would rather time be spent on making actionmaps easier to work with and creating an actionmap stack of sorts, similar to what the previous input system was focusing on. I guess I'm confused about why you'd make an abstraction of N/S/E/W button maps on gamepads and have this "Primary Action" actionmap. Are you seeing a need by users to support replacing a gamepad with a touch pen? Or a gamepad with a touchscreen? It just seems unnecessary to me.

    In future videos I would like to see a more "real-world" example of how you support multiple controllers with the same action map in the system (such as for couch multiplayer). I would also like to see how runtime rebinding of actions can best be done and serialized, and how to display graphics for certain bindings in the game, in a tutorial sense. I'm fine if you're not there yet, but to me (and maybe you) this would be a better sign that this system is on the right track.
     
  14. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Yup agree. There has to be a fairly obvious, minimal path to getting set up that requires no deeper understanding of how stuff works. I think with some solid polishing, it should be possible to get the .inputactions path there.

    Could you go into a little more detail about what you mean by "create our own modifier triggers" and how you'd picture them being edited?

    In the bigger picture, we're trying to solve the case of making games work with hardware that wasn't a thing when the developer created the game, or was otherwise not specifically thought of by the developer. I don't think this will be very relevant for gamepads and other "classic" forms of input that haven't changed in years. It might turn out useful in certain ways for general cross-platform input too, but overall it's targeted more at the XR space, where you can be pretty sure that next year there'll be a new controller you didn't know about when you released your game.

    The "usage" stuff is still in its infancy, though, and we still haven't quite figured a number of things out. However, it's completely optional. You never have to care about usages if you don't want to. The only case where we invariably use them ATM is for device usages which is only relevant in the XR space ATM (e.g. "left hand" vs "right hand" controller).

    A video going into more action stuff is on the TODO list. Some of the functionality isn't there yet (you can get textual display names for controls, for example, but ATM you cannot yet associate graphics with controls), but even with what's there ATM, it'd be enough to cover some of the use cases you mention. When I recorded the existing video about actions, they had pretty much just become barely functional.

    Concerning the action layer in general, our thinking is that for the initial public release, it's okay if this won't be 100% yet. The foundation of the input system in general, and of actions specifically, needs to be solid, but beyond the foundation, our expectation is that actions will evolve much more iteratively than the rest of the system.
     
    recursive and Ferazel like this.
  15. Ferazel

    Ferazel

    Joined:
    Apr 18, 2010
    Posts:
    517
    Sure. I guess I was thinking of cases where I wanted to define a "SlowTapModifier" a bit differently in my game vs. your game. I could make my own modifier by using something similar to an animation curve, with duration on the X axis and 0/1 on the Y. Then I would define how long the input needs to be held down by moving a flat curve from 0 to 1, then moving it back to 0 if I wanted a trigger on release, or keeping it at 1 if I wanted it to trigger while being held. This obviously gets more difficult with Vector2 inputs. This editing of modifiers could totally be a version-2 kind of thing; it's just the first thing I thought of when I saw the system.
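
    Roughly this kind of thing -- just sketching my idea; none of this is actual input system API:

    Code (CSharp):
    // Idea sketch only, not an input system API. The curve maps hold
    // time in seconds (X) to 0/1 actuation (Y).
    using UnityEngine;

    public class CurveModifier
    {
        // Flat at 0 until 0.5s, then 1 while held: a "slow tap" style trigger.
        public AnimationCurve response = new AnimationCurve(
            new Keyframe(0.0f, 0f),
            new Keyframe(0.5f, 1f));

        public bool IsTriggered(float heldDuration)
        {
            return response.Evaluate(heldDuration) >= 1f;
        }
    }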

    In regards to the actionmap, I'm glad that there is more that hasn't been shown. To me, actionmaps are a version-1 goal because they will provide the user with clear direction on how Unity expects input to be used in applications. As part of this, I'd love to get these points clarified in a future video or tutorial scene:
    • How you envision users setting up an actionmap stack, e.g. one for menu navigation, one for on-foot gameplay, one for vehicle gameplay.
    • How we can dynamically register new gamepads and how we assign those gamepads to players with specific actionmaps (such as in a couch multiplayer game).
    • An actionmap binding UI example that supports multilingual keyboards (getting the correct dynamic string from a keycode) and image-based classic gamepad actions.
    Thanks again for your time. I understand I'm asking for a lot of details that may not be fully fleshed out yet. My goal isn't to nail you down to a specific implementation and criticize you, but to highlight what I feel are common user implementations to keep prioritized in your minds when refactoring the managed code, and to keep these examples as straightforward for the user as possible. This is where I feel the majority of the user base will tread, and it really needs to have the clearest UI/UX.

    I look forward to more updates next year.
     
  16. JakubSmaga

    JakubSmaga

    Joined:
    Aug 5, 2015
    Posts:
    417
  17. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Cool, thanks. Makes sense to me, and I think it lines up well with where things are headed. But yup, it might be that we won't get all the way there in the initial release.

    Agree that action maps/sets need to be there from the get-go. What's unclear at this point is precisely how much of them will be there, though :) My thinking is that if all they can do is pretty much what's in the code now (which lets you do most of what the old input manager gives you, plus some things it can't do), but that part is solid and reasonably polished, that should do for a very first release.

    I'll make sure to have those on the list for the next video. Not sure how extensive the rebinding infrastructure will be on first release. Stuff like getting correct dynamic strings based on the current keyboard layout already works, but there's still a number of pieces missing overall.

    Yup, everyone on the team agrees. Initial release may not get us all the way there but we'll get there.

    The final b1 release will have more changes going into it unrelated to input, but just to be clear: the official beta will not have the native input stuff yet and won't work with the input system repo. Landing our native code in trunk and in a Unity release is one of the very next things we're going to work on. For the time being, we'll have to stick to custom builds with our stuff added on top.
     
    frarf and JakubSmaga like this.
  18. frarf

    frarf

    Joined:
    Nov 23, 2017
    Posts:
    27
    Hey, if we find a bunch of bugs, where and how do we write our bug reports?
    There are a few bugs I've found that are reproduced just by using the input system (like how the example doesn't work, at least for me).
     
  19. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Feel free to file issues for the bugs on the GitHub project. For now, just a mention of platform and some repro steps should be fine to get things started.
     
  20. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Bit worried about input *again* becoming this complicated thing. If I want Street Fighter, I should code it. Not being ungrateful, but bloat and going through more steps can be confusing.

    Are you guys making game dev stuff or hardware input stuff?
     
    AlanMattano and Seb-1814 like this.
  21. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,629
    I want to echo that statement.
     
    AlanMattano likes this.
  22. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Veering offtopic:

    It's just that I really like Unity, but a lot of what they're adding these days makes it hard to keep up with the important stuff; in short, a lot of noise is added for beginner and expert alike. That's hard to see if you're working at Unity rather than in the trenches, being bombarded with potential API bloat.

    Some examples:

    - networking HLAPI / LLAPI / SyncVars / whatever
    - Rune's Input (no offence since rewired also suffers bloat)
    - Cinemachine
    - Timeline
    - navigation (it's split in two places)

    These are all great things, but their overkill nature means that smaller teams will have a much harder time trying to dig through cruft for practical and direct results. It's also causing maintenance debt and documentation debt on the Unity side, and this is patently obvious to anyone using any of the above.

    So I'd just like straightforward, high-performance input with vibration, runtime binding and great compatibility. The Asset Store can add the rest on top of this solid core.
     
    llJIMBOBll, eobet, Ryiah and 4 others like this.
  23. Ferazel

    Ferazel

    Joined:
    Apr 18, 2010
    Posts:
    517
    I agree with you, but in the past Unity has made API choices for the sake of ease, and products had to suffer as a result because there was no other way to get things done. I think Unity's philosophy going forward is to have an "easy path" and a "low-level path" for most of their systems. Before, all we had were GameObjects and all the overhead associated with them; next, we'll be getting a new data-driven ECS system. Before, we only had 3 render paths with obfuscated ways to modify them; now, we'll be able to write a good chunk of the entire rendering pipeline. You can tell that Unity doesn't want to be bundling up choices for devs and is instead offering options.

    The goal, though, is to make sure that the "easy path" is still valid and usable and doesn't come with a lot of immediate API complexity or bloat. That's why I feel that focusing on the Actionmap system in the new Input API is important. Sure, I'm glad that there are different ways to access input for different types of applications that may require different mechanics; from what I'm hearing, they realize this. However, I'm also glad that I wouldn't be forced to use Actionmaps if I were developing a different type of application where they wouldn't make sense.
     
    Last edited: Dec 30, 2017
    scvnathan and recursive like this.
  24. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Complexity is definitely a concern we're keeping an eye on and want to get right. And while there are quite a few requirements put on the system(*) -- which ultimately dictate a certain size -- we're trying our best to be mindful of what goes into the system and what can be built on top.

    Personally, I'm a big believer in "good toolboxes". A system doesn't need to be equipped with solutions to every problem but a good system should be equipped with the right tools to build solutions without having to rework the system itself.

    I think even the "chained bindings" thing can serve as a decent example. You *could* probably build a full Street Fighter-style binding apparatus on top with this as the base, but what comes out of the box is simple and a natural extension of what's already there. And it doesn't work in only one specific way; instead, it's a tool that can be used in a variety of ways. One thing I would like to do with them, for example, is to allow various forms of gesture recognition to be built from them in action modifiers, where controls are required to act in unison.

    However, whether chained bindings will *actually* make it into the final toolbox is still an open question. ATM a lot about the action stuff is still about finding a sweet spot with just the right set of tools to enable but not to overwhelm.

    Another example to me is the template stuff. You could build a much simpler way to represent devices, but templates turned out to be such a versatile and effective tool that I think they were the right thing to go into the toolbox.

    Concerning Unity API design in general, @Ferazel is quite right that Unity's approach itself is also changing a bit to cater better to the demands of power users while at the same time keeping the easy accessibility and straightforwardness that has made Unity attractive in the first place. So it has to be a system that comes with a certain level of prefabrication while at the same time providing extensive hackability.

    (*) That part has been a bit of a funny thing to me sometimes. I found that often, when there was talk of "let's keep it simple", it ended up equating more or less to "let's make it address only the use cases I have". But once you make the rounds, it turns out that even with something as "simple" as input, you end up with a relatively diverse set of requirements. And yeah, the old system was simple. But it was also falling way short. I think there's no way around the new system being more complex (BUT not necessarily more complex to use) given that we simply demand a lot more from it.

    Both. And TBH I don't see any other way. Unity is more than a hardware/platform abstraction. If the first thing you need when writing input code for your game-jam project is to download a third-party Asset Store package to make Unity's input system more than a bare-bones hardware abstraction... that's no good in my eyes.

    I think the system *has* to have a means of abstracting away from raw input devices and dealing with "logical" game inputs. It shouldn't be something you have to use and it should be built as a toolbox rather than as a monolithic chunk of "the Unity way to do input", but I think without it the input system would simply fail to serve a substantial number of users well.

    That said, it may totally be that the system will ultimately ship as two separate packages, for example. One just devices&controls and the other all the action stuff. Let's see.

    Yup, that.
     
    callen, Shizola, elbows and 4 others like this.
  25. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,629
    I mean, cool. If you can do all that without increasing complexity and without losing performance.

    I think what everyone here is afraid of is that the higher-level stuff becomes more and more complicated, which means it requires time to learn to use properly and in the end is kind of limited.

    The above happens often (more or less) with Unity's other systems that have been mentioned in this thread.

    Imagine this: you spend time learning Unity's new high-level tool (which has a non-trivial learning curve), only to find out it's too bloated, or it's not fast enough, or it doesn't do exactly what you want it to do. And then you end up rolling your own solution / learning the low-level API, which in hindsight is what you should have done in the first place.

    I'm not saying this is the case with the new input stuff (I haven't played with the preview build yet to see any indications either way); I'm just trying to explain the fear some people here have expressed (me included).
     
  26. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Something like a built-in, Unity-native Blueprint-style visual scripting tool would IMHO dramatically ease pressure on practically every other part of Unity, as all the HLAPI / UI stuff (cruft) is for people who can't or won't program in C#.

    That's offtopic though and not a reflection of the work being done with Input which is not out yet or final. Will bow out of thread now.
     
    Lars-Steenhoff likes this.
  27. Ferazel

    Ferazel

    Joined:
    Apr 18, 2010
    Posts:
    517
    I think that happens a lot with any "full-stack" solution. You start out thinking it's going to work for you, but as an engineer you want your app to be unique in some way that the full stack didn't account for. At least now we have the option to use the lower-level API to roll our own, which has not always been the case when working with Unity.
     
  28. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,629
    (we're getting off topic, so I'll stop posting on this matter after this reply)

    I don't think that's the issue. I mean, I get it: either the solution from Unity is so robust that it might not perform as well as I want it to and using it becomes almost as complicated as coding my own solution, or it's so specific that it will probably not cover my use case.

    The solution is, cover the absolute basics and make it easily extensible so the users can code the rest.

    In the case of input, the basics are being able to quickly get a controller up and running, compatibility with a lot of controllers, hotplugging figured out in a sensible way so I don't have to worry about it, and something for vibration. How all of these interface with my game should be up to me.
     
  29. Lars-Steenhoff

    Lars-Steenhoff

    Joined:
    Aug 7, 2007
    Posts:
    3,521
    Please make it work for two players easily for local multiplayer.
     
  30. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    That's definitely a solution that'll work well for some section of users, and we're trying to ensure that no matter what else the system supplies, you'll always have the option of just restricting yourself to the basics (i.e. the devices&controls layer, which pretty much comprises the features you described) and ignoring all the rest.

    But I think a significant section of users would be quite ill-served by a system covering only the absolute basics.

    To me, a better approach is to create a system that *does* provide prebuilt solutions addressing various needs, but that at the same time assembles them out of pieces that can be reassembled, extended and deconstructed -- or even ignored entirely. You'll always have the option of just sticking to the APIs that cover only the basics, but the system as a whole shouldn't restrict itself to just that.

    Ideally, that should result in something that, even if it fails to meet some specific needs, can be bent to them.

    The use case is on the radar and the core mechanic needed for this case (and similar cases of contextualizing actions to specific devices) is there but for now, it does require manual coding of some setup code. Later there may be some helpers to simplify that.

    The previous iteration of the input system had a built-in device assignment system but we want to avoid having something like that in the core and rather provide tool APIs/components to easily construct that on top of the core.
     
  31. Lars-Steenhoff

    Lars-Steenhoff

    Joined:
    Aug 7, 2007
    Posts:
    3,521
    How about an input box in the input system GUI for how many local players you have? Just type the number: 1, 2, 4.
     
  32. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    789
    The basic low-level features will be a fixed part of Unity, but will the high-level things be something you download using the new package manager? (Or manually, like PostProcessing, Cinemachine, ...)
     
  33. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    What would that do, though? There still needs to be something that acts on that number and e.g. associates devices with players (at least on platforms other than consoles) and stuff like that. ATM the approach with action sets is that the system allows you to design sets but leaves it to you where and how they are used.

    There'll be at least one follow-up video on the action stuff and I would like to also go over how you'd set up local co-op with action sets in that video.

    Both are delivered through the package manager -- ATM in a single package.

    There is a small API that comes with Unity's native runtime (UnityEngineInternal.NativeInputSystem). For now there are no plans to document this API, but it is accessible to anyone who wants to venture down to that low level (it mostly transfers raw memory buffers).

    The input system package contains both the lower-level part of the system (devices&controls&events) as well as higher-level parts (the action system which is optional to use). At some point, we may break it into two packages or (more likely) may break it into two DLLs but even now, if you don't use the higher-level parts, the only extra cost is some extra C# code that sits there going unused.
     
    hippocoder and dadude123 like this.
  34. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,629
    Cool. I was never trying to imply I'm representative of the whole Unity user base; I was just trying to present my POV. And to put the matter to rest (since I probably made a bigger deal out of it than it is), reading your replies in this thread, I generally feel like you're on the right track. So that's that :)
     
  35. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    789
    I have a few questions about keyboard input.
    Will there be a good mapping to figure out the name of the key that was pressed?

    For example, at the moment there's a need for this: https://i.imgur.com/1vQZgOw.png

    Will the new input system have a built-in converter that works on all systems, with all keyboard layouts and system languages? Because all of that sort of breaks the mapping (requiring tons of special cases).

    Or is that out of the scope of the input system?
    Meaning only the device<->action API is currently the focus?
    And what about the Xbox 360 controller (for example): will there be a way to get the names of controls like "Analog Left", "B", "Down", "Back", "Right Trigger", ...?
     
  36. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Yup. And it's already (mostly) working. KeyControl changes its displayName property according to the current keyboard layout. The current layout (if that is even of interest) is available from Keyboard.

    So if you do something like

    Code (CSharp):
    GUILayout.Label(Keyboard.current.a.displayName);
    it will automatically display what's on the key according to the current layout.

    Assignment/naming of keys itself is done by physical layout. I.e. the "A" key (we use the US layout as reference for naming) is always the key to the right of the caps lock key.

    The same system applies here. The name of the "A" button on the gamepad is always "buttonSouth". However, the displayName will be that of the actual controller (ATM this isn't actually set by the XInput controller template but the mechanism to do this stuff is all there).
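
    So, analogous to the keyboard example above (once a template actually sets the display names, this would show e.g. "A" for an Xbox controller):

    Code (CSharp):
    GUILayout.Label(Gamepad.current.buttonSouth.displayName);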

    One thing that is still missing to complete the picture here is -- literally -- pictures. Or, more precisely, in general the ability to associate resources (images or even models) with controls. My current plan is to have something like a "resourceName" property on controls in the core system which can then be used to look up resources in a manager. This manager would sit outside the input system itself and there would be a reference implementation which can be used or ignored at will. Something like that. In the end, the thing in entirety should be able to supply both images for use in UIs (of individual controls as well as for entire devices) as well as models for display in the scene (most useful for VR). Once we have that kind of stuff, we'd also be able to have much better in-editor UIs for setting up bindings. But getting off-topic here :)
     
    orb likes this.
  37. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    789
    Wow, that's awesome. It's exactly what I hoped for. (Even though this comment scares me a bit! :p)
    The thing about linking images to the devices is pretty neat as well.
    So to get from an arbitrary (KeyCode+ShiftState+AltState) combo to a "displayName", I'd just enumerate all the key controls the keyboard device has to find the right keycode, and then get .displayName (or .shiftName / .altName depending on the states I have).
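
    I.e. I'm guessing something like this (allKeys/keyCode are my assumed names for whatever the enumeration API ends up being):

    Code (CSharp):
    // Guesswork sketch: map a key code to what's printed on the key
    // in the current layout.
    string DisplayNameForKey(Key keyCode)
    {
        foreach (var key in Keyboard.current.allKeys)
            if (key.keyCode == keyCode)
                return key.displayName;
        return null;
    }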

    I'm impressed that this wasn't overlooked.
     
  38. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Unfortunately, so far I've found no way to get shift and alt names on Windows (that's the bad news in that comment). On OSX, it's possible to pass in modifiers when querying the textual name of a key, so shiftDisplayName and altDisplayName give the expected result. On Windows, they don't work, which makes me wonder whether it's useful to even keep these properties.

    .displayName is unaffected and works on Windows as expected (though it does give uppercase versions of letters which I think is another nuisance with the Windows API).
     
  39. orb

    orb

    Joined:
    Nov 24, 2010
    Posts:
    3,037
    This is something too many systems screw up horribly, like using tilde for console commands… except they use the symbol, not the key to the left of 1 on the main section of the keyboard, so now you have to hunt it down in your non-US keymap -- if it exists at all. +1 point.

    I'd say keep only the features which translate directly across operating systems (on the same classes of input devices) in the main API. If you can find a way to make use of OS-specific features without affecting the others, that's nice, but at the higher-level portion of the API you'll probably end up masking the actual operations behind something more convenient anyway. Don't bother if it's easier to just do the same on all systems in your glue code.
     
  40. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Hah, I sense a fellow ex-Torque user? :D
     
  41. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    789
    Same thing in Half-Life 1, 2 and CS: Source as well... that was so frustrating :p

    Btw: getting the real key names at all is already a big improvement. Displaying "Shift+A" is perfectly OK as well. Getting vk_E + AltGr = '€' is just a bonus that's not really needed, so no problem as long as the un-shifted/un-alted key names are there.
     
  42. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    Will you support Windows.Gaming.Input API devices on the Win10 platform? WGI is available outside UWP as well, so it's not limited to Windows Store apps. The reason I'm asking is that the XInput API only supports the two rumble motors on Xbox gamepads, whereas WGI gives you access to the two additional trigger vibration motors on the Xbox One gamepad.
     
  43. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    For UWP, definitely. For plain old Windows, not at first, but I think when the UWP backend actually gets worked on (soon), we'll assess how much code we can share and things will become clearer.
     
  44. orb

    orb

    Joined:
    Nov 24, 2010
    Posts:
    3,037
    It was the best of times. It was the worst of times.
     
  45. Player7

    Player7

    Joined:
    Oct 21, 2015
    Posts:
    1,533
    So I just checked the GitHub project on 2018.1b2... bunch of errors... so that answers the question for me. But when will it work? Is it planned for 2018.1?
     
  46. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    It's not planned for 2018.1. We're still in the process of getting all platforms to catch up.
     
  47. Player7

    Player7

    Joined:
    Oct 21, 2015
    Posts:
    1,533
    ok then
     
  48. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    789
    There's a guy here (https://forum.unity.com/threads/detecting-users-mouse-sensitivity.513519/) asking about mouse input sensitivity, which reminds me of something strange:

    In Windows my mouse sensitivity is set just fine (obviously :p), but whenever I use some scripts from the standard character controller / camera, or the camera stuff from Cinemachine, the movement is always at least 20x too fast.

    So I have to scale it down a lot. Now I wonder why that is. Are the default values just bad (on purpose, to force you to change them)? Or is there something else going on?
    Is Unity doing some sort of processing on the input events?

    I mean, the normal Input.mousePosition returns the correct values, and the deltas are fine too.
    So I guess it can only be how the scripts get their values (most use the Horizontal/Vertical axes).

    But wouldn't the guys who made the camera script from the standard assets, or the Cinemachine guys, have noticed that something is off while testing?

    Or does everyone at Unity use an Xbox controller for their tests? haha :D
     
  49. wuzibu

    wuzibu

    Joined:
    Dec 15, 2014
    Posts:
    10
    I'm looking forward to it! Could you guys consider the UI as well? I have 3-player local co-op running and every player is able to open their own in-game UI (inventory and skills). It was a pretty big hassle to get that running before this new system. I'd be happy if it doesn't take weeks again for the same task :)
     
  50. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    My guess is that with the off-the-shelf scripts, one problem is simply that there isn't a way to adjust mouse sensitivity. But I wouldn't be surprised if the axis handling in the old system is compounding the problem somehow.

    My hope is that for the new system we can provide better defaults by trying to adjust for mouse precision, but I have doubts that mouse precision reporting will turn out to be very reliable. I think apps dependent on deltas will always be better off coming with user-adjustable sensitivity settings. If the code manages a good default, that's great. But if not, the user should be able to adjust.
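
    I.e. on the app side, something as simple as this (just a sketch of the kind of user-adjustable setting I mean):

    Code (CSharp):
    // Sketch: apply a user-adjustable sensitivity to raw mouse deltas.
    using UnityEngine;
    using UnityEngine.Experimental.Input; // preview namespace; may change

    public class MouseLook : MonoBehaviour
    {
        public float sensitivity = 1f; // exposed in the game's settings UI

        void Update()
        {
            var mouse = Mouse.current;
            if (mouse == null)
                return;
            var delta = mouse.delta.ReadValue();
            transform.Rotate(0f, delta.x * sensitivity, 0f);
        }
    }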

    But TBH my reply ATM is only partially qualified, as I haven't looked too deeply into this yet... Good that you're bringing it up.

    Could you elaborate a bit, @wuzibu? Do you mean going into more detail in the video on how to hook up your own per-player UI code to action maps? What were the things that made this so time-consuming with Unity's current systems?
     
Thread Status:
Not open for further replies.