
Input System Update

Discussion in 'Input System' started by Rene-Damm, Dec 12, 2017.

Thread Status:
Not open for further replies.
  1. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    //UPDATE: For instructions on how to install it via package manager, please see here.

    //UPDATE: Mentions of preview builds are outdated. You can use the "develop" branch in the GitHub repo with any Unity 2018.3 version or 2019.1 beta. No special build is required.


    Hey everyone,

    First, we’re sorry we’ve been radio silent for a bit. We wanted to make sure we had more than just a “Hey, we are still working on it.” So… what’s been happening?

    After an evaluation by several teams inside Unity, we found things falling short in a number of ways, chief among which were performance implications of the event model we had chosen. After reviewing what we had to change, it became clear that while our C++ parts were headed in the right direction, our high-level C# layer was going to have to be fundamentally changed and this meant a rewrite. Having gone through what’s already been a rather protracted period of development, it was a bitter pill to swallow but we really think it was the right thing to do given that whatever becomes final will be final for some time to come.

    After spending time back at the drawing board, we’ve made rapid progress implementing the new system and are getting close to having feature-parity with the previous system. We’re excited about how things have turned out and are seeing solid progress with the issues we’ve previously identified. We’ve also gone through another round of internal reviews getting a much more enthusiastic response.

    However, to get to the next stage, we want to open development up to a wider audience to make sure we’re hitting all the right targets. To this end, we’ve made our C# dev repo public as of today and will be providing preview builds of the editor shortly (our goal is to have them available before Christmas) so you can run the system yourself and be involved in shaping its final form.

    Be aware that things are still under heavy development. What you’re seeing isn’t 1.0 or even 0.7. This is not a polished “final” release and all accompanying materials are work in progress.

    Beyond the preview builds, our plans (disclaimer blablabla) are to land our native backend changes in a Unity release and to make the C# code available as a Unity package. By that point, anyone will be able to use the system with a public Unity build.

    Of course, throughout that process we will listen to feedback and adapt. We’re trying our best to take extra care we’re getting it right.

    Q&A

    What are the main differences to Unity’s current input system?
    Whereas Unity’s current system is closed off and keeps data about device discoveries and input events internal, the new system sends all of that data up to C# and does its processing there.

    This means that the bulk of the input system has moved into user land and out of the native runtime. This makes it possible to independently evolve the system as well as for users to entirely change the system if desired.

    Aside from this fundamental architectural difference, we’ve tried to solve a wide range of issues that users have with the functionality of Unity’s current system as well as build a system that is able to cope with the challenges of input as it looks today. Unity’s current system dates back to when keyboard, mouse, and gamepad were the only means of input for Unity games. On a general level, that means having a system capable of dealing with any kind of input -- and output, too (for haptics, for example).

    How close to feature complete is the system?

    There still remains a good chunk of work to be done on actions and especially their various editing workflows. Output support (rumble/haptics) has a design in place but implementation is still in progress. Also, while desktop platforms are starting to be fully usable, there still remains a good deal of work on the various other platforms. Documentation also needs major work. There’s lots of little bits and pieces still missing in the code. And, finally, there’s a stabilization pass that hasn’t happened yet so a good solid round of bug fixing will be required as well.

    Beyond that we’re working on equipping the system to function well in the world of C# jobs and the upcoming ECS.

    However, we don’t think the system has to be 100% feature complete to be useful to users and instead are aiming for a baseline set of functionality to be fully completed by the time anyone can use the system with a public Unity build. From there we can incrementally build on it and ship updates through Unity’s package system.

    How can I run this?
    The C# system requires changes to the native part of Unity. ATM these are not yet part of a Unity release. We will make preview builds of the editor based on our branch available shortly which can then be used in conjunction with the C# code in the repository. As soon as we have landed the native changes in a public release, everyone will be able to run the code straight from a normal Unity installation.

    Are action maps still part of the system?
    Yes. While the action model has changed and there’s still work to be done on actions (and *especially* on the UI side of them but also with things like control schemes and such), actions and bindings are still very much part of the system. Take a look at InputAction and InputActionSet in the repo.

    In the old model, actions were controls that had values. In the new model, actions are monitors that detect changes in state in the system. That extends to being able to detect patterns of change (e.g. a “long tap” vs a “short tap”) as well as requiring changes to happen in combination (e.g. “left trigger + A button”).
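    To illustrate the idea, here's a minimal sketch based on the InputAction type mentioned above; the exact constructor arguments and callback names are assumptions, not final API:

    Code (CSharp):
    // Hedged sketch: an action that monitors a gamepad button for state changes.
    // Constructor arguments and the callback name are assumptions.
    var fire = new InputAction(name: "fire", binding: "/<Gamepad>/buttonSouth");
    fire.performed += ctx => Debug.Log("Fire!");
    fire.Enable();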

    Is it still event based?

    Yes. Source data is delivered from native as an event stream which you can tap into (InputSystem.onEvent). The opposite direction works as well, i.e. you can send events into the system that are treated the same as events coming from native.
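    As a rough sketch of tapping that stream (the delegate signature here is an assumption based on the public repo):

    Code (CSharp):
    // Hedged sketch: observe every raw input event as it comes through.
    InputSystem.onEvent += eventPtr =>
    {
        Debug.Log("Input event for device #" + eventPtr.deviceId);
    };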

    Is it extensible?
    Yes. Being able to add support for new devices entirely in C# has been a key focus of the system. We’re still polishing the extensibility mechanisms but the ability to add new devices without needing to modify the input system is already there.

    What were the performance problems of the previous event model?
    The previous event model was very granular. Usually, one control value change meant one event. Also, events had fully managed representations on the C# side which required pooling and marshaling. Events, once fully unmarshalled, were sent through a routing system which additionally added overhead. This was compounded by a costly way to store and manage the state updated from those events.

    In the new event model, all state updates are just memcpy operations and events contain entire device snapshots. Event and state data never leaves unmanaged memory and there is no routing. This model is also much better equipped to work with the upcoming C# job system (where a C# InputEvent class will become unusable).

    I’m seeing things in a namespace called ‘ISX’. What’s that about?
    This is temporary. Ideally, we would like to use the UnityEngine.Input namespace but given that’s a class in UnityEngine.dll, that comes with problems. We’re still thinking about the best approach here or whether to just use a different namespace inside UnityEngine but it’s still TBD.

    The name itself comes from the fact that the system initially had the internal name “InputSystemX”.

    What are the plans for migrating to this from the old system?
    For now, the two will stay separate and exist as two independent systems in Unity side by side. The existing input system in Unity is such a fundamental part of pretty much every Unity project that we cannot safely consider a migration until the new system is fully ready in terms of both functionality and stability.

    Projects will have a choice of which system to use (including using both side-by-side) and will be able to turn one or the other off (with the new system being off by default for now).

    Once there is truly no reason anymore to use the old system over the new one, we can start thinking about what to do about the old one.

    Are the APIs that are already there reasonably close to final?
    At this point, it’s still too early to tell. There may still be significant changes.
     
    Last edited: Dec 20, 2018
    makaka-org, NotaNaN, DrViJ and 46 others like this.
  2. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    Awesome news, thanks!

    I really like the approach of releasing these things as external packages (just like the Post-Process stack). It gives easy source code access and prevents bloating the engine with tons of things. Other systems that would benefit from being released as external packages are the UI and UNet HLAPI. These are two things that I often wish I could easily modify without having to recompile a DLL

    So basically I wouldn't mind if this remains not-integrated into Unity. Maybe add them to a list of "official packages" that users can select when starting projects, if you're worried about visibility
     
    Last edited: Dec 13, 2017
  3. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    789
    Cool!
    I've got a few questions, some of them were already answered in the wiki, so here's the rest:


    1)
    Will we be able to do custom constraining of the cursor?
    In the best case that'd be SetCursor(screenPos);
    I want to have the cursor wrap around the edges just like when you drag a value in the unity editor (getting close to an edge will put your cursor on the other side of the screen)
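    Something like this purely hypothetical sketch (SetCursor is the API being asked for here, not an existing call):

    Code (CSharp):
    // Hypothetical: wrap the cursor to the opposite edge when it leaves the screen.
    Vector2 WrapCursor(Vector2 pos)
    {
        pos.x = (pos.x + Screen.width) % Screen.width;
        pos.y = (pos.y + Screen.height) % Screen.height;
        return pos;
    }
    // SetCursor(WrapCursor(currentCursorPos)); // the requested call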

    2)
    I've read the wiki and I've watched some of the videos as well, but it is still not quite clear to me how much delay exactly there will be between a physical input and the earliest point in the system where we can react to that.
    Will there be some "slow" (as in less than mouse update rate, <1000hz) buffering going on somewhere that delays the events in a "flush every Update()" fashion?

    3)
    Here: https://github.com/Unity-Technologies/InputSystem/wiki/Technical-Notes
    It says that there's a third option. But I don't get it.
    Either you have events, or you poll, right?
    Or is the event triggered by some user method call like "UpdateAllInputEvents()" or something like that? Is that what's being said here?

    If not, then how is that different from simply having a polling api and a event based api available at the same time? I think I'm missing something here. Would be nice if you could elaborate on that.

    4)
    Integration with the job system sounds nice. But I don't quite understand in what scenarios that would make sense. Input isn't something that takes a lot of time to calculate, so the only thing I can imagine would be that we'd get full async events pushed to us on a Job-Thread (for absolutely zero delay, which would be awesome).
    Is that right? Can you give an example scenario here?

    5)
    This is more like a comment.
    You said you didn't just want to say "hey we're still working on it".
    But I think if you take a look at the forums, the people would have reacted completely differently if you just did that :)
    You could have said "Hey, still working on it, need to do a full rewrite bc performance sucks atm, the experienced people here will know that its for the best, cheers".
    That would have reduced all the negative and inflammatory comments by 90%


    Anyway, I'm really happy about the way things are.
    The new system looks really really promising, I like it.
    And I'm happy that you guys decided to do a rewrite rather than accept GC pressure and/or lower-than-optimal performance.
    My only concerns are SetCursorPosition and input-delay, and those are minor.
     
    Last edited: Dec 13, 2017
  4. Xarbrough

    Xarbrough

    Joined:
    Dec 11, 2014
    Posts:
    1,188
    Great news, thanks! :)
     
  5. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Thanks @dadude123.

    ATM I'm still figuring out how to best handle pointer positions especially with respect to multiple displays and the fact that the input system can be used in EditorWindows as well (not just in game code).

    What you can already do is put custom processors on pointer position controls which modify the stored value the way you want to when it's being queried. Also, I think that once output support is complete you'll be able to set the value on arbitrary InputControls -- though that needs figuring out how it'd work with actions (which need to be able to observe state changes).

    I'll have a think about your use case and how the API could best support it.

    By default, yes, there's buffering. However, unlike in the old system, the buffering happens on the source data, not the aggregated data. What this means for mouse input on Windows, for example, is that it would sample mouse input at probably a higher rate than your framerate (IIRC the default on Windows is 120Hz) so there'd usually be multiple state updates for a frame, all of which come through in C#. Also, where it's on us to sample rather than the system (XInput, for example), we're making sure sampling happens at user-controlled frequency rather than framerate.

    Preconfigured input updates, where we flush out data, ATM happen right before fixed updates and right before dynamic updates. Either one can be turned off.

    Buffered data can be flushed out manually -- which, however, ATM only benefits you in the case of data that isn't fetched from OS-supplied event queues that are tapped on the main thread's player loop.
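    For example, a minimal sketch assuming the InputSystem.Update() entry point from the repo:

    Code (CSharp):
    // Flush buffered input events into the system right now instead of
    // waiting for the next scheduled update.
    InputSystem.Update();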

    Finally, there's work under way to give C# game code control over the arrangement of the player loop so with that in place, scheduled input updates can also be moved around within the loop.

    Events are actually the third option. The notes aren't worded well. It's talking about 1) callbacks, 2) polling, or 3) events where events in this case means a "give me all the events that have accumulated" style rather than "we notify you whenever there's an event".

    Yup, indeed, the concern isn't the processing overhead of input but more the logic that is affected by input. Say your entire game logic is chunked into nice jobs with clear dependencies. But then, part of that logic depends on the state of the input system. If you can't get to that data in your jobs at the point you need it, then you have that ugly sync point that forces you back on the main thread. You can put it up front, before scheduling any jobs, but that's still a constraint on what you can and can't do in your logic.

    Believe me, this has led to some heated internal discussions :)
     
  6. Hertzole

    Hertzole

    Joined:
    Jul 27, 2013
    Posts:
    422
    Very good news! Definitely looking forward to seeing this develop!

    I do also have some quick questions about this:

    1. Will/Does it have controller rumble support?

    2. Can it handle multiple gamepads and if so, how? Any chance for an example in some way?

    3. During the development for this, will there be any example scenes/projects provided so we can see how things are done in Unity? IIRC the original prototype also provided some example scenes and scripts.

    4. Is there a way for us to detect what the user is currently using? Like some event that fires whenever the player switches from gamepad to keyboard. If not, there should be! It makes it easier to adapt the game to whatever the user is using right then and there.

    Very excited to see how this turns out! Looks very promising! Just please don't kill this new system! :)
     
  7. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Yup, working on it :) On the native side, we have the foundational pieces in place but on the managed side, we're missing the part where you can write into state and have the updated state reach the underlying backend. It won't be in place for the preview builds but it's high up on the list of things to get finished.

    Yes, though "handling" can refer to a lot of things :)

    You can have arbitrarily many gamepads (or any other type of device -- there are no device-count limits in the system).

    Code (CSharp):
    // Find the last gamepad the user used.
    var gamepad = Gamepad.current;

    // Find all gamepads.
    var gamepads = Gamepad.all; // In the API but not yet implemented; will throw.
    gamepads = InputSystem.devices.Where(x => x is Gamepad); // Filter the full device list.
    gamepads = InputSystem.GetControls("/<Gamepad>"); // Every device using the gamepad template.
    With actions, you can have a set of actions, for example, and then use that same set for a 4-player local coop scenario:

    Code (CSharp):
    InputActionSet UseActionsWithGamepad(InputActionSet set, Gamepad gamepad)
    {
        var clone = set.Clone();
        clone.ApplyOverridesUsingMatchingControls(gamepad);
        clone.Enable();
        return clone;
    }

    // Determine which gamepad to use for which player in some way specific to your game.
    var player1Actions = UseActionsWithGamepad(myGameControls, gamepad1);
    var player2Actions = UseActionsWithGamepad(myGameControls, gamepad2);
    var player3Actions = UseActionsWithGamepad(myGameControls, gamepad3);
    var player4Actions = UseActionsWithGamepad(myGameControls, gamepad4);

    // In reality, you'd probably have a component representing the input for one player
    // and then have four player GOs with that component...
    There'll be more refined ways to work with multiple devices of the same type and sets of actions being applied to them but the core of it is there.

    Absolutely. ATM what we have is.... nothing much. We're hoping to get the R&D content team to help us out there. As we get closer to full release, there will definitely be more polished supporting material and more examples to work off of.

    ATM it's possible by listening to the input stream. Events tell you which device they are for, so you can see when the user changes from one to the other. However, noisy devices make that tedious (e.g. the PS4 gamepad will constantly spam the system due to having sensors in the device).
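    A minimal sketch of what that could look like (GetDeviceById and the event member names are assumptions based on the public repo):

    Code (CSharp):
    // Hedged sketch: track the last device that sent an event.
    InputDevice lastUsed;
    InputSystem.onEvent += eventPtr =>
    {
        var device = InputSystem.GetDeviceById(eventPtr.deviceId);
        if (device != null && device != lastUsed)
        {
            lastUsed = device;
            Debug.Log("Input now coming from " + device.name);
        }
    };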

    Eventually, there will hopefully be more refined mechanisms.

    :D No one wants to see that happening. Least of all anyone who went through the reboot...

    We're very confident we addressed what needed to be addressed and are on a really good track going forward.
     
    Last edited: Dec 13, 2017
  8. orb

    orb

    Joined:
    Nov 24, 2010
    Posts:
    3,037
    Has the mythical Unity input system beaten Spider-Man's number of reboots record yet? It looks like this iteration at least has more than just a cool new suit :)

    (Insert monthly rant here about Unity's age relative to the suckiness of its input system.)
     
    ThaiCat, Senshi and Thomas-Pasieka like this.
  9. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Hopefully http://www.isthenewinputhereyet.com/ won't be up long.

    @Rene-Damm One of the great things about Rewired is the colossal number of joypads it just recognises and maps to the internal layout. So my code just uses internal names (we prefer Xbox naming) and that maps spatially to whatever pad is plugged in, so if the internal name is .LeftBumper it'll always be in that place.

    Any plans for this? Without it, I don't think Unity's new input is worth using, because with Rewired you can just deploy on Steam and practically nobody needs to remap their weird Logitech or ageing PS3 pad.

    Rewired has some 900+ mappings so far (a few no doubt just variations), but still, it's something that raises the quality of a launched product in a real, gamer-perceived way, versus invisible input behind the scenes that the gamer doesn't experience.

    Thanks!

    PS. Hopefully it'll be as simple as InControl to use (rewired takes a few too many editor-side setups for my taste even if it is more powerful).
     
  10. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    I don't think that's fair. In at least the latest iteration, Spider-Man also got an annoying child actor with lots of unfunny one-liners.

    Agreed that consistency of mappings is a big deal. If you can't rely on the A/south/cross button to be in the spot you expect it to, that's not much good.

    IMO this has two axes: 1) consistency across platforms and 2) consistency across a specific interface (especially HIDs on desktops).

    1) is something we're addressing from the get-go. Unity's current input system is 90% platform-specific code with huge variations, given that all the interpretation of "what does this input mean?" happens on a per-platform basis. What we're going for now is both much less platform-specific code and more consistency between the platform-specific parts. I think this step alone will make a big difference.

    2) I think will partially be something that will build up over time as we add profiles for devices to the system. We did license InControl's roster of profiles but things have changed so much that we'll have to re-evaluate at some point what we can still bring over in some form. Consistent support for XInput and PS controllers will be there from the get-go, though.

    It does pertain mostly to a very specific segment of input devices, though (namely gamepads and joysticks). The system overall aims to address input more comprehensively.
     
    Daerst, orb, MechEthan and 1 other person like this.
  11. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Really good work. S*** happens sometimes during dev, but we're all rooting for team input. After all you even have your own website now :)
     
    Diet-Chugg likes this.
  12. Eideren

    Eideren

    Joined:
    Aug 20, 2013
    Posts:
    309
    As you probably already know, the old input system has a pretty high latency compared to other engines ( https://forum.unity.com/threads/inp...-new-at-least-mouse-does.404512/#post-3002560 ). I know you already talked about delays in terms of input buffering, but what average delay can we expect from physical device input to rendering (at ~60fps, for example) using a default setup?

    Do we have access to actual, precise raw axis values this time? On my end at least, Input.GetAxis() and GetAxisRaw() report very imprecise mouse input (always rounded to the nearest *.5 value, nothing in between), while other input systems I tried in raw C# or C++ didn't have the same issue.
    Here's the script I used to reproduce it:
    Code (csharp):
    using UnityEngine;

    public class InputTest : MonoBehaviour
    {
       void Start()
       {
          float p = 1.2741f;
          Debug.Log("Here's a random float to string to compare formatting: " + p);
       }

       void Update()
       {
          // "Mouse X" is the default axis name in the input manager.
          Debug.Log("GetAxis: " + Input.GetAxis("Mouse X"));
          Debug.Log("GetAxisRaw: " + Input.GetAxisRaw("Mouse X"));
       }
    }
     
    goncalo-vasconcelos likes this.
  13. Ferazel

    Ferazel

    Joined:
    Apr 18, 2010
    Posts:
    517
    Thank you very much for the update. Please keep them coming!
     
  14. Senshi

    Senshi

    Joined:
    Oct 3, 2010
    Posts:
    557
    First off, awesome to hear that this is still happening! I agree an in-between update would have been very welcome, but oh well. My questions are less about the input system itself, but might as well.

    I am really liking this trend of using Unity Packages here, but my main concern with them revolves around the discoverability of the packages. I would love to see them more closely integrated with the editor itself. Since Unity phones home and checks for updates every so often anyway, I would love to see it fetch information about available packages and create entries similar to the regular Standard Assets (i.e.: Assets > Import package > InputSystem (Opens in Asset Store))

    Since the system also deals with hardware outputs, how about just UnityEngine.IO?

    Regardless, great work so far! Cheers!
     
    landon912 and orb like this.
  15. superpig

    superpig

    Drink more water! Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,659
    We agree, and we're working on something in that direction :)
     
    landon912, Matt_D_work, orb and 2 others like this.
  16. RSpicer

    RSpicer

    Joined:
    Feb 27, 2014
    Posts:
    9
    Are you planning to support multiple controllers for applications like simulation (where my users likely have a control stick/wheel, some pedals with three or more axes, and maybe a button box or three)? I understand that this is a pretty niche case, and from talking with the Rewired folks I understand that consistently identifying USB input devices, especially in cross-platform code, is a pain. That said, my mgmt and I would be over the moon if Unity let me query things like the USB VID/PID after Gamepad.all so I could roll my own persistent mappings.
     
  17. JakubSmaga

    JakubSmaga

    Joined:
    Aug 5, 2015
    Posts:
    417
    https://forum.unity.com/threads/what-is-packman.482001/#post-3135334 ;)
     
    Senshi likes this.
  18. Brad-Newman

    Brad-Newman

    Joined:
    Feb 7, 2013
    Posts:
    185
    Cool stuff. XR cross-platform input is currently a hassle devs have to juggle. Is native XR input handling dependent on this new input system being completed, or is XR input going to be part of this WIP branch?
     
    Akshara likes this.
  19. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    We don't yet have hard numbers for you, as this aggregates a host of possible setups, but I definitely think that in addition to giving you a certain level of control, the system also needs to give you an idea of what to expect.

    It does depend quite a bit on what device we're talking about. For things that get tapped on the main thread ATM (stuff like the mouse on Windows), there is a platform-specific distance between that and where the player loop brings input into C# land -- which happens as the first thing in fixed and dynamic updates. There's stuff in the works to give more control over the placement of input processing which would allow you to move things closer.

    For asynchronously collected input, we either pick things up as fast as the system produces (for devices that notify us) or pick things up at user-controllable frequencies. If you want the freshest possible set of async data, you can explicitly flush things out yourself. Out of the box, the system does that kind of stuff for head tracking where we need an extra update right before rendering.

    Is that at least a somewhat satisfactory answer? :)

    Yup, you do. We perform as little processing on the native side as possible and send the data up to C# as raw as possible. There's all kinds of conditioning you can do in the C# system for delivering final values (deadzone processing being the most obvious example) but you always have access to the raw unprocessed source data we picked up.

    When the preview builds are out, it'd be great if you can give it a whirl and see if the data does match your expectations.

    Glad the packman guys jumped in this thread and yup, 100% on board. The existing package management solution we have in Unity is weak and I'm super happy about what's brewing there ATM. Just looking at .NET dev in general with NuGet (or any other contemporary dev ecosystem for that matter), it's really something we're missing. But that's about to change :)

    That's an interesting idea. Let me have a think on it.

    The one worry we have with the namespace is that we assume users (especially new ones) will likely expect anything input-related to be called something with "input". Which would get doubly confusing if there is something called UnityEngine.Input and it's *not* the right thing to use. But maybe we're overestimating how much importance that really has (definitely interested in hearing opinions).

    Absolutely. We consider being able to accurately map whatever input device is connected to Unity to be a key aspect of a proper input solution.

    For HIDs, the Rewired folks are definitely right that it can be tricky and will often require resorting to a database of per-product data about how to make sense of a specific device (thus big controller matrices like http://guavaman.com/projects/rewired/docs/SupportedControllers.html).

    ATM we have a two-pronged approach that will hopefully have you covered.

    Product-specific templates can be built to specifically deal with individual devices. This is the database-style approach. You can see an example here.

    Additionally, there's a fallback path that represents a best effort when there's no product-specific template in place. Using the HID descriptor, it tries to figure out how to best represent the device in Unity. ATM this is still very bare bones. You can see it here.

    Note that HID data from the platform is now available 1:1 in managed code (we are literally working with raw HID reports in C# code). If what comes out of the box proves insufficient, you can always set up support for your specific HIDs entirely in user space without having to modify the input system. Vendor and product IDs are available to you (see here).
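    As a rough sketch of inspecting that data (property names follow the repo's InputDeviceDescription but may change):

    Code (CSharp):
    // Hedged sketch: see what the system knows about each connected device.
    // For HIDs, the raw descriptor data (including vendor/product IDs) comes
    // through in the description as well.
    foreach (var device in InputSystem.devices)
    {
        var description = device.description;
        Debug.Log(description.manufacturer + " " + description.product);
    }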

    That said, HID is where it ends. We do not ATM have support for other USB device classes. I.e. if your device is USB but not a HID, it won't get picked up ATM.

    We are working closely with the XR team in Bellevue and are working towards converging our efforts when we land the native changes in a public Unity release. At that point, XR input should be a first-class citizen in the system next to the other types of devices.
     
    Last edited: Dec 13, 2017
    JakubSmaga likes this.
  20. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    A possible idea is that the new input system use the same namespace as Input, so we have Input.X.
    Then later on, when Input is deprecated, both Input.X and Input point to the same thing.

    Or, actually. Now you've done it. Precious input code harmony is never achievable again :/
     
  21. Senshi

    Senshi

    Joined:
    Oct 3, 2010
    Posts:
    557
    Absolutely! This is the first I'm hearing about it and it sounds very promising indeed. Also thanks @superpig and @JakubSmaga for the info/links!

    That's a very diplomatic response, haha. I completely understand (and share) that concern, though I fear this problem will exist regardless as long as the old Input class remains included (and visible right out the gate).

    If I were a new user and see both Input and InputSystem classes side by side, I would probably jump straight to Google. But if the new one's hidden behind an additional namespace and (Asset Store) package I don't think I'd even know of its existence. So at first glance I actually expect the discoverability problem to be wider than the namespace. In fact, once I know it exists and that I need to add a using directive, I'm not sure how much it would matter what it's called (within reason). And if the new namespace is included by default this would hold especially true.

    It's a tricky issue for sure. Also, since both will exist side-by-side for a while, will Input at least be marked as deprecated/ legacy? If not, what if both input systems would live within a UnityEngine.Input namespace? Old scripts could easily remain valid with the automated upgrade process (just import the namespace) and the clash would be avoided (then: UnityEngine.Input.Input class). [/stream of consciousness]
     
  22. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    While it's technically possible to do so, the compiler will still spam about the naming conflict between the UnityEngine.Input class and the UnityEngine.Input namespace (as it's still there, even if just used as a parent).

    If we're okay to live with the conflict, we could actually even make the jump straight to UnityEngine.Input. But it does kinda seem like not solving the problem but rather just living with it. If it was spamming just for the InputSystem DLL compile, I think I'd be okay with it. But it'll spam for everyone actually using the system and that's... not nice :)

    That's a good point. And I think you're right that having the two systems coexist side-by-side will create some confusion no matter the naming of the new system.

    Unfortunately, I think we won't be able to mark the old system deprecated until users really have an out-of-the-box, zero-functional-regression, and reasonably-finished alternative. Otherwise it's a bit like putting Mecanim v1 out there and immediately marking all existing animation features deprecated. I think that would have caused a lot of chagrin.

    The problem is that the existing UnityEngine.Input has methods and properties. These cannot be moved anywhere but to within another class. So the only way to move both systems into the UnityEngine.Input namespace would require leveraging the script updater to move all the method and property uses around in existing code.
     
    hippocoder likes this.
  23. Albertomelladoc

    Albertomelladoc

    Joined:
    Aug 1, 2017
    Posts:
    12
    This seems really nice! Thanks for the update. Will the vibration for gamepads be updated? Is it planned to enable custom durations for the vibrations in Handheld.Vibrate?
     
  24. Matt_D_work

    Matt_D_work

    Unity Technologies

    Joined:
    Nov 30, 2016
    Posts:
    202
    Yes, we're working on leveraging ISX for crossplatform XR input :)
     
    MechEthan and Akshara like this.
  25. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Yup. Implementation is still ongoing but API-wise the idea looks like this ATM:

    Code (CSharp):
    Gamepad.current.leftMotor.value = 0.5f; // Go half-speed on low-frequency motor.
    Still a number of things to sort out, though.

    The idea is for a device to be able to determine its own set of haptics-related/output controls just like the device already determines its input controls. This would go all the way up to things like a device having a control representing an audio buffer into which you can shove a waveform pattern.

    While in theory this could be used to give a device representing the current handset a more sophisticated haptics interface including duration, I think we're missing a bigger piece there. I think ideally there'd be a system that can control playback of haptics effects against simpler output APIs and thus give you control over duration, for example, regardless of whether the underlying device supports that directly or not.

    However, that part is still a good stretch away from where the code is ATM.
     
    hippocoder likes this.
  26. BlackPete

    BlackPete

    Joined:
    Nov 16, 2016
    Posts:
    970
    As long as both input systems are available, confusion will be inevitable. So I'd say don't overthink this, and just pick a name for the new namespace and move on. We can always assign a new name ourselves using a using statement if we deem it necessary. Or just pick a jazzy codename like "UnityEngine.JazzyInput" or something to make it as distinct as possible from the old system. :D

    I do feel that "IO" is too high level because in .NET, System.IO is where a lot of file/stream IO live, which can be confusing in itself.
     
    hippocoder likes this.
  27. Skjalg

    Skjalg

    Joined:
    May 25, 2009
    Posts:
    211
    Will you be able to set the fps to 1 when rendering static screens and still be able to poll input faster than 1 time per second when using this?
     
  28. BlackPete

    BlackPete

    Joined:
    Nov 16, 2016
    Posts:
    970
    This is one thing I'm looking forward to -- the ability to refuse input from an Xbone controller when I'm expecting to read from a Touch controller (the fact they share the same axis IDs is troublesome...)
     
  29. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Heh, good point. Did like the shortness of "IO" but didn't think of the ambiguity with System.IO.

    You're probably right. I like the idea of just going with "UnityEngine.InputNew" (immediately obvious which one is old and which one is new) and then at a stable point in the future, using the script updater to both relegate the current UnityEngine.Input to something like UnityEngine.Input.Deprecated and to move UnityEngine.InputNew over to UnityEngine.Input.

    This is an interesting one. We've been bouncing possible solutions for this kind of usage scenario around. Our current thinking is to add a function that will block until either a specified time has elapsed or there's input available from either background or foreground feeds (read: OS message pumps). With that in place, we think there are multiple avenues this can be exploited to set up throttling scenarios. However, no work has happened yet to put this in place, and it will unfortunately require additional per-platform work. I want to give it a try on just Windows but will have to get some other pieces done first.

    So.... while probably disappointing to hear, all I can offer ATM is that we have it in the picture but there's a good chance it won't make it into the initial rollout.
     
  30. movra

    movra

    Joined:
    Feb 16, 2013
    Posts:
    566
    I'm out of the loop. What C# jobs? Upcoming ECS?
     
  31. BlackPete

    BlackPete

    Joined:
    Nov 16, 2016
    Posts:
    970
    Full story here:
     
    movra likes this.
  32. MadeFromPolygons

    MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,982
    Irritating to hear yet again, but thank you for letting us know; that's what we wanted.
     
  33. Roywise

    Roywise

    Joined:
    Jun 1, 2017
    Posts:
    68
    @Rene-Damm I've asked this before but I'd like to know if Pen / Stylus input is being thought about. For our project we have to support Pen / Stylus input with pressure values coming from the Pen/Stylus etc.
     
  34. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Thought about and worked on :)

    There's a Pen device class that has pressure, tilt, and twist controls. ATM only the Windows backend actively implements pen input, but that'll change. Note that since the system works in both play and edit mode, it'll be possible to write editor extensions supporting pressure-sensitive pen input.
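    A minimal sketch of reading it (the control names follow the repo, but the exact accessors are assumptions):

    Code (CSharp):
    // Hedged sketch: read pressure from the current pen, if any.
    var pen = Pen.current;
    if (pen != null)
        Debug.Log("Pen pressure: " + pen.pressure.value);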
     
  35. Dreamback

    Dreamback

    Joined:
    Jul 29, 2016
    Posts:
    220
    This is great, my company had a lot of problems using steering wheels and pedals (from three different manufacturers), along with steering-wheel force feedback, in Unity. We had to use a third party DirectX library that was poorly documented and had many problems. And event-based input is just a better way to go for high-performance games.

    Looking forward to seeing this evolve!
     
  36. LaneFox

    LaneFox

    Joined:
    Jun 29, 2011
    Posts:
    7,532
    Dare I ask, would ~6 months be reasonable to expect this in a release?
     
  37. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Heh, that sounds familiar. Force feedback on the desktop has proven to be a bit of a mess. For HIDs, there's the pretty decent and comprehensive PID spec, but I have yet to encounter a device that *actually* implements more than a few trivial controls from it (the Windows Xbox controller HID driver does, but it turned out to be a no-go due to its combining of the left/right triggers).

    And given Microsoft's fragmentation of their input API space (abandoning both DirectInput and XInput), it seems vendors often just go with custom, vendor-specific, proprietary protocols and then provide SDKs.

    Of course you can ask... and I would answer that it is a reasonable expectation :)

    Given our prior failings, we don't want to be committing to anything but the very next milestone (which ATM is public builds so people can do more than just marvel at the prettiness of our C# code) but a reasonable expectation is... reasonable :)
     
    MechEthan and LaneFox like this.
  38. orb

    orb

    Joined:
    Nov 24, 2010
    Posts:
    3,037
    So what you're saying is that the new API will try to do as much as possible as one-line method calls? :)
    Sane defaults and only a few one-liners to tweak sensitivity would be ideal.

    This is where I expect most of the grunt-work to happen. I just hope for the core parts to be ready for testing first - desktop mouse+gamepad, mobile touch, low-level access to allow remapping of keys.

    Every developer has a computer. 100% of testers are able to hammer at mouse and/or touchpad input ;)

    Excellent.

    Input devices have a lot of categories. Will the system try to categorise any recognised controllers as gamepads, throttles, flight sticks, wheels etc., or will it only have a list of recognised controllers connected? Some way of knowing what sort of devices are connected, beyond just how many axes and buttons there are, could be useful.

    Another thing I've pondered is whether it's useful to disable parts of the input system we don't use. Could we save some cycles by not polling devices if the engine was set to only care about a subset of events? Mouse, gamepad or touch screen only, for instance.
     
  39. Baste

    Baste

    Joined:
    Jan 24, 2013
    Posts:
    6,338
    The largest problem we have with the current Input system is that it loses all information about what's happened when you change scene. It's currently not possible to detect that the player is holding the sprint button as you enter a new scene, and have the player character sprint from the get-go. We have to require the player to re-press the button. That really hurts the game feel.

    All the other issues you are fixing are issues where the old system is poorly made and bug prone. The scene loading issue makes it impossible to do something that players expect.

    You're probably working on it, but I think this is large enough that it might be worthwhile to fix in the old Input system before the new one is ready.

    On Rumble - how are you planning on supporting different kinds of rumbles? The XBoxOne and Switch controllers' rumble motors are so fundamentally different that an API that tries to support both at the same time will fail at both.

    Could you also make versions of these APIs that don't allocate? It looks like I'd need to get a copy of the array of input devices every frame to figure out if the player's pulled out a USB controller.
    Which is what we're doing now, and it's no fun.
     
  40. BlackPete

    BlackPete

    Joined:
    Nov 16, 2016
    Posts:
    970
    I just wanted to highlight this with blaring sirens. Avoiding garbage collection is absolutely crucial in our project; we've already bent over backwards in our code to avoid creating garbage and the resulting GC collect spikes, because the FPS *must* be buttery smooth for our project (it's not a game).
     
    dadude123 likes this.
  41. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    I think ultimately we'll get there. Good defaults, APIs allowing for compact code. At this point, our main focus is still on giving you pieces that are combinable and tweakable in a way that allows you to achieve the exact result you need. I'd rather have APIs that take a line or two more of code to use but can be made to do exactly what you need than having super compact APIs that fail you as soon as you need something slightly different from what they do. I think nice wrappers we can always add along the way as it becomes clearer how the system is commonly being used.

    For devices it does not specifically have a template for, it will make a best effort to come up with a sensible fallback. Depending on the type of device that can take various forms. If even the fallbacks fail, however, the device will not get added. It'll be on a list of "discovered devices" and if the system can make sense of the device at some point in the future (e.g. if someone registers a template for it), it'll pull the device back up and add it.

    InputSystem.devices will only contain devices the system could make sense of.

    It would be easy to expose the list of InputDeviceDescriptions the system keeps for all devices that have been discovered, in addition to InputSystem.devices. I'm not sure, though: what problem in particular are you thinking of?

    I think that's definitely useful. ATM there are three levels of control. Whether that's enough, I think we'll have to see.
    1. Tell native backend "I'm not interested in this device; don't send me its data". The mechanism is in place but it isn't hooked up yet. I think this will be especially useful for very noisy devices that just spam the event queue with events and generate needless work if you don't use them. This mechanism should automatically kick in if a device is not recognized by the system.
    2. InputPlugins. This will allow stripping entire chunks of functionality out of the system. Your game/app is keyboard/mouse only? Exclude the XInput plugin, for example, from builds.
    3. You can kill templates manually. Call InputSystem.RemoveTemplate("Mouse") and mouse support in the system (including mouse devices that have already been created) is gone.
    Very much agree. In the new system, there is no clearing of state in response to scene changes. And something like that will most definitely not get added.

    In the first step, output controls will be entirely device-specific and there will probably not be an effort to unify/abstract things. That'll mean, for example, that the Oculus Touch controller will likely have a different haptics interface than the Vive controller. Same probably for Xbox gamepads and the Switch controller.

    After that, I'd really like to get to a point where we can provide a more unified interface that internally knows how to make stuff work on devices with different capabilities but the baseline to me is that you are able to have access to the capabilities of the specific device you are working with.

    They actually don't :) Or, well, the LINQ Where() statement does, and the last GetControls() call does (although there is a version where you provide a pre-allocated List).

    GC is definitely big on our minds. The system as is already does a lot to stay away from allocating and to optimize allocations where it actually does have to allocate on the managed heap (for example, control hierarchies internally use shared arrays to minimize the number of heap objects in use and to improve scanning performance).

    InputSystem.devices does not allocate. You get a ReadOnlyArray directly accessing the system's internal array of devices.

    Gamepad.all, once implemented, also won't allocate.

    So, yup, proper GC behavior is definitely crucial. Some parts still have some way to go (enabling actions, for example, does cause garbage whereas it really shouldn't) but we're working on it and will keep having our eyes on that "GC Alloc" column :)
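    As a concrete example of the pattern (a sketch relying only on the ReadOnlyArray behavior described above):

    Code (CSharp):
    // No garbage: ReadOnlyArray is a view over the system's internal device
    // array, so a plain indexed loop allocates nothing.
    var devices = InputSystem.devices;
    for (var i = 0; i < devices.Count; ++i)
    {
        var gamepad = devices[i] as Gamepad;
        if (gamepad != null)
            Debug.Log(gamepad.name);
    }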
     
  42. BlackPete

    BlackPete

    Joined:
    Nov 16, 2016
    Posts:
    970
    This might seem like a quibble but:

    Are you really using strings to identify stuff with? I'd really like to avoid using strings as much as possible and instead go with interfaces/enums/whatever that can be strongly typed.

    I'm hoping you're just using this as an example for posting purposes, but I just wanted to confirm :D
     
    MMOARgames and dadude123 like this.
  43. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Yup, in that case we are. And I think in this case it actually makes sense.

    Templates can be constructed *from* types (using reflection) but that's just one way. They can also come in from JSON or be constructed on the fly from template constructors. And they can come in over the network from connected players. There is no 1 C# type == 1 template correspondence.

    So using compile-time identifiers for templates won't work. We could certainly add a wrapper method InputSystem.RemoveTemplate<Mouse>() (and probably will), but in the end all it'd do is find a string name for the template to remove.
     
    LaneFox likes this.
  44. orb

    orb

    Joined:
    Nov 24, 2010
    Posts:
    3,037
    I'm thinking that we'll be seeing a lot of exotic, weird controllers in the near future for VR systems. If you know characteristics of a device you can make educated guesses in a game (or other sim) to automatically present a usable control scheme, even if you haven't prepared for the particular controller. Maybe this will converge into some sane default setups like it has with gamepads eventually, but already we have everything from sensors and gyros to seats which spin in all directions.

    Everything listed so far is looking very promising, so I hope for an alpha to pop up soon. But ironically, the only project I have time for right now uses a GUI with boring old buttons and no other input, so I'd have to think up test cases :p
     
  45. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Ah yup, that is indeed an interesting problem. Within the XR space, the XR team is working on solutions to make games work with devices they weren't specifically built for.

    Within the input system, we've introduced a new mechanism called "usages". The idea is that instead of talking about a control with a certain name (like button "b"), you're talking about a control with a certain usage. Like, "PrimaryTrigger" or "Back". In this approach, a device not only describes its available controls but also the intended usage of those controls. Any bindings that involve usages will then be able to work with the device even if they didn't know about the specific type of device at design time.

    We're only at the very first step of figuring out how *exactly* this is going to be used, but it looks promising. It'll probably take more than just this mechanism to fully solve the problem, but let's see.

    Note that this also works for devices (which are controls, too). So you can have one device with the usage "LeftHand", for example, and another device with the usage "RightHand". XRController already uses that to make static XRController.leftHand and XRController.rightHand accessors available (see here).

    Overall, turns out usages have a surprising number of... usages. Like, if you talk about "PrimaryStick" and "SecondaryStick" instead of "leftStick" and "rightStick", you can support a lefty gamepad layout simply by swapping what's the primary and what's the secondary stick.
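    A sketch of how that might look in a binding (the {brace} path syntax for usages mirrors examples in the repo but isn't final):

    Code (CSharp):
    // Hedged sketch: bind by usage instead of by control name.
    var fire = new InputAction(binding: "/<XRController>{RightHand}/{PrimaryTrigger}");
    // Devices carry usages too:
    var leftHand = XRController.leftHand;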
     
    MechEthan and orb like this.
  46. aFeesh

    aFeesh

    Joined:
    Feb 12, 2015
    Posts:
    35
    This all sounds fantastic!

    1. Will we have the ability to carry input across scenes? Currently if I load a new scene it wipes any active input.

    2. The road map mentions this system will be frame rate independent. Is that still a priority?

    Thanks!
     
  47. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Yup, no more resetting between scenes.

    It is. Where we have to poll (XInput, for example), we poll on threads at user-settable frequency. Where we get events, we surface all of them including timestamps. There's still a state system meaning that ultimately events are aggregated into frames worth of state but the source data is there and you can go directly to it if need be. Also, actions observe every state change, even if multiple happen in the same frame.
     
  48. interpol_kun

    interpol_kun

    Joined:
    Jul 28, 2016
    Posts:
    134
    Nice to hear it from you. But I really am concerned about Unity getting the best input system possible.

    After seeing your videos (they are from October) and reading the GitHub docs, I can tell that the system is quite overcomplicated. I have some questions and complaints. I will try to compare the new input system with another popular engine; sorry for that, but I should.

    1. Is JSON just a kind of middle-state way to create a custom device and manage your axes? I find the old system quite a nice start. Let's look at player input mapping in the other engine: you can create a custom mapping with custom names and even give them a scale, so movement setup is quite simple. I'm not saying Unity should be the same; I'm talking about a nice UI for creating action sets and managing them, not from raw JSON or code but from the UI. Think about designers and other people who want a WYSIWYG approach. But there are only two ways to create an action set: JSON or code.
    2. There's no way for non-programmers to tweak modifiers. It's quite a nice and flexible idea, but it should be open to all kinds of people on the team, especially game designers. We have the InputManager; why not give people easy tools to create their own rulesets of actions based on their mappings? Think about that, because there should be an easy way to tweak all those settings, modifiers, actions, action sets, and bindings.
    3. So what's the problem with the InputManager or some similar concept of storing all your mappings, actions, and other things? The easiest way to set up user input is something like
      Code (CSharp):
      InputManager.Bind("Fire", Fire);
      or
      Code (CSharp):
      InputManager.Fire += Fire;
      Just easily bind the function to the event from the manager.
    4. Why do we need an in-editor tool to bind actions, and why do we need to create them in code? What's the problem, again, with the InputManager? Why isn't it better to store all your mappings in a single manager? The split could create some unwanted behaviours.

    Correct me if I am wrong, but I don't quite understand the current state of the new input system. And what's more important, I do not understand why it works like this. Maybe I misunderstood you, but that's how I see it.

    My suggestions about map bindings:
    There should be some way to actually map buttons with a visual approach, like you are trying to create your mapping: you have a drawn keyboard, you push a button with your mouse cursor, and voilà, the button is selected. A similar concept could work for gamepads: a basic drawn gamepad with buttons you can select and bind. That would be cool.
    An easier and faster approach is to provide a drop-down menu with human-readable names.
     
  49. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    You don't need to fiddle around with JSON directly. To create an action set asset, right-click in the project browser and select "Create >> Input Actions". You can then edit the actions in the editor UI. The thing is still pretty bare bones but you can already create sets, populate them with actions, and assign bindings to them. We're working on making this a polished UI. Consider what's there ATM programmer art :)

    At the device level, you don't have to deal with JSON except if creating new profiles for input hardware.

    Sure. Again, consider the UI that is there preliminary. ATM there's no way to edit parameters on the modifiers, for example. The parameters are there in code but there's simply no UI for it yet.

    Overall, the action stuff is by far the least finished part of the system. We'll continue building it out and eventually it'll be nice and shiny but our main focus for now is getting the foundation of the system solid.

    The old input system kinda taught us that this central kind of system is really bad. The InputManager asset has consistently been a pain in the behind for asset store devs as well as for team workflows. Also, the approach where you have a single place where you have to define *everything* doesn't scale well to projects that have lots of different action contexts. Think of Far Cry, for example, where the actual binding used for "Fire" may move around on the gamepad depending on whether you're driving or walking.

    I think what you get right now, though, is actually much nicer. Say you have created an asset MyGameControls.inputactions with the workflow outlined above. If you tick the "Generate C# Wrapper Class" box in the importer section, you'll get a nice C# class that handles all the lookups etc. So in your code, all you do is

    Code (CSharp):
    public MyGameControls controls;

    public void Awake()
    {
        controls.fire += Fire;
    }

    public void OnEnable()
    {
        controls.Enable();
    }
    12.  
    We're working on an alternative API for InputActions that doesn't involve callbacks. With that API, you'd simply tap the action in Update() and process its changes.
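    The shape of that could look something like this (a sketch; "triggered" is an assumed member name, since that API is still being designed):

    Code (CSharp):
    // Hedged sketch of the polling-style alternative.
    public void Update()
    {
        if (controls.fire.triggered)
            Fire();
    }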

    You don't need to create them in code. As mentioned in the video, it's just one workflow. I think it's a nice workflow for prototyping and gamejamming where I think an .inputactions asset just adds complexity. But the "proper" workflow is to go with an asset.

    As for needing an in-editor tool... hmm, why would you *not* want one? :)

    Heh, completely agree :) And it's where we would like to end up. I think it's important to bear in mind where the system is in its development. So far, the majority of the work has gone into making the guts of the system work. The UIs that are there are little more than bare-bones implementations, just to have *something*. We'll have time to make the UIs polished, but our primary concern at this point is getting the device-and-control-level stuff, including the native backends, into a Unity release and into users' hands. After that, making things nice and polished instead of just functional will become a focus.
     
  50. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,618
    Can we access any motor or do you provide access to hard-coded motors only?

    I'm asking because the Xbox One Controller has 4 motors in total. The left and right motors, but also those tiny ones under the triggers.
     
    TooManySugar likes this.