
Input System Update

Discussion in 'Input System' started by Rene-Damm, Dec 12, 2017.

Thread Status:
Not open for further replies.
  1. eobet

    eobet

    Joined:
    May 2, 2014
    Posts:
    176
    Ah, of course. It did. :)

    I think the confusing thing is that input doesn't just go to 0 when the game loses focus; you still get some input...
     
  2. Baste

    Baste

    Joined:
    Jan 24, 2013
    Posts:
    6,334
    How does this work, exactly? Does the system generate Events (as in Event.current), or are you supposed to subscribe to inputs? Does it support keymaps? Can you specify what keymap to use in the editor?

    Ideally, the inputs to Unity (like ctrl+number to open specific numbers or whatnot) would use this system, and be configurable.
     
  3. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Yeah, that sounds buggy. My guess is it's fixed by a change on the native side that hit recently.

    The system works in edit mode the same way it does for play mode and provides the same APIs (except that InputActions are not supported in edit mode ATM). E.g. you can do Pen.current.pressure.ReadValue() in your EditorWindow code.
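    For instance, a minimal sketch of what that could look like (hedged: this assumes the preview's UnityEngine.Experimental.Input namespace, and the window itself is just hypothetical):

    Code (CSharp):
    using UnityEditor;
    using UnityEngine.Experimental.Input;

    // Hypothetical EditorWindow that reads pen pressure through the
    // new input system while in edit mode.
    public class PenPressureWindow : EditorWindow
    {
        [MenuItem("Window/Pen Pressure")]
        public static void Open()
        {
            GetWindow<PenPressureWindow>();
        }

        void OnGUI()
        {
            // Edit-mode input state is used automatically while editor code runs.
            var pen = Pen.current;
            var pressure = pen != null ? pen.pressure.ReadValue() : 0f;
            EditorGUILayout.LabelField("Pen pressure", pressure.ToString("0.00"));
            Repaint(); // keep the readout live
        }
    }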

    Separate state is kept for edit and play mode and one or the other is active depending on whether game code or editor code is running. Some under-the-hood trickery handles the problem of different coordinate spaces and orientations in EditorWindows vs game code so that you don't need to manually convert pointer positions, for example, between different coordinate spaces.

    There is some stuff in the works here. It's not tied to the new input system, though. Overall, input in the editor will likely remain tied directly to platform UI input.
     
  4. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    I also want to point out that Mecanim's interface == bloat > 9000, and its API access in general is terribly bloated (its scripting is heavily GUI-influenced and non-modular) and tends to be quite verbose when you simply want to "check values" in various places. Many essential functions one might need through scripting don't exist without one having to program them through this (usually awkward) API interface, and the functionality that does exist requires 4-5 lines of code (minimum) to get at just about anything.

    This is made even worse with the new Playables API -- I am fighting with this terrible API right now, and just dread the whole idea of having to fight with future API systems "designed" by Unity in a similar way... D:
     
    Last edited: May 17, 2018
  5. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    @Rene-Damm

    This looks like a workable system so far. -- The design looks like it is finally shaping up quite well! -- Unfortunately, like @hippocoder, I have just a few concerns and reservations about the process it took (and seems to always take) to reach this point...

    First -- and no offense intended toward you guys (I know you're trying hard to appease everyone!) -- I think a lot of the danger in the approach your "design process" has been taking is the simple lack of production-based, power-user-vetted input from heavy API / tool users -- people who are at least moderately seasoned with your "old" tools (and those tools' scripting APIs) and who use them almost daily in a production environment.

    To be honest, I really wonder why Unity won't hire devs like me (who regularly use existing APIs to develop useful and interesting tools for Unity) to help develop the initial designs of their new tools and API systems? I mean, if I already develop useful and innovative tools using the "old" system, why shouldn't I (and others like me) be the ones you ask for guidance in the tool design process, instead of the larger community as a whole?

    Maybe I'm wrong, but it seems like Unity tends to be completely disconnected from the users who actually *use* their systems heavily. I too am worried about the approach behind the scenes, but not because of the "cool street-fighter controls" being possible (thinking bigger isn't always a bad thing!) -- rather because I notice you guys relying heavily on forum posts for "usability" feedback. Ask any Asset Store developer here if you don't believe me, but "forum feedback" is only useful when there's a problem with an existing system -- *not* at design time for a brand-new one! Most users only know vaguely what they want -- but there are power-users who develop tools based on your APIs, and that development usually occurs because they want to use/make their own tools, since existing tools are either non-existent or insufficient!

    If you guys want input, then I've got a proposal: Why not get someone on the team who is heavily experienced with the old systems, who uses them every day to some extent, and let THEM be in charge of that initial list of the "improvement" feature sets in the future? This, I feel, is the best way to get useful input and stop wasting engineering man-hours on a bloated feature-set users simply cannot work with! -- And don't hire just ONE member of the community, but hire THREE or even four of these types of people for a short time to get the design off the ground! -- One guy to do any user-facing design, another guy to deal with the scripting API approach, and a third / fourth developer to be the intermediary dev(s) who keep everyone's egos in check and who have a say in both aspects. Just seems like the "right thing to do" to me.

    Most asset store devs make tools for extra income. To get them to offer valuable input, pay them a useful monetary bonus, let them put the experience on their resume/cv, and get your awesome new system design plans that will be something everybody who uses it will absolutely love -- all *without* wasting time going back to the drawing-board when people tell you all your hard engineering work was for nought due to the simple (but fatal!) flaws in your initial design.

    What say you guys (or your bosses?) on this?

    Saving time & money usually makes the "man-in-charge" happy...
     
    Last edited: May 17, 2018
    Player7 and interpol_kun like this.
  6. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    I trust @Rene-Damm and know he will deliver excellent results. This reboot of input has taken time but I think they'll do it right.
     
    awesomedata likes this.
  7. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Well, first and foremost, we say that it is a concern we wholeheartedly share :) And second and... backmost (??), that the picture visible here on the forum thread(s) is a partial one.

    Deeper embedding with game teams and faster iteration through tighter feedback loops is an effort that is being pushed inside of Unity R&D on several fronts. From enabling it technically (package manager and overall Unity development structure being part of that) to enabling it 'culturally' (with our CTO being its fiercest proponent).

    So, in regards to Unity as a whole, it's a weakness we see in our own past and ongoing development efforts and a weakness we are working to address.

    Which... goes for the input system, too.

    There have been collaborations (arguably not enough) and various feedback loops with the previous new input system (which in various ways are feeding into what's happening now as well) and feedback in the end was why it didn't get released but rather went into a redesign.

    For the current system, we're in the process of talking to gamedevs and finding collaborators (and if you think you'd be a good fit, we'd be more than happy to have a conversation :)) willing to try things out and iterate with us to find what doesn't work and what does.

    Now, you could argue that this should have happened earlier. In my experience, it's far easier to get productive feedback and willing collaborators, and to iterate, if you already have *something* working based on initial research but are willing to redo and reshape whatever needs it. And with this being a reboot of an initial new input system, we already had a great deal of feedback to incorporate.

    Where we are now is the product of a learning experience, a learning experience that continues and that leaves me without hesitation to rip out and reshape whatever we still find inadequate. If tomorrow we find that the conceptual approach of the action stuff isn't holding up in practice, I have no qualms ripping the stuff out and rewriting it. We're out in the open at this point not to show an end result. We're out there to find collaborators and get us to an end result.

    The other thing we want to get right, too, is to not develop for one use case. Especially with input it's easy to end up in a niche. Concurrent cross-platform *and* cross-input game dev is kinda rare so we're trying to sample from multiple niches.

    I think it's important to not read too much into the forum thread here. Valuable user feedback and ideas can surface on the forum and it's an easy way for us to keep a portion of interested users apprised of what's happening, but it's by no means a 'design tool' of sorts for us. We do not equate a forum thread with finding out what works and what doesn't.

    Most of the communication and collaboration that's happening isn't visible on the forums. Stuff like us hiring the principal input guy behind some of the FarCry titles (hey Tom), us talking to specific game teams, us talking to other companies doing input stuff, us having in-house teams bang on and complain about the stuff, etc.

    Like, make no mistake, there's still tons of room for us (and me) to improve, but we're working on it :)

    That's what did happen with input, I'd say. Except it wasn't a single guy but rather a whole bunch of guys with a whole bunch of experience doing a whole bunch of different things.

    Ok, this thing here got kinda longish... hope it provided at least some useful insight. Let me know if I've missed your point somewhere.
     
    Last edited: May 18, 2018
    Elringus, vutruc80, MechEthan and 9 others like this.
  8. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    @Rene-Damm

    I apologize in advance for the length of this, but please understand that everything I say here is necessary for you guys to hear. This may not all apply to you personally (the majority of it does apply to one of your views though, which are clearly shared by many of your colleagues, and that's also why I felt the need to write this all out instead of just PMing you), but I would appreciate it if someone further up the chain (who has access to the other teams developing big features for Unity) put their eyes on this post too.




    That makes sense, and it is to be expected. I am glad there is more "behind-the-scenes" going on than meets the eye. However, even we can see some of the effects from "behind the scenes", and not knowing the "why" behind those effects is very difficult to stomach when it comes down to business. That missing "why" makes some of us very concerned about trusting our time to Unity. (I'll get into some of the "effects" seen later in this post.)


    I am incredibly glad to see this.

    You're probably the first Unity Team member I've seen (outside of the "tech" videos offering up another new "feature" or "system" in Unity that required /having/ to admit the older system it replaced was poor enough to require a rewrite) to fully put the hubris aside and admit that there is a weakness within the walls of Unity. I've seen Unity-Team devs come and go, but I really hope you stick around. There is one major point I disagree with you on (I'll get to that in a moment, since it takes up the length of this post), but I want you to know that I highly appreciate that you took the time to address the issues I brought up in my previous post without "stone-walling" me (which seems like it's the go-to "response" from UT these days). Doing what you've done here is the only way progress can ever truly be made so that everyone can start seeing Unity as "cool" again. :)


    So this leads me to my major sticking-point with Unity's development these days.

    In regards to the input system being in a "beta 2.0" now, I completely respect and appreciate you and your team for incorporating our feedback into your 2.0 design -- it really does show in the more elegant design! Because of that, I've got nothing negative to say about the input system's progress (despite it being very early!) since its (current) design really does sound like you guys are back on the right track!

    This leads me to my next point -- "What is the 'right' track?" -- IMO, it's not the issue of whether or not the feature is great in the end that bothers people about poor systems in Unity -- it is generally the overall "uncertainty" about the minimum standard of "what exactly IS going to be delivered in the end" that causes me (and MANY other devs) to fear the worst about "new features". This is because the "design" of said features rarely seems to have a solid list of bullet-points for the value to be offered (or a list of possible sticking-points) to us end-users. We tend to be forced to rely on faith that the resulting "great feature-set" will cover our many (also-unpredictable) use-cases for said "new" features. However, because there are NO expectations set -- we all have *great* expectations.

    This is a classic case of the problems arising from not setting clear and realistic expectations for those you offer a service to. Now they can say "You didn't deliver what you promised!" simply because you were never clear (in writing, of course!) about what exactly it *was* you had promised. They now have a blank check they can cash at your expense -- and get away with it! -- all because it was *you* who gave it to them and told them "write what you think is fair." lol

    Of course most people aren't this terrible -- unless you make them angry.

    Here is a good example of how that might occur:

    I decide to make a game "with Timeline integration" (to use a real-world example). I want to know that anything I want (or try) to do with Timeline is going to be supported. If there are potential ways to use Timeline that aren't supported upon its official release, then I want to know before I design my entire game concept around the use of the "Timeline" tech that any important features (such as the "Events" feature shown in various videos) will not be included upon release, and that Unity still needs a serious overhaul under the hood for it to be properly supported. If there is some other internal workflow "issue" that puts the "feature" at risk (such as the internal pipeline being unable to support arbitrary code-execution on multiple platforms at the time of release), then I need to know that it is risky to expect that feature in a timely manner and how long it could take to receive it (and then be pleasantly-surprised if it arrives sooner).

    However, in the case of the "Timeline Events" feature that was held back from us for so long -- it was actually NOT some deeply-internal, highly-technical, heavily-integrated engineering problem that prevented its release. -- No -- it was a "design" issue -- one that is now holding it back until 2018.3.

    This is the kind of thing that happens on a regular basis. Since Mecanim, since the "new" 4.6 UI system, since pretty much everything in the recent years of our beloved Unity to some extent.

    That being said:

    While I have no issue with "design" being heavily malleable -- it is the fact that there is no "technical feasibility study" put out to the general public before you guys "have something to show" -- and therefore, many man-hours are wasted all because the "concept" itself was flawed in some way. And to clarify -- the "concept" is not simply a "new input system" but is instead a list of bullet points (considered as a whole) that defines what the "new input system" actually IS -- This would look something like the following (which allows for heavy malleability while also being firm as to what it offers, without being specific as to HOW it offers it -- allowing a lot of technical creativity under the hood by you guys):

    • Works for both Editor and realtime in-game input detection
    • Should be easily modifiable for new types of input (i.e. VR motion detection as well as gamepads)
    • [RISKS] Has a system for "creating" a custom device type and mapping input for it
      [RISKS]
      1. internal programming to support this may not exist
      2. could require some serious development time to deliver this feature
      3. might require multi-team collaboration
    • Supports all major platforms' currently-supported input devices
    • Adds support for a list of device descriptions that can be patched into a runtime game executable to add new types of input support (i.e. adding head-tracking to an FPS game while the specific device meant to support it is still waiting to be manufactured)
    • Offers the ability to check detailed input chains at once (such as Street Fighter's "press down, down-diag forward, and hold forward for 10ms, then press button 1 quickly 2 times")
    • [RISKS] Input chains can have "replaceable shortcut labels" that reference buttons/directional-inputs/etc., so that those can be substituted for other buttons/etc. using the shortcut labels (labels are either strings that represent hashes, or direct hashes representing a reference to the input slots and controls)
      [RISKS]
      1. possible "development-hell" feature
      2. long-term or lengthy "design" processes might result in it being cut or released prematurely.
    • (etc. etc. continued here, until it stops being technically-feasible in the timeframe allotted to you guys)
    Then, once an overall list of "value" and "risks" like the above is assessed and agreed upon by your team, release THAT list of bullet points to the community (via forum post, etc.) rather than a buggy, half-finished "beta" with "features" that were pointless to begin with. What?? The community didn't want you to waste effort on making it have "replaceable shortcut labels" over a larger amount of device support out of the box?? Now it's too late. "Sorry guys" is all you can say at this point. That design flaw could have been avoided before you guys wasted the effort (and our time) in the initial phases of development, delivering sub-par device support that the devs themselves would have to supplement on their own in the end (and your priorities could have been "tweaked" had we just known about them in advance! Give us enough of a chance to see your priorities on such features so that we busy devs can speak up about them!)

    In the case of Mecanim (which is an even better example of why there should have been a detailed bullet-point list like this before its development), after it shipped, we wanted to know "What happened to the flexibility of the Legacy system?" -- Some, even today, would argue that Legacy is better in almost every way. Had a list of bullet-points like this been released prior to Mecanim's earliest designs, people would have asked "Where is the bullet-point that says 'flexible scripting API to allow skipping the internal state machine system you provide so the user can roll his/her own'?" or flagged "parameters are added via Mecanim's visual interface and cannot be added via scripting due to technical concerns", etc. etc. A mockup showing us the visual workflow would have been even better. -- The lack of a list of bullet-points and mockups showing workflow like that just means that an expensive, slow (and, sorry for being harsh, but to many, somewhat "useless") system was developed instead of a more robust, lightweight, and flexible system that could even piggy-back a bit off of the Legacy code. The way it was written, it was completely detached and did all sorts of stuff, had all kinds of bells and whistles -- but very little of what regular users like myself wanted to do with it (i.e. a simple way of playing, scripting, and controlling animations). The base Legacy system could have just used more optimization, with advanced features such as blending/state-machines that could be added/removed from code or via editor-scripting if I wanted to modify or use them at all.

    I know this is not your fault, but it does follow your philosophy of "make something first, let users play with it, then refine it later if they hate it" -- Mecanim was fundamentally broken in its design. Humanoid integration into the Mecanim system was inherently flawed -- it should have been a module (that could be tweaked) rather than a requirement. I hope the above example with Mecanim shows how that philosophy is not particularly a great idea sometimes -- especially for larger systems (like animation) with many potential use-cases (such as simply posing a character, doing IK/FK, or retargeting automatically on non-humanoids, etc. etc. etc.).



    The philosophy of "just make /something/ and show it to people" in an attempt to "wow" us while also gathering our feedback on what didn't "wow" us so much has backfired on so many occasions for Unity these days.

    The reason why is that people want something that will fit their needs and not their fantasies -- and imagination/fantasy is always prettier than reality, but reality always catches up. The reality is that Mecanim is a system that is inflexible and obtuse/bloated/slow, and Timeline was incomplete and released too early with its main feature to most (Timeline Events) completely MIA -- and it even took away existing features (our Animation Events) with its inception -- all without warning! Had the developers in charge listed a bullet-point list with the "RISKS" (written in a candid and considerate way to the game-developers who might use it eventually, with a note about Animation Events being a possibility of removal) that let us know exactly what they wanted to deliver, and where their stumbling blocks are that might eventually make US stumble too, we would be so much more appreciative of their efforts on this feature.

    So perhaps you can understand why I am against the "just make something and show it to people" idea and why I feel it is worse than simply a "shot in the dark" approach. After all, it may not matter to YOU that you must rip it all out and start over again, but it matters to US how long that "ripping and re-writing" takes, because WE have to wait on you (and if Unreal has what we need already, for example, we might choose to go learn about that in the meantime, instead of dealing with all the uncertainty of a "new" feature we're not sure will fill our needs). Sorry to sound so harsh, but it is a fact -- The faster you guys can develop a solid design, the faster we can use a solid technology design to speed up our development efforts. If you are slow to make this technology -- then we are slow to use it. None of this stuff is really "future-tech" anymore, and there are even individuals making their own game-engines who are beating us to the punch these days! This may be hard to take, but maybe you can see now why this "bullet list" I mentioned above to describe the concept to us is a total necessity for enough of us to make it count! Before you guys even write your first lines of code -- I beg that you would PLEASE make that list of required technology first and show THAT to us -- with workflow diagrams (if you really want to "wow" us) -- and use THAT to see if it fits our needs (instead of making us wait for you guys to finish coding for months just to trash it and try again after a few more months because the design priorities were wrong, and then potentially making us wait YEARS for your revisions to finally make it into Unity properly, while we still must rely on complete faith in you guys not to botch the design up again -- With my list idea, you literally cannot fail at the design when the design specs are already wholeheartedly accepted by the community as a whole!)


    No game developer worth his salt (or your time) will ever ignore a detailed bullet-list of promised functionality with clear RISK-assessments. This is especially true when the workflow mockups are solid and the suggested API workflows are sound and easy-to-understand.

    The true reason we beta-test is to check that solidity and soundness for ourselves! -- we want to see whether it fits our needs! -- If you can provide this via a list (instead of after months of wasted work!), we will begin to notice that Unity is progressing fast again and try harder to keep up pace with its developers on the bleeding-edge. No offense, but when we "beta-test" we don't usually care much about squashing bugs -- We really just want to see (for ourselves) whether your system does what we want (or are expecting) it to do. If you guys have a great idea for an interface feature (i.e. jaw-dropping dragging-dropping of states/button-inputs/input-events/shortcut-labels/timeline-events) that you feel might "wow" us, draw us a thumbnail or gif of the proposed workflow -- and we'll figure out how well that will work for us in production -- We will tell you if there's a problem (or if we want something else instead)!

    I feel like you guys listen to big AAA studios waaay more than you listen to us tiny indies.

    This is a mistake.


    Regarding new ways to work (that work for everybody -- not just our AAA corporate job-structures), it's usually the "tiny indies" who are innovating on workflows in our industry -- not the big AAA studios.

    Unity used to be OUR software -- software that listened to indies -- software that helped us make games. Now it has lost that glimmer due to us feeling like you do not listen to us anymore -- only your new AAA friends. I know they have lots of money -- but it is us who made you what you are.


    Sure, you might argue that your current approach of "internal development" (with AAA advisors at your beck-and-call) is a "more concrete" way of getting UX feedback. -- However, consider what you're trading: a system with a highly-mutable codebase (that could be ripped out at any moment), which users can touch/use but which might not be near-representative of the final experience (and which also carries a very high development-time cost overhead), versus a "more concrete" bullet-list of features that, although it doesn't yet have a physical form users can test, carries a re-development time-cost of next-to-ZERO once it has the blessing of the community. This means that if all of the major points of the design are nailed down here in "pre-development", actual development time would be very straightforward, users would no longer question your competence, and a highly-mutable API/codebase would be mostly unnecessary, because "concrete" API examples of doing the things described in the "bullet-list" are provided beforehand.

    Unity's strength is that we can customize our own interfaces for Unity's API. As long as the API is good enough to cover any use-cases in the bullet-list (and remains flexible in areas where it could be used for other things), you guys are good with us! API design means it can be mostly theoretical, and it can easily be "implemented" on paper without yet being legitimately implemented! If the API design is solidified ("more concrete"), then the interface should become really fast to make. You want something more visually fancy? Either you guys can add in some visual / functional flair during the "polish" phase, or provide a nearly 1:1 scripting API so that others can implement that "fancy" themselves (perhaps even via editor-scripting "overrides" or whatever). This is the kind of "more concrete" I feel people would much prefer (even if they can't get their hands on the system until later), especially since the current version of "concrete" is actually not very "concrete" at all: even the "hands-on" early-access "beta" experience typically "lies" about the UX (due to the inherent malleability of "beta" or "alpha" anyhow) -- and that's why the bullet-point list above should never have to change after it is solidified. Thus it will /never/ "lie". The bullet-point descriptions should be as candid and forthcoming as possible about the "risks", "tradeoffs", and "rewards" each "feature" listed is bringing -- and then let users decide on a "final" version of that bullet-list (and debate if we must). With all concerns out of the way (and any workflow-mockups necessary to convey the concept more clearly where more heady or abstract stuff is involved), we all should be good to go for a REAL "democratization" of game development.

    Let "beta" really be about bug-testing a semi-user-ready module that runs a much smaller-risk of introducing even more bugs (due to a feature or programming concept having to be ripped-out or change somewhere entirely (especially when this is under the hood!) to fit the "new" version of the maleable UI/UX design rather than the other way around) -- Again, Mecanim suffered from this "buggy" state for a long time after its release -- and I'd put money on it that this "malleable" process was behind that (when stuff was added under the hood to support user-requested features). Had Mecanim had a list of things that users wanted from the outset, a proper mockup of workflow thumbnails and implementation details (such as showing that parameters were not able to be added programmatically or that using the API to check states was such a hassle, or that adding states via script would be an issue, etc. etc.), users would have been able to "fix" Mecanim before it was ever so hopelessly broken. :(


    As hinted at above with the bullet-list of features -- I feel like this should change.



    "Democratizing" game development is only possible where there is enough transparency for "the people" (the heart of the "Democracy" itself) to have a say. After all -- if the Unity engineers are the "Electoral College" of the video game development "democracy", don't let yourselves be the ones to prevent "the people" from having their final say in how their games are going to be developed.

    I feel like we should be the first to know, and the last to have a word on the subject of any major new features.

    Who else is with me on this?
     
    Last edited: Jun 29, 2018
    vutruc80, mbbmbbmm, Player7 and 3 others like this.
  9. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    You had me at TLDR.
     
  10. interpol_kun

    interpol_kun

    Joined:
    Jul 28, 2016
    Posts:
    134

    I strongly agree with that. UT's approach to developing things seems very poor, as if they have some sort of communication problem, both internally and with the community.

    To prove that, let's remember how long the old team was developing the new input system before they all got reorganized. It's good that we have Rene here with us, as he's the rare type of UT staff who always takes community criticism well. That's good: I have no problem with him. It's other things I really want to question. The input system development stretched out a bit.

    However, that's not the only problem. The new Terrain update was delayed to some unspecified date after they showed us all the new features at the GDC roadmap talk.

    Tilemap was buggy at launch, and we have gotten no new features after nine months. No rule sets (implement them yourself / use the GitHub project), no optimization or workflow improvements.

    People criticize Shader Graph Custom Nodes because of their poor design.

    So what should we expect next? Now we are all waiting for the new Input System, VFX Editor, Nested Prefabs, Prefab Editor, Terrain, and other cool features. But how many of them will not be delayed again, and how many of them will meet the expectations you (UT) set with your words?

    Some people think that I am a hater. But I use Unity, and I love it; I just feel bad about everything happening around the development process. It looks like a lot of new features are being thrown out hot and are soon forgotten.

    The only thing that keeps me from panicking is the new performance feature-set.
     
    awesomedata and FROS7 like this.
  11. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Hey guys, it's gone offtopic and also on-topic. So from this point only posts about input will continue in this thread.

    But... Let's start a new thread on general discussion with mod blessing, and you both should copy your replies over to that thread. I'll leave it up to awesomedata to start it off over there. Thanks for understanding; I'm sure it'll strike a chord with users, so feel free to duplicate your answers in a new topic there.

    As for this thread, it's way off base because we need to give room for other people to reply and communicate with Unity staff. Thanks (and don't reply to me here please - pm if it's at all necessary)
     
  12. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    That's fair -- I definitely don't want to derail this topic. Thanks man! :)

    Here it is:

    https://forum.unity.com/threads/continued-from-input-system-thread.532501/
     
  13. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Let's not forget, though, I was part of that old team from day 1. I was 50% of why the project sunk.

    So a large part of why we're here is because a) something in Unity did work to prevent something getting released that had a high chance of making users unhappy (and someone in Unity endured quite a bit of abuse from me to make sure we're doing right by users; sorry again Ralph) and b) we acknowledged we screwed up and looked at why and how we could do better.

    Is there still a chance of us failing in some way? Absolutely. At the end of the day we're mostly just a bunch of dudes and dudettes writing code. But are we working to continuously improve? You bet.

    Ok, time for me to get off the soap box :)

    //EDIT: Gah, probably should've posted in the other thread. Sorry @hippocoder.
     
  14. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    669
    Sooo, back to input.

    I see there's some fun new stuff with actionmaps in the repo.

    I've been chasing another rabbit hole the past two weeks (building an event/object lifecycle management framework for my input handling and some other systems I started on), but I wanted to ask: is this leading to the new "callback-free" input action handling?
     
  15. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    The hope is that it will. What's there ATM is little more than an idea, but it works something like this:

    Code (CSharp):
    public void Actions_CanProcessActionsAsEvents()
    {
        var gamepad = InputSystem.AddDevice<Gamepad>();

        var map = new InputActionMap();
        var action1 = map.AddAction("action1", binding: "/<Gamepad>/leftStick");
        var action2 = map.AddAction("action2", binding: "/<Gamepad>/leftStick");

        using (var manager = new InputActionManager())
        {
            manager.AddActionMap(map);

            map.Enable();

            InputSystem.QueueStateEvent(gamepad, new GamepadState {leftStick = Vector2.one}, 0.1234);
            InputSystem.Update();

            var events = manager.triggerEventsForCurrentFrame;

            Assert.That(events.Count, Is.EqualTo(1));
            Assert.That(events[0].control, Is.SameAs(gamepad.leftStick));
            Assert.That(events[0].time, Is.EqualTo(0.1234).Within(0.000001));
            Assert.That(events[0].actions.Count, Is.EqualTo(2));
            Assert.That(events[0].actions, Has.Exactly(1).With.Property("action").SameAs(action1));
            Assert.That(events[0].actions, Has.Exactly(1).With.Property("action").SameAs(action2));
        }
    }
    The per-action callbacks are still there and can work in tandem with this, but the idea with InputActionManager is that you have a point in your frame where you pick up the accumulated "action events" and then run your own logic deciding what to respond to and how.
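    As a rough usage sketch (property names taken from the test above; the Dispatch method is hypothetical):

    Code (CSharp):
    // Called once per frame from your own game code.
    void ProcessActionEvents(InputActionManager manager)
    {
        foreach (var triggerEvent in manager.triggerEventsForCurrentFrame)
        {
            // Each trigger event carries the control that changed, the time
            // of the change, and every action triggered by that one change.
            foreach (var actionEvent in triggerEvent.actions)
                Dispatch(actionEvent.action, triggerEvent.control, triggerEvent.time);
        }
    }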

    ATM we're still in a phase of figuring out what tools exactly are needed to enable different use cases all the way up to stuff like "I have two possible actions to perform based on this one input but it's a raycast based on the orientation of the device that will decide which is the right one".
     
    Last edited: May 23, 2018
    recursive likes this.
  16. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    669
    Excellent! Until this dropped, I was going to write an ECS system that would effectively do the same thing with the callbacks: just accumulate events until some point early in a frame, then process them and mark them for deletion. So this will basically make that more efficient, since I won't have to deal with the callback overhead and can bulk-instantiate events based on the amount in the accumulated buffer! Thanks!
     
  17. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    I like the idea of the "action-maps" being processed as events. Might be nice if we could do the same with physics and animations too.

    That said, the "pluggability" I see in your code is more along the lines of what I was talking about in the other topic -- In fact, I see the system mentioned above being useful for all sorts of things outside of just input.

    I would highly-suggest modularizing the parts that you can (especially the organizational parts that deal with mapping data into buffers). That would be useful for any sort of streaming input data (not just devices, but entire classes of data) that needs to be frame-independent and processed later.

    As I said in my most recent post before this -- a form like this (but based more on "general functionality" than use as "device input") could enable systems like this to be useful for all kinds of stuff. For example, one "use" could be tilemap- or collision- streaming (or really any kind of streaming data that needs to be collected but processed later in an easily-manageable way). Most of that data is mapped somewhere, so if we could design our own classes to handle this "mapping" of the data, then "devices" could be shaped like anything, and their outputs could be sent out as anything we wanted. Maybe an "input" is actually a gameobject or a state, rather than a device ID, and maybe that input has an output that is another state, class, or gameobject. These are only examples, but I can't see why the "form" this takes couldn't allow for that. After all, the "Playables API" was a great step in this direction for Animation -- so why can't it work for other game systems?


    A good way to approach this might be to send a particularly-formatted data structure to a central device-processing-and-output class (with the ability to override this and write your own "gameobject factory class", for example) -- that could lead to some really nice use-cases outside of the standard "device input" usage -- and the "devices" could be extended as well -- but a class might need to be provided to process custom device types out of the box to make using the stuff a bit easier (i.e. myDevice.AddAxis(ref axis), myDevice.TrackMotionOnAxis(myDevice.Axis(axis)), etc. etc.).


    Regarding "use" cases:

    Again, I know I might have come off as a bit rude at some point, but I assure you I'm really trying to help with this. The method mentioned over here about focusing on "functionality" rather than "uses" has a lot of merit if you give it a chance.

    When you build your toolset/API to accommodate (relevant) granular functionality, and then build tools on top of that granularity to make it more manageable to work with (while also making those tools as close to modular as possible while still retaining their pluggability), you'll find that the "functionality" provided by those tools will easily handle all the "uses" you can imagine -- and then some. What I offer here is definitely a different way of thinking about something that "looks" the same as something else, but it is actually a fundamentally different way of thinking altogether when you get into the weeds a bit. It is that subtle-but-fundamental difference, that "shift" in thinking, that makes this useful, and it is why I keep trying to explain it in these posts. I know it's hard to see, but it's so d*mn important for "general-use" tools like Unity to function as "generally" as they are able to! D:


    I know it might not sound like it, but many of us do highly appreciate your efforts to make stuff that functions in a way that we can use easily. Even more so when we can use your tools for just about anything! -- especially for the stuff you (or we) didn't even plan for! -- and all that takes is putting a little extra effort into the design of the body/form of what you are trying to make -- and this goes x1000 when you're trying to accommodate more "general-use" / "user-specific" scenarios.

    I hope this all makes sense.
     
    Last edited: May 23, 2018
    FROS7 likes this.
  18. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Maybe it would. I don't know. TBH I have not seen many successful attempts at "let's generalize the heck out of this". Most of the time it seems that people end up with something that is an awkward and complicated solution for everything instead of a good solution for something in particular.

    Anyways, while there will be some interesting new avenues for input in combination with ECS, our chief aim here is to solve the problem of input in Unity.

    I think we're actually pretty close in how we think about this and it's mostly terminology getting in the way. What you describe is pretty much what we're going for.

    We're not looking for a system that comes prefabricated to cover X amount of use cases that we've identified. Instead, we're looking to build a toolbox that allows users to build their own systems from the parts available to them. We want some level of prefabrication/zero-setup available as a jumpstart and easy entry but the system as a whole aims to provide a hackable toolbox.

    However... deciding what has to be added to the toolbox IS driven by actual use cases. Without trying to solve specific problems, I cannot see how one can arrive at useful tools. Functionality is meaningless without use cases informing it.

    Anyways, I get the feeling we're continuously ending up in discussions at the meta level here. I very much appreciate the level of reflection and outside perspective, but I feel we're running around in circles a bit.
     
    Last edited: May 24, 2018
  19. Sangemdoko

    Sangemdoko

    Joined:
    Dec 15, 2013
    Posts:
    221
    Hi,
    So I've read a bit on this new input system and I watched the videos, but I'm not sure how to set up a multiplayer project. I would like to test something really simple, such as a character select screen: the first controller/keyboard to press a button is assigned to the next available character. I assume that should be possible, although I do not know how.

    But now imagine a character is controlled by only two actions: left and right. And I would like the possibility of having 8 players in many configurations, for example: 4 players on a keyboard, 2 players sharing a gamepad (Xbox) with each using one joystick, and 2 players each using the D-pad of a different controller (PS4 and other).
    Here I assume that once an action is pressed, its counter-part is set automatically (for example, press left/right to join, and the system knows what the left/right counter-part is, whether it is an axis going the other direction or another digital button).
    Having a way to visually show the players what buttons were selected for each character's actions would be very useful.

    Is something along those lines possible with the new input system right now? I would like to make some simple local-multiplayer games, but I found the current input system quite limiting. I was looking into Rewired before I came across this forum. Since I make games in my free time, for fun, I would prefer a free option to solve my problems.
    If the new input system is close to being ready to do what I want, I can wait a bit. If not, I guess I can either try Rewired (although I am not even sure it can solve my problem) or make a workaround with the current input manager, which I would prefer to avoid.

    Thank you for your time,

    Santi
     
  20. mdsitton

    mdsitton

    Joined:
    Jan 27, 2018
    Posts:
    66
    @Rene-Damm One question about this input system: would there be a way of plugging other input sources that Unity may not support into it from user code, such as a MIDI device, for example?
     
    awesomedata likes this.
  21. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    669
    @Sangemdoko - it's possible, but a bit under-documented at the moment.

    For gamepads, you can get all of the connected gamepads and either listen to them separately, or bind actions to them by cloning an action map and binding the cloned actions to individual devices.
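    For the "listen to them separately" route, a join-screen sketch might look something like this (hedged: names like wasJustPressed and the UnityEngine.Experimental.Input namespace are from the preview as of this writing and may change):

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.Experimental.Input;

    public class JoinScreen : MonoBehaviour
    {
        readonly List<Gamepad> m_Players = new List<Gamepad>();

        void Update()
        {
            // Assign any gamepad whose south face button was pressed
            // this frame to the next free player slot.
            foreach (var device in InputSystem.devices)
            {
                var gamepad = device as Gamepad;
                if (gamepad != null && !m_Players.Contains(gamepad)
                    && gamepad.buttonSouth.wasJustPressed)
                {
                    m_Players.Add(gamepad);
                    Debug.Log("Player " + m_Players.Count + " joined on " + gamepad.name);
                }
            }
        }
    }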

    I have some code at home that handles hot-connecting/disconnecting gamepads and tagging the events with whatever player index was associated with the specific gamepad, but it was written for the older Action Sets and I need to go back and properly test it with newer code from the repo.

    @mdsitton - yes, it's possible to queue your own event state from a custom-defined device in C#, and this will act just like a native device. You can look in the source files; I believe some of the input unit tests even do this. The main challenge with a MIDI device is that you'd probably want to set up a well-defined set of InputControls and ensure the bindings are well-defined, like the built-in abstractions for Gamepad/Keyboard/etc.
     
  22. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    ATM it's possible but not yet in a way that it's meant to eventually work.

    What you can do ATM is to listen either to all input events (InputSystem.onEvent) or to listen for state changes on devices (InputSystem.onDeviceChange). Either way you'll see input activity but it'll take some processing on your end to figure out which activity is the one you're interested in.
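    As a sketch of both options (hedged: the delegate signatures are from the preview, and I'm assuming InputDeviceChange.StateChanged is the member that signals state updates):

    Code (CSharp):
    // Option 1: see every raw input event as it is processed.
    InputSystem.onEvent += eventPtr =>
    {
        Debug.Log("Event for device " + eventPtr.deviceId + " at " + eventPtr.time);
    };

    // Option 2: get notified when a device receives new state.
    InputSystem.onDeviceChange += (device, change) =>
    {
        if (change == InputDeviceChange.StateChanged)
            Debug.Log("Activity on " + device.name);
    };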

    There are plans for better APIs, especially as they are needed for doing good rebinding UIs. For one, being able to discern things like sensor noise from real user activity should be available out of the box. Also, more elegant APIs around the underlying, rather raw functionality.

    There's some stuff in the binding system for actions around this (you can set up a digital axis from two bindings using an "axis composite") but it still needs a bunch of work. Also, there aren't good APIs yet to monitor for which set of bindings is used (e.g. is the user going for the gamepad bindings or the keyboard bindings?).
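    For reference, a hedged sketch of what setting up that axis composite might look like (composite and part names here are assumptions; check the repo sources for the current spelling):

    Code (CSharp):
    var move = new InputAction("move");

    // Combine two discrete buttons into one [-1..1] axis value.
    move.AddCompositeBinding("Axis")
        .With("Negative", "/<Keyboard>/a")
        .With("Positive", "/<Keyboard>/d");

    move.performed += ctx => Debug.Log("move = " + ctx.ReadValue<float>());
    move.Enable();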

    Overall, much of this ventures into action territory that is still a good stretch away from being ready for prime time.

    In that setup, I would strongly advise not waiting and going for Rewired instead. The new input system is still changing a lot and will remain in preview for a while longer.

    Pretty much what @recursive says. I'm not familiar enough with MIDI to be able to say whether it fits the device model of the system well enough but overall, you can make up whatever devices you want as long as the state behind them can be represented as a simple binary blob. Whatever devices you make up will pretty much appear just as "native" as the devices coming from the Unity native runtime. You can also feed data against any device you want including ones coming from the native runtime and ones you created.
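    To illustrate the "binary blob" idea, here's a rough sketch of a made-up device (hedged: the attribute, interface, and registration names are as I understand them from the preview sources; treat this as an outline, not gospel):

    Code (CSharp):
    using System.Runtime.InteropServices;
    using UnityEngine.Experimental.Input;
    using UnityEngine.Experimental.Input.Controls;
    using UnityEngine.Experimental.Input.Utilities;

    // The "binary blob": a plain struct describing the device's state.
    [StructLayout(LayoutKind.Sequential)]
    public struct MyDeviceState : IInputStateTypeInfo
    {
        public FourCC GetFormat() { return new FourCC('M', 'Y', 'D', 'V'); }

        [InputControl(layout = "Button")] public int button;
        [InputControl(layout = "Axis")] public float axis;
    }

    [InputControlLayout(stateType = typeof(MyDeviceState))]
    public class MyDevice : InputDevice
    {
        public ButtonControl button { get; private set; }
        public AxisControl axis { get; private set; }

        protected override void FinishSetup(InputDeviceBuilder builder)
        {
            base.FinishSetup(builder);
            button = builder.GetControl<ButtonControl>("button");
            axis = builder.GetControl<AxisControl>("axis");
        }
    }

    // Register it, add one, and feed it state -- it then behaves
    // like any native device:
    //   InputSystem.RegisterControlLayout<MyDevice>();
    //   var device = InputSystem.AddDevice<MyDevice>();
    //   InputSystem.QueueStateEvent(device, new MyDeviceState { axis = 0.5f });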
     
  23. SAM-tak

    SAM-tak

    Joined:
    May 14, 2014
    Posts:
    7
    How do I get frame-rate-independent gamepad input with the new input system?

    I tried the new input system, but it still seems frame-rate dependent, and I can't access buffered raw inputs.

    Perhaps "still event based" means frame-rate dependent?
     
  24. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Collection of gamepad input is framerate independent, and the frequency of sampling (where necessary; e.g. XInput) can be changed. There's no direct access to buffered data in the current API, but InputSystem.onEvent shows you the data one by one as it is processed, and InputSystem.Update() allows you to flush out and process the buffer on demand.
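    A quick way to observe this is to count the events that arrive between frames (a small sketch, assuming the preview's onEvent signature and namespaces):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Experimental.Input;
    using UnityEngine.Experimental.Input.LowLevel;

    public class EventRateProbe : MonoBehaviour
    {
        int m_EventCount;

        void OnEnable() { InputSystem.onEvent += OnInputEvent; }
        void OnDisable() { InputSystem.onEvent -= OnInputEvent; }

        void OnInputEvent(InputEventPtr eventPtr) { m_EventCount++; }

        void Update()
        {
            Debug.Log(m_EventCount + " input events since last frame");
            m_EventCount = 0;
        }
    }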

    (BTW I'll be OOO for a couple days; sorry ahead of time if responses will be slow)
     
    SAM-tak likes this.
  25. SAM-tak

    SAM-tak

    Joined:
    May 14, 2014
    Posts:
    7
    Thank you for the answer.

    I tried it, and InputSystem.onEvent is actually called at a higher rate than the frame rate!

    Thanks
     
  26. SAM-tak

    SAM-tak

    Joined:
    May 14, 2014
    Posts:
    7
    How do I detect a release ("up" trigger) in onEvent?

    I can detect a press with
    ButtonControl.ReadValueFrom(eventPtr)
    , but I cannot detect a release, because I can't tell which InputControl a passed InputEventPtr was sent for.
     
  27. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    The events are full device snapshots and thus only reflect the current state of the device at the time of the event. As such, they won't tell you by themselves what changed on the device.

    One way to find out is to compare what's in the event to the current state of the device. I.e. compare the value of the trigger you got with ReadValueFrom to the value that ReadValue gives you.

    Another way is to use actions to do this automatically. Like listening to events manually using onEvent, actions are able to observe every state change on a device, even if there are multiple in a single frame.
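    A sketch of the comparison approach (using the same ReadValueFrom/ReadValue pair mentioned above; this assumes onEvent fires before the event's state has been applied to the device, so ReadValue() still returns the old value):

    Code (CSharp):
    InputSystem.onEvent += eventPtr =>
    {
        var gamepad = Gamepad.current; // only watching one pad here
        if (gamepad == null || eventPtr.deviceId != gamepad.id)
            return;

        var button = gamepad.buttonSouth;
        var newValue = button.ReadValueFrom(eventPtr); // state inside the event
        var oldValue = button.ReadValue();             // state not yet updated

        if (oldValue >= 0.5f && newValue < 0.5f)
            Debug.Log("Button released");
    };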
     
    SAM-tak likes this.
  28. SAM-tak

    SAM-tak

    Joined:
    May 14, 2014
    Posts:
    7
    Thank you for the answer -- problem solved!

    Yeah, I made a stupid mistake: I was filling in empty data during periods with no events... :)
     
    Last edited: Jun 18, 2018
  29. eugene4552

    eugene4552

    Joined:
    Jun 28, 2018
    Posts:
    4
    Hi,

    I tried the new input system and honestly I am impressed (the Input Debugger window and the OnDeviceChange event solve 90% of any input problems :)).

    But there is one thing I would like to ask: how can I get raw data from an unknown gamepad? I tried device.ReadValueAsObject(), but this method always returns a byte array filled with zeroes.

    I want to implement dynamic gamepad calibration. When the user connects an unknown gamepad, the game shows a calibration window, where the user presses the corresponding buttons on his gamepad. My first idea was to listen to all axes/buttons from the auto-generated gamepad layout (device.allControls) and create a new layout with the corresponding bit and byte offsets for each button/axis of the unknown gamepad. But the problem is that the auto-generated gamepad layout doesn't have all the axes.

    So the second approach is to read the byte array from the unknown gamepad and find the differences between the neutral gamepad state (when all buttons and sticks are released) and the current state (when the user presses the left trigger, for example). But, as I said, (byte[])device.ReadValueAsObject() is constantly filled with zeroes.

    Please, help.
     
  30. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    2,129
    Hi @Rene-Damm. Will the new input system happen as a package in 2018.2 or 2018.3?
     
    Last edited: Jul 7, 2018
  31. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    669
    @eugene4552 - I've gotten values from actions mapped to various parts of the gamepad (an action assigned to the left stick, one to the right stick, an axis for each button).

    What you could do for now is this:
    * Create an InputActionAsset that represents the gamepad layout you are interested in, using the generic gamepad bindings, then specific ones for specific devices (you can create your own layout bindings and register them with the system). There should be fields on the device class that tell you the vendor and device identifier strings, although I'm at work and cannot look it up. Map specific ones and provide the generic as a fallback.
    * When you begin your calibration, you can use an InputActionManager to record and buffer the action events and read from it once per frame to update the calibration window. Filtering by action, you can read the accumulated values in a while loop. This is what I've been doing, although I'm in the middle of refactoring my code to integrate better with ECS.
    This can be more performant than using the change events, although for your use case you may just be able to get away with the action change delegates and read the specific value changes (Vector2 and float for the most part).
    * With the action objects, there's at least an easy way to reliably read the previous state that works (it has to, for actions to be able to determine the need to fire).

    Word of warning: according to @Rene-Damm, the InputActionManager API is in a state of flux and may change heavily over the next few months.
     
    eugene4552 likes this.
  32. Caio_Lib

    Caio_Lib

    Joined:
    Mar 4, 2014
    Posts:
    83
    Ryiah likes this.
  33. petersvp

    petersvp

    Joined:
    Dec 20, 2013
    Posts:
    63
    Okay. Hello @Rene-Damm. I'm a fellow dev from Gameloft/Vivendi, but also an indie.

    About a year ago I prototyped a local multiplayer game that had support for multiple mice hooked up to the same PC, each with its own cursor. Since Unity is, of course, incapable of supporting this out of the box, I had to write it from scratch in C++ / Win32 / RawInput directly.

    Check the first few seconds of this video:


    While the entire thing was lost somewhere, I at least kept the C++ DLL code and the C# Manager script.

    You can check them out here:

    C++ DLL: https://pastebin.com/0Szi8ga6 - In this C++ file, find HWND_DESKTOP and replace it with HWND_MESSAGE (important so the lib is not focus-controlled), then put that file in a C++ solution and compile it to a DLL. Place that in the Assets of a Unity project.
    C# MouseInputManager that uses this lib: https://pastebin.com/4h3CqpYy
    I don't have a repro project anymore, but basically you need a prefab with a cursor image, assign that to the Cursor variable of the mouse input manager, and have all of this reside in a canvas (attached to the MouseInputManager itself).

    This code is only a proof of concept, even though I was about to use it in production -- I just cancelled the game after a while due to it not being that fun.

    My question is: Will the new input system have full RawInput abstraction on Windows, and will it support multiple mice and multiple keyboards? There are engines already supporting these. Best example would be Trine!

    Please expose THAT to us if not already done. I'll file an issue on GitHub for that as well.
    Code (CSharp):
    [InputControl(name = "pointerId", layout = "Digital", offset = InputStateBlock.kInvalidOffset)] // Will stay at 0.
    In your GitHub, the remarks section in some code says:
    "Adds a scroll wheel and a typical 3-button setup with a left, middle, and right button."
    This is, again, only the most common case. There are mice with extra buttons, and the Win32 APIs sometimes support even more than 6. Gaming mice are common, and the new input system should handle their 4th and 5th buttons with ease, as any AAA game does [and my indie game wants to do so, too].

    Is there a mode in which the system checks every single key on every single device and reports that, so we can assign controls freely?
     
    Last edited: Jul 11, 2018
    dadude123 likes this.
  34. asenetpro

    asenetpro

    Joined:
    Sep 12, 2013
    Posts:
    17
    I'd like to know: is there an input preview build for the latest 2018.2 release (not the 2018.2.b2 build)? If not, are you planning to build one so we can test the new input system using the latest 2018.2 final release, until the 2018.3 final release ships with the official Unity new input system (preview)? Or do we have to download the 2018.3 beta?
     
  35. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
  36. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Yikes. Lots to catch up on. Sorry for the long delay, guys.

    So, from what I gather, you want to take the layout that's automatically generated for unknown gamepad HIDs (disclaimer: this code still needs a lot of refinement), prompt the user to identify the various gamepad controls by actuating them, and then generate a better layout for the device. Correct?

    If so, in principle, this is a decent approach. Albeit also one that's a little tricky to get working reliably. You'll have to account for noise on the controls (eventually the input system will have a feature for that) and for some axes, you'll have to infer min, max, and midpoints as the HID spec doesn't impose any rules there (a trigger and thumbstick axis can look exactly the same in HID yet one is centered at 0 and the other is centered at 0.5; for your use case it may be enough to just base that assumption on the control you are asking the user to actuate).
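    To make the midpoint problem concrete, here's the kind of normalization a calibration step would have to do once it has inferred min/mid/max for an axis. This is plain math, no input-system API assumed:
    Code (CSharp):
    using UnityEngine;

    public static class AxisCalibration
    {
        // Centered axis (thumbstick): map [min..mid..max] onto [-1..0..1].
        public static float NormalizeCentered(float raw, float min, float mid, float max)
        {
            if (raw >= mid)
                return Mathf.InverseLerp(mid, max, raw);   // 0..1 above center
            return -Mathf.InverseLerp(mid, min, raw);      // -1..0 below center
        }

        // One-sided axis (trigger): map [min..max] onto [0..1].
        public static float NormalizeOneSided(float raw, float min, float max)
        {
            return Mathf.InverseLerp(min, max, raw);
        }
    }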

    Code (CSharp):
    device.ReadValueAsObject()
    should give you a copy of the current raw memory of the device (and thus, if the device is receiving proper data, shouldn't be all zeroes). However, I don't think that's a useful method in your case. For HIDs, it'll give you raw HID input reports and there may be all kinds of noise in there.

    That's likely a problem with the HID fallback code not recognizing the usages on those controls yet.

    In the debugger window for the device, there's a button "HID Descriptor" if the device is a HID. It shows you a tree view of the device's HID descriptor. Could you check which controls are getting ignored and which HID usage pages and HID usages they have?

    Overall, I think you're looking in the right direction here. If the HID layout generator catches all relevant controls, then going through allControls and finding the ones the user actuated should be a workable first step.
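    A bare-bones version of that step might look like the following: snapshot every control's value, then diff against the snapshot after asking the user to actuate something. This only relies on allControls and ReadValueAsObject() as used in the script above; a real implementation would need a noise threshold rather than exact equality:
    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine.Experimental.Input;

    public static class ControlDetection
    {
        // Snapshot every control's value so we can diff against it later.
        public static Dictionary<InputControl, object> Snapshot(InputDevice device)
        {
            var baseline = new Dictionary<InputControl, object>();
            foreach (var control in device.allControls)
                baseline[control] = control.ReadValueAsObject();
            return baseline;
        }

        // Return the controls whose value changed since the snapshot was taken.
        public static List<InputControl> FindActuated(InputDevice device,
            Dictionary<InputControl, object> baseline)
        {
            var actuated = new List<InputControl>();
            foreach (var control in device.allControls)
            {
                var current = control.ReadValueAsObject();
                if (!Equals(current, baseline[control]))
                    actuated.Add(control);
            }
            return actuated;
        }
    }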

    There is a package ATM on the staging server and there will be one on the production server soon. We haven't yet gotten into a steady release cadence and don't yet have good verification for the packages in place, so ATM the input system packages are a bit hit and miss. As we progress, things should get more usable and probably appear more regularly. However, the input system package will remain in preview for the duration of the 2018.2 cycle and probably for most or even all of the 2018.3 cycle.

    So, where we are ATM is sort of half-way. The C# system doesn't care about how many mice you have. Multiple gamepads, multiple mice, multiple whatever, it's all the same to the system.

    However, the underlying C++ Windows backend is only halfway there. ATM the code only reports one mouse, one pen, and one touch and aggregates the input from each type respectively. Supporting multiple devices of each such type is on the TODO list and something I'd really like to have. Unfortunately, it's unclear ATM when this will get picked up.

    Yup, indeed. So, the system is built to allow "abstracting" devices by going for a quasi-lowest common denominator. The "Gamepad", "Mouse", "Pen", etc. layouts are all built that way. They basically say "this is what a common device of type X looks like".

    However, on top of that, you can build whatever you want. ATM we don't have anyone building more specific mice, but there's plenty of precedent with gamepads. For example, there's a version extending the basic "Gamepad" layout to describe a DualShock layout (which has additional controls and outputs), and another version extending that even further to describe a DualShock layout specific to the PS4 console.
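    For anyone wanting to experiment with this today, the JSON involved in extending a layout might look roughly like this. A sketch only: it assumes the "extend" mechanism works for Mouse the way it does for the gamepad layouts, the registration method has gone by different names across preview versions, and the extra-button names are made up:
    Code (CSharp):
    using UnityEngine.Experimental.Input;

    public static class ExtendedMouseLayout
    {
        // A layout deriving from the built-in "Mouse" that adds two side buttons.
        private const string json = @"
        {
            ""name"" : ""FiveButtonMouse"",
            ""extend"" : ""Mouse"",
            ""controls"" : [
                { ""name"" : ""button4"", ""layout"" : ""Button"" },
                { ""name"" : ""button5"", ""layout"" : ""Button"" }
            ]
        }";

        public static void Register()
        {
            InputSystem.RegisterControlLayout(json);
        }
    }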

    For the mouse in particular, I imagine that we will actually extend the base "Mouse" layout itself and add a couple of buttons, even though they are not present on all mice. The current state layout already has plenty of space for more buttons.

    Whatever comes through from a backend (i.e. through IInputRuntime or from whoever else does QueueEvent) you can expose freely. What's turned into controls in layouts does not restrict what's transmitted and stored in memory.
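    As a tiny illustration of the QueueEvent side, anyone (not just a backend) can push state into the system; this pattern with GamepadState appears throughout the system's tests. A sketch, assuming the low-level namespace as it currently stands:
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Experimental.Input;
    using UnityEngine.Experimental.Input.LowLevel;

    public static class FakeInputFeeder
    {
        // Push a synthetic state update into the system, just like a backend would.
        public static void FeedLeftStick(Vector2 value)
        {
            InputSystem.QueueStateEvent(Gamepad.current,
                new GamepadState { leftStick = value });
            InputSystem.Update(); // process the queued event immediately
        }
    }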

    For the Windows backend in particular, we need some more refinement of pointer support and the stuff we report.

    The packages should be compatible with any 2018.2 version (final or not).

    There may be a point where we will switch packages to require a 2018.3 beta. Depends on whether we need native API changes (those have been exceedingly rare but they do happen every now and then).

    Note that at this point, we're not yet sync'ing input package "releases" with Unity releases (as in on the day that version X of Unity is shipped, we have an "official" package targeting that version). All we do is require a certain Unity version to work. As stated, ATM that is any 2018.2 version.
     
  37. asenetpro

    asenetpro

    Joined:
    Sep 12, 2013
    Posts:
    17
    :cool: Thank you very much, that said it all! :D
     
  38. eugene4552

    eugene4552

    Joined:
    Jun 28, 2018
    Posts:
    4
    Quite right :)

    Maybe I am doing something wrong, but I wrote a small script which prints the device's raw data and the values of all its controls:
    Code (CSharp):
    using System.Text;
    #if UNITY_EDITOR
    using UnityEditor;
    #endif
    using UnityEngine;
    using UnityEngine.Experimental.Input;
    using UnityEngine.UI;

    public class GamepadProcessor : MonoBehaviour
    {
        private Text gamepadInfo;

    #if UNITY_EDITOR
        private void Awake()
        {
            // Make sure the handler doesn't outlive play mode in the editor.
            EditorApplication.playModeStateChanged += (change) => {
                if (change == PlayModeStateChange.ExitingPlayMode)
                    InputSystem.onDeviceChange -= OnDeviceChange;
            };
        }
    #endif

        private void OnEnable()
        {
            InputSystem.onDeviceChange += OnDeviceChange;
        }

        private void OnDisable()
        {
            InputSystem.onDeviceChange -= OnDeviceChange;
        }

        private void Start()
        {
            gamepadInfo = GetComponent<Text>();
        }

        private void OnDeviceChange(InputDevice device, InputDeviceChange change)
        {
            // Parentheses matter here: we want state changes on HIDs *or* on the
            // current gamepad (without them, && binds before ||).
            if (change == InputDeviceChange.StateChanged &&
                (device.layout.StartsWith("HID::") || Gamepad.current == device))
            {
                var deviceData = (byte[])device.ReadValueAsObject();
                var info = new StringBuilder("DEVICE DATA: ");
                foreach (var b in deviceData)
                    info.Append(b).Append('.');
                info.Append("\n\n");

                foreach (var control in device.allControls)
                    info.AppendLine(control.name + " (" + control.valueType + ") = "
                        + control.ReadValueAsObject());

                gamepadInfo.text = info.ToString();
            }

            if (change == InputDeviceChange.Added)
            {
                // TryLoadLayout can return null for unknown layouts.
                var layout = InputSystem.TryLoadLayout(device.layout);
                if (layout != null && "HID".Equals(layout.extendsLayout))
                    print("Unknown gamepad connected. Would you like to calibrate it now?");
            }
        }
    }
    When I press buttons or move sticks, the device data stays filled with zeroes:
    Output.png

    As for controls being ignored, in my case it's the HatSwitch.
    HatSwitch.png

    Is there any way to get HID Descriptor data via script?
     
  39. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
  40. Holonet

    Holonet

    Joined:
    Aug 14, 2017
    Posts:
    84

    I'm curious about the roadmap for this situation. Personally, I'm currently making a local multiplayer game and have run into just that issue with Windows and its device/USB port juggling, and the resulting headache of assigning devices to players.

    Is there going to be a player interface object or something that has the device, etc. as properties, or should we just go ahead and make that ourselves?

    I'm a little green compared to y'all as a programmer, I think, but I was trying to craft an approach here the way things currently are... Something like:

    * Create an object for each player (up to the max #) with an "enabled" property set by how many human players there are.
    * Assign an InputActionSet to each based on default controls.
    * Subscribe to an event for unplugging, then null out the device property on the player object.
    * Subscribe to an event for plugging, iterate over the connected devices when it fires, re-assign the device by selecting the one not assigned to any other player object, and pray no joker messes with more than one plug at a time :p

    So a couple questions...
    Do methods like Gamepad.all return connected joysticks only, or any devices that Windows (for one) has installed? Also, is there an event we can listen to for when a device is unplugged from or plugged into a USB port (I assume there must be, but I'm having trouble finding it in the documentation)? Lastly, generally speaking, is there any obviously better approach that I'm not thinking of?

    Just on that note actually, is there any updated documentation? I grabbed the stuff from GitHub, but unless I'm missing something (possible!), the classes seem to be describing properties and, very briefly, what they are, but I'll be darned if I can find methods...

    Generally, just want to de-murk a picture of handling this, code necessities notwithstanding :).
     
    Last edited: Jul 20, 2018
  41. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    ATM it's "roll your own". For now, the focus is on surfacing the data and providing APIs that allow solving the problem in various ways. Some stuff is there already, some still needs to be improved (e.g. on certain platforms we get very clear port IDs for specific gamepads and surface that in their API, but I'd like to have that available more universally).

    Eventually, based on those APIs, there should be something available out of the box to manage players and device associations for you, should you choose to go with the prefabricated solution.

    It returns all connected gamepads. If a gamepad is disconnected, it disappears from the system. Although not without telling you first :)

    InputSystem.onDeviceChange

    And yeah, our documentation does need a good deal of work to answer questions like this easily and obviously.

    Right now, it's pretty much up to you. If something gets added, use the information on the device to try to figure out whether it's a device previously used by a specific player (and thus should go to that player again); if not, figure out which player to give it to.
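    A bare-bones version of that loop, using InputSystem.onDeviceChange as the notification source. The Player type and the match-by-product heuristic are invented for the example; matching on the full device description would be more robust:
    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine.Experimental.Input;

    public class PlayerDeviceAssigner
    {
        // Hypothetical per-player record.
        private class Player
        {
            public InputDevice device;       // null while unplugged
            public string lastProductName;   // remembered for re-assignment
        }

        private readonly List<Player> players =
            new List<Player> { new Player(), new Player() };

        public void Initialize()
        {
            InputSystem.onDeviceChange += OnDeviceChange;
        }

        private void OnDeviceChange(InputDevice device, InputDeviceChange change)
        {
            if (change == InputDeviceChange.Removed)
            {
                // Null out the slot but remember what was plugged into it.
                foreach (var player in players)
                {
                    if (player.device == device)
                    {
                        player.lastProductName = device.description.product;
                        player.device = null;
                    }
                }
            }
            else if (change == InputDeviceChange.Added)
            {
                // Prefer the player who previously used this product, else the first free slot.
                var target = players.Find(p => p.device == null
                        && p.lastProductName == device.description.product)
                    ?? players.Find(p => p.device == null);
                if (target != null)
                    target.device = device;
            }
        }
    }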

    What's on the Wiki, on YouTube, and in the Documentation folder is pretty much what's there ATM. All of it is in a mixed state of up-to-dateness. With the code still changing a lot, documentation hasn't yet received a major focus.

    The API docs are generated from source and checked into the repo under Documentation/ but are updated infrequently. Also, our current docs use Doxygen, which has lots of issues with C# (for example, it doesn't handle generic types properly, which results in InputControl<TValue> losing all its docs and InputControl losing everything but the top-level class description). We're in the process of creating tooling for documenting packages and hope to be able to switch to that solution sometime soon.
     
  42. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    Hey guys, we've started overhauling the action editor and have landed an initial version. It's a first step and an ongoing process but IMO the new version already looks much more attractive than the programmer art UI I did :)



    There are additional editor features planned (e.g. ATM there isn't really a way to separate different control schemes), but the idea is to take it step by step.

    Feel free to let us know what you think :)

    ////EDIT: BTW, the new UI also finally brings the ability to edit binding composites (WASD controls and such) in the editor.
     
    Last edited: Jul 21, 2018
    OhLawdie, dearamy, 5argon and 5 others like this.
  43. Holonet

    Holonet

    Joined:
    Aug 14, 2017
    Posts:
    84
    I tried adding an action map, then an action & binding. I saved it... then went back to add another action (I think that was it), and I got this:

    upload_2018-7-20_20-49-54.png

    So I closed the UI box there and re-opened it, and the stuff was there... but then I finished making the action map, saved, and closed, and now I can see the action map in the project view, as well as the actions, but if I try to click "Edit asset" and open it up again, it's blank. Using the latest Unity, I believe - 2018.2.0f2, Windows 10 x64.

    EDIT: So I tried deleting the Input Manager asset entirely and recreating it. Something I didn't notice before: when I click Save, it seems to take a while before the asset appears in the Project View. This time, I tried recreating some of the actions, clicked Save, and waited. After like 30 seconds, I got this:


    upload_2018-7-20_21-6-1.png

    ...and it keeps counting up, even if I close the tree view. It stopped after I deleted the whole thing again.
     
    Last edited: Jul 21, 2018
    GilbertoBitt likes this.
  44. orb

    orb

    Joined:
    Nov 24, 2010
    Posts:
    3,037
    Starting to look like how I'd do it. I need free time to play with it, because it looks less likely to make me punch my monitor now. That's a significant improvement, as my monitor is expensive.
     
  45. kilik128

    kilik128

    Joined:
    Jul 15, 2013
    Posts:
    909
    Is it still in preview, or can we use it?
     
  46. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    789
    Hey there,
    we've got a game using Unity 5.5 and we might want to update to 2018.* eventually.

    The project is only for desktop platforms and supports mouse+keyboard or gamepad.

    However, we already have a sort of input manager that provides "abstract input" (like "CharacterMovementVector" and "CameraRotation") which is disconnected from the actual input, because you can remap the controls to whatever you want in-game.

    It also has properties we can set to disable whole input groups. For example, "MenuControls" (containing things like "MenuLeft/Right", "MenuClose", "ToggleSelectedOption", ...) gets turned on and "CharacterControls" gets turned off.


    Is there anything in it that could help us with input? Or should we just keep the input system we have as it is?
    Maybe it would give us a small performance boost, since we'd no longer be going through the Unity input manager's translation plus our own translation layer on top of that for remapping/settings?

    One thing we'd like to have is some way of mapping axes to buttons, which we haven't implemented yet.
    For example, our "CameraRotation" can currently only be assigned to predefined things like "RightMouseButtonDown + MouseXDelta"; you can't do something like having Q/E increase or decrease the camera rotation (and even if you could, there'd be no way to control how fast the two buttons modify the CameraRotation value).

    Is that something that will be easier to do in code with the new input system?
    I remember a sort of JSON-like or path-like setup being talked about; can that be used at runtime to remap controls?

    The optimal way I can imagine would be an API that lets me express something like: "Give me a float that Q and E increase or decrease by 22.5 per second; alternatively LeftBumper/RightBumper".
     
  47. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    It'll still remain in preview for a while. Our expectation is to go out of preview around 2019.1.

    One avenue that could be worth exploring is ignoring all the action stuff in the new system and using its device layer instead of the APIs in UnityEngine.Input, hooking that up to your remapping layer. However, if you already have things working well enough with the old input manager, then personally I'd just stick with that.
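    For a sense of what using the device layer directly looks like, polling devices is only a couple of lines; a sketch, assuming the current Keyboard.current / Gamepad.current accessors:
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Experimental.Input;

    public class DeviceLayerProbe : MonoBehaviour
    {
        private void Update()
        {
            // Poll devices directly, bypassing actions entirely; these raw
            // values could be fed into an existing remapping layer.
            if (Keyboard.current != null && Keyboard.current.qKey.isPressed)
                Debug.Log("Q held");

            if (Gamepad.current != null)
                Debug.Log($"Left stick: {Gamepad.current.leftStick.ReadValue()}");
        }
    }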

    The action system does have a concept of "composite bindings" (another candidate for a rename; "composites" just seems confusing) which allows things like the axis arrangement you're describing. It needs refinement (as does the whole action system) but the basics are working.

    Yup. JSON or programmatically, both work. Here's a test in the system that sets up an axis composite programmatically.

    Still needs some fleshing out for additional options and tweakability (ATM you'd probably have to register a custom composite or custom value processor to do exactly what you describe) but in essence, that's how it works.
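    For the Q/E camera-rotation case specifically, the programmatic setup would look roughly like this. A sketch based on the axis composite used in the test linked above; exact composite and part names may differ between preview versions, and the 22.5/second scaling is applied in game code since, as noted, per-composite rate options aren't there yet:
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Experimental.Input;

    public class CameraRotationInput : MonoBehaviour
    {
        private InputAction rotateAction;
        private float axis; // last value reported by the action

        private void OnEnable()
        {
            rotateAction = new InputAction("CameraRotation");

            // Q/E form a 1D axis: Q pulls toward -1, E pushes toward +1.
            rotateAction.AddCompositeBinding("Axis")
                .With("Negative", "<Keyboard>/q")
                .With("Positive", "<Keyboard>/e");

            // Alternative bindings on the gamepad bumpers.
            rotateAction.AddCompositeBinding("Axis")
                .With("Negative", "<Gamepad>/leftShoulder")
                .With("Positive", "<Gamepad>/rightShoulder");

            rotateAction.performed += ctx => axis = ctx.ReadValue<float>();
            rotateAction.cancelled += ctx => axis = 0f;
            rotateAction.Enable();
        }

        private void Update()
        {
            // Scale by the desired rate (22.5 degrees per second) in game code.
            transform.Rotate(0f, axis * 22.5f * Time.deltaTime, 0f);
        }

        private void OnDisable()
        {
            rotateAction.Disable();
        }
    }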
     
    dadude123 likes this.
  48. Rene-Damm

    Rene-Damm

    Joined:
    Sep 15, 2012
    Posts:
    1,779
    @Holonet Sorry, forgot to reply. Our UI guy will be back from vacation (summer time...) shortly and will take a look. There are several problems around how the editor handles and saves the asset that need to be addressed.
     
  49. TitusMachine_

    TitusMachine_

    Joined:
    Apr 24, 2017
    Posts:
    13
    Will the event system for GUI also support the new input system in Unity 2019.1?
     
  50. Jonathan-Westfall-8Bits

    Jonathan-Westfall-8Bits

    Joined:
    Sep 17, 2013
    Posts:
    265
    Sorry to ask the infamous question, but I was wondering when we might expect the next version of the UI for the input system. The current version has some lovely UI bugs that fuse the option menus together.

    I saw the comment about your UI guy hitting vacation time for the summer, but that comment is from a month ago. I'm really looking forward to using the UI for input and to seeing more of the API documented on the GitHub wiki.
     