Input System Update

Discussion in 'New Input System' started by Rene-Damm, Dec 12, 2017.

  1. eobet

    eobet

    Joined:
    May 2, 2014
    Posts:
    115
    Ah, of course. It did. :)

    I think the confusing thing is that input doesn't just go to 0 when the game loses focus; you still get some input...
     
  2. Baste

    Baste

    Joined:
    Jan 24, 2013
    Posts:
    2,967
    How does this work, exactly? Does the system generate Events (as in Event.current), or are you supposed to subscribe to inputs? Does it support keymaps? Can you specify what keymap to use in the editor?

    Ideally, the inputs to Unity (like ctrl+number to open specific numbers or whatnot) would use this system, and be configurable.
     
  3. Rene-Damm

    Rene-Damm

    Unity Technologies

    Joined:
    Sep 15, 2012
    Posts:
    253
    Yeah, that sounds buggy. My guess is it's fixed by a change on the native side that hit recently.

    The system works in edit mode the same way it does for play mode and provides the same APIs (except that InputActions are not supported in edit mode ATM). E.g. you can do Pen.current.pressure.ReadValue() in your EditorWindow code.
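
    For illustration, a minimal sketch of what that could look like (the window class, menu path, and labels are made up for this example, and the exact namespace may differ between preview versions):

    Code (CSharp):
    using UnityEditor;
    using UnityEngine.Experimental.Input; // preview-package namespace at the time of writing; may change

    // Hypothetical example window; only Pen.current.pressure.ReadValue() comes from the post above.
    public class PenPressureWindow : EditorWindow
    {
        [MenuItem("Window/Pen Pressure (Example)")]
        static void Open() { GetWindow<PenPressureWindow>(); }

        void OnGUI()
        {
            var pen = Pen.current; // null when no pen/tablet is connected
            if (pen != null)
                EditorGUILayout.LabelField("Pressure", pen.pressure.ReadValue().ToString());
            else
                EditorGUILayout.LabelField("No pen detected.");
        }
    }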

    Separate state is kept for edit and play mode and one or the other is active depending on whether game code or editor code is running. Some under-the-hood trickery handles the problem of different coordinate spaces and orientations in EditorWindows vs game code so that you don't need to manually convert pointer positions, for example, between different coordinate spaces.

    There is some stuff in the works here. It's not tied to the new input system, though. Overall, input in the editor will likely remain tied directly to platform UI input.
     
  4. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    447
    I also want to point out that Mecanim's interface == bloat > 9000, and its API access in general is terribly bloated (its scripting is heavily GUI-influenced and non-modular) and tends to be quite verbose when you simply want to "check values" in various places. All the essential functions one might need through scripting don't exist without one having to program them through this (usually awkward) API interface, and the functionality that does exist requires 4-5 lines of code (minimum) to get at.

    This is made even worse with the new Playables API -- I am fighting with this terrible API right now, and just dread the whole idea of having to fight with future API systems "designed" by Unity in a similar way... D:
     
    Last edited: May 17, 2018
  5. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    447
    @Rene-Damm

    This looks like a workable system so far. -- The design looks like it is finally shaping up quite well! -- Unfortunately, like @hippocoder, I have just a few concerns and reservations about the process it took (and seems to always take) to reach this point...

    First -- and no offense intended toward you guys (I know you're trying hard to appease everyone!) -- I think a lot of the danger in the approach your "design process" has been taking is the simple lack of production-based / power-user-vetted input: input from heavy API / tool users who are at least moderately seasoned with your "old" tools (and those tools' scripting APIs) and who use them almost daily in a production environment.

    To be honest, I really wonder why Unity won't hire devs like me (who regularly use existing APIs to develop useful and interesting tools for Unity) to help develop the initial designs of their new tools and API systems. I mean, if I already develop useful and innovative tools using the "old" system, why shouldn't I (and others like me) be the ones you ask for guidance in the tool design process, instead of the larger community as a whole?

    Maybe I'm wrong, but it seems like Unity tends to be completely disconnected from the users who actually *use* their systems heavily. I too am worried about the approach behind the scenes, but not because of the "cool street-fighter controls" being possible (thinking bigger isn't always a bad thing!), but because I notice you guys relying heavily on forum posts for "usability" feedback. Ask any Asset Store developer here if you don't believe me, but "forum feedback" is only useful when there's a problem with an existing system -- *not* at design time for a brand-new one! Most users only know vaguely what they want -- but there are power-users who develop tools based on your APIs -- and that development usually occurs because they want to make their own tools where existing ones are either non-existent or insufficient!

    If you guys want input, then I've got a proposal: why not get someone on the team who is heavily experienced with the old systems, who uses them every day to some extent, and let THEM be in charge of that initial list of "improvement" feature sets in the future? This, I feel, is the best way to get useful input and stop wasting engineering man-hours on a bloated feature-set users simply cannot work with! -- And don't hire just ONE member of the community, but hire THREE or even four of these types of people for a short time to get the design off the ground! -- One guy to do the user-facing design, another guy to deal with the scripting API approach, and a third / fourth developer to be the intermediary dev(s) who keep everyone's egos in check and have a say in both aspects. Just seems like the "right thing to do" to me.

    Most asset store devs make tools for extra income. To get them to offer valuable input, pay them a useful monetary bonus, let them put the experience on their resume/cv, and get your awesome new system design plans that will be something everybody who uses it will absolutely love -- all *without* wasting time going back to the drawing-board when people tell you all your hard engineering work was for nought due to the simple (but fatal!) flaws in your initial design.

    What say you guys (or your bosses?) on this?

    Saving time & money usually makes the "man-in-charge" happy...
     
    Last edited: May 17, 2018
    Player7 and interpol_kun like this.
  6. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    21,804
    I trust @Rene-Damm and know he will deliver excellent results. This reboot of input has taken time but I think they'll do it right.
     
    awesomedata likes this.
  7. Rene-Damm

    Rene-Damm

    Unity Technologies

    Joined:
    Sep 15, 2012
    Posts:
    253
    Well, first and foremost, we say that it is a concern we wholeheartedly share :) And second and... backmost (??), that the picture visible here on the forum thread(s) is a partial one.

    Deeper embedding with game teams and faster iteration through tighter feedback loops is an effort that is being pushed inside of Unity R&D on several fronts. From enabling it technically (package manager and overall Unity development structure being part of that) to enabling it 'culturally' (with our CTO being its fiercest proponent).

    So, in regards to Unity as a whole, it's a weakness we see in our own past and ongoing development efforts and a weakness we are working to address.

    Which... goes for the input system, too.

    There have been collaborations (arguably not enough) and various feedback loops with the previous new input system (which in various ways are feeding into what's happening now as well) and feedback in the end was why it didn't get released but rather went into a redesign.

    For the current system, we're in the process of talking to gamedevs and finding collaborators (and if you think you'd be a good fit, we'd be more than happy to have a conversation :)) willing to try things out and iterate with us to find what doesn't work and what does.

    Now, you could argue that this should have happened earlier. In my experience, it's far easier to get productive feedback, willing collaborators, and iterate if you already have *something* working based on initial research but are willing to redo and reshape whatever needs it. And with this being a reboot of an initial new input system, we already had a great deal of feedback to incorporate.

    Where we are now is the product of a learning experience, a learning experience that continues and that leaves me without hesitation to rip out and reshape whatever we still find inadequate. If tomorrow we find that the conceptual approach of the action stuff isn't holding up in practice, I have no qualms ripping the stuff out and rewriting it. We're out in the open at this point not to show an end result. We're out there to find collaborators and get us to an end result.

    The other thing we want to get right, too, is to not develop for one use case. Especially with input it's easy to end up in a niche. Concurrent cross-platform *and* cross-input game dev is kinda rare so we're trying to sample from multiple niches.

    I think it's important to not read too much into the forum thread here. Valuable user feedback and ideas can surface on the forum and it's an easy way for us to keep a portion of interested users apprised of what's happening, but it's by no means a 'design tool' of sorts for us. We do not equate a forum thread with finding out what works and what doesn't.

    Most of the communication and collaboration that's happening isn't visible on the forums. Stuff like us hiring the principal input guy behind some of the FarCry titles (hey Tom), us talking to specific game teams, us talking to other companies doing input stuff, us having in-house teams bang on and complain about the stuff, etc.

    Like, make no mistake, there's still tons of room for us (and me) to improve, but we're working on it :)

    That's what did happen with input, I'd say. Except it wasn't a single guy but rather a whole bunch of guys with a whole bunch of experience doing a whole bunch of different things.

    Ok, this thing here got kinda longish... hope it provided at least some useful insight. Let me know if I've missed your point somewhere.
     
    Last edited: May 18, 2018
  8. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    447
    @Rene-Damm

    I apologize in advance for the length of this, but please understand that everything I say here is necessary for you guys to hear. This may not all apply to you personally (the majority of it does apply to one of your views, though, which is clearly shared by many of your colleagues, and is also why I felt the need to write this all out instead of just PMing you), but I would appreciate it if someone further up the chain (who has access to the other teams developing big features for Unity) would put their eyes on this post too.




    That makes sense, and it is to be expected. I am glad there is more going on "behind the scenes" than meets the eye. However, some of the effects from "behind the scenes" are visible even to us, and not knowing the "why" behind those effects is very difficult to stomach when it comes down to business. That missing "why" makes some of us very concerned about trusting our time to Unity. (I'll get into some of the "effects" seen later in this post.)


    I am incredibly glad to see this.

    You're probably the first Unity Team member I've seen (outside of the "tech" videos offering up another new "feature" or "system" in Unity that required /having/ to admit the older system it replaced was poor enough to require a rewrite) to fully put the hubris aside and admit that there is a weakness within the walls of Unity. I've seen Unity-Team devs come and go, but I really hope you stick around. There is one major point I disagree with you on (I'll get to that in a moment, since it takes up the length of this post), but I want you to know that I highly appreciate that you took the time to address the issues I brought up in my previous post without "stone-walling" me (which seems like the go-to "response" from UT these days). Doing what you've done here is the only way progress can ever truly be made so that everyone can start seeing Unity as "cool" again. :)


    So this leads me to my major sticking-point with Unity's development these days.

    In regards to the input system being in a "beta 2.0" now, I completely respect and appreciate you and your team for incorporating our feedback into your 2.0 design -- it really does show in the more elegant design! Because of that, I've got nothing negative to say about the input system's progress (despite it being very early!) since its (current) design really does sound like you guys are back on the right track!

    This leads me to my next point -- "What is the 'right' track?" -- IMO, it's not the issue of whether or not the feature is great in the end that bothers people about poor systems in Unity -- it is generally the overall "uncertainty" about the minimum standard of "what exactly IS going to be delivered in the end" that causes me (and MANY other devs) to fear the worst about "new features". This is because the "design" of said features rarely seems to come with a solid list of bullet-points for the value to be offered (or a list of possible sticking-points) to us end-users. We tend to be forced to rely on faith that the resulting "great feature-set" will cover our many (also-unpredictable) use-cases for said "new" features. However, because there are NO expectations set -- we all have *great* expectations.

    This is a classic case of the problems arising from not setting clear and realistic expectations to those you offer a service to. Now they can say "You didn't deliver what you promised!" simply because you were never clear (in writing, of course!) about what exactly it *was* you had promised. They now have a blank-check they can cash at your expense -- and get away with it! -- all because it was *you* who gave it to them and told them "write what you think is fair." lol

    Of course most people aren't this terrible -- unless you make them angry.

    Here is a good example of how that might occur:

    I decide to make a game "with Timeline integration" (to use a real-world example). I want to know that anything I want (or try) to do with Timeline is going to be supported. If there are potential ways to use Timeline that aren't supported upon its official release, then I want to know before I design my entire game concept around the use of the "Timeline" tech that any important features (such as the "Events" feature shown in various videos) will not be included upon release, and that Unity still needs a serious overhaul under the hood for it to be properly supported. If there is some other internal workflow "issue" that puts the "feature" at risk (such as the internal pipeline being unable to support arbitrary code-execution on multiple platforms at the time of release), then I need to know that it is risky to expect that feature in a timely manner and how long it could take to receive it (and then be pleasantly-surprised if it arrives sooner).

    However, in the case of the "Timeline Events" feature that was held back from us for so long -- it was actually NOT some deeply-internal, highly-technical, heavily-integrated engineering problem that prevented its release. -- No -- it was a "design" issue -- one that is now holding it back until 2018.3.

    This is the kind of thing that happens on a regular basis. Since Mecanim, since the "new" 4.6 UI system, since pretty much everything in the recent years of our beloved Unity to some extent.

    That being said:

    While I have no issue with "design" being heavily malleable -- the problem is that there is no "technical feasibility study" put out to the general public before you guys "have something to show" -- and therefore many man-hours are wasted because the "concept" itself was flawed in some way. And to clarify -- the "concept" is not simply "a new input system" but is instead a list of bullet points (considered as a whole) that defines what the "new input system" actually IS. This would look something like the following, which allows for heavy malleability while also being firm as to what it offers (without being specific as to HOW it offers it -- allowing a lot of technical creativity under the hood by you guys):

    • Works for both Editor and realtime in-game input detection
    • Should be easily modifiable for new types of input (i.e. VR motion detection as well as gamepads)
    • [RISKS] Has a system for "creating" a custom device type and mapping input for it
      [RISKS]
      1. internal programming to support this may not exist
      2. could require some serious development time to deliver this feature
      3. might require multi-team collaboration
    • Supports all major platforms' currently-supported input devices
    • Adds support for a list of device descriptions that can be patched into a runtime game executable to add new types of input support (i.e. adding head-tracking to an FPS game that is currently awaiting a specific device to support it to start being manufactured)
    • Offers the ability to check detailed input chains at once (such as Street Fighter's "press down, down-diag forward, and hold forward for 10ms, then press button 1 quickly 2 times")
    • [RISKS] Input chains can have "replaceable shortcut labels" that reference buttons/directional-inputs/etc. substituted for other buttons/etc using the shortcut labels (labels are either strings that represent hashes or direct hashes representing a reference to the input slots and controls)
      [RISKS]
      1. possible "development-hell" feature
      2. long-term or lengthy "design" processes might risk it being cut or released prematurely.
    • (etc. etc. continued here, until it stops being technically feasible in the timeframe allotted to you guys)
    Then, once an overall list of "value" and "risks" like the above is assessed and agreed upon by your team, release THAT list of bullet points to the community (via forum post, etc.) rather than a buggy half-finished "beta" with "features" that were pointless to begin with. What?? The community didn't want you to waste effort on making it have "replaceable shortcut labels" instead of a larger amount of device support out of the box?? That could have been fixed before you guys wasted the effort in the initial phases and then delivered sub-par device support that the devs themselves would have to supplement on their own (and it could have been "fixed" had we just known about it in advance so that we could speak up about it)!

    In the case of Mecanim (which is an even better example of why there should have been a detailed bullet-point list like this first), what about the flexibility of the Legacy system? -- Some, even today, would argue that it is better in almost every way. And had a list of bullet-points like this been released prior to Mecanim's "beta 1.0", people would have asked "Where is the bullet-point that says 'flexible scripting API to allow skipping the internal state machine system so the user can implement a custom one', or 'parameters are added via the Mecanim visual interface and cannot be added via scripting', etc. etc.?" A mockup showing the visual workflow would have been even better. -- The lack of a list of bullet-points and mockups showing workflow like that just means that an expensive, slow (and, sorry for being harsh, but to many, a somewhat "useless") system was developed instead of a more robust, lightweight, and flexible system that could even piggy-back a bit off of the Legacy code. The way it was written, it was completely detached and did all sorts of bells and whistles -- but very little of what regular users like myself wanted to do with it (i.e. a simple way of playing animations, with more advanced features such as blending/state-machines that could be added/removed when I needed them).

    I know this is not your fault, but it does follow your philosophy of "make something first, let users play with it, then refine it if they hate it" -- and I hope the above example with Mecanim shows how that is not always a great idea -- especially for larger systems (like animation) with many potential use-cases (such as simply posing a character, or doing IK/FK, or retargeting automatically on non-humanoids, etc. etc. etc.).



    The philosophy of "just make /something/ and show it to people" in an attempt to "wow" us while also gathering our feedback on what didn't "wow" us so much has backfired on so many occasions for Unity these days.

    The reason why is that people want something that will fit their needs and not their fantasies -- and imagination/fantasy is always prettier than reality -- at first. The reality is: Mecanim is a system that is inflexible and obtuse/bloated/slow, and Timeline was incomplete and released too early, with its main feature for most (Timeline Events) completely MIA -- and it even took away existing features (our Animation Events) with its inception -- all without warning! Had the developers in charge published a bullet-point list with the "RISKS" (written in a candid and considerate way to the game-developers who might use it eventually, with a note about Animation Events possibly being removed) that let us know exactly what they wanted to deliver, and where their stumbling blocks are that might eventually make US stumble too, we would be so much more appreciative of their efforts on this feature.

    So perhaps you can understand why I am against the "just make something and show it to people" idea and why I feel it is worse than simply a "shot in the dark" approach. After all, it may not matter to YOU that you must rip it all out and start over again, but it matters to US how long that "ripping and re-writing" takes, because WE have to wait on you (and if Unreal has what we need already, for example, maybe we might choose to go learn about that in the meantime, instead of dealing with all the uncertainty of a "new" feature we're not sure will fill our needs). Sorry to sound so harsh, but it is a fact -- the faster you guys can develop a solid design for us, the faster WE can use a solid technology design to speed up our development efforts. If you are slow to make this technology -- then we are slow to use it. None of this stuff is really "future-tech" anymore, and there are people making their own game-engines that are beating us to the punch these days. Maybe you can see now why the "bullet list" I mentioned above to describe the concept to us is a total necessity for many of us! Before you guys even write your first lines of code -- I ask that you please make that list, and show THAT to us -- with pictures, if you really want to "wow" us -- and use THAT to see if it fits our needs, instead of making us wait for you guys to finish coding for months just to trash it and try again after a few more months (potentially making us wait YEARS for your revisions to finally make it into Unity properly).


    No game developer worth his salt (or your time) will ever ignore a detailed bullet-list of promised functionality (especially when the mockups are solid, and the suggested API workflows are solid and easy-to-understand too). The true reason we beta-test is to check that solidity for ourselves! -- we want to see whether it fits our needs! -- If you can provide this via a list (instead of after months of wasted work!), we will begin to notice that Unity is progressing fast again and trying to keep up pace with its developers on the bleeding-edge. No offense, but when we "beta-test" we don't usually care much about squashing bugs for you -- We really just want to see (for ourselves) whether your system does what we want (or are expecting) it to do. If you guys have a great idea for an interface feature (i.e. jaw-dropping dragging-dropping of states/button-inputs/input-events/shortcut-labels/timeline-events) that you feel might "wow" us, then draw us a thumbnail or three -- and we'll figure out how well that will work for us in production -- and we will tell you if there's a problem (or if we want something else instead)!

    Sure, you might argue that your current approach is a "more-concrete" way of getting UX feedback, and (to an extent!) you'd be correct. However, that version of "more-concrete" rests on a highly-mutable codebase (that could be ripped out at any moment) which users can touch and use, but which might not be anywhere near representative of the final experience -- and it also carries a very high development-time cost. Trading /that/ version of "more-concrete" for a "more-concrete" bullet-list of features -- which doesn't yet have a physical form users can test, but whose development time-cost is next-to-ZERO at this point -- means that, if all of the major points of the design are nailed down here in "pre-development", actual development time would be mostly straightforward, and a highly-mutable API/codebase would be mostly unnecessary, as long as "concrete" API examples of doing the things described in the "bullet-list" are provided beforehand.

    Unity's strength is that we can program our own interfaces for Unity's API, and as long as the API is good enough to cover any use-cases in the bullet-list (and remains flexible in areas where it could be used for other things), you guys have nothing to fear! An API is mostly theoretical, and can be designed without being literally "implemented" quite easily! After the API is solidified ("more-concrete"), then the interface should be fast to make. If you want something more visually fancy, you guys can either add in some visual / functional flair during the "polish" phase, or provide an easy way for others to implement that "fancy" themselves (via editor-scripting "overrides" or whatever). This is the kind of "more-concrete" I feel people would much prefer (even if they can't get their hands on the system until later) -- especially since the current version of "concrete" is actually not very "concrete" at all, since even the "hands-on" early-access "beta" experience typically lies about the UX (due to its inherent malleability) -- and that's why the bullet-point list above should never change after it is solidified (and thus it will /never/ "lie"). The descriptions should be as candid and forthcoming as possible about the "risks" and "rewards" each listed "feature" is capable of bringing -- and then let users decide on a "final" version of that bullet-list, with all concerns out of the way (and any workflow-mockups necessary to convey the concept more clearly where more heady or abstract stuff is involved).

    Let "beta" really be about bug-testing a semi-user-ready module that runs a much smaller-risk of introducing even more bugs (due to a feature or programming concept having to be ripped-out or change somewhere entirely (especially when this is under the hood!) to fit the "new" version of the maleable UI/UX design rather than the other way around) -- Again, Mecanim suffered from this "buggy" state for a long time after its release -- and I'd put money on it that this "malleable" process was behind that (when stuff was added under the hood to support user-requested features). Had Mecanim had a list of things that users wanted from the outset, a proper mockup of workflow thumbnails and implementation details (such as showing that parameters were not able to be added programmatically or that using the API to check states was such a hassle, or that adding states via script would be an issue, etc. etc.), users would have been able to "fix" Mecanim before it was ever so hopelessly broken. :(


    As hinted at above with the bullet-list of features -- I feel like this should change.

    "Democratizing" game development is only possible where there is enough transparency for "the people" (the heart of the "Democracy" itself) to have a say. After all -- if the Unity engineers are the "Electoral College" of the video game development "democracy", don't let yourselves be the ones to prevent "the people" from having their final say in how their games are going to be developed.

    I feel like we should be the first to know, and the last to have a word on the subject of any major new features.


    Who else is with me on this?
     
  9. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    21,804
    You had me at TLDR.
     
    LurkingNinjaDev and awesomedata like this.
  10. interpol_kun

    interpol_kun

    Joined:
    Jul 28, 2016
    Posts:
    49

    I strongly agree with that. UT's approach to developing things seems very poor, as if they have some sort of communication problem internally and with the community.

    To prove that, let's remember how long the old team was developing the new input system before it was re-formed. It's good that we have Rene here with us, as he's the rare type of UT staff who always takes community criticism on board. That's good: I have no problem with him. What I really want to question is other stuff. The Input System development stretched out a bit.

    However, that's not the only problem. The new Terrain update was delayed to some unspecified date after they showed us all the new features in the GDC roadmap talk.

    Tilemap was buggy at launch, and we have had no new features in the nine months since: no rule sets (implement them yourself or use the GitHub project), no optimization or workflow improvements.

    People criticize Shader Graph's custom nodes because of their poor design.

    So what should we expect next? Now we are all waiting for a new Input System, VFX Editor, Nested Prefabs, Prefab Editor, Terrain and other cool features. But how many of them will not be delayed again, how many of them will meet our expectations based on your (UT) words?

    Some people think that I am a hater. But I use Unity, and I love it; I just feel bad about everything happening around the development process. It looks like a lot of new features are thrown out while hot and soon become forgotten.

    The only thing that keeps me from panic is the new performance feature-set.
     
    awesomedata and FROS7 like this.
  11. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    21,804
    Hey guys, it's gone offtopic and also on-topic. So from this point only posts about input will continue in this thread.

    But... Let's start a new thread on general discussion with mod blessing, and you both should copy your replies over to that thread. I'll leave it up to awesomedata to start it off over there. Thanks for understanding, and I'm sure it'll strike a chord with users, so feel free to duplicate your answers in a new topic there.

    As for this thread, it's way off base because we need to give room for other people to reply and communicate with Unity staff. Thanks (and don't reply to me here please - pm if it's at all necessary)
     
  12. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    447
    That's fair -- I definitely don't want to derail this topic. Thanks man! :)

    Here it is:

    https://forum.unity.com/threads/continued-from-input-system-thread.532501/
     
  13. Rene-Damm

    Rene-Damm

    Unity Technologies

    Joined:
    Sep 15, 2012
    Posts:
    253
    Let's not forget, though, I was part of that old team from day 1. I was 50% of why the project sunk.

    So a large part of why we're here is because a) something in Unity did work to prevent something getting released that had a high chance of making users unhappy (and someone in Unity endured quite a bit of abuse from me to make sure we're doing right by users; sorry again Ralph) and b) we acknowledged we screwed up and looked at why and how we could do better.

    Is there still a chance of us failing in some way? Absolutely. At the end of the day we're mostly just a bunch of dudes and dudettes writing code. But are we working to continuously improve? You bet.

    Ok, time for me to get off the soap box :)

    //EDIT: Gah, probably should've posted in the other thread. Sorry @hippocoder.
     
  14. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    239
    Sooo, back to input.

    I see there's some fun new stuff with actionmaps in the repo.

    I've been chasing another rabbit hole the past two weeks (building an event/object lifecycle management framework for my input handling and some other systems I started on), but I wanted to ask: is this leading to the new "callback-free" input action handling?
     
  15. Rene-Damm

    Rene-Damm

    Unity Technologies

    Joined:
    Sep 15, 2012
    Posts:
    253
    The hope is that it will. What's there ATM is little more than an idea, but it works something like this:

    Code (CSharp):
    public void Actions_CanProcessActionsAsEvents()
    {
        var gamepad = InputSystem.AddDevice<Gamepad>();

        var map = new InputActionMap();
        var action1 = map.AddAction("action1", binding: "/<Gamepad>/leftStick");
        var action2 = map.AddAction("action2", binding: "/<Gamepad>/leftStick");

        using (var manager = new InputActionManager())
        {
            manager.AddActionMap(map);

            map.Enable();

            // Feed one state event into the system and let it process.
            InputSystem.QueueStateEvent(gamepad, new GamepadState {leftStick = Vector2.one}, 0.1234);
            InputSystem.Update();

            // Both actions are bound to leftStick, so we expect one trigger event carrying two actions.
            var events = manager.triggerEventsForCurrentFrame;

            Assert.That(events.Count, Is.EqualTo(1));
            Assert.That(events[0].control, Is.SameAs(gamepad.leftStick));
            Assert.That(events[0].time, Is.EqualTo(0.1234).Within(0.000001));
            Assert.That(events[0].actions.Count, Is.EqualTo(2));
            Assert.That(events[0].actions, Has.Exactly(1).With.Property("action").SameAs(action1));
            Assert.That(events[0].actions, Has.Exactly(1).With.Property("action").SameAs(action2));
        }
    }
    The per-action callbacks are still there and can work in tandem with this, but the idea with InputActionManager is that you have a point in your frame where you pick up the accumulated "action events" and then run your own logic deciding what to respond to and how.

    ATM we're still in a phase of figuring out what tools exactly are needed to enable different use cases all the way up to stuff like "I have two possible actions to perform based on this one input but it's a raycast based on the orientation of the device that will decide which is the right one".
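
    To make that more concrete, here's a rough sketch of what such a pick-up point could look like in game code. It is based purely on the test above (InputActionManager, AddActionMap and triggerEventsForCurrentFrame are taken from it); the MonoBehaviour wrapper and everything else here is illustrative and will likely change:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Experimental.Input; // preview-package namespace at the time of writing; may change

    // Illustrative sketch only: one central place per frame that drains the accumulated action events.
    public class ActionEventPump : MonoBehaviour
    {
        InputActionMap m_Map;
        InputActionManager m_Manager;

        void OnEnable()
        {
            m_Map = new InputActionMap();
            m_Map.AddAction("move", binding: "/<Gamepad>/leftStick");

            m_Manager = new InputActionManager();
            m_Manager.AddActionMap(m_Map);
            m_Map.Enable();
        }

        void OnDisable()
        {
            m_Map.Disable(); // assuming a matching Disable() exists alongside Enable()
            m_Manager.Dispose();
        }

        void Update()
        {
            // Pick up whatever triggered since the last update and decide what to respond to and how.
            var events = m_Manager.triggerEventsForCurrentFrame;
            for (var i = 0; i < events.Count; ++i)
            {
                var trigger = events[i];
                // trigger.control, trigger.time and trigger.actions are available here,
                // as in the test above; game-specific logic goes in place of this comment.
            }
        }
    }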
     
    Last edited: May 23, 2018 at 1:29 AM
    recursive likes this.
  16. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    239
    Excellent. Until this dropped, I was going to write an ECS system that would effectively do the same thing with the callbacks, just accumulating events until some point early in a frame, then processing them and marking them for deletion. So this basically will make that more efficient, since I won't have to deal with the callback overhead and can bulk-instantiate events based on the amount in the accumulated buffer! Thanks!
     
  17. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    447
    I like the idea of the "action-maps" being processed as events. Might be nice if we could do the same with physics and animations too.

    That said, the "pluggability" I see in your code is more along the lines of what I was talking about in the other topic -- In fact, I see the system mentioned above being useful for all sorts of things outside of just input.

    I would highly-suggest modularizing the parts that you can (especially the organizational parts that deal with mapping data into buffers). That would be useful for any sort of streaming input data (not just devices, but entire classes of data) that needs to be frame-independent and processed later.

    As I said in my most recent post before this -- a form like this (but based more on "general functionality" than use as "device input") could enable systems like this to be useful for all kinds of stuff. For example, one "use" could be tilemap- or collision- streaming (or really any kind of streaming data that needs to be collected but processed later in an easily-manageable way). Most of that data is mapped somewhere, so if we could design our own classes to handle this "mapping" of the data, then "devices" could be shaped like anything, and their outputs could be sent out as anything we wanted. Maybe an "input" is actually a gameobject or a state, rather than a device ID, and maybe that input has an output that is another state, class, or gameobject. These are only examples, but I can't see why the "form" this takes couldn't allow for that. After all, the "Playables API" was a great step in this direction for Animation -- so why can't it work for other game systems?


    A good way to approach this might be to send a particularly-formatted data structure to a central device-processing-and-output class (and to be able to override this and write your own "gameobject factory class", for example). That could lead to some really nice use-cases outside of the standard "device input" usage -- and the "devices" could be extended as well -- but a class might need to be provided out of the box to process custom device types, to make using the stuff a bit easier (i.e. myDevice.AddAxis(ref axis), myDevice.TrackMotionOnAxis(myDevice.Axis(axis)), etc. etc.).
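
    Purely to illustrate the kind of generic "collect now, process later" shape I mean -- none of this is real Unity API, just a hypothetical sketch:

    Code (CSharp):
    using System;
    using System.Collections.Generic;

    // Hypothetical, engine-agnostic sketch -- not actual Unity API.
    // Any stream of "mapped" data: device input, tile updates, collision hits, etc.
    public interface IDataSource<T>
    {
        void Collect(List<T> buffer); // append whatever arrived since the last collect
    }

    // Central accumulator: collect from all sources now, hand the whole batch off later.
    public class StreamProcessor<T>
    {
        readonly List<IDataSource<T>> m_Sources = new List<IDataSource<T>>();
        readonly List<T> m_Buffer = new List<T>();

        public void AddSource(IDataSource<T> source) { m_Sources.Add(source); }

        public void Collect()
        {
            foreach (var source in m_Sources)
                source.Collect(m_Buffer);
        }

        public void Process(Action<T> handler)
        {
            foreach (var item in m_Buffer)
                handler(item);
            m_Buffer.Clear();
        }
    }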


    Regarding "use" cases:

    Again, I know I might have come off as a bit rude at some point, but I assure you I'm really trying to help with this. The method mentioned over here about focusing on "functionality" rather than "uses" has a lot of merit if you give it a chance.

    When you build your toolset/API to accommodate (relevant) granular functionality, and then build tools on top of that granularity to make it more manageable to work with (while also making those tools as close to modular as possible while still retaining their pluggability), you'll find that the "functionality" provided by those tools will easily handle all the "uses" you can imagine -- and then some. What I offer here is definitely a different way of thinking about something that "looks" the same as something else, but it is actually a fundamentally different way of thinking altogether when you get into the weeds a bit. It is that subtle-but-fundamental difference, that "shift" in thinking, that makes this useful, and is why I keep trying to explain it in these posts. I know it's hard to see, but it's so d*mn important for "general-use" tools like Unity to function as "generally" as they are able to! D:


    I know it might not sound like it, but many of us do highly appreciate your efforts to make stuff that functions in a way that we can use easily. Even more so when we can use your tools for just about anything! -- especially for the stuff you (or we) didn't even plan for! -- and all that takes is putting a little extra effort into the design of the body/form of what you are trying to make -- and this goes x1000 when you're trying to accommodate more "general-use" / "user-specific" scenarios.

    I hope this all makes sense.
     
    Last edited: May 23, 2018 at 6:59 PM
  18. Rene-Damm

    Rene-Damm

    Unity Technologies

    Joined:
    Sep 15, 2012
    Posts:
    253
    Maybe it would. I don't know. TBH I have not seen many successful attempts at "let's generalize the heck out of this". Most of the time it seems that people end up with something that is an awkward and complicated solution for everything instead of a good solution for something in particular.

    Anyways, while there will be some interesting new avenues for input in combination with ECS, our chief aim here is to solve the problem of input in Unity.

    I think we're actually pretty close in how we think about this and it's mostly terminology getting in the way. What you describe is pretty much what we're going for.

    We're not looking for a system that comes prefabricated to cover X amount of use cases that we've identified. Instead, we're looking to build a toolbox that allows users to build their own systems from the parts available to them. We want some level of prefabrication/zero-setup available as a jumpstart and easy entry but the system as a whole aims to provide a hackable toolbox.

    However... deciding what has to be added to the toolbox IS driven by actual use cases. Without trying to solve specific problems, I cannot see how one can arrive at useful tools. Functionality is meaningless without use cases informing it.

    Anyways, I get the feeling we're continuously ending up in discussions at the meta level here. I very much appreciate the level of reflection and outside perspective, but I feel we're running around in circles a bit.
     
    Last edited: May 24, 2018 at 2:55 AM
    SuperNeon, recursive and dadude123 like this.