Discussion in 'General Discussion' started by bibbinator, May 26, 2014.
+1 for events
All game logic in my game happens in FixedUpdate(), I interpolate all the visuals in Update().
This is generally true for games that use physics as well.
Currently inputs only get polled once per rendered frame. This means that if the game is lagging, the player's inputs lag too.
Since all my game logic happens in FixedUpdate(), it makes sense that I need to poll inputs every FixedUpdate().
Right now I have to create an input buffer that captures any button changes in Update() and then acts on them in the next FixedUpdate().
But if the game is lagging, then so do the player's inputs.
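The buffer workaround described above can be sketched like this (a minimal example, not the poster's actual code; the "Jump" binding and the commented-out impulse are placeholders):

```csharp
using UnityEngine;

public class BufferedInput : MonoBehaviour
{
    // Latched in Update(), consumed in FixedUpdate().
    private bool jumpQueued;

    void Update()
    {
        // GetButtonDown is only fresh once per rendered frame,
        // so remember the press until the physics step runs.
        if (Input.GetButtonDown("Jump"))
            jumpQueued = true;
    }

    void FixedUpdate()
    {
        if (jumpQueued)
        {
            jumpQueued = false; // consume exactly once, even if several FixedUpdates run this frame
            // ... apply jump impulse here ...
        }
    }
}
```

The catch, as described above, is that the press is still only noticed at render-frame granularity, so the buffer inherits any rendering lag.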
Mouse movement is a more advanced issue. I want the player's aiming to only update every FixedUpdate(), because that's when all the game logic happens (remember, I interpolate, so the player's view is still smooth).
Adding up all the mouse movement between Update() calls and then applying that change in FixedUpdate() doesn't work in this case: there is a small amount of time between the last Update() and the next FixedUpdate(), and in that time the mouse has moved! Also, if the game is lagging, there might be multiple FixedUpdate() calls before the next Update().
I've managed to get around this for mouse movement by using external DLLs and polling the mouse myself in FixedUpdate(). It's amazing how fast the mouse actually updates: if I poll it and then poll it again straight away, it has already changed...
Using external DLLs has some added bonuses, like being able to constrain the mouse to the game window. This is really useful when you're playing a game fullscreen with two monitors; it still blows me away that Unity doesn't handle this by default.
Also, +1 for changing keybinds at runtime. It's one of the basic functions of an input system...
TLDR : Allow inputs to be polled every FixedUpdate()
Beware, I'm not great with terminology, but I'm very interested in this topic.
Completely rewrite the input system.
Unity should work much like Incontrol when it comes to devices. If a device is recognized and is a standard gamepad (2 analog sticks, 2 triggers, 2 bumpers, 1 d-pad, 4 face buttons, and a select and start button) it should be detected as such and work cross platform in the same manner. Same thing for wheels. There's no reason third party solutions to this problem need to exist. While obviously it'd be impossible for Unity to have built in support for all controllers, at least the standard console controllers should work without the need for workarounds by a developer. We shouldn't only be able to add support for controllers which we own if they use the standard setup, and their behavior should be predictable cross-platform.
If a developer comes across a gamepad that Unity doesn't currently support, they should be able to set up a standard profile for it (on whichever platforms they can) and send it to Unity to be included in modular updates to the input device profiles.
Everything possible should be exposed to developers to use as they wish when it comes to inputs. If a developer wants to use the raw data, the profiles, or anything else they should be able to.
Input should have a list of connected input device classes a developer can grab that contains all of their relevant variables. These could even be split apart in conjunction with the profile system. So have an "InputDevice" class and then have "GamePad" and "Wheel" and "FlightStick" classes that inherit from it.
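A sketch of what that hierarchy could look like (entirely hypothetical types; nothing here exists in Unity's current Input API):

```csharp
using UnityEngine;

// Hypothetical shape of the proposed device hierarchy.
public abstract class InputDevice
{
    public string Name;
    public bool IsConnected;
}

public class GamePad : InputDevice
{
    public Vector2 LeftStick;
    public Vector2 RightStick;
    public float LeftTrigger;
    public float RightTrigger;
    public bool[] FaceButtons = new bool[4]; // A/B/X/Y or equivalent
}

public class Wheel : InputDevice
{
    public float SteeringAngle; // -1..1
    public float Throttle;      // 0..1
    public float Brake;         // 0..1
}
```

The point being that a developer could test for the concrete type (or a profile) and get predictable members cross-platform, rather than guessing at anonymous axis numbers.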
Basically, I don't want to have to download a third-party solution that is just an added layer on top of Unity's Input system, or build one myself. The Input Manager in the editor should allow friendly names for inputs. It should also allow different profiles of the same inputs to be entered by a developer for use in any key-remapping menu system (for example, southpaw versus standard for an FPS).
I'm working on my own convoluted input manager at the moment, and while it would absolutely suck if Unity went ahead and made the third party input managers obsolete by updating their own to work properly, it would be well worth it to make it a huge focus before releasing Unity 5. Even if a new system can't be implemented before then, at least expose as much as you can of the current system to developers so we aren't stuck having Input Asset files with every possible axis and every possible joystick just so we can work around the current broken, unfriendly, and archaic system.
For the love of all that is decent, just give us a solid API that lets programmers get the states of all connected devices without needing to set everything up in a god-awful input manager, and we can add whatever layers are needed on top of that.
Don't try to determine exactly what we need and make the input system "artist friendly" and tied into some horrible, horrible, editor oriented configuration system. The community is MUCH better at responding to the needs and use cases of actual game developers and designing evolving, high level APIs than you guys are. All we need is a simple, clean, well designed, low level API that handles communication with input devices as a foundation to build on.
No, that's not "all we need" at all. There needs to be an input system that doesn't require beginner to intermediate users to build on top of an API or, worse yet, just buy something from the asset store to get input to properly work. You can have low-level API access AND a decent in-editor system and config system for games.
Rebindable at runtime.
Main issue for me is the way Unity handles input devices other than the Keyboard and mouse.
Even if the only change was to make it so we could access connected devices and their raw data without having to use the input manager we would be able to come up with better solutions to these problems than is currently allowed.
But making it so individual connected devices could be grabbed as an object, with all of their data exposed would also be helpful.
And make devices able to be hot connected. Users shouldn't have to restart the game if their controller gets disconnected.
And framerate independent.
That's pretty much the minimum changes that are REQUIRED for the input system to no longer get in the way of developers. Implementing a new input manager on top of that would be nice, but not necessary in order for a modern input system to be put into place by developers.
Though in its current form it's missing two crucial elements. The ability to remap input, and a way to add an extra layer of abstraction (like cInput does - you can define actions like "Fire", which can be remapped).
Ideally, I think you want to be able to define actions like "Fire" or "Thrust", then have those map to high-level input names like "lower face button" or "right trigger". What's awesome about InControl is that it handles the next level down so well ("lower face button" is translated for the specific gamepad the player is using, for instance). So close...
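That two-layer mapping could be sketched like this (all names, bindings, and profile contents here are made-up examples, not any real API):

```csharp
using System.Collections.Generic;

// Hypothetical two-layer binding: game actions map to abstract control
// names, which a per-device profile then resolves to physical inputs.
public class ActionMap
{
    // e.g. "Fire" -> "LowerFaceButton", "Thrust" -> "RightTrigger"
    private readonly Dictionary<string, string> actionToControl =
        new Dictionary<string, string>();

    // Per-gamepad profile, e.g. "LowerFaceButton" -> "joystick button 0"
    private readonly Dictionary<string, string> controlToPhysical;

    public ActionMap(Dictionary<string, string> deviceProfile)
    {
        controlToPhysical = deviceProfile;
    }

    public void Bind(string action, string control)
    {
        actionToControl[action] = control; // rebindable at runtime
    }

    public string Resolve(string action)
    {
        // Two lookups: action -> abstract control -> physical input.
        return controlToPhysical[actionToControl[action]];
    }
}
```

Remapping menus would only ever touch the action-to-control layer; the control-to-physical layer is what ships as a per-device profile.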
I'm the programmer half of the two-man team making Aztez.
We're using InControl, which is super great for anyone making a single-player game based around a console controller.
Things we still want:
- Framerate-independent update ticks (note that InControl can do this on Windows standalone via XInput, but not on other platforms). This is hugely important for responsive games. It's currently impossible to make an SF4/TowerFall/etc. competitive game in Unity without platform-specific plugins, full stop.
- XInput is required to use both triggers simultaneously on a 360 controller. This is Microsoft forcing people into XInput from DirectInput (see http://msdn.microsoft.com/en-us/library/windows/desktop/ee417014(v=vs.85).aspx ). We can ship standalone Windows games with XInput, but no solution exists for the web player.
- Universal rumble, especially on OS X (I documented what it would currently take to attempt this on Mac).
- Low-level access to input history with exact timing. Low level event streams will enable packages like InControl to provide different high-level APIs.
Because Aztez is a single-player game, I don't have any real feedback on multi-person/multi-controller issues--but there are many!
(I also don't have much of an opinion on runtime bindings, since we're probably shipping Aztez without rebinding functionality).
GOD, YES. I've been working on a game off and on that requires a lot of precise character input and it's just a complete nightmare when it comes to compatibility across OSX and Windows.
Yep. I'm using rumble for a few features and it's going to require all sorts of work to get to function on OSX.
I must be missing something in regard to this "framerate independent" stuff. I mean, it annoys me to no end when games don't have stuff framerate independent, but this has not ever been one of the issues either making or playing games for me. What am I missing?
Are you guys saying that you need to know not only what frame input changes on, but also what time within that frame and if there were potentially multiple changes in that frame?
Also, re: cross-platform multi-touch support for non-mobile platforms, that absolutely should have been on my list.
Honestly, it depends a bit on the genre. You don't need those kinds of things for a Civilization-like strategy game (or maybe even an RTS game, although hardcore StarCraft players would probably disagree).
So in Aztez we run game logic in Unity's FixedUpdate loop. We have tight timings for some combo links or parries, and a lot of these things get very difficult if your input state is only updated before a graphics call. In Unity terms, the InputManager refreshes right before Update().
This is hugely problematic, especially since our biggest framerate hitches happen when moves are connecting, due to effects instantiating and so on. A player is going to time all of their moves rhythmically, either for combos or to do things like parry a certain amount of time after an enemy's pre-attack tell. If we had input state refreshed in FixedUpdate everything would still work, but as it is, things can get out of sync.
If we get a 60ms super heavy frame in the mix, it means we no longer have fidelity greater than 60ms for input (even if we do for our physics/gameplay sim).
In an ideal world, we could handle a hit in OnTriggerEnter, fire a move, process those hitboxes, and check input for the last frame in the order of our choosing. In actuality we sometimes get: Update, FixedUpdate, FixedUpdate, FixedUpdate, Update, etc., and those gaps in input can be felt.
In short it means you really couldn't do anything as timing-specific as a competitive fighting game, and even for Aztez as a brawler it strains responsiveness in certain circumstances.
Wow, this is a great thread: lots of very good ideas, and led by Unity, so hopefully something will actually get done in 5.x rather than just us talking about it...
Really like the changes I'm seeing, all the additional involvement and stuff being released by Unity. Keep up the good work!
We need an API that lets us completely get, set, add, remove, check for, and enumerate bindings in the Input Manager's list of bindings.
That way it can all be done from scripts without having to use the Input Manager UI, giving us an easy way to let users change key bindings in our games.
Ok, so the issue is consistent controls specifically during inconsistent frame rates? Personally, I'd solve that from the other direction, making sure that there was never any hitching. If I were playing something that required high precision input I'd expect the framerate to be at least as fast as my input needs to be in any case.
Except you can't always test for that on every platform and systems already exist (for instance, xInput handling in Unity on Windows) to take care of this.
What kind of testing issues do you have?
Well, in my case there are the different performance issues that come with my target platforms ranging from this, to the PS Vita, to Windows PC, with the Vita version having to run with different shaders and other rendering options. That means I'm testing for things that shouldn't have to be an issue, because a system already exists that can handle this. There are also inherent performance differences between Unity on OSX and Windows, and even scripting compatibility issues can arise. This adds a needless extra step to the testing process.
If you want time-precise input, the API has to be independent of the Unity event cycle. FixedUpdate doesn't run at fixed intervals in real-world time; it runs as many times as needed (as quickly as possible, in immediate succession) to "catch up" to the next frame. So people writing tight, gameplay-oriented games would want a streaming, timestamp-queued input API. Such an API would also unify input processing for players with slower-than-physics and faster-than-physics refresh rates, which adds an ugly level of complexity in the current system.
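Consuming a timestamped queue per fixed tick might look roughly like this (a sketch only; the event struct and the source feeding the queue are hypothetical, since Unity exposes no such stream today):

```csharp
using System.Collections.Generic;

// Hypothetical timestamped input event.
public struct InputEvent
{
    public double Time;   // real time at which the press/release happened
    public int Button;
    public bool Pressed;
}

public class TimedInputConsumer
{
    private readonly Queue<InputEvent> pending = new Queue<InputEvent>();

    // The (hypothetical) low-level layer would push events here as they arrive.
    public void Push(InputEvent e)
    {
        pending.Enqueue(e);
    }

    // Called once per fixed tick: handle only events that happened before
    // this tick's simulation time, regardless of how many fixed ticks are
    // run back-to-back to catch up to the next rendered frame.
    public void ConsumeUpTo(double tickTime, System.Action<InputEvent> handle)
    {
        while (pending.Count > 0 && pending.Peek().Time <= tickTime)
            handle(pending.Dequeue());
    }
}
```

This is what makes input rate-independent: each fixed tick sees exactly the events that fell inside its slice of real time, whether the renderer is running at 20 fps or 200.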
I'd definitely use such a thing were it available as I'm writing a physics oriented FPS with gameplay that runs at 100 ticks / second on my own time stream. This isn't as crucial for me as in a fighting game or brawler, but it would still improve the quality of interaction and make input handling much more robust. Of course, people on slower machines or in intense, high leverage game situations with lots of characters and effects in view would probably say this is absolutely, 100% crucial, and that input being tied to frame rate is simply broken. And of course, they would be right.
Basically, the current input system should be kept around as a legacy API, with a completely new input system implemented with a very specific and vital responsibility: keep track of and communicate with input devices. On top of that, example "device level inputs => game level inputs" mappers and binding UIs could be part of a standard asset package.
I'd like the input system to be better integrated with character controllers... e.g. you should have some common controllers e.g. platform game controller, shootemup, fps, puzzle etc... and then be able to just plug that in and it automatically hooks up to user input, and then you can tweak it.
It's completely impossible for all but the most tightly controlled scenarios to guarantee a framerate.
The way that Input.touches is implemented should be considered the model for joystick input. Allow us access to the raw input, and we'll be happy to build on top of that
Give us an Input.joysticks, which has a bool array for buttons, a Vector2 array for its sticks/d-pads (or possibly with a wrapper class so that we can get a little more information about it, such as whether it's a d-pad or analog), and a float array for any other controls that are available. Also, a string that we can use to get the device name. I think that will satisfy 85% of our needs.
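The proposed shape might look something like this (all of it hypothetical; `Input.joysticks` and these members do not exist in Unity):

```csharp
using UnityEngine;

// Hypothetical API surface for the Input.joysticks proposal above.
public class JoystickState
{
    public string DeviceName;       // e.g. for profile lookup or UI display
    public bool[] Buttons;          // raw button states
    public Vector2[] Sticks;        // sticks and d-pads
    public bool[] StickIsDigital;   // true for d-pads, false for analog sticks
    public float[] OtherAxes;       // triggers, throttles, etc.
}

// Usage would then be as simple as:
// foreach (JoystickState stick in Input.joysticks) { ... }
```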
Then, on top of this, if you want to create an editor interface that serves a similar function as the old Input prefs is designed to provide (Input smoothing, etc), then please do - it had some nice attributes. As long as we can avoid this when we want to and get at the raw data.... I'll be satisfied.
This is a really good point!
And it can be automatically handled by Unity for FixedUpdate loops too. When FixedUpdate catches up to current time, it basically advances the physics simulation X milliseconds for each call. It'd be easy (for Unity) to make sure the InputManager's state is accurate for that same slice of time, assuming there was a timestamped event stream already.
But yes, exposing that raw event/timestamp access to Unity developers would be great.
I feel like Unity should expose the best of both worlds.
On the one hand, I like the Input system as it is. It feels really intuitive, and all I really want is to be able to remap controls at runtime and it would be perfect.
On the other hand, Unity should also give us callbacks for when a key/button is pressed or released, processed like a queue (if a key is pressed several times during a frame, each of those is still processed as a separate callback), and given data like which key/button and the timestamp.
The first one would be the 90% use case, and the second one would be for those who require super precise controls.
How does the current system handle that? If there's, say, 3 FixedUpdate loops for a given Update, how do things like GetButtonDown("A") work? Does it return true for the first, last, or all of the FixedUpdates?
I can definitely understand that being an issue, because if a game is meant to have say a 50hz physics tick rate and input controls the physics and it's being skipped for some of those ticks then results will be inconsistent at the least, regardless of the framerate.
"Completely impossible"? *shrug* Ok then.
I think you need to drop any Intel integrated GPU older than Intel HD Graphics 4000 if you are targeting desktops and consoles (which I consider the Vita, although it technically isn't one). I think Unity publishes hardware usage stats for the engine; you should look at that to be sure, if you haven't recently. 'Desktop' gamers tend to upgrade in faster cycles than non-gamers. My Mac Mini is mid-2007 and was fine until Unity 4.5. I think even the 4.5 problem is a bug, but I'll have to wait and see.
How could you possibly guarantee a framerate without having control over what hardware the user has?
That doesn't really fit the thread topic, but if you start a new one I'm sure there'd be plenty of great discussion.
One other idea that came to me last night playing in Unity: when I first needed to implement some user input in my game, I had to code up a raycast thing and so on just to make it possible to click on an object. It would help if it were easy to add input events to game objects directly, i.e. an `accepts user input` toggle which, when enabled, lets you configure which input events are sent to that object; then in your code you just implement OnMouseDown or OnInput or something and handle it. That would be easier than hacky raycast code, or having to figure out how to integrate the right kind of clicks on an object. Something simple like that would be helpful. So I'd just click on a game object, say I want it to do something when the left mouse button is clicked or when the user does a touch, and then either write a tiny script or maybe have it auto-generate a little input handler script with the appropriate functions ready-made.
I neglected to read the previous 100 posts, so forgive me if this is redundant.
I found the lack of in-game input customization awful. I couldn't understand why it was ever like this.
So beyond the basics of 'let's have input be able to be customized', here's my wishlist since I went out and wrote my own input system customization and completely ignore Unity's.
1. Double click. Please, please, please somewhere just have a built-in double click.
2. Double click time customization. Just put in the input manager settings or somewhere in that relative space.
3. I don't know why, but ability to turn off double clicks (someone will want it.)
4. Trying to decipher KeyCode.Alpha1 and map it to the number 1 is ridonkulous. If I ask a user to type '1' and want to output which key was hit, it comes back as some huge 'Alpha##' string, if I recall. Maybe I was too new at the time, but I don't think you can take an input number and compare it to a KeyCode.Alpha## directly. I had to set up my own enum and do the comparisons myself.
5. A very basic in-game UI prefab using the new UI. Since a huge portion of games will probably want this, instead of having someone submit something to the Asset Store and then having us buy it just to get a quick-and-easy version of in-game input customization... have one done already. People can sell fancy ones with skins and whatnot, but the most basic one should be included. I shouldn't have to code something so integral, even if it isn't terribly difficult depending on the implementation.
6. Repeat rate - Input.GetKey has the most ridiculous repeat rate. Have us be able to customize the repeat rate of when a key is held down, because 100+ times a second is a bit much.
7. Expanding on option 5, maybe even have a plugin-type system so you can select which types of controllers for input want to customize - joysticks, gamepads, keyboard, mouse, phone, whatever... So you don't see joystick 1-15 sitting there if you don't need it, etc. I guess this could all be coded... so this is more of a wish than a need.
8. As stated before (I did read some posts) input shouldn't rely on framerate, or at least have an option for this to turn it on/off. Again, in the input manager settings.
9. The GetAxis is nice with all the customization. Lovely in fact, other than I can't remap my keys to it while in-game, so once that is resolved... lovely.
10. I don't know if it's there or not, but options to turn off any accelerometer or other mobile-specific inputs in the relative mobile builds.
11. Don't know the feasibility of this, but I assume mouse sensitivity in all forms probably belongs in here too.
Basically, find a very popular, highly customizable FPS and replicate what they do.
I think that's it for now. Having written my own input customization once in uGUI and then again with NGUI, I can tell you the current setup is completely unacceptable, and I am very pleased to hear this will be improved, even if I may not use it immediately since I'd have to replace what I have.
[Edit: Feel like everything here's covered. Frame rate independence. Some feed of time stamped inputs. More access to raw data and connectivity. Sorry for the spelling. Rushed, but needed to have a say! This stuff is important! You might think it isn't, but it is!]
Oh, I've been waiting for this!
Polling Frequency/When are Inputs Fresh?
I've seen a lot of people trip up on using Input in FixedUpdate; it's not obvious that there might be more than one FixedUpdate before the next Update. What people miss is that there's no fresh input POLLING between two FixedUpdates. So people using Input.GetButtonDown in FixedUpdate sometimes get a double event in physics (i.e. jumping with twice the velocity in low-framerate situations).
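For anyone who hasn't hit this, the double-event case looks like this (a sketch; "Jump" and the force values are just example bindings):

```csharp
using UnityEngine;

public class DoubleJumpBug : MonoBehaviour
{
    public Rigidbody body; // assigned in the inspector

    void FixedUpdate()
    {
        // BUG at low frame rates: the input state only refreshes before
        // Update(), so GetButtonDown stays true for EVERY FixedUpdate
        // between two renders. A single press can apply this impulse
        // two or more times.
        if (Input.GetButtonDown("Jump"))
            body.AddForce(Vector3.up * 5f, ForceMode.VelocityChange);
    }
}
```

The usual fix is to latch the press in Update() and clear the flag after the first FixedUpdate() that consumes it.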
In the case of Low Frame Rates
If your Updates are slow enough, you might completely miss a press + release between polls if you are saving off last frame's inputs and comparing them to this frame's. I think Input.GetButtonDown probably also missed these events (and for a while it was also a frame behind, possibly because of OnGUI's execution order).
The (slow) OnGUI workaround
There's a not-great workaround: you can currently grab all the keyboard, mouse, and joystick* button events and deltas in OnGUI, but OnGUI is executed after Update and LateUpdate, not in time to affect the current game frame. I wish we could get this at the beginning of Update. The earlier we can respond to input, the better.
Mouse (and general axis) deltas are aggregated between Update frames. So if Update frames are infrequent, you can lose subtle detail in between. You might "snap" your mouse left then right, and you would miss important feedback like a button hover event because, according to the polling, your mouse never went over the button.
This seems small but it's one of those things: "Your eyes don't see it, but your brain did."
Again, this also causes problems for FixedUpdate, which may run more often than Update but will only receive input data gathered since the last Update frame. See this demo: http://ludopathic.co.uk/2012/09/09/mouse-sampling-in-unity3d/
Suggested Fix: Input Events
Ideally be able to register to event delegates for button presses, and axis changes.
Ideally it would also carry a "real time since startup" stamp with each button press/release event, so that we can be totally accurate about when the button was pressed in between frames, and retcon gameplay accordingly. E.g. if I pressed "fire" 20ms ago, I can spawn my bullet 20ms * bulletVelocity further along its path.
Ideally a choice of when to receive the next queue of polled events: just before each FixedUpdate, or just before each Update.
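The timestamp-retcon idea above could be sketched like this (`ButtonEvent` and the callback hookup are hypothetical; only `Time.realtimeSinceStartup` is real Unity API):

```csharp
using UnityEngine;

// Hypothetical timestamped button event the engine would hand us.
public struct ButtonEvent
{
    public string Button;
    public float RealTime; // realtimeSinceStartup at the moment of the press
}

public class RetconFire : MonoBehaviour
{
    public float bulletSpeed = 40f;

    // Imagine Unity invoked this delegate with precise timing data:
    void OnFirePressed(ButtonEvent e)
    {
        // The press actually happened `late` seconds ago, so spawn the
        // bullet that much further along its path to compensate.
        float late = Time.realtimeSinceStartup - e.RealTime;
        Vector3 spawnPos = transform.position + transform.forward * (late * bulletSpeed);
        // Instantiate(bulletPrefab, spawnPos, transform.rotation);
    }
}
```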
A common thing for local multiplayer games: it's very hard to deal with disconnecting and reconnecting. I like to do things where the act of plugging in a controller spawns a player, and pulling the controller out disconnects that player. Other games might want a player to persist until the same controller is reconnected. If TWO people disconnect, how do you tell which controller is which again?
Using enums for the OnGUI workaround (described above) also made this tricky, because who knows which button KeyCode.Joystick1Button6 refers to anymore? Very unclear what's actually connected/disconnected, right now. They're all just anonymous controllers.
This makes disconnecting/reconnecting a real struggle, and in some cases, impossible to deal with. You just have to quit the game, plug back in, and start again.
Ideally, some way to know about the Xbox 360 controller's ring-of-light setting (i.e. which player quadrant is lit).
And if we're going there, holy crap, I'd love to know separate Mouse feeds and UIDs (I'm currently doing a DLL based on ManyMouse, and it does work, but I am having trouble getting useful UIDs from them... I can detect disconnects fine but not connections so I can't "repair" temporarily lost connections)
Suggestion: InputController OBJECTS
Some kind of joystick or controller object to point at, i.e.
Joystick playerInput = Input.GetJoystick(this.playerID);
Vector2 leftStick = playerInput.LeftStick; // read-only, raw data
bool buttonAState = playerInput.ButtonA; // or maybe just an array interface?
playerInput.EventDisconnected += OnJoystickDisconnected;
playerInput.EventReconnected += OnJoystickReconnected;
Very optional Possible DeadZone filters
There are loads of resources on how to properly do dead zones. Unity's standard Input.GetAxis approach actually complicates and hides away the details of how its axes work: the InputManager is not easy to understand for a new user, and not easy to use for an experienced one. The result is that ignorance about kinaesthetics ends up hurting the quality of game feel in Unity games in particular.
Give Input to us raw by default, but make it obvious that there are simple utils to help clean up things like dead zone wobble, or easy ways to make a better analogue deflection curve.
(See http://ludopathic.co.uk/2012/02/28/no-deadzones/ )
Some sort of interface in a controller like this, which makes you think "Hmm, what's a circular deadzone anyway?".
Vector2 circularDeadZone = playerInput.CircularDeadZone(leftStick, lowerDeadZone, upperDeadZone, powerCurve);
But keep it open and clear: When it's obfuscated in any way, it's really hard to be sure what's causing your game to feel kinaesthetically displeasing.
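One common "correct" implementation of a circular dead zone with rescaling, in the spirit of the linked article (a sketch; the method and parameter names follow the hypothetical call above, and the power-curve parameter is omitted for brevity):

```csharp
using UnityEngine;

public static class StickFilter
{
    // Filters a raw stick vector through a circular dead zone.
    // lower: deflection below which input is ignored (hides stick wobble).
    // upper: deflection at which input saturates to full.
    public static Vector2 CircularDeadZone(Vector2 raw, float lower, float upper)
    {
        float mag = raw.magnitude;
        if (mag <= lower) return Vector2.zero;  // inside dead zone: no input
        if (mag >= upper) return raw / mag;     // saturated: full deflection, direction preserved
        // Rescale so output magnitude runs smoothly from 0 at the lower
        // edge to 1 at the upper edge, preserving direction. This avoids
        // the "jump" you get from a naive per-axis dead zone.
        float scaled = (mag - lower) / (upper - lower);
        return raw * (scaled / mag);
    }
}
```

The key property is that the zone is circular (based on vector magnitude), not a per-axis square, so diagonal deflections aren't distorted.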
*I believe that KeyCodes had joystick buttons added - things like KeyCode.Joystick4Button7 so that you could definitely get all inputs in OnGUI regardless of frame rate.
Oh and also, Screen.lockCursor exists, and forces the mouse to the center of the view.
Can we have a Screen.lockCursorToEdges for browser or windowed games, where you want a fast hardware cursor, but it's annoying to click outside the screen bounds and lose focus?
I don't think it's really hacky. The advantage of raycast is you can specify which layers will be the only ones considered for the click.
An event-based approach is useful, but I think the polling approach of doing a raycast is also useful. It should be up to the developer to decide which is best to use for their current situation.
I made my own small set of utility scripts and one of them is a static Mouse class with a Mouse.Raycast() so I can do this:
if (Mouse.Raycast(myLayerMask, out mouseHit))
// do stuff
Or if layer isn't important:
if (Mouse.Raycast(out mouseHit))
// do stuff
Minimal amount of code to type, and easier to read. Perhaps something like this can be added officially? Something like Input.Mousecast or whatever.
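A sketch of what that static Mouse helper might look like (the class and method names are the poster's; the implementation is a guess built from standard Unity calls):

```csharp
using UnityEngine;

public static class Mouse
{
    // Raycast from the main camera through the current mouse position,
    // hitting only the layers in layerMask.
    public static bool Raycast(int layerMask, out RaycastHit hit)
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        return Physics.Raycast(ray, out hit, Mathf.Infinity, layerMask);
    }

    // Same, but against all layers.
    public static bool Raycast(out RaycastHit hit)
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        return Physics.Raycast(ray, out hit);
    }
}
```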
That may be, and it's all well and good if you know what to program, but for others it should be much easier to get at least some basic input up and running with a few clicks instead of having to write code.
I don't think that Unity themselves should be worrying about implementing arbitrary use cases for non-programmers because there's a lot of downsides to that - which ones do they support? What's the workflow? What are the tradeoffs they imply for other stuff?
In all honesty, I think this kind of thing is what the Asset Store is for. The kind of things you're talking about fall in the significantly-less-than-a-day bucket for skilled coders. (The raycasting bit itself is genuinely only minutes for me.) If there isn't already something on the Asset Store that does it, I think the commercial or collab sections would be a good bet, especially if you're willing to pay a little for the effort.
Yeah, additional options for cursor locking would be great. Something like Screen.lockedCursorBounds, letting us specify where the cursor can go while it's locked, would rock. It could default to the centre so its uncustomised behaviour replicates the existing behaviour, but it would also allow us to either put the cursor somewhere else or let it move within a restricted area.
Lots of non-programmers use Unity. I specifically use tools like uScript and Playmaker specifically because of how bad I am at programming. Most of the stuff being made with Unity is being made by non-programmers. And saying "this is the kind of thing the asset store is for" is what has allowed Unity to get away with half-assing a lot of implementations until now.
Let me flip that around. I'm a non-artist. Should Unity include more art for me out of the box because I can't make it myself? No. If I need anything more than the example or fundamental components that come out of the box, it's up to me to either find and buy them or find an artist to work with. Why is code different?
Before you say "but Unity does come with art!" Yes, it does. It also comes with some standard code assets. Just as the standard code assets don't meet all of your needs, the included art doesn't meet mine. Neither of those are shortcomings of the engine.
My point is that the engine should concentrate on giving us the best tools to make stuff ourselves, not on doing the making for us.
Unity is a game engine, not photoshop. And it's not that the included code assets don't meet my needs, it's that they're fundamentally broken, obtuse, inefficient, or useless. Are you railing against the new GUI being almost entirely drag and drop too?
Exactly. I wouldn't typically expect that to come with use-case-specific art or code. I'd expect it to be a tool that helps me work towards a huge variety of use cases.
Not at all. "Drag and drop" has nothing to do with whether or not something is use-case specific. From what I've seen in its thread the new GUI pretty much exactly sticks to what I've said - it gives us a tool that meets the fundamental use cases and is easily extensible by us if we need to do more specific things. It's a tool to make GUIs, it is not a bunch of pre-made GUIs being handed to us. If updates to the input system follow a similar model I'll be super happy.
Anyway, this is getting way off topic now.
Except handling user input is a fundamental part of game development and should be streamlined as much as possible. Developers should not have to build their own control remapping solutions when these are standard features in all games since before Doom came out.
Please re-read the context of what I was talking about, which was someone asking about use-case specific stuff. (Before you jump down my throat again, I think the question was reasonable. I just personally think the answer is "no" as far as core engine functionality goes.)
I know you're all psyched for the Input Manager to include a "make game" button, but ultimately, input handling can be an expensive business that takes up quite a few milliseconds on limited devices or hardware, so I'd like it to be a bit of a priority to focus on having fast alternatives.
For instance, 0.5-1ms spent on input is a lot if you've only got 16ms to play with.
Just want to say thank you to everyone who's taken the time to reply here. There is a lot of *really* valuable feedback here, all of which we'll take into consideration (I'm compiling a list of all the points that have been raised here as we speak).
As a quick update on the current state of things: we realized early on that we wouldn't make it in time for 5.0, partly because we couldn't free up the respective developers and partly because we don't just want to patch up the existing system a little, but rather take a full step back and do this properly. At the moment, with the work on 5.0 tipping more and more towards stabilization, we're ramping up work on input again.
There has already been quite a bit of design work (and some implementation work), and I'm glad to see that we're already covering many of the points raised in this thread. There's full script access to the raw event stream (including timings as precise as the underlying platform gets them to us), for example, which you can process at will (including rewriting or synthesizing the stream completely). There's full availability of input within FixedUpdate() based on the timing information we have. There's a separation between a low-level event-pumping and state-processing layer and a high-level action-based layer. And lots more. But I'm also seeing things raised here that we hadn't yet considered.
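The core of that description, timestamped events that fixed-timestep logic can consume precisely, can be sketched in plain C#. Everything here is illustrative: these type and member names are not Unity API, just a self-contained model of the design being described.

```csharp
using System.Collections.Generic;

// Illustrative sketch (not Unity API): input events carry precise
// timestamps, and fixed-timestep game logic consumes only the events
// that occurred at or before the current fixed step's time.
struct InputEvent
{
    public double Time;     // when the event happened, in seconds
    public string Control;  // e.g. "buttonSouth" (hypothetical name)
    public float Value;     // e.g. 1 = pressed, 0 = released
}

class TimedEventQueue
{
    readonly Queue<InputEvent> events = new Queue<InputEvent>();

    public void Push(InputEvent e) => events.Enqueue(e);

    // Yield every event that happened at or before 'fixedTime',
    // leaving later events queued for the next fixed step.
    public IEnumerable<InputEvent> DrainUpTo(double fixedTime)
    {
        while (events.Count > 0 && events.Peek().Time <= fixedTime)
            yield return events.Dequeue();
    }
}
```

A FixedUpdate()-style loop would then call `DrainUpTo(Time.fixedTime)` each step, which is what would make input fully available to fixed-timestep logic without the Update()-side buffering workaround described earlier in the thread.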
I'd agree with angrypenguin that a plugin can sufficiently do what you want. We all have different ideas of what an easy non-coding way to do it looks like, and it's possible your needs are different from the next Unity dev's. So I don't think Unity should have a built-in one. But of course, that's for UT to decide. They made navmesh pathfinding built-in even though there are devs who don't need it (if your game uses a chess-like or hex-grid map, you'll still want A*).
And there was much rejoicing from anyone doing anything based on twitch timing! And now my DDR clone can begin.
Just in general, are there improvements to the way non-mouse/keyboard devices are handled? I'm not sure how to phrase this, but if I have 3 gamepads hooked up, is there any way to get something like GamePadOne.buttonOne.GetDown() (even if it has to be cached through Input.Devices or something like that) and get the raw data without having to go through the Input Manager?
This is pretty much my #1 complaint with the current input system: non-keyboard/mouse devices are pretty much broken by being hidden away behind the Input Manager.
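With the current system, the closest workaround for buttons is the joystick-numbered KeyCode values, which bypass the Input Manager's named axes entirely (analog sticks, unfortunately, still need named axes). A rough wrapper in that GamePadOne.buttonOne.GetDown() style, using the existing UnityEngine.Input API and assuming the enum's contiguous 20-buttons-per-joystick layout:

```csharp
using UnityEngine;

// Rough wrapper over the existing per-joystick KeyCodes, approximating
// GamePadOne.buttonOne.GetDown()-style access with today's API.
// Only covers buttons; analog axes still go through the Input Manager.
public struct GamePadButton
{
    readonly KeyCode code;

    public GamePadButton(int pad, int button)
    {
        // KeyCode.Joystick1Button0 .. Joystick8Button19 are contiguous,
        // 20 buttons per joystick, so the offset can be computed.
        code = KeyCode.Joystick1Button0 + (pad - 1) * 20 + button;
    }

    public bool GetDown() => Input.GetKeyDown(code);
    public bool Get()     => Input.GetKey(code);
    public bool GetUp()   => Input.GetKeyUp(code);
}

public class PadExample : MonoBehaviour
{
    static readonly GamePadButton padOneButtonOne = new GamePadButton(1, 0);

    void Update()
    {
        if (padOneButtonOne.GetDown())
            Debug.Log("Pad 1, button 0 pressed");
    }
}
```

It's a workaround rather than a fix, but it does give per-device button state without touching the Input Manager's axis list.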
While the scripting API for this isn't very firm at this point, the current design does address this problem. For one, you can tap into the raw event stream, where each event carries precise information about which device, and which control on that device, it came from. Also, state within the input system isn't allocated as a fixed set but rather grows with each device added to the system, meaning you can specifically ask for a button X down-state change on device Z.
This design addresses two issues raised multiple times in this thread: not being able to circumvent the input manager entirely (which you can now do by simply hijacking the event stream), and supporting multiple devices of the same type (which is now trivial, as you can allocate and plug in new devices at will).
I really believe that these guys did a perfect job of enhancing the input system. I highly suggest Unity buy their code... and offer them a job.
I spent a little bit of time extending the add-on so that any player can create truly customizable, completely platform-independent controls at runtime.
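That add-on's code isn't shown here, but runtime rebinding on top of the current system usually boils down to a pattern like this. It's a minimal sketch using only the existing Input/KeyCode API; the action names and default bindings are illustrative:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Minimal runtime-rebinding sketch using the current input API.
// Action names and default bindings are made up for illustration.
public class Rebindable : MonoBehaviour
{
    readonly Dictionary<string, KeyCode> bindings =
        new Dictionary<string, KeyCode>
        {
            { "Jump",   KeyCode.Space },
            { "Crouch", KeyCode.LeftControl },
        };

    string listeningFor; // action currently being rebound, if any

    public bool GetActionDown(string action) =>
        Input.GetKeyDown(bindings[action]);

    // Call this (e.g. from a UI button) to rebind an action to the
    // next key the player presses.
    public void StartRebind(string action) => listeningFor = action;

    void Update()
    {
        if (listeningFor == null) return;

        // Scan all KeyCodes for whichever key was pressed this frame.
        // Slow, but fine for a one-off rebind screen.
        foreach (KeyCode key in Enum.GetValues(typeof(KeyCode)))
        {
            if (Input.GetKeyDown(key))
            {
                bindings[listeningFor] = key;
                listeningFor = null;
                break;
            }
        }
    }
}
```

Persisting the dictionary (e.g. via PlayerPrefs) is all that's left to make the remapping survive restarts, which is why it's so frustrating that this isn't built in.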