Input System Update

//UPDATE: For instructions on how to install it via package manager, please see here.
//UPDATE: Mentions of preview builds are outdated. You can use the “develop” branch in the GitHub repo with any Unity 2018.3 version or 2019.1 beta. No special build is required.

Hey everyone,

First, we’re sorry we’ve been radio silent for a bit. We wanted to make sure we had more than just a “Hey, we are still working on it.” So… what’s been happening?

After an evaluation by several teams inside Unity, we found things falling short in a number of ways, chief among them the performance implications of the event model we had chosen. After reviewing what we had to change, it became clear that while our C++ parts were headed in the right direction, our high-level C# layer would have to be fundamentally changed – and that meant a rewrite. After what had already been a rather protracted period of development, that was a bitter pill to swallow, but we really think it was the right thing to do, given that whatever becomes final will stay final for some time to come.

After spending time back at the drawing board, we’ve made rapid progress implementing the new system and are getting close to having feature-parity with the previous system. We’re excited about how things have turned out and are seeing solid progress with the issues we’ve previously identified. We’ve also gone through another round of internal reviews getting a much more enthusiastic response.

However, to get to the next stage, we want to open development up to a wider audience to make sure we’re hitting all the right targets. To this end, we’ve made our C# dev repo public as of today and will be providing preview builds of the editor shortly (our goal is to have them available before Christmas) so you can run the system yourself and be involved in shaping its final form.

Be aware that things are still under heavy development. What you’re seeing isn’t 1.0 or even 0.7. This is not a polished “final” release and all accompanying materials are work in progress.

Beyond the preview builds, our plans (disclaimer blablabla) are to land our native backend changes in a Unity release and to make the C# code available as a Unity package. By that point, anyone will be able to use the system with a public Unity build.

Of course, throughout that process we will listen to feedback and adapt. We’re trying our best to take extra care that we get it right.

Q&A

What are the main differences to Unity’s current input system?
Whereas Unity’s current system is closed off and keeps data about device discovery and input events internal, the new system sends all of that data up to C# and does its processing there.

This means that the bulk of the input system has moved into user land and out of the native runtime. This makes it possible to independently evolve the system as well as for users to entirely change the system if desired.

Aside from this fundamental architectural difference, we’ve tried to solve a wide range of issues that users have with the functionality of Unity’s current system, as well as build a system that is able to cope with the challenges of input as it looks today. Unity’s current system dates back to when keyboard, mouse, and gamepad were the only means of input for Unity games. On a general level, that means having a system capable of dealing with any kind of input – and output, too (for haptics, for example).

How close to feature complete is the system?
There still remains a good chunk of work to be done on actions and especially their various editing workflows. Output support (rumble/haptics) has a design in place but implementation is still in progress. Also, while desktop platforms are starting to be fully usable, there still remains a good deal of work on the various other platforms. Documentation also needs major work. There’s lots of little bits and pieces still missing in the code. And, finally, there’s a stabilization pass that hasn’t happened yet so a good solid round of bug fixing will be required as well.

Beyond that, we’re working on equipping the system to function well in the world of C# jobs and the upcoming ECS.

However, we don’t think the system has to be 100% feature complete to be useful to users and instead are aiming for a baseline set of functionality to be fully completed by the time anyone can use the system with a public Unity build. From there we can incrementally build on it and ship updates through Unity’s package system.

How can I run this?
The C# system requires changes to the native part of Unity. ATM these are not yet part of a Unity release. We will make preview builds of the editor based on our branch available shortly which can then be used in conjunction with the C# code in the repository. As soon as we have landed the native changes in a public release, everyone will be able to run the code straight from a normal Unity installation.

Are action maps still part of the system?
Yes. While the action model has changed and there’s still work to be done on actions (and especially on the UI side of them but also with things like control schemes and such), actions and bindings are still very much part of the system. Take a look at InputAction and InputActionSet in the repo.

In the old model, actions were controls that had values. In the new model, actions are monitors that detect changes in state in the system. That extends to being able to detect patterns of change (e.g. a “long tap” vs a “short tap”) as well as requiring changes to happen in combination (e.g. “left trigger + A button”).
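
As a rough sketch of what that looks like in code (a minimal sketch only – constructor and callback signatures were still in flux in the dev repo, so treat the exact shapes here as assumptions):

using UnityEngine;
using ISX; // The dev repo's namespace at the time (see the namespace Q&A below).

public class FireInput : MonoBehaviour
{
    private InputAction m_FireAction;

    void OnEnable()
    {
        // The action stores no value of its own; it monitors the bound
        // control's state and notifies us when a qualifying change occurs.
        m_FireAction = new InputAction(binding: "/gamepad/rightTrigger");
        m_FireAction.performed += ctx => Debug.Log("Fire!");
        m_FireAction.Enable();
    }

    void OnDisable()
    {
        m_FireAction.Disable();
    }
}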

Is it still event based?
Yes. Source data is delivered from native as an event stream which you can tap into (InputSystem.onEvent). The opposite direction works as well, i.e. you can send events into the system that are treated the same as events coming from native.
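
A minimal sketch of both directions (the exact delegate and queueing signatures may differ between versions of the repo):

using UnityEngine;
using ISX;

public class EventTap : MonoBehaviour
{
    void OnEnable()
    {
        // Observe every event before it gets applied to device state.
        InputSystem.onEvent += eventPtr =>
            Debug.Log("Input event for device #" + eventPtr.deviceId);

        // The opposite direction: queue an event into the system; it is
        // processed exactly like one coming from native. (GamepadState as
        // the snapshot struct for gamepads is an assumption here.)
        //InputSystem.QueueStateEvent(Gamepad.current, new GamepadState());
    }
}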

Is it extensible?
Yes. Being able to add support for new devices entirely in C# has been a key focus of the system. We’re still polishing the extensibility mechanisms but the ability to add new devices without needing to modify the input system is already there.
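
Very roughly, the idea looks like this (the template JSON shape and the registration call are assumptions based on the repo’s “template” terminology and are likely to change):

using ISX;

public static class CustomDeviceSupport
{
    // Hypothetical template extending built-in gamepad support for a
    // specific product; field names are illustrative, not final.
    const string k_Template = @"
        {
            ""name"" : ""MyVendorGamepad"",
            ""extend"" : ""Gamepad""
        }";

    public static void Install()
    {
        // Registration happens entirely in user code; no modification of
        // the input system itself is required.
        InputSystem.RegisterTemplate(k_Template);
    }
}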

What were the performance problems of the previous event model?
The previous event model was very granular. Usually, one control value change meant one event. Also, events had fully managed representations on the C# side which required pooling and marshaling. Events, once fully unmarshalled, were sent through a routing system which additionally added overhead. This was compounded by a costly way to store and manage the state updated from those events.

In the new event model, all state updates are just memcpy operations and events contain entire device snapshots. Event and state data never leaves unmanaged memory and there is no routing. This model is also much better equipped to work with the upcoming C# job system (where a C# InputEvent class will become unusable).
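
To illustrate the snapshot model, think of device state as a plain struct blob (an illustrative layout, not the actual one):

using System.Runtime.InteropServices;

// Illustrative only: an event carries a full snapshot like this, and applying
// it to the device's state record is a single memcpy. The data never needs a
// managed (class) representation, so there is nothing to pool or marshal.
[StructLayout(LayoutKind.Sequential)]
struct ExampleGamepadState
{
    public uint buttons;      // All buttons packed as bits.
    public float leftTrigger;
    public float rightTrigger;
    public float leftStickX, leftStickY;
    public float rightStickX, rightStickY;
}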

I’m seeing things in a namespace called ‘ISX’. What’s that about?
This is temporary. Ideally, we would like to use the UnityEngine.Input namespace but given that’s a class in UnityEngine.dll, that comes with problems. We’re still thinking about the best approach here or whether to just use a different namespace inside UnityEngine but it’s still TBD.

The name itself comes from the fact that the system initially had the internal name “InputSystemX”.

What are the plans for migrating to this from the old system?
For now, the two will stay separate and exist as two independent systems in Unity side by side. The existing input system in Unity is such a fundamental part of pretty much every Unity project that we cannot safely consider a migration until the new system is fully ready in terms of both functionality and stability.

Projects will have a choice of which system to use (including using both side-by-side) and will be able to turn one or the other off (with the new system being off by default for now).

Once there is truly no reason anymore to use the old system over the new one, we can start thinking about what to do about the old one.

Are the APIs that are already there reasonably close to final?
At this point, it’s still too early to tell. There may still be significant changes.

49 Likes

Awesome news, thanks!

I really like the approach of releasing these things as external packages (just like the Post-Process stack). It gives easy source code access and prevents bloating the engine with tons of things. Other systems that would benefit from being released as external packages are the UI and Unet HLAPI. These are two things I often wish I could easily modify without having to recompile a DLL.

So basically I wouldn’t mind if this remains not integrated into Unity. Maybe add them to a list of “official packages” that users can select when starting projects, if you’re worried about visibility.

6 Likes

Cool!
I’ve got a few questions; some of them were already answered in the wiki, so here’s the rest:

Will we be able to do custom constraining of the cursor?
In the best case that’d be SetCursor(screenPos);
I want to have the cursor wrap around the edges just like when you drag a value in the Unity editor (getting close to an edge will put your cursor on the other side of the screen).

I’ve read the wiki and I’ve watched some of the videos as well, but it is still not quite clear to me how much delay exactly there will be between a physical input and the earliest point in the system where we can react to that.
Will there be some “slow” (as in less than mouse update rate, <1000 Hz) buffering going on somewhere that delays the events in a “flush every Update()” fashion?

Here: GitHub - Unity-Technologies/InputSystem: An efficient and versatile input system for Unity.
It says that there’s a third option. But I don’t get it.
Either you have events, or you poll, right?
Or is the event triggered by some user method call like “UpdateAllInputEvents()” or something like that? Is that what’s being said here?

If not, then how is that different from simply having a polling API and an event-based API available at the same time? I think I’m missing something here. Would be nice if you could elaborate on that.

Integration with the job system sounds nice. But I don’t quite understand in what scenarios that would make sense. Input isn’t something that takes a lot of time to calculate, so the only thing I can imagine would be that we’d get full async events pushed to us on a Job-Thread (for absolutely zero delay, which would be awesome).
Is that right? Can you give an example scenario here?

This is more like a comment.
You said you didn’t just want to say “hey we’re still working on it”.
But I think if you take a look at the forums, the people would have reacted completely differently if you just did that :slight_smile:
You could have said “Hey, still working on it, need to do a full rewrite bc performance sucks atm, the experienced people here will know that its for the best, cheers”.
That would have reduced all the negative and inflammatory comments by 90%

Anyway, I’m really happy about the way things are.
The new system looks really really promising, I like it.
And I’m happy that you guys decided to rather do a rewrite than to accept GC-pressure and/or lower than optimal performance.
My only concerns are SetCursorPosition and input-delay, and those are minor.

2 Likes

Great news, thanks! :slight_smile:

Thanks @dadude123 .

ATM I’m still figuring out how to best handle pointer positions, especially with respect to multiple displays and the fact that the input system can be used in EditorWindows as well (not just in game code).

What you can already do is put custom processors on pointer position controls which modify the stored value the way you want to when it’s being queried. Also, I think that once output support is complete you’ll be able to set the value on arbitrary InputControls – though that needs figuring out how it’d work with actions (which need to be able to observe state changes).

I’ll have a think about your use case and how the API could best support it.

By default, yes, there’s buffering. However, unlike in the old system, the buffering happens on the source data, not the aggregated data. What this means for mouse input on Windows, for example, is that the system would sample mouse input at probably a higher rate than your framerate (IIRC the default on Windows is 120Hz), so there’d usually be multiple state updates per frame, all of which come through in C#. Also, where it’s on us to sample rather than the system (XInput, for example), we’re making sure sampling happens at a user-controlled frequency rather than at the framerate.

Preconfigured input updates, where we flush out data, ATM happen right before fixed updates and right before dynamic updates. It will be possible to turn one or the other off.

Buffered data can be flushed out manually – which, however, ATM only benefits you in the case of data that isn’t fetched from OS-supplied event queues that are tapped on the main thread’s player loop.

Finally, there’s work under way to give C# game code control over the arrangement of the player loop so with that in place, scheduled input updates can also be moved around within the loop.

Events are actually the third option; the notes aren’t worded well. It’s talking about 1) callbacks, 2) polling, or 3) events, where “events” in this case means a “give me all the events that have accumulated” style rather than a “we notify you whenever there’s an event” style.
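
In sketch form (fireAction, Fire, and Process are hypothetical, and the property names follow the repo’s API shape but should be treated as assumptions):

// 1) Callbacks: the system calls you when something qualifying happens.
fireAction.performed += ctx => Fire();

// 2) Polling: you query current state whenever you need it.
if (Gamepad.current.aButton.isPressed)
    Fire();

// 3) Events: the raw events accumulated since the last update get handed
//    to you in a batch rather than one notification per change.
InputSystem.onEvent += eventPtr => Process(eventPtr);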

Yup, indeed, the concern isn’t the processing overhead of input but more the logic that is affected by input. Say your entire game logic is chunked into nice jobs with clear dependencies. But then part of that logic depends on the state of the input system. If you can’t get to that data in your jobs at the point you need it, then you have that ugly sync point that forces you back on the main thread. You can put it up front, before scheduling any jobs, but that’s still a constraint on what you can and can’t do in your logic.

Believe me, this has led to some heated internal discussions :slight_smile:

9 Likes

Very good news! Definitely looking forward to seeing this develop!

I do also have some quick questions about this:

  1. Will/Does it have controller rumble support?

  2. Can it handle multiple gamepads and if so, how? Any chance for an example in some way?

  3. During the development for this, will there be any example scenes/projects provided so we can see how things are done in Unity? IIRC the original prototype also provided some example scenes and scripts.

  4. Is there a way for us to detect what the user is currently using? Like some event that fires whenever the player switches from gamepad to keyboard. If not, there should be! It makes it easier to adapt the game to whatever the user is using right then and there.

Very excited to see how this turns out! Looks very promising! Just please don’t kill this new system! :slight_smile:

Yup, working on it :slight_smile: On the native side, we have the foundational pieces in place but on the managed side, we’re missing the part where you can write into state and have the updated state reach the underlying backend. It won’t be in place for the preview builds but it’s high up on the list of things to get finished.
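
For a sense of the kind of call this enables once writing into state reaches the backend (purely for illustration – this did not exist yet at the time of this thread; SetMotorSpeeds is the shape the shipped package eventually settled on):

// Drive the gamepad's rumble motors with low- and high-frequency speeds in [0..1].
Gamepad.current.SetMotorSpeeds(0.25f, 0.75f);

// Stop rumble.
Gamepad.current.SetMotorSpeeds(0f, 0f);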

Yes, though “handling” can refer to a lot of things :slight_smile:

You can have arbitrarily many gamepads (or any other type of device – there are no device count limits in the system).

// Find the last gamepad the user used.
var gamepad = Gamepad.current;

// Find all gamepads.
var gamepads = Gamepad.all; // In the API but not yet implemented; will throw.
var gamepadsViaLinq = InputSystem.devices.OfType<Gamepad>(); // Requires System.Linq.
var gamepadControls = InputSystem.GetControls("/<Gamepad>"); // Every device using the gamepad template.

With actions, you can have a set of actions, for example, and then use that same set for a 4-player local coop scenario:

InputActionSet UseActionsWithGamepad(InputActionSet set, Gamepad gamepad)
{
    var clone = set.Clone();
    clone.ApplyOverridesUsingMatchingControls(gamepad);
    clone.Enable();
    return clone;
}

// Determine which gamepad to use for which player in some way specific to your game.
var player1Actions = UseActionsWithGamepad(myGameControls, gamepad1);
var player2Actions = UseActionsWithGamepad(myGameControls, gamepad2);
var player3Actions = UseActionsWithGamepad(myGameControls, gamepad3);
var player4Actions = UseActionsWithGamepad(myGameControls, gamepad4);

// In reality, you'd probably have a component representing the input for one player and then have four player GOs with that component...

There’ll be more refined ways to work with multiple devices of the same type and sets of actions being applied to them but the core of it is there.

Absolutely. ATM what we have is… nothing much. We’re hoping to get the R&D content team to help us out there. As we get closer to full release, there will definitely be more polished supporting material and more examples to work off of.

ATM it’s possible by listening to the input stream. Events tell you which device they are for, so you can see when the user changes from one to the other. However, noisy devices make that tedious (e.g. the PS4 gamepad will constantly spam the system due to having sensors in the device).

Eventually, there will hopefully be more refined mechanisms.

:smile: No one wants to see that happening. Least of all anyone who went through the reboot…

We’re very confident we addressed what needed to be addressed and are on a really good track going forward.

4 Likes

Has the mythical Unity input system beaten Spider-Man’s number of reboots record yet? It looks like this iteration at least has more than just a cool new suit :slight_smile:

(Insert monthly rant here about Unity’s age relative to the suckiness of its input system.)

3 Likes

Hopefully http://www.isthenewinputhereyet.com/ won’t be up long.

@Rene-Damm One of the great things about Rewired is the colossal number of joypads it just recognises and maps to the internal layout. So my code just uses internal names (we prefer Xbox naming) and that maps spatially to whatever pad is plugged in, so if the internal name is .LeftBumper it’ll always be in that place.

Any plans for this? Because without it, I don’t think Unity’s new input is worth using; with Rewired, you can just deploy on Steam and practically nobody needs to map their weird Logitech or ageing PS3 pad.

Rewired has some 900+ mappings so far – a few no doubt just variations, but still, it’s something that raises the quality of a launched product in a way gamers actually perceive, versus invisible input behind the scenes that the gamer never experiences.

Thanks!

PS. Hopefully it’ll be as simple to use as InControl (Rewired takes a few too many editor-side setups for my taste, even if it is more powerful).

I don’t think that’s fair. In at least the latest iteration, Spider-Man also got an annoying child actor with lots of unfunny one-liners.

Agreed that consistency of mappings is a big deal. If you can’t rely on the A/south/cross button to be in the spot you expect it to, that’s not much good.

IMO this has two axes: 1) consistency across platforms and 2) consistency across a specific interface (especially HIDs on desktops).

  1. is something we’re working on to address from the get-go. Unity’s current input system is 90% platform-specific code with huge variations, given that all the interpretation of “what does this input mean?” happens on a per-platform basis. What we’re going for now is both much less platform-specific code and more consistency across the platform-specific code that remains. I think this step alone will make a big difference.

  2. will, I think, partially build up over time as we add profiles for devices to the system. We did license InControl’s roster of profiles, but things have changed so much that we’ll have to re-evaluate at some point what we can still bring over in some form. Consistent support for XInput and PS controllers will be there from the get-go, though.

It does pertain mostly to a very specific segment of input devices, though (namely gamepads and joysticks). The system overall aims to address input more comprehensively.

4 Likes

Really good work. Shit happens sometimes during dev, but we’re all rooting for team input. After all you even have your own website now :slight_smile:

1 Like

As you probably already know, the old input system has pretty high latency compared to other engines (Input seems delayed in old and new (at least mouse does).). I know you already talked about delays in terms of input buffering, but what average delay can we expect from physical device input to rendering (at ~60 fps, for example) using a default setup?

Do we have access to actually precise raw axes this time? Because, on my end at least, Input.GetAxis() and GetAxisRaw() report very imprecise mouse input (always rounded to the nearest .5 value, nothing in between), while other input systems I tried in raw C# or C++ didn’t have the same issue.
Here’s the script I used to reproduce it:

using UnityEngine;

public class InputTest : MonoBehaviour
{
    void Start()
    {
        float p = 1.2741f;
        Debug.Log("Here's a random float to string to compare formatting: " + p);
    }

    void Update()
    {
        // Note: the default Input Manager axis names contain a space.
        Debug.Log("GetAxis: " + Input.GetAxis("Mouse X"));
        Debug.Log("GetAxisRaw: " + Input.GetAxisRaw("Mouse X"));
    }
}
1 Like

Thank you very much for the update. Please keep them coming!

First off, awesome to hear that this is still happening! I agree an in-between update would have been very welcome, but oh well. My questions are less about the input system itself, but might as well.

I am really liking this trend of using Unity Packages here, but my main concern with them revolves around the discoverability of the packages. I would love to see them more closely integrated with the editor itself. Since Unity phones home and checks for updates every so often anyway, I would love to see it fetch information about available packages and create entries similar to the regular Standard Assets (i.e.: Assets > Import package > InputSystem (Opens in Asset Store))

Since the system also deals with hardware outputs, how about just UnityEngine.IO?

Regardless, great work so far! Cheers!

2 Likes

We agree, and we’re working on something in that direction :slight_smile:

5 Likes

Are you planning to support multiple controllers for applications like simulation (where my users likely have a control stick/wheel, some pedals with three or more axes, and maybe a button box or three)? I understand that this is a pretty niche case, and from talking with the Rewired folks I understand that consistently identifying USB input devices, especially in cross-platform code, is a pain. That said, my mgmt and I would be over the moon if Unity let me query things like the USB VID/PID after Gamepad.all so I could roll my own persistent mappings.

https://discussions.unity.com/t/670078/3 :wink:

1 Like

Cool stuff. Cross-platform XR input is currently a hassle devs have to juggle. Is native XR input handling dependent on this new input system being completed, or is XR input going to be part of this WIP branch?

1 Like

We don’t yet have hard numbers for you, as this aggregates a host of possible setups, but I definitely think that, in addition to giving you a certain level of control, the system also needs to give you an idea of what to expect.

It does depend quite a bit on what device we’re talking about. For things that get tapped on the main thread ATM (stuff like the mouse on Windows), there is a platform-specific distance between that and where the player loop brings input into C# land – which happens as the first thing in fixed and dynamic updates. There’s stuff in the works to give more control over the placement of input processing which would allow you to move things closer.

For asynchronously collected input, we either pick things up as fast as the system produces them (for devices that notify us) or pick things up at user-controllable frequencies. If you want the freshest possible set of async data, you can explicitly flush things out yourself. Out of the box, the system does that kind of thing for head tracking, where we need an extra update right before rendering.

Is that at least a somewhat satisfactory answer? :slight_smile:

Yup, you do. We perform as little processing on the native side as possible and send the data up to C# as raw as possible. There’s all kinds of conditioning you can do in the C# system for delivering final values (deadzone processing being the most obvious example) but you always have access to the raw unprocessed source data we picked up.
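
As an example of that kind of conditioning, a deadzone pass is roughly this (a minimal sketch with hypothetical names, not the system’s actual processor class):

using UnityEngine;

static class DeadzoneExample
{
    // Raw stick input passes through unprocessed; a processor like this
    // produces the conditioned value when the control is queried.
    public static Vector2 Apply(Vector2 raw, float min = 0.125f, float max = 0.925f)
    {
        var magnitude = raw.magnitude;
        if (magnitude < min)
            return Vector2.zero;

        // Rescale so the output still sweeps the full [0..1] range between
        // the inner and outer deadzone bounds.
        var t = (Mathf.Min(magnitude, max) - min) / (max - min);
        return raw.normalized * t;
    }
}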

When the preview builds are out, it’d be great if you could give it a whirl and see whether the data matches your expectations.

Glad the packman guys jumped in this thread and yup, 100% on board. The existing package management solution we have in Unity is weak and I’m super happy about what’s brewing there ATM. Just looking at .NET dev in general with NuGet (or any other contemporary dev ecosystem for that matter), it’s really something we’re missing. But that’s about to change :slight_smile:

That’s an interesting idea. Let me have a think on it.

The one worry we have with the namespace is that we assume users (especially new ones) will likely expect anything input-related to be called something with “input”. Which would get doubly confusing if there is something called UnityEngine.Input and it’s not the right thing to use. But maybe we’re overestimating how much importance that really has (definitely interested in hearing opinions).

Absolutely. We consider being able to accurately map whatever input device is connected to Unity to be a key aspect of a proper input solution.

For HIDs, the Rewired folks are definitely right that it can be tricky and will often require resorting to a database of per-product data about how to make sense of a specific device (thus big controller matrices like http://guavaman.com/projects/rewired/docs/SupportedControllers.html).

ATM we have a two-pronged approach that will hopefully have you covered.

Product-specific templates can be built to specifically deal with individual devices. This is the database-style approach. You can see an example here.

Additionally, there’s a fallback path that represents a best effort when there’s no product-specific template in place. Using the HID descriptor, it tries to figure out how to best represent the device in Unity. ATM this is still very bare bones. You can see it here.

Note that HID data from the platform is now available 1:1 in managed code (we are literally working with raw HID reports in C# code). If what comes out of the box proves insufficient, you can always set up support for your specific HIDs entirely in user space without having to modify the input system. Vendor and product IDs are available to you (see here).
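
A sketch of inspecting that data as devices are discovered (onDeviceChange and the description fields follow the repo’s API shape; treat the exact names as assumptions):

using UnityEngine;
using ISX;

public class DeviceWatcher : MonoBehaviour
{
    void OnEnable()
    {
        // For HIDs, the description exposes the raw identifying data
        // reported by the platform, including vendor/product information.
        InputSystem.onDeviceChange += (device, change) =>
        {
            if (change == InputDeviceChange.Added)
                Debug.Log("Added: " + device.description.manufacturer + " " + device.description.product);
        };
    }
}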

That said, HID is where it ends. We do not ATM have support for other USB device classes. I.e. if your device is USB but not a HID, it won’t get picked up ATM.

We are working closely with the XR team in Bellevue and are working towards converging our efforts when we land the native changes in a public Unity release. At that point, XR input should be a first-class citizen in the system next to the other types of devices.

1 Like

A possible idea is that the new input system uses the same namespace as Input, so we have Input.X.
Then later on, when Input is deprecated, both Input.X and Input point to the same thing.

Or, actually. Now you’ve done it. Precious input code harmony is never achievable again :confused: