In Input Team we’ve tried adapting a few existing small games to use the new input system as the back-end.
It’s hard to cover every use case, so we’d love to hear whether everything goes well if you try to use it for your own game(s). It’s most likely to work well if you use:
Keyboard
Mouse
Xbox 360 controller (on Windows and Mac)
(Generic joysticks and other gamepads are not yet supported.)
Things we’d like to know about your game if you try it out:
Does your game have multiple control schemes (for example playable with either keyboard+mouse or with gamepad)?
Does it have tutorial text saying which buttons to press to perform various actions?
Does it have local multi-player (co-op or competitive)?
We are too far along in development to adopt the new input system, but I’d be very happy if Unity could attempt PS4 & Xbox One certification of a title (both single player and local multiplayer) with the new input system, as a part of the development process. There are specific certification requirements around handling multiple controllers in a single player game, matching user profiles to controllers, etc., and these have been cumbersome with the current input model or InControl (which is what we use as a wrapper on top of Unity’s input model – except on PS4, where we now use a native wrapper that fetches the data directly through a custom P/Invoke library.)
Having Unity design the system to make these certification requirements easier to meet would be very valuable.
(EDIT: To clarify this would be things like assigning platform user IDs / metadata to player handles, automatically setting up an initial player handle (and not allowing others) in PS4’s “initial user logout not supported” mode, etc.)
I have done some fiddling with the input system to see if I could get it to work in an already released game. This was (more or less) successful! Below are my findings.
Game details
Game is released on standalone platforms only, and available on Steam.
Runs on Unity 4.6.8p3
Uses InControl in order to support as many gamepads as possible.
Game has rebindable keys, for the most part.
Single player only game.
Game Control details
When using the keyboard & mouse, the game has a control scheme similar to Diablo (or other hack-and-slash games). With a gamepad, the player controls the character using the left stick and mostly uses the four action buttons to interact with the world. In short, the gamepad controls aren’t very complex.
Context-Sensitive UI helper
Aside from the game & menu, there is also a HUD element in the game that shows the player which buttons can be used in a certain context. For example, when the player is in combat, the bottom-right corner shows which buttons are available. (see screenshot below)
This is not a very weird feature, but it gets close to the input system when you take into account that you want different icons to show up based on which type of controller is used. In this case, I was using an Xbox gamepad, so it’s showing Xbox icons. A PS4 controller would show its own icons instead.
In order to achieve this with InControl, I simply added an extra field inside each Controller Profile, specifying whether that controller should display “default”, Xbox, or PS4 icons. I then googled the profiles that weren’t obvious to see what the controllers looked like. (and wow, some of them are awful contraptions)
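The change amounted to something like the following sketch. Note that the enum, field name, and profile class here are hypothetical stand-ins, not InControl’s actual API:

```csharp
// Sketch only: InControl's real device profile classes are more involved;
// GamepadIconSet, IconSet, and ControllerProfile are illustrative names.
public enum GamepadIconSet { Default, Xbox, PS4 }

public class ControllerProfile
{
    public string Name;
    // The one extra field per profile: which icon set the HUD should use
    // when this controller is the active device.
    public GamepadIconSet IconSet = GamepadIconSet.Default;
}
```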
It would be really awesome if Unity somehow managed to streamline this process for this new system. In particular, the important parts here are:
Being able to match icons to specific controls in a consistent manner. Since this should be matched on the gamepad side, rebinding actions to different inputs shouldn’t be an issue.
Somehow being able to respond to the “ActiveDevice” being changed (when the user grabs a controller after using the mouse for a bit, or the other way around).
UI-wise, a small but awesome feature would be built-in support for “Press [button] to [action]” prompts. Though this could probably be an asset-store thing on the side.
This is a low-level thing of course, but it might be one of the most important things (in my opinion) that the new input system should fix. Ideally, you’ll be able to use triggers as:
2 separate axes
2 buttons
a combined axis (what it is now)
The actual porting part
Because the new input system shares some concepts with InControl, I had an easy time understanding concepts like player handles. I fiddled with the demo scene a bit and then decided right away to try it out in this game. Since the game also uses InControl and was already set up in a way that the input system could be swapped out for a different one, I knew it probably wouldn’t take too long.
This means the actual implementation itself didn’t really contain any surprises or things that I was missing, except for a manager object that I had to create myself.
Because I had limited time, I thus far only implemented the gamepad & gameplay part. This means I didn’t get to test the “block” functionality with multiple contexts, nor did I implement any UI related parts. I might eventually find time to do those parts as well, but not for the next few weeks at least.
Other stuff/questions about the input system
Is it possible to have two players using one keyboard? I can imagine some games wanting to do this, so it’d be a shame if that doesn’t work.
Minor notes on the input system’s Inspector:
I wasn’t able to select the mouse scroll wheel axis (±).
Shift & Ctrl keys, among others, aren’t named yet in the dropdown of all keyboard buttons.
Continuing to work after clicking Apply will prompt you to apply again (because you’ve changed something again while it was saving).
The dropdown for selecting a source seems iffy:
Select type Vector2 and set it to two other actions.
Remove an action in the list above these actions.
Notice that the source value has changed.
Selecting dpad-left or dpad-down as buttons doesn’t work. Most likely because they are “technically” part of an axis.
Selecting dpad-up as an absolute axis did not work; dpad-down does work, likely for the same reason as above.
Please tell me if you are missing some info and/or if you’re interested in anything more specific.
Great to see you trying out the system with your released game Tim. Thanks for that!
This is part of the Device Standardization feature in the new input system.
If you look at the two device profiles included – Xbox360MacProfile.cs / Xbox360WinProfile.cs – you can see that they include a name for each control. For the action buttons this is “A”, “B”, “X”, “Y”, for example.
We don’t have more profiles yet, but a profile for a PS4 controller would instead have “Cross”, “Circle”, “Square”, “Triangle”.
In your gameplay code you can query the name of the control that’s currently used for an action:
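The code sample this sentence referred to is not preserved in the thread; a hypothetical sketch of what such a query could look like (ShooterControls, GetActions, and GetSourceName are assumed names; the prototype’s actual API may differ):

```csharp
// Hypothetical sketch - the prototype's actual member names may differ.
// Look up the control currently bound to the "fire" action for this player
// and use its profile-provided display name in a UI prompt.
string GetFirePromptText(PlayerHandle playerHandle)
{
    var actions = playerHandle.GetActions<ShooterControls>();
    // e.g. "A" on an Xbox 360 pad, "Cross" on a PS4 pad.
    string controlName = actions.fire.GetSourceName();
    return "Press " + controlName + " to fire";
}
```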
Now, this gives you a name and not an icon. Since we want many device profiles to be built into Unity’s built-in resources once it ships, it would be a bit hard to customize icons for built-in profiles.
Instead, you can create a look-up dictionary or list that maps names to icons, and use that to find the icon for a given control name. We’d like to provide such a thing together with the input system, but haven’t gotten to that yet.
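Such a look-up table could be as simple as the sketch below. The control name strings would come from the device profiles; the sprites and the fallback behavior are up to your project:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Maps control names reported by device profiles (e.g. "A", "Cross")
// to UI icon sprites. Unknown names fall back to a default icon so the
// HUD always shows something, even for unmapped controllers.
public class ControlIconLookup : MonoBehaviour
{
    public Sprite fallbackIcon;
    Dictionary<string, Sprite> icons = new Dictionary<string, Sprite>();

    public void Register(string controlName, Sprite icon)
    {
        icons[controlName] = icon;
    }

    public Sprite GetIcon(string controlName)
    {
        Sprite icon;
        return icons.TryGetValue(controlName, out icon) ? icon : fallbackIcon;
    }
}
```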
Right, that makes sense. We could expose that as a delegate on the PlayerHandle for example.
You mean when authoring an ActionMap? Yes, this is our plan to support. But we can’t do it for the prototype that’s based on the existing Input System. Once we have the new backend, we’ll get to this feature.
Right! We need the new back-end to support untangling the two triggers.
Apart from that, having two different analog inputs work as two axes, two buttons, or one axis should already work, more or less. You can try it out with some other analog axes like x axes of left and right thumbsticks. For the combined axis, you can use a ButtonAxisSource and specify the two axes as the negative and positive inputs. I guess we’ll need an option to reverse either of them for this to work for all cases.
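In code, the combined-axis case might look something like this. ButtonAxisSource is named above, but the constructor arguments and control references here are assumptions based on the description, not the prototype’s confirmed signature:

```csharp
// Sketch: combine two analog controls into one axis via ButtonAxisSource.
// The exact constructor and how you obtain the two control references
// are assumptions.
var combined = new ButtonAxisSource(
    leftTrigger,   // negative input: pulls the combined axis toward -1
    rightTrigger   // positive input: pulls the combined axis toward +1
);
```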
We don’t have builtin support for Unity UI yet so it probably was a good choice to focus on other aspects first.
It’s something that will probably require a workaround, since the normal system is designed around a device being assigned to one player only. But I think you should be able to just use global player handles for this case, so that two global player handles can both listen to the same keyboard device.
Hmm, I haven’t seen this. If you could record a video showing exactly what you mean, that would be great. (For example using Jing.)
Right! This is on our todo list.
Thanks. We’ll look into it.
Thanks again for the detailed feedback! Let me know if the functionality for getting the name of the control currently associated with an action works out for you - e.g. “A” on Xbox / “Cross” on PS4.
That seems like a good idea. I would advise making the list of all action button names visible in the documentation or somewhere else, because controllers can be named in all kinds of ways. Most developers won’t want to spend time making an icon for every single possible button, but you DO want to have some fallback option, such as showing the PS4 buttons when the actions are 1/2/3/4.
In addition, you may run into trouble with some conflicting names. For example, you may want to show a different icon for “Back” depending on whether the user has an Xbox 360 controller or an Android phone.
Not entirely surprising. I guess in most cases where this would be used, the devs shouldn’t have a problem using the global player handles.
I’m in the process of porting my current in-development project to the new system, and it’s already allowed me to simplify my input logic significantly. I ended up placing the PlayerInput object on my persistent managers object along with my input manager component (which is now just a tiny stub thanks to the new system) and passing a reference to the player on spawn. It’s working brilliantly so far. The only thing that hasn’t been obvious is how to handle mouse scroll wheel events, which I’m currently handling directly in my input manager.
I tried it out in a WIP project. It’s a 3D interactive application with no multiplayer support. It will support multiple control schemes on release, though (but not yet). Anyways, here are some of the observations I made.
Switching over from the old input system was surprisingly painless. The process also took care of some long-standing bugs for me. For example, I didn’t want the 3D scene to respond to input while the user was actually clicking on a UI element. This worked like magic in the new input system.
There are some tiny UI behaviors that got in the way sometimes. For example, renaming an action map in the project view doesn’t automatically rename the subassets and the auto-generated C# script. I have to change a setting and then click apply to fix the subassets, and then manually delete the old C# script.
The mouse delta axes seem to be inverted. Please verify, because I’m not entirely sure the error isn’t on my end. I just noticed that when I switched to the new system, I had to negate the values.
This isn’t particularly a bug, but I feel that a double-click/double-tap should be supported by default, since this is supposed to be a very high-level framework… (as well as mid and low levels, of course).
Overall, this is a wonderfully designed framework and well worthy of applause.
We’re aware of these issues and are looking into implementing a better solution.
Mouse movement to the right is positive and up is positive. This matches the Mouse X and Mouse Y axes in the current/old input system. If you are seeing something different, please let me know.
@runevision we’re looking to incorporate this in new game that we’re developing. Is Unity still looking at using this approach going forward? Also is there a more up-to-date code base? Perhaps a GitHub project that we can tie into?
We have moved from the prototype to developing the proper new input system. It will be very similar to the prototype but not identical, and unfortunately not backwards compatible. If you begin to implement things with the prototype, it will be conceptually easy to switch over to the new input system when it’s ready, but you’ll need to set up your ActionMaps from scratch and tweak various code here and there.
We’re not actively developing the prototype for this reason. The new input system is expected to go into public experimental availability within half a year from now.