Hi there! I’m currently getting familiar with the new Input System, and I’ve noticed that the documentation isn’t very clear regarding keyboard layout support. From what I’ve read, it seems like the Input System converts the active layout to the US layout.
So, if I set up a typical WASD movement binding, a user with an AZERTY keyboard would use ZQSD instead. A French friend of mine confirms it.
However, I’m wondering exactly which layouts are currently supported. Would it work with something really weird like the Dvorak layout?
There’s no support for any specific keyboard layouts. The way it works is that keys are identified by physical location; the US keyboard layout is used just to give the keys names. So the “A” key is always the key to the right of the Caps Lock key, regardless of what character that key actually generates in the current layout. This is what leads to the stable WASD behavior your friend was seeing.
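For example (just a sketch with placeholder names), an action bound to a key path like “<Keyboard>/w” targets the physical location, so on an AZERTY keyboard it fires on the key labelled Z:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class PhysicalKeyBindingExample : MonoBehaviour
{
    InputAction forward;

    void OnEnable()
    {
        // "<Keyboard>/w" identifies the key by physical position (US naming),
        // not by the character it produces in the active layout.
        forward = new InputAction("Forward", InputActionType.Button, "<Keyboard>/w");
        forward.performed += _ => Debug.Log("Key in the W position pressed");
        forward.Enable();
    }

    void OnDisable()
    {
        forward.Disable();
        forward.Dispose();
    }
}
```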
You can bind by generated text character. In the control picker, you can find this under “By Character Mapped to Key”.
When you do this, whatever key generates that character will be targeted by the binding. If no key generates the character, the binding won’t pick any control. If the layout changes, the binding refreshes and looks for the key again.
The list in the UI only shows options for the current keyboard layout, but you can switch the picker to text mode (that little “T” button) and enter any character you want. For example, to bind to the ä key, you’d enter a path along these lines:
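```
<Keyboard>/#(ä)
```

(If I remember the path syntax right, the #(…) form is what matches a key by the character it generates rather than by its position.)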
“The way it works is that keys are identified by physical location.”
And how is the physical location actually determined? I don’t know much about how it works at the hardware level, but I’m guessing keyboards send different signals depending on the key pressed. Do you mean that those signals are tied to the same physical location regardless of the layout? So an AZERTY keyboard sends the same signal for its “A” key as a QWERTY keyboard does for its “Q” key. If that’s the case, then the Input System is just reading the signal, and it will work as long as the keyboard is indeed sending the same signal. Is it something like that?
Yup, pretty much. Keyboards identify keys by “scan codes”, which don’t change based on keyboard layout. With different APIs and platforms we have different levels of access to that, but in general it’s possible to identify keys independently of the current layout, such that the same physical key always comes back to us with the same numerical identifier. So we use that as the basis for identifying keys, and then have the layout-specific naming sitting on top.
One thing I forgot to mention is that the “displayName” property of a key corresponds to what the key is named in the current keyboard layout. So if you do something like this:
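```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class KeyNameExample : MonoBehaviour
{
    void Start()
    {
        var key = Keyboard.current.aKey;   // the key in the US "A" position
        Debug.Log(key.keyCode);            // layout-independent identifier (Key.A)
        Debug.Log(key.displayName);        // label in the active layout, e.g. "Q" on AZERTY
    }
}
```

(A minimal sketch: the keyCode stays the same regardless of layout, while displayName should come back as “Q” when a French layout is active, since that’s the character the key in that position produces.)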
Thanks for the clarification. This is a pretty nice feature; the new Input System is pretty cool. Except for the lack of a proper API to persist rebindings and a few bugs here and there, I’m pretty satisfied with it.
Hello everyone,
I came across this thread during my research on the new Input System. It’s a very cool feature. And now a question came to mind: how many supported keys are there? Is there a way to find out the number of supported keys for the Keyboard and the Gamepad separately?
For the Keyboard you can use Keyboard.keyCount, and I believe that’s 110 since it’s a constant value. For the Gamepad you can use Gamepad.all.Count instead.
Well that was incorrect actually.
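For what it’s worth, one way to see what a connected device actually exposes is to count its controls at runtime. A rough sketch (note this counts every control, including sticks, synthetic controls and so on, not just keys or buttons):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class ControlCountLogger : MonoBehaviour
{
    void Start()
    {
        // allControls lists everything a device exposes, so these numbers are
        // an upper bound rather than a strict "key count" or "button count".
        if (Keyboard.current != null)
            Debug.Log($"Keyboard controls: {Keyboard.current.allControls.Count}");

        if (Gamepad.current != null)
            Debug.Log($"Gamepad controls: {Gamepad.current.allControls.Count}");
    }
}
```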
By default, you should just press the key you want.
So if you are on an AZERTY keyboard, press Z. The UI will show “W” because it maps the display to a QWERTY keyboard.
But ultimately this is indeed the physical scan code that will be used.
It is possible to map to the character instead (for example the Z key) by using the alternative option (“Character Mapped to Key”, I suppose). Then it will react to Z wherever that key is.
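To show the physical-position behaviour in code (just a sketch; the class name is a placeholder):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class PollPhysicalKeyExample : MonoBehaviour
{
    void Update()
    {
        var keyboard = Keyboard.current;
        if (keyboard == null)
            return;

        // Key.W is the physical position (US naming); on an AZERTY keyboard
        // this is the key labelled Z.
        if (keyboard[Key.W].wasPressedThisFrame)
            Debug.Log("Pressed the key in the W position");
    }
}
```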
Actually, the game behaves as expected after building, which is what matters. It does not during the simulation, though.
I use an AZERTY keyboard, and in my input action file Up is physically bound to W, but during the simulation Z (W) and Q (A) don’t work. And if I fix the input file for the simulation, I ruin the build version instead.
Anyway, I think it is just a minor bug that was more inconvenient than it should have been because I am new here ^^"
Basically the same thing for me: I assigned an action to the Z key and had to use W to perform the action.
For QWERTY users, I think it is awesome that Unity automatically knows that W should be replaced by the key actually used on the system, but it should then be able to do the same in reverse.
@Erenquin pressing Z will only get “W” displayed when using Listen, which I think should be the default then. If you click on the Path box, it starts a search by text value, so pressing Z gives you… Z (which is activated by pressing the key labelled W on an AZERTY keyboard).
Well, it depends on your needs.
If you use “search by text”, it will associate the key press with your system keyboard layout, so each key will display its proper letter no matter the keyboard.
If you use the default key-binding method, it will use the physical signal (scan code) of the key. It will display the letter of the standard US keyboard, so Z on an AZERTY keyboard corresponds to W on QWERTY. In this mode, no matter what letter is printed on the key in other regions/keyboards, the same label will be displayed, but the engine will react to whatever key sits in that physical position, regardless of the system layout.
So it all depends on whether:
- you want to respond to the key “position”, no matter the layout (this will probably be yes most of the time), or
- you want the key press to depend on the layout, which is a very specific case.
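And if you go with position-based bindings but want your UI to show the label the player actually sees on their keyboard, I believe there’s a helper for that. A sketch (the field is hypothetical and assumed to be assigned in the Inspector; double-check the exact API for your Input System version):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class BindingLabelExample : MonoBehaviour
{
    // Hypothetical reference to an action bound to <Keyboard>/w, set in the Inspector.
    [SerializeField] InputActionReference moveUp;

    void Start()
    {
        // Produces a human-readable name based on the active layout, so an
        // AZERTY player should see "Z" rather than "W".
        Debug.Log(moveUp.action.GetBindingDisplayString());
    }
}
```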
The window where you select buttons: when it’s in the mood, you can search and (sometimes) change devices and all sorts of things; when it’s not in the mood, you can’t. It frequently gets stuck in places and won’t move. For example, while trying to explain this I was trying to add cursor keys, and my 2D Vector would a) not show me the option for Keyboard… This is the most buggy thing I’ve seen in the Unity UI for a long time.