I don’t want to use delegates, too confusing to read.
I made this thing:
so I want GetAxis(“move.vertical”) or whatever the syntax is nowadays
// With the generated C# thing.
myActions.NewActionMap.vertical.ReadValue<float>();

// When reading directly from the InputActionAsset, e.g. when using PlayerInput.
playerInput.actions["vertical"].ReadValue<float>();
what’s a generated c# thing?
any way to hash that string? or does using a string have no impact on performance?
In the inspector for the .inputactions asset, tick the “Generate C# Class” checkbox. This results in a self-contained C# class that surfaces the maps and actions in the asset as direct getters.
Note that this workflow isn’t currently compatible with PlayerInput. For the PlayerInput component, only string-based lookups exist ATM.
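To connect this with the snippet above, here is a rough sketch of what using such a generated class can look like (MyActions/NewActionMap/vertical are placeholder names taken from the earlier example; the generated names depend on your asset, and details may vary by package version):

using UnityEngine;

// Sketch only: assumes a "MyActions" class generated from the .inputactions asset.
public class VerticalMover : MonoBehaviour
{
    MyActions _actions;

    void Awake()
    {
        _actions = new MyActions();
        _actions.NewActionMap.Enable();   // actions only produce values while enabled
    }

    void Update()
    {
        // Strongly-typed getter instead of a string lookup.
        float v = _actions.NewActionMap.vertical.ReadValue<float>();
        transform.Translate(0f, 0f, v * Time.deltaTime);
    }

    void OnDestroy()
    {
        _actions.Dispose();   // the generated wrapper owns its copy of the asset
    }
}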
ok i thought the inputaction asset was the same as the playerinput (the window?)
for the future: which one allows player to deeply customize action? which one handles QWERTY/AZERTY in wasd?
and finally for the switch what’s the easiest way to get npad running?
For the most part, the functionality is equivalent regardless of how/where the actions are defined. The exception is control schemes, which currently are only supported for InputActionAssets. Other than that, features such as composite bindings and interactive rebinding are supported no matter where the actions are coming from.
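As a point of reference, here is a minimal, hedged sketch of interactive rebinding on some InputAction (jumpAction is a made-up reference; exact fluent API details may differ between package versions):

using UnityEngine.InputSystem;

// Sketch only: interactively rebind an action to whatever control the player actuates next.
jumpAction.Disable();   // rebinding generally expects the action to be disabled
var rebind = jumpAction.PerformInteractiveRebinding()
    .WithControlsExcluding("<Mouse>/position")   // ignore noisy controls such as pointer position
    .OnComplete(op =>
    {
        op.Dispose();
        jumpAction.Enable();
    })
    .Start();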
Overall, however, except for prototyping or simple self-contained components that need input, I would recommend using PlayerInput. Or, if not that, then the “Generate C# Class” workflow mentioned above (which, however, ATM does not use/support control schemes).
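For context, a minimal sketch of the PlayerInput route with Behavior set to “Send Messages” (the “Move” action name is an assumption; PlayerInput calls a method named “On” + action name on the same GameObject):

using UnityEngine;
using UnityEngine.InputSystem;

// Sketch only: lives on the same GameObject as the PlayerInput component.
public class PlayerController : MonoBehaviour
{
    Vector2 _move;

    // Invoked by PlayerInput whenever the "Move" action triggers.
    public void OnMove(InputValue value)
    {
        _move = value.Get<Vector2>();
    }

    void Update()
    {
        // Consume the stored value in the normal game loop.
        transform.Translate(new Vector3(_move.x, 0f, _move.y) * Time.deltaTime);
    }
}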
Great to know!
So you recommend that game logic is made of public void SetMove(Vector2 d){} methods that change a class-wide direction variable, rather than polling input from Update()? Why is that?
Do I sprinkle playerinput everywhere in my scene? One on the player object and one on the gui, each pointing to a different inputasset?
and what is the camera bucket for? does the PlayerInput automatically do things like raycast/spherecast from camera?
This is very cool btw, and i see you support c# events and unity actions, what’s the benefit of one over the other?
feedback on the UI
love that I can drag and drop an action to another action map
this needs the action type (stick)
when the action type is changed, put a warning sign in the binds that are no longer compatible
what is this 2dmotion?
you need to add tooltips everywhere
no idea what this does.
action>stick should allow binding to gamepad but as you can see, doesn’t
when set to default, (default) should be appended near the name of the binding.
Polling can be used either way. If all a callback does is store the current value which is then processed in Update/FixedUpdate, polling generally is the preferable approach IMO.
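A minimal sketch contrasting the two (moveAction is a placeholder InputAction configured elsewhere; you would use one approach or the other, not both):

using UnityEngine;
using UnityEngine.InputSystem;

public class MoveExample : MonoBehaviour
{
    public InputAction moveAction;   // set up in the inspector or in code
    Vector2 _move;

    void OnEnable()
    {
        // Callback style: the callback only stores the value...
        moveAction.performed += ctx => _move = ctx.ReadValue<Vector2>();
        moveAction.canceled += ctx => _move = Vector2.zero;
        moveAction.Enable();
    }

    void OnDisable() => moveAction.Disable();

    void Update()
    {
        // ...which is equivalent to simply polling here instead:
        // _move = moveAction.ReadValue<Vector2>();
        transform.Translate(new Vector3(_move.x, 0f, _move.y) * Time.deltaTime);
    }
}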
Each PlayerInput represents one player. With a single player, there should generally only be one PlayerInput. Actions can be shared between UI and PlayerInput.
ATM this is only used by PlayerInputManager for split-screen scenarios.
A device’s primary 2D motion vector.
Drops binding path into text mode.
i don’t know what that means, example ?
real case where i’d want that?
you guys need to float the “listen” microphone back next to it, btw, nested ui is the bane of productivity
“generally” meaning what? can i have one on both the ui and the player, or not, because of some internal conflicts? for cleanliness i’d rather have one input per hierarchy that it controls…
when you say single player, you actually mean single LOCAL player? ie: not 4 gamepads.
PS: input works well on pc now, doesn’t work on switch at all even with an empty scene with only your samples.
Left stick on gamepad and delta on mouse. Basically a 2D planar motion space in the [-1…1] range on each axis.
Usages are still a bit of a novel thing that hasn’t quite evolved into proper shape yet. We know it’s a super useful concept that can help solve some common input-related problems where the name/path of a control is much less useful/important than its intended use, but the way usages are currently set up and leveraged in the input system needs more refinement.
The control picker can build the most common forms of control paths, but it’s working off limited information and doesn’t give access to the full variety of things you can do with the underlying path language. Thus text mode.
Example: say at runtime you’re doing
InputSystem.SetDeviceUsage(Gamepad.all[0], "Left");
InputSystem.SetDeviceUsage(Gamepad.all[1], "Right");
The control picker doesn’t know about this, except insofar as it’s communicated through the “commonUsages” mechanism of the layout system. But you can avoid the hassle, drop the picker into text mode, and hack in "<Gamepad>{Left}/buttonSouth".
Or say you want to bind to any “xxxButton” control on any kind of device: "*/*button".
There are all kinds of things the path language allows, but rather than building some highly complicated control-path-builder UI, we opted to just have the text mode button to allow directly entering paths where the streamlined picker doesn’t cover your needs.
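For what it’s worth, the same path strings work when creating bindings from code as well; a rough sketch (the action itself is made up):

using UnityEngine.InputSystem;

// Sketch only: path strings identical to what you would type in text mode.
var jump = new InputAction("Jump", binding: "<Gamepad>{Left}/buttonSouth");
jump.AddBinding("*/*button");   // wildcard path from the example above
jump.Enable();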
Yup, local. The thing with PlayerInput is that it doesn’t like sharing devices. Unless explicitly forced through the API, each PlayerInput will grab devices and prevent other PlayerInputs from using the same devices.
If the hierarchies you mention are meant to be controlled with different input devices, then yup, multiple PlayerInputs is just the right tool for the job. If not, then having a separate one for each hierarchy will require some custom scripting to force them all on the same device.
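If it helps, a rough sketch of what that custom scripting could look like, assuming the InputUser pairing API (PlayerInput.user and InputUser.PerformPairingWithDevice; exact signatures may differ by package version):

using UnityEngine.InputSystem;
using UnityEngine.InputSystem.Users;

// Sketch only: force two PlayerInput components onto the same gamepad.
void ShareFirstGamepad(PlayerInput a, PlayerInput b)
{
    var pad = Gamepad.all[0];
    InputUser.PerformPairingWithDevice(pad, user: a.user);
    InputUser.PerformPairingWithDevice(pad, user: b.user);
}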
Left stick is not motion, delta is motion. So what is it? delta of left stick?
If it’s only a generalization of direction then it should be called 2D direction and not motion.
Does it handle switch gyro? mobile phone compass etc…?
no idea of scenario i’d use that but very cool
example? in a game this time.
What’s the magic code to have them act as one?
do they detect each other and warn user of input theft when linked to the same hw?