User-Defined Action Types

The input system I use at home is pretty similar to Unity's new approach, with ActionMap assets, event listeners, etc.

A feature I’ve come to really enjoy in my system is ‘user-defined Action classes’. Rather than defining Actions with a “PlayerMove” string in the ActionMap, I can create a “PlayerMoveAction” class.

This class will listen for the raw input data (like changes to x and y controller axes) and convert them into contextual gameplay events (like “PlayerMove”). Any scripts listening for this event would be passed a Vector2 with the movement, rather than the raw X and Y input.

I’ve really come to appreciate this, since it creates a separation between input logic and game logic. My gameplay scripts can think purely in terms of game concepts like ‘moving’. They never need to consider non-gameplay concepts like controller axes or touch input.

I haven’t seen any evidence that the new Unity system will include a concept like this, but I’m curious what you Unity devs would think of the approach. -Thanks!

Here’s some code to illustrate (simplified for clarity):

Here’s the base InputAction class (only used as a base for custom Actions):

public abstract class InputAction<TInputBinding, TGameplayEventArg>
{
    public delegate void ActionListener(TGameplayEventArg arg);

    // a collection of listener delegates waiting for this Action to occur
    public ActionListener listeners;

    // a collection of input bindings, assigned by the ActionMap (and defined by the user from inside the Unity Editor)
    public TInputBinding[] bindings;

    // This is called every frame, to catch input based on binding assignments in the ActionMap.
    // My system is polling here, since it's built on top of the existing Unity input system. Ideally, it would use event listeners to catch this input.
    public void PollForInput()
    {
        for(int i = 0; i < bindings.Length; i++)
        {
            if(EvaluateInputBinding(bindings[i]))
            {
                break;
            }
        }
    }

    protected abstract bool EvaluateInputBinding(TInputBinding binding);
}
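
The binding type itself isn't shown here, but a very simplified sketch of what the PlayerMoveAction example below expects would look something like this (just the axis names, which the ActionMap assigns from values set up in the Editor):

// Very simplified sketch of a two-axis binding. Only the fields that
// PlayerMoveAction reads are shown; the ActionMap fills them in based on
// what the user configured in the Editor.
public class InputBinding<TGameplayEventArg>
{
    // axis names from Unity's Input Manager
    public string xAxisName;
    public string yAxisName;
}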

Here’s an example “PlayerMoveAction” class:

public class PlayerMoveAction : InputAction<InputBinding<Vector2>, Vector2>
{
    // This method evaluates the bindings that were assigned to this Action in the ActionMap.
    // This Action can only be assigned bindings with both an 'x' and 'y' axis name, so it can always check for input from both.
    protected override bool EvaluateInputBinding(InputBinding<Vector2> binding)
    {
        float x = Input.GetAxis(binding.xAxisName);
        float y = Input.GetAxis(binding.yAxisName);

        if(x == 0 && y == 0)
        {
            return false;
        }

        // as soon as valid input is found from one of the bindings, the rest are ignored.
        // A gameplay event is immediately sent to any scripts listening for this Action.
        Vector2 movement = new Vector2(x, y);

        if(listeners != null)
        {
            listeners(movement);
        }

        return true;
    }
}
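
To make the "first valid binding wins" behavior more concrete, here's roughly how a PlayerMoveAction with two bindings could be wired up and driven. Normally the ActionMap assigns the bindings from Editor data and does the per-frame polling, so treat this as an illustration only (the gamepad axis names are placeholders for whatever is configured in the Input Manager):

public class MoveActionExample : MonoBehaviour
{
    private PlayerMoveAction moveAction;

    private void Awake()
    {
        moveAction = new PlayerMoveAction();

        moveAction.bindings = new InputBinding<Vector2>[]
        {
            // gamepad stick (placeholder axis names)
            new InputBinding<Vector2> { xAxisName = "Gamepad X", yAxisName = "Gamepad Y" },

            // keyboard fallback using the built-in Horizontal/Vertical axes
            new InputBinding<Vector2> { xAxisName = "Horizontal", yAxisName = "Vertical" }
        };

        moveAction.listeners += OnMove;
    }

    private void Update()
    {
        // the first binding that produces input dispatches the gameplay event;
        // the remaining bindings are skipped for that frame.
        moveAction.PollForInput();
    }

    private void OnMove(Vector2 movement)
    {
        Debug.Log("PlayerMove: " + movement);
    }
}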

And finally, a gameplay script that would take advantage of this custom Action:

public class GamePlayScript : MonoBehaviour
{
    private void Awake()
    {
        // this adds a listener, and also creates a new instance of the PlayerMoveAction. InputActions are only instantiated when one or more listeners need them.
        myActionMap.AddListener<PlayerMoveAction>(OnPlayerMove);
    }

    private void OnPlayerMove(Vector2 movement)
    {
        // do something with movement here
        if(playerCanMove)
        {
            MovePlayer(movement);
        }
    }
}
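
The ActionMap class isn't shown here, but since the comment above mentions that an InputAction is only created once something listens for it, here's a rough sketch of how that kind of lazy creation can work. The generic plumbing that wires the listener delegate onto the new Action (so that AddListener<PlayerMoveAction>(OnPlayerMove) type-checks) is omitted to keep it short:

using System;
using System.Collections.Generic;

// Rough sketch of the lazy-instantiation side of an ActionMap.
public class ActionMap
{
    // one live Action instance per type, created the first time something asks for it
    private readonly Dictionary<Type, object> actions = new Dictionary<Type, object>();

    private TAction GetOrCreateAction<TAction>() where TAction : class, new()
    {
        object action;
        if(!actions.TryGetValue(typeof(TAction), out action))
        {
            // nothing listened for this Action before, so it didn't exist until now
            action = new TAction();
            actions[typeof(TAction)] = action;
        }
        return (TAction)action;
    }

    // AddListener<PlayerMoveAction>(OnPlayerMove) would call
    // GetOrCreateAction<PlayerMoveAction>() and then add OnPlayerMove
    // to that instance's 'listeners' delegate.
}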

We have a Vector2 action type in the InputSystem. When you use it, you can also get the input as a Vector2; the object you query on the ActionMapInput is a Vector2InputControl.

We’re considering adding support for custom action types.

Well it depends on what you mean.

The new input system is specifically designed so that the gameplay code doesn't need to be concerned with concepts like controller axes etc. The control schemes you set up in the ActionMap encapsulate this, so the gameplay code won't have to deal with it.

But maybe you mean a different flavor of it than what the new input system has at the moment. You mentioned touch input. Does your PlayerMoveAction class also handle touch input, or can you give an example of what you mean?