Can I get an AI to create its own InputAction.CallbackContext and control a player script?

I have a player script that gets its commands from the PlayerInput component via the callback context. Is it possible to replace the PlayerInput component with my AI script, which would create its own CallbackContext in code to feed into the PlayerEntity script?

I am looking for an answer as well. From what I understand of good design principles, the AI players should play/operate by the same “rules” as the human players, so they should be able to use the same interface. It doesn’t seem like you can “set” the “performed/started/cancelled” bools of a new CallbackContext that you create.
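To illustrate what I mean (the PlayerEntity/OnMove names here are just made up for the sketch): the PlayerInput component hands your handler a fully populated context, but the struct only exposes getters, so an AI script can’t build one of its own and fill in the phase or value.

using UnityEngine;
using UnityEngine.InputSystem;

public class PlayerEntity : MonoBehaviour
{
    private Vector2 m_Move;

    // Called by PlayerInput ("Invoke Unity Events" behavior) with a populated context.
    public void OnMove(InputAction.CallbackContext ctx)
    {
        // We can only read phase and values here...
        if (ctx.performed || ctx.canceled)
            m_Move = ctx.ReadValue<Vector2>();
    }

    // ...but an AI script can't do the reverse:
    //   var ctx = new InputAction.CallbackContext(); // compiles, but it's empty
    //   ctx.performed = true;                        // no setter, so this won't compile
}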

You can either have code “play” the game through a virtual device:

using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.LowLevel;

// Example MonoBehaviour wrapper (the class name is arbitrary); put the methods below in your AI script.
public class AIGamepadDriver : MonoBehaviour
{
    public PlayerInput player;

    private Gamepad m_Gamepad;

    private void OnEnable()
    {
        // Add a virtual gamepad to the system.
        m_Gamepad = InputSystem.AddDevice<Gamepad>();

        // Switch the player onto it.
        player.SwitchCurrentControlScheme(m_Gamepad);
    }

    private void OnDisable()
    {
        InputSystem.RemoveDevice(m_Gamepad);
        m_Gamepad = null;
    }

    private void Update()
    {
        // You can either "drive" the gamepad by sending events:
        InputSystem.QueueStateEvent(m_Gamepad,
            new GamepadState(GamepadButton.A) { leftTrigger = 0.5f });

        // Or directly mutate state. This is visible to actions but not to code listening
        // for events, so stuff like onAnyButtonPress or PlayerInput's automatic control
        // scheme switching won't work here; in this case, neither is likely to be relevant.
        // Unlike events, it changes the state immediately, whereas events queued here
        // from Update() will only get to the device in the next frame.
        InputState.Change(m_Gamepad,
            new GamepadState(GamepadButton.A) { leftTrigger = 0.5f });
    }
}

Or you can have code make up a device based on the actions and then “play” by setting values on the controls representing those actions. See here.
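A rough sketch of what that second idea can look like (this is not the code behind the link; MyBotState, MyBotDevice, the "fire"/"move" controls and the 'BOTD' format code are names I’m making up here), using the Input System’s documented custom-device workflow: register a small device whose controls line up with what the actions need, bind the actions to it, and have the AI queue state for it.

using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.Controls;
using UnityEngine.InputSystem.Layouts;
using UnityEngine.InputSystem.LowLevel;
using UnityEngine.InputSystem.Utilities;

// State struct: one field per "action-like" control.
public struct MyBotState : IInputStateTypeInfo
{
    public FourCC format => new FourCC('B', 'O', 'T', 'D');

    [InputControl(name = "fire", layout = "Button", bit = 0)]
    public int buttons;

    [InputControl(name = "move", layout = "Stick")]
    public Vector2 move;
}

// The virtual device. Bind your actions to e.g. "<MyBotDevice>/move" and "<MyBotDevice>/fire".
[InputControlLayout(stateType = typeof(MyBotState))]
public class MyBotDevice : InputDevice
{
    public ButtonControl fire { get; private set; }
    public StickControl move { get; private set; }

    protected override void FinishSetup()
    {
        base.FinishSetup();
        fire = GetChildControl<ButtonControl>("fire");
        move = GetChildControl<StickControl>("move");
    }
}

public class BotBrain : MonoBehaviour
{
    private MyBotDevice m_Device;

    private void OnEnable()
    {
        // Make the layout known to the system, then create an instance of the device.
        InputSystem.RegisterLayout<MyBotDevice>();
        m_Device = InputSystem.AddDevice<MyBotDevice>();
    }

    private void OnDisable()
    {
        InputSystem.RemoveDevice(m_Device);
        m_Device = null;
    }

    private void Update()
    {
        // The AI decides on action-level values and pushes them as device state.
        InputSystem.QueueStateEvent(m_Device, new MyBotState
        {
            buttons = 1,                 // press "fire"
            move = new Vector2(0f, 1f)   // move forward
        });
    }
}

With the actions bound to that layout, the AI only has to decide what each action’s value should be; it never needs to know which physical stick or button a human would have used for it.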

The first approach has the advantage that, at the action level, the input is indistinguishable from human input (so other code that, e.g., checks whether an input came from a gamepad will keep working). The second approach has the advantage of removing one layer of “semantics”: to set a control on a gamepad, the code has to “understand” how that control works and what it does, whereas to set the value of an action, the code only has to “understand” how to trigger the action.