I commented on the article and Renee said I should start a new thread explaining our use case more deeply. We’re making a pretty interesting game that I think has only been possible thanks to the new Input APIs, so first off: I really appreciate the incredible flexibility and the fact that it’s open source.
Our game depends on frame-perfect, deterministic replay of previous device input, even after those devices are disconnected. Currently we achieve this by recording all state blocks for a device, creating a new device with the same JSON layout, and then manually injecting the state blocks from the previous recording every frame. We’ve made this possible by making the internal state block pointer public and handling a few corner cases.
This works fantastically for everything up to the InputDevice level: we can poll the controls like normal and get essentially the same behavior. But it starts breaking down for the Actions API. I believe that system is built on the lower-level input event system, and so it can’t see our injected state blocks.
So the two stumbling blocks for us are:
We’d really love an example or explanation of how best to inject simulated input events into a virtual InputDevice. Copying the state blocks is extremely fast and very general (it works for all devices). Ideally there would be a way that is just as general but acts at a lower level. The challenge is that we need frame-perfect control: if the replayed events are even a frame off, the game starts to break.
I would love to find some way to just run an action manually on top of a given InputDevice. It seems heavily tied to the set of currently attached devices, which makes it hard to use when we’re attaching/detaching simulated devices all the time during gameplay!
I understand this is probably a very abnormal use case, but I’d love to hear any thoughts you have even so. I’m pretty familiar with the source at this point, so making changes is no issue for us.
One of the fun side effects of the work we’ve done here for replay/recording is that we can replay the entire set of input devices from one game session on a completely different platform! See:
Because we’re doing this deterministically, this means we can debug a recorded VR gameplay session on a laptop with no VR hardware and everything “just works”.
This type of thing is relevant to my interests. Would you have a more in-depth blog post or article planned once you get confirmation on API simplification?
I’ll definitely do a write-up at some point! At the very least it’s an interesting dive into some of the inner workings of the system. The fact that it doesn’t work with Actions, though, makes it less suitable for most games.
I always see people attempting to implement an input replay system, and I know the InputSystem has “InputStateHistory”, but no one uses it. Is there something about it that just isn’t useful for this case? I always thought its purpose was basically this (and input buffering).
The InputStateHistory is useful, but not the crux of the challenge. We’ve essentially written a simpler version of this for our needs. The challenge with replay we’ve had is how to replay these state blocks properly.
There might be an existing API that I’m not aware of to inject state blocks into the system? I’d love to hear about it if there is!
@Kleptine Thanks for the write-up! Much appreciated. Exactly what I was looking for.
Replaying input at the state block level is doable (and, if you go through InputState.Change, it should also trigger actions correctly), but overall I would recommend going through events instead.
By capturing StateEvents and DeltaStateEvents for a device, all its input should become fully replayable in a way that, from the input system POV, is indistinguishable from “actual” input. It’s basically what the input debugger does. Create a device locally from the same layout as used by the player and then just send every event byte for byte over the wire. Events are blittable so it’s enough to just capture each as a raw byte sequence according to its sizeInBytes property.
Device IDs (and possibly timestamps) will require some patching up but that’s relatively trivial.
By going to the event level instead of directly injecting state, you’ll also implicitly have it work with actions.
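A rough sketch of what capturing and replaying events might look like. This is not a verified implementation; `InputRecorder` and `RecordedEvent` are made-up names, and the exact signatures should be checked against the Input System version you’re on (the sketch assumes `InputSystem.onEvent`, `InputEventPtr.sizeInBytes`, `InputEventPtr.data`, and `InputSystem.QueueEvent`, and requires unsafe code for the pointer cast):

```csharp
using System.Collections.Generic;
using System.Runtime.InteropServices;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.LowLevel;

public class InputRecorder
{
    public struct RecordedEvent
    {
        public double time;   // original eventPtr.time, kept for later re-spacing
        public byte[] bytes;  // the raw event, copied byte for byte
    }

    readonly List<RecordedEvent> m_Events = new List<RecordedEvent>();
    int m_SourceDeviceId;

    public void StartCapture(InputDevice device)
    {
        m_SourceDeviceId = device.deviceId;
        InputSystem.onEvent += OnEvent;
    }

    public void StopCapture()
    {
        InputSystem.onEvent -= OnEvent;
    }

    void OnEvent(InputEventPtr eventPtr, InputDevice device)
    {
        if (device.deviceId != m_SourceDeviceId)
            return;
        // Only state and delta-state events carry input state.
        if (!eventPtr.IsA<StateEvent>() && !eventPtr.IsA<DeltaStateEvent>())
            return;
        var size = (int)eventPtr.sizeInBytes;
        var copy = new byte[size];
        Marshal.Copy(eventPtr.data, copy, 0, size);
        m_Events.Add(new RecordedEvent { time = eventPtr.time, bytes = copy });
    }

    // Replay the captured events onto a device created from the same layout,
    // patching the device ID so the events target the new device.
    public unsafe void Replay(InputDevice target)
    {
        foreach (var recorded in m_Events)
        {
            fixed (byte* ptr = recorded.bytes)
            {
                var eventPtr = new InputEventPtr((InputEvent*)ptr);
                eventPtr.deviceId = target.deviceId;
                InputSystem.QueueEvent(eventPtr);
            }
        }
    }
}
```

Because queued events flow through the normal event pipeline, anything downstream (including actions) sees them as ordinary input.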
For spacing recorded input out over time so that the frame distribution matches, there’s no built-in support ATM. Basically, you’d have to record frame markers yourself and, at the beginning of each frame, inject only the events for the upcoming frame.
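Since there’s no built-in support, a frame-marker scheme could look roughly like the sketch below. Everything here is illustrative (the per-frame `frames` list is a made-up structure, not part of the API); the one load-bearing idea is queueing each frame’s recorded events before the input update for that frame runs:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.LowLevel;

public class FramePacedReplayer : MonoBehaviour
{
    // One entry per recorded frame: the raw events captured during that frame.
    public List<List<byte[]>> frames = new List<List<byte[]>>();
    public InputDevice target;

    int m_CurrentFrame;

    unsafe void Update() // or FixedUpdate, to match the recording cadence
    {
        if (m_CurrentFrame >= frames.Count)
            return;
        foreach (var bytes in frames[m_CurrentFrame])
        {
            fixed (byte* ptr = bytes)
            {
                var eventPtr = new InputEventPtr((InputEvent*)ptr);
                eventPtr.deviceId = target.deviceId;
                InputSystem.QueueEvent(eventPtr);
            }
        }
        ++m_CurrentFrame;
    }
}
```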
By replaying event sequences on simulated devices, actions should play nice with simulated input out of the box.
With the approach you currently have, you can turn each raw state memory block into an InputEvent and call InputState.Change. This should trigger actions as expected.
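A minimal sketch of that call, assuming a Gamepad-layout device; `GamepadState` stands in for whatever state struct your own layout uses, and the exact overloads should be checked against your Input System version:

```csharp
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.LowLevel;

public static class StateReplay
{
    public static void Apply(Gamepad gamepad, GamepadState state)
    {
        // InputState.Change writes the state into the device's state block
        // and runs the usual state-change processing, so actions bound to
        // the device should respond as if the input were real.
        InputState.Change(gamepad, state);
    }
}
```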
Ah, this seems perfect! I wasn’t aware of the InputState.Change functions. I’ll give this a try when I have a spare moment.
One extra tidbit: part of the reason we can’t record delta events is that our recordings also loop within gameplay, in which case summing up the deltas accumulates drift over time. But it sounds like we can just replay our full snapshot for the device, and that should work perfectly!
The whole game is on FixedUpdate, so luckily we don’t have to worry too much about frame distribution, as long as we can inject in the right spots.