Unfortunately, it's too early for me to offer anything but some vague ideas. There's the notion that gestures are basically interactions as found in the action system, but that the recognition of these interactions is separate from the interactions themselves, and that input can be sourced more flexibly than it can be at the moment, such that you can have gestures/interactions fed from platform-specific recognizers as well as from custom-built software recognizers.
How that will be made contextual is still to be decided. In my mind, these are separate steps: the low-level part of surfacing gestural data and the high-level part of contextualizing it, ideally in a similar way to what you can do with LeanTouch, for example.
But this is all super vague and probably not very useful. There are still a couple of things to work on before gestures (e.g. general action system improvements like a better polling API, stacking of actions, support for setting parameters dynamically, stuff like that).
@Rene-Damm Do you have any indication of when swipe support in the new input system will be available? Just wondering whether to code a manual one myself or hold out for a release that is coming soon (or might have dropped already and I didn't spot it). Thanks.
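In case it helps while waiting, here's a rough sketch of what a manual swipe check could look like against the new Input System's Touchscreen device. This isn't an official gesture API; the class name and threshold value are just placeholders, and the detection logic (start position on press, direction check on release) is only one simple way to do it.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Minimal manual swipe detection sketch using the new Input System's
// Touchscreen device. Class name and threshold are placeholders.
public class SimpleSwipeDetector : MonoBehaviour
{
    // Minimum distance (in pixels) the touch must travel to count as a swipe.
    [SerializeField] private float swipeThreshold = 100f;

    private Vector2 _startPosition;
    private bool _tracking;

    void Update()
    {
        var touchscreen = Touchscreen.current;
        if (touchscreen == null)
            return;

        var touch = touchscreen.primaryTouch;

        // Remember where the touch started.
        if (touch.press.wasPressedThisFrame)
        {
            _startPosition = touch.position.ReadValue();
            _tracking = true;
        }

        // On release, check whether the touch moved far enough to be a swipe.
        if (_tracking && touch.press.wasReleasedThisFrame)
        {
            _tracking = false;
            Vector2 delta = touch.position.ReadValue() - _startPosition;
            if (delta.magnitude >= swipeThreshold)
            {
                // Reduce the delta to a cardinal direction.
                Vector2 direction = Mathf.Abs(delta.x) > Mathf.Abs(delta.y)
                    ? new Vector2(Mathf.Sign(delta.x), 0f)
                    : new Vector2(0f, Mathf.Sign(delta.y));
                Debug.Log($"Swipe detected in direction {direction}");
            }
        }
    }
}
```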