Input management patterns and linking GUI button presses to the Input Manager

Hi! I would like to know what the current de-facto pattern for decentralized input handling is. I have a specific case where I am porting a game from mobile to PC, and using the Input Manager seems to make the most sense. However, I am not sure how to trigger Input Manager buttons from scripts. The alternative is to call my handler directly, but that forces me to repeat myself. Is there a way to drive the Input Manager programmatically?

Usually you gather raw user inputs, map them to semantic intentions, and then process those semantic intentions. But there's no "de-facto pattern", because games can differ hugely: touch vs. tap vs. stroke vs. buttons plus multitouch, etc.
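The raw-input-to-semantic-intention idea can be sketched roughly like this. This is a hedged, engine-agnostic illustration in Python, not any engine's actual API; the `InputMapper` class and the `"jump"` / `"key:space"` names are all hypothetical:

```python
class InputMapper:
    """Hypothetical mapper: raw device inputs -> semantic intentions -> handlers."""

    def __init__(self):
        self._bindings = {}   # raw input id -> semantic intention name
        self._handlers = {}   # semantic intention name -> list of callbacks

    def bind(self, raw_input, intention):
        # Map a device-level input (key, touch gesture, button) to an intention.
        self._bindings[raw_input] = intention

    def on(self, intention, handler):
        # Register game logic that responds to a semantic intention.
        self._handlers.setdefault(intention, []).append(handler)

    def feed(self, raw_input):
        # Called by the platform-specific input layer when raw input arrives.
        intention = self._bindings.get(raw_input)
        for handler in self._handlers.get(intention, []):
            handler()


# The same "jump" intention can be driven by a key press or a touch gesture,
# so the game logic never needs to know which platform it is running on.
mapper = InputMapper()
jumps = []
mapper.bind("key:space", "jump")        # PC binding
mapper.bind("touch:swipe_up", "jump")   # mobile binding
mapper.on("jump", lambda: jumps.append("jumped"))

mapper.feed("key:space")       # PC input path
mapper.feed("touch:swipe_up")  # mobile input path
```

The point is that the per-platform layer only translates device events into intention names; everything downstream of `feed` is shared.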

If your mobile button code is already hard-wired, start adding semantic-intention call points for every meaningful action the user can perform, then write your own PC-specific input strategy that drives them.
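A minimal sketch of that retrofit, again as hedged illustrative Python rather than engine code: the `Player.fire` method stands in for a semantic call point, and `PCInputStrategy` stands in for the polling layer you would write on PC (where `is_pressed` would wrap something like the Input Manager's button query). All names are hypothetical:

```python
class Player:
    """Existing game object whose hard-wired logic gets a semantic call point."""

    def __init__(self):
        self.fired = 0

    def fire(self):
        # The semantic call point: both a GUI button's click handler and the
        # PC input strategy call this, so the game logic lives in one place.
        self.fired += 1


class PCInputStrategy:
    """Polls device state each frame and drives the semantic call points."""

    def __init__(self, player, is_pressed):
        self.player = player
        # Injected query, e.g. a wrapper around the engine's button polling.
        self.is_pressed = is_pressed

    def update(self):
        if self.is_pressed("Fire1"):
            self.player.fire()


player = Player()
pressed = {"Fire1": True}
strategy = PCInputStrategy(player, lambda name: pressed.get(name, False))

strategy.update()  # keyboard/gamepad path drives the call point
player.fire()      # a GUI button's click handler calls the same call point
```

This sidesteps the need to trigger the Input Manager programmatically: the GUI button and the input strategy both converge on the same handler, so nothing is repeated.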