Best way to handle buttons that aren't supported by XRITK

I want to know how I’m supposed to use other buttons, such as the primary and secondary buttons, when using XRITK. I understand that Select is grab and Activate is trigger (usually). I know I can add new actions to the XRI Default Input Actions, such as primaryButton and secondaryButton for the left and right XR controllers, then create my own script that listens for those and decides what to do, but that feels like it goes against how the system is set up. I was also thinking of extending ActionBasedController to add more inputs, such as primaryButton, secondaryButton, primaryButtonValue, secondaryButtonValue, touchPadValue, touchPad, and maybe even joystick, and then extending the Interactable to act on those new inputs, but that seems a little like overkill too. Judging by the OpenXR Controller Sample, the intended way seems to just be scripts that live on the controllers and detect those inputs.
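To be concrete, the “listen for my own actions” version I mean is just something like this (the primaryButton/secondaryButton action names are my own additions to the XRI Default Input Actions, not part of the default asset):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Listens for extra actions I added myself to the XRI Default Input Actions.
// Assumes the action map is enabled elsewhere (e.g. by the rig's InputActionManager).
public class ExtraControllerInputs : MonoBehaviour
{
    [SerializeField] InputActionReference primaryButton;
    [SerializeField] InputActionReference secondaryButton;

    void OnEnable()
    {
        primaryButton.action.performed += OnPrimary;
        secondaryButton.action.performed += OnSecondary;
    }

    void OnDisable()
    {
        primaryButton.action.performed -= OnPrimary;
        secondaryButton.action.performed -= OnSecondary;
    }

    void OnPrimary(InputAction.CallbackContext ctx) => Debug.Log("Primary pressed");
    void OnSecondary(InputAction.CallbackContext ctx) => Debug.Log("Secondary pressed");
}
```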

For example, I want a held interactable to have multiple different “Activate” actions. Take a gun: primaryButton for the magazine release, secondaryButton for the slide lock release, and the trigger for shooting. Roughly, I’m imagining something like the sketch below.
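(This is just a sketch of what I mean, not working code I’m happy with; the action references would point at actions I added myself, and it assumes their action map is enabled elsewhere.)

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.XR.Interaction.Toolkit;

// Sketch: per-button behaviour that is only live while the gun is held.
[RequireComponent(typeof(XRGrabInteractable))]
public class GunInputs : MonoBehaviour
{
    [SerializeField] InputActionReference trigger;         // shoot
    [SerializeField] InputActionReference primaryButton;   // magazine release
    [SerializeField] InputActionReference secondaryButton; // slide lock release

    void Awake()
    {
        var grab = GetComponent<XRGrabInteractable>();
        grab.selectEntered.AddListener(_ => Subscribe());
        grab.selectExited.AddListener(_ => Unsubscribe());
    }

    void Subscribe()
    {
        trigger.action.performed += Shoot;
        primaryButton.action.performed += ReleaseMagazine;
        secondaryButton.action.performed += ReleaseSlideLock;
    }

    void Unsubscribe()
    {
        trigger.action.performed -= Shoot;
        primaryButton.action.performed -= ReleaseMagazine;
        secondaryButton.action.performed -= ReleaseSlideLock;
    }

    void Shoot(InputAction.CallbackContext _) => Debug.Log("Bang");
    void ReleaseMagazine(InputAction.CallbackContext _) => Debug.Log("Magazine out");
    void ReleaseSlideLock(InputAction.CallbackContext _) => Debug.Log("Slide released");
}
```

The obvious gap is that this doesn’t care which hand is holding the gun; really you’d want to listen only to the grabbing controller’s actions.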

Is there a standard approach here, or should I just be writing custom scripts that go on the interactables and enable/disable themselves based on whether they’re selected?

I use OnButtonPress scripts on my controllers to invoke functions that are always available (the menu button, B or Y to open the inventory screen, etc.). But for extra button functions related to grab interactables, I think the best way is to attach those scripts to the interactables themselves. The Unity Learn VR pathway has a tutorial for this: their script lives on the interactable, first detects which controller is holding the object, and then listens for button presses from that controller. The detection step is roughly the sketch below.
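(From memory, so treat this as the shape of it rather than the tutorial’s exact code; the interactor event API also varies a bit between XRIT versions.)

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Sketch: work out which controller picked up this interactable, so its
// buttons can be listened to. In older XRIT versions args.interactor is
// used instead of args.interactorObject.
[RequireComponent(typeof(XRGrabInteractable))]
public class HoldingControllerDetector : MonoBehaviour
{
    ActionBasedController holdingController;

    void Awake()
    {
        var grab = GetComponent<XRGrabInteractable>();
        grab.selectEntered.AddListener(OnSelectEntered);
        grab.selectExited.AddListener(OnSelectExited);
    }

    void OnSelectEntered(SelectEnterEventArgs args)
    {
        // The interactor that grabbed us sits on (or under) the controller.
        holdingController = args.interactorObject.transform
            .GetComponentInParent<ActionBasedController>();
    }

    void OnSelectExited(SelectExitEventArgs args)
    {
        holdingController = null;
    }
}
```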

Hey Nalex, thanks for the reply. That sounds like a reasonable approach. I’ve been scouring the internet, YouTube, this forum, Discord servers, the docs, etc., trying to figure out the “right” way to handle it, and yeah, I’m thinking that’s basically it. I’ll go check out that tutorial though. I’ve seen hints here and there of how people are handling it, but nobody has a concrete tutorial, and I haven’t seen anything that specifically says this is the way. I guess at a certain point we just have to decide that it IS the way.

Edit: Have you got a link to the specific one? I looked through the VR Development Pathway on Unity Learn and don’t see anything pertaining to extra buttons on interactables.

Mission 4, Project 1 has some relevant material in the Balloon Inflator tutorial; that’s where they go through figuring out which controller is holding the grab interactable. There’s also some useful material in the earlier missions. Mission 1 introduces a lot of the basics of the XRIT and includes some handy scripts, like the OnButtonPress script I mentioned. It’s worth having a look to see how they make that work; from memory it’s roughly the shape below.
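(Reconstructed from memory rather than copied from the tutorial, so the details may differ.)

```csharp
using UnityEngine;
using UnityEngine.Events;
using UnityEngine.InputSystem;

// Fires UnityEvents when a single input action is pressed or released,
// so behaviour can be wired up in the inspector.
public class OnButtonPress : MonoBehaviour
{
    [SerializeField] InputAction action;

    public UnityEvent onPress = new UnityEvent();
    public UnityEvent onRelease = new UnityEvent();

    void Awake()
    {
        action.started += _ => onPress.Invoke();
        action.canceled += _ => onRelease.Invoke();
    }

    void OnEnable() => action.Enable();
    void OnDisable() => action.Disable();
}
```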

That script is decent for functions you want attached to the controllers, but I still find it unsatisfactory for actions you want a held object to perform when you press certain buttons. What I’ve settled on is a custom script that lives on the left or right hand and exposes C# Action events, which it fires whenever the performed or canceled callbacks come in from its InputActionReferences. When a grabbable is grabbed, I hook those events up to whatever I want to happen in the held object, and when it’s released I unhook them. Simplified, the hand-side script looks something like this:
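```csharp
using System;
using UnityEngine;
using UnityEngine.InputSystem;

// Lives on the left or right hand. Re-exposes that controller's extra
// buttons as plain C# events that a held object can subscribe to.
// (Simplified version of my script; assumes the action map is enabled
// elsewhere, e.g. by the rig's InputActionManager.)
public class HandInputEvents : MonoBehaviour
{
    [SerializeField] InputActionReference primaryButton;
    [SerializeField] InputActionReference secondaryButton;

    public event Action PrimaryPressed;
    public event Action PrimaryReleased;
    public event Action SecondaryPressed;
    public event Action SecondaryReleased;

    void OnEnable()
    {
        primaryButton.action.performed += OnPrimaryPerformed;
        primaryButton.action.canceled += OnPrimaryCanceled;
        secondaryButton.action.performed += OnSecondaryPerformed;
        secondaryButton.action.canceled += OnSecondaryCanceled;
    }

    void OnDisable()
    {
        primaryButton.action.performed -= OnPrimaryPerformed;
        primaryButton.action.canceled -= OnPrimaryCanceled;
        secondaryButton.action.performed -= OnSecondaryPerformed;
        secondaryButton.action.canceled -= OnSecondaryCanceled;
    }

    void OnPrimaryPerformed(InputAction.CallbackContext _) => PrimaryPressed?.Invoke();
    void OnPrimaryCanceled(InputAction.CallbackContext _) => PrimaryReleased?.Invoke();
    void OnSecondaryPerformed(InputAction.CallbackContext _) => SecondaryPressed?.Invoke();
    void OnSecondaryCanceled(InputAction.CallbackContext _) => SecondaryReleased?.Invoke();
}
```

Then the held object’s OnSelectEntered finds the component on the grabbing hand, e.g. `args.interactorObject.transform.GetComponentInParent<HandInputEvents>()`, subscribes with something like `hand.PrimaryPressed += ReleaseMagazine;`, and the matching OnSelectExited unsubscribes (the names here are mine, obviously; use whatever fits your project).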