Refactor for 2.0

I know this feedback is late in the game, but I am a bit disappointed in the early coupling of Interactors and Interactables to XRController. If you have any refactoring plans for the future, here goes.

If I were you, I would have designed the Interactor/Interactable interaction (no pun intended) decoupled from the XRController. That whole system has nothing to do with VR controllers as such.

I mean, a SocketInteractor could basically be a DirectInteractor with auto-select on.
I would have loved to have the Socket's "show hover mesh" feature on my DirectInteractor or RayInteractor.
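To illustrate the "SocketInteractor is a DirectInteractor with auto-select" point, here is a minimal sketch. It assumes `isSelectActive` is a virtual property on the interactor base class (that is how XRSocketInteractor gets its always-on selection in XRI 1.x); treat it as a sketch, not a drop-in component:

```csharp
using UnityEngine.XR.Interaction.Toolkit;

// Hypothetical "auto-select" direct interactor: SocketInteractor's
// always-on selection combined with DirectInteractor's
// trigger-collider detection.
public class AutoSelectDirectInteractor : XRDirectInteractor
{
    // Always report select as active, so any valid hovered target
    // is grabbed immediately, with no controller button involved.
    public override bool isSelectActive => true;
}
```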

Example: Let me set up 5 RayInteractors (let's call them UFOs) that hover over an area randomly, and if one hits a target (let's call it a cow), the target will be grabbed.

Example 2: Let me set up a HandRayInteractor that follows my hand, and when I press a button any valid target will be grabbed. (Yes, an XRRayInteractor!)

Now that 2.0 is in development:

What differentiates a SocketInteractor, a DirectInteractor, and a RayInteractor?

Detection

  • SocketInteractor, DirectInteractor: trigger collider

  • RayInteractor: raycast

Interaction

  • SocketInteractor: Select is always on
  • RayInteractor, DirectInteractor: Select from XRController button

I want a DirectInteractor with a large trigger area to be able to choose whether the selected item is force-grabbed or not. The code is already there in the RayInteractor, so why limit the DirectInteractor? I can even see a use case for wanting anchor control on my DirectInteractor.

Basically, I would have merged all three interactors into one, and expanded "Raycast Configuration" into a "Detection Configuration", where Collider is another option (besides Raycast/Spherecast).
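Rough sketch of what I mean by a merged "Detection Configuration". All the names here are made up, not actual XRI API; only the `Physics` calls are real Unity. The detection method becomes data on one interactor instead of three subclasses, and the socket-vs-button select split collapses into a flag:

```csharp
using UnityEngine;

// Hypothetical detection modes for a single, unified interactor.
public enum DetectionMode { Collider, Raycast, Spherecast }

public class UnifiedInteractorSketch : MonoBehaviour
{
    public DetectionMode detectionMode = DetectionMode.Raycast;
    public float maxDistance = 10f;
    public float sphereRadius = 0.1f;
    public LayerMask interactionLayers = ~0;

    // Covers both behaviours in one option:
    // true  = SocketInteractor-style always-on select,
    // false = select driven by a controller button.
    public bool autoSelect;

    // Returns the colliders currently detected, depending on mode.
    Collider[] Detect()
    {
        switch (detectionMode)
        {
            case DetectionMode.Raycast:
                return Physics.Raycast(transform.position, transform.forward,
                        out var hit, maxDistance, interactionLayers)
                    ? new[] { hit.collider } : new Collider[0];

            case DetectionMode.Spherecast:
                return Physics.SphereCast(transform.position, sphereRadius,
                        transform.forward, out var sphereHit, maxDistance,
                        interactionLayers)
                    ? new[] { sphereHit.collider } : new Collider[0];

            default: // Collider: overlap volume, like Direct/Socket trigger detection
                return Physics.OverlapSphere(transform.position, sphereRadius,
                    interactionLayers);
        }
    }
}
```

With something like this, the UFO example above is just five of these with `autoSelect = true` and `detectionMode = Raycast`, and a large-area grab is `Collider` mode with a big radius.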

Because 2.0 uses interfaces, I guess I could rework the framework to fit this myself, but I'm not going to touch it until it's released from its alpha state. Hopefully Unity improves the classes on its own.