I’ve just spent a few days debugging several issues in our VR game regarding GUI interactions, and I’ve compiled a list of feedback and observations, in particular relating to the classes InputSystemUIInputModule and TrackedDeviceRaycaster.
1. TrackedDeviceRaycaster raycasts against ALL graphics in a canvas, not just ones with raycastTarget = true
During my debugging I found that in **TrackedDeviceRaycaster**, line 174, inside the function SortedRaycastGraphics, there is the line:
**var graphics = GraphicRegistry.GetGraphicsForCanvas(canvas);**
This returns all graphics within the canvas, not just the ones marked as raycastTarget. From reading the code in GraphicRegistry I noticed there is also a function GetRaycastableGraphicsForCanvas(canvas); it feels like this should have been used instead. I guess this is more of a bug report than a request or suggestion.
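To illustrate the difference, here is a minimal sketch (the component and field names are mine) that logs how many graphics each registry call returns for a canvas; the raycastable variant only includes graphics with raycastTarget enabled:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

// Minimal check of the two GraphicRegistry calls for a given canvas.
public class RaycastableGraphicsCheck : MonoBehaviour
{
    [SerializeField] private Canvas canvas;

    private void Start()
    {
        // What TrackedDeviceRaycaster.SortedRaycastGraphics currently uses: every Graphic on the canvas.
        IList<Graphic> all = GraphicRegistry.GetGraphicsForCanvas(canvas);

        // What it arguably should use: only graphics with raycastTarget == true.
        IList<Graphic> raycastable = GraphicRegistry.GetRaycastableGraphicsForCanvas(canvas);

        Debug.Log($"All graphics: {all.Count}, raycastable graphics: {raycastable.Count}");
    }
}
```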
2. InputSystemUIInputModule is inflexible in how it determines eventData.trackedDevicePosition and eventData.trackedDeviceOrientation.
These two properties are currently read directly from InputActions. This is inflexible, as you may want to drive the same “tracked” pointer logic from something other than the raw value of the input actions, for example some kind of gameplay mechanic (not even necessarily in a VR game).
Our use case is this (and we can’t be the only ones who ran into this problem):
Unfortunately, for certain devices on certain XR SDKs the poses aren’t consistent, and we have to apply offsets to them. These offsets are applied at the transform level: we have a **TrackedPoseDriver** with several child transforms representing the offsets, and depending on which device/XR SDK we are using we select the appropriate child. Those children are all configured with their offsets via their local positions and rotations. The selected child (known as the “offset transform”) is then used as the main transform for all hand positioning in the game. However, when interacting with UI we can’t use transforms to define this, as the module reads directly from the input actions, which can only be bound to:
- /devicePosition
- /deviceRotation
- /pointerPosition
- /pointerOrientation
Note that these bindings aren’t even available on all XR platforms. Our offset transforms are the only way to achieve this consistency.
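For reference, a rough sketch of the setup described above (all names here are illustrative, this isn’t our exact code): the TrackedPoseDriver drives the parent from the raw device pose, and the selected child carries the per-device offset as its local position and rotation:

```csharp
using UnityEngine;
using UnityEngine.InputSystem.XR;

// The TrackedPoseDriver drives this parent transform from devicePosition/deviceRotation;
// each child holds a per-device/per-SDK calibration offset baked into its local pose.
public class HandOffsetSelector : MonoBehaviour
{
    [SerializeField] private TrackedPoseDriver poseDriver;  // drives the parent transform
    [SerializeField] private Transform[] perDeviceOffsets;  // one child per device/XR SDK

    // The "offset transform" used as the main transform for all hand positioning.
    public Transform OffsetTransform { get; private set; }

    public void SelectOffsetFor(int deviceIndex)
    {
        // Pick the child matching the current device/XR SDK.
        OffsetTransform = perDeviceOffsets[deviceIndex];
    }
}
```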
One potential solution
Maybe we could implement our own custom Interactions or Processors to plug into the input system? Even then, transforms are easier to visualize and tweak, since they give you a visual medium to work in, and they would allow more flexibility for use cases beyond the one I described.
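For what it’s worth, the processor route would look roughly like this (a sketch only; the class and parameter names are mine, and it only covers a positional offset, a rotation offset would need a separate Quaternion processor):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Custom processor that adds a fixed offset to a Vector3 control such as devicePosition.
// The offset values have to be typed in per binding rather than tweaked visually via a transform.
#if UNITY_EDITOR
[UnityEditor.InitializeOnLoad]
#endif
public class PositionOffsetProcessor : InputProcessor<Vector3>
{
    public float offsetX, offsetY, offsetZ;

#if UNITY_EDITOR
    static PositionOffsetProcessor() => Register();
#endif

    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.BeforeSceneLoad)]
    private static void Register() => InputSystem.RegisterProcessor<PositionOffsetProcessor>();

    public override Vector3 Process(Vector3 value, InputControl control)
    {
        // Apply the configured offset on top of the raw device value.
        return value + new Vector3(offsetX, offsetY, offsetZ);
    }
}
```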
Only Remaining Solution
The only solution remaining is to extend some of the classes so the position and rotation can be pulled from somewhere else: either extend **TrackedDeviceRaycaster** so I can substitute the position and rotation when we do the raycast, or extend **InputSystemUIInputModule** and substitute them where the **pointerEventData** or the PointerModel is populated.
However, this isn’t easy either: see point 3.
3. Not easily extendable!
There are places in the code that indicate it was designed with subclassing and extensibility in mind. In practice, however, that isn’t really possible.
TrackedDeviceRaycaster
In here we have
internal void PerformRaycast(ExtendedPointerEventData eventData, List<RaycastResult> resultAppendList)
which is where the raycasting happens. However, it’s internal and not virtual. Above it there is
public override void Raycast(PointerEventData eventData, List<RaycastResult> resultAppendList)
which is the **BaseRaycaster**’s API. The event system does use this function, yet the **InputSystemUIInputModule** calls the internal PerformRaycast instead. Additionally, the **InputSystemUIInputModule** specifically references instances of **TrackedDeviceRaycaster**, so even writing my own **BaseRaycaster** implementation is out of the question without also duplicating my own InputSystemUIInputModule.
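To be concrete, here is roughly what that custom raycaster would look like (a skeleton only; the raycast body is omitted and the names are mine). It compiles and registers with the event system, but as described above InputSystemUIInputModule goes through TrackedDeviceRaycaster.PerformRaycast directly for tracked devices, so this override is never used for XR pointers:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;

// A custom BaseRaycaster that could substitute the pose from our offset transform,
// if only InputSystemUIInputModule went through the public Raycast() API for tracked devices.
public class OffsetAwareTrackedRaycaster : BaseRaycaster
{
    [SerializeField] private Camera raycastCamera;

    public override Camera eventCamera => raycastCamera;

    public override void Raycast(PointerEventData eventData, List<RaycastResult> resultAppendList)
    {
        // Here we could read the position/rotation from our offset transform and
        // raycast against the canvas graphics ourselves... but this path is bypassed
        // by InputSystemUIInputModule for tracked devices.
    }
}
```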
Much of this stuff is easily addressable,