Thanks for the quick reply. I’ve been getting acclimated to all the examples and finding them very helpful.
I was attempting to get the VR Template working with visionOS because I had seen the above video, and the dev made it seem that it would more or less work if I included the right packages and built for visionOS. I can verify that I’ve done everything you suggested above – thanks, that was helpful. I needed to fix the camera offset in the project so my hand meshes don’t float above my head, and now the hand models appear as expected and move with my hands, but none of the hand interactions work within the environment – buttons, Interactables, menus. I’m guessing I’m missing something with regard to how the inputs are being handled.
The visionOS XR Plugin samples I found in the package were somewhat helpful, although none of the grabbing interactions work for me. I can pinch to set a transform from a raycast of my gaze on anything the raycast collides with, but I can’t interact with the two blue cubes or the green sphere in the scene, which it appears I should be able to manipulate. The menu buttons highlight on gaze, but I can’t interact with the slider.
I have dug into the visionOS Template, which has some great stuff in it, but I’m finding it a little confusing how interactions are handled in those scenes vs. the PolySpatial Samples that I’ve also been looking at. These seem to be two different approaches.
In the visionOS Template, interactions with objects in the bounded scene are handled differently than in the unbounded scene. I’m confused why in the bounded scene I’m able to rotate objects when I interact with them, but in the unbounded scene, even with Track Rotation enabled on the XR Grab Interactable component, the objects only follow position (and not rotation). Is this a bug, or is it intended? The bounded version of the scene doesn’t use an XR Grab Interactable component; I’m guessing it gets its movement from the Bounded Object Behavior script, and with that version I am able to rotate objects.
Looking further at the PolySpatial samples (specifically the Manipulation scene), there is an entirely different Manipulation Manager script that appears to handle interactions with the objects. This interaction does allow rotations.
The XRIDebug scene again uses the XR Grab Interactable, but doesn’t track rotations.
I know there is a lot to unpack here, but I’m trying to find a straightforward example of the best way to handle interactions. I’m guessing the recommended way is the new visionOS Template (bounded and unbounded examples), but I’m struggling to understand why I’m unable to rotate objects in the unbounded scene.
I’m giving up on the VR Template for now… I’m guessing at some point someone will clearly explain the process for migrating inputs from existing VR/XRI setups to PolySpatial (visionOS) inputs.