What is the correct minimal setup of XRI, PolySpatial 2 and RealityKit?

The provided samples rely on assets from the XR Interaction Toolkit, and the documentation is very sparse with regard to input. What is the correct, minimal setup to support the XR Interaction Toolkit, including XR Grab Interactables and XR Simple Interactables, that also works in the Editor's Play Mode?

I haven't tried this, but I believe you will need XRI 3.0.6 with the visionOS Sample | XR Interaction Toolkit | 3.0.6 sample, though I'm not sure if it works in Play Mode.

Are you specifically looking for a sample that shows how to use XRI? Or is the root issue that the setup you do have isn't working in Play Mode?


Assuming you use Play to Device, they should work in Play Mode just fine.


Oh, I copied the setup from the XRI samples, but now I have three issues:

  1. Poking doesn't work.
  2. Project Validation in Multiplayer Play Mode's Virtual Clients complains that the XRI Starter Assets require Shader Graph, even though it is installed. This makes Virtual Clients almost unusable, because you can't close the Project Settings window while the warning is shown.
  3. I tried copying all the scripts used in the visionOS sample into another folder and deleting the rest of the samples (I need this because we're using custom packages and I want the whole setup to be part of our core package), and the interactions stopped working.

So I would like proper instructions on how this should be set up (different scenes in the XRI and PolySpatial samples use different setups), both to solve the issues above and to understand how it should be done in general, rather than blindly copying the setup from the samples without understanding how it works.

I don't use Play to Device; interactions with the setup from the visionOS XRI samples don't work with it. I know there is the Input Mapping for Play Mode in the PolySpatial samples, but that will be the next step.

Hey, so I built the XRI sample for visionOS.

It has a few caveats, however.

  1. That sample is largely designed for PolySpatial. If you need Metal support, the input is handled differently there, so you'll need a different prefab setup (for now). You can find that setup in the visionOS XR package samples.
  2. The sample is designed around the use of the built-in spatial pointer gestures rather than hand data. The poke gesture generally tends to be unreliable, and I'm not super happy with the implementation. To get that working, if you have hand tracking set up, you simply need to make the poke interactor track the index finger instead of the spatial tap poke gesture.
  3. The popup error for the Starter Assets should have been resolved in XRI 3.0.6, which just released.
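To illustrate point 2, here is a rough sketch of what driving a poke interactor from the tracked index fingertip could look like, using the XR Hands package. The class and field names here are hypothetical (not from the XRI sample), and you'd still need to assign the poke interactor's attach transform yourself:

```csharp
// Hypothetical sketch: follow the right index fingertip with a transform
// that the poke interactor uses as its attach point, instead of relying
// on the built-in spatial tap poke gesture. Requires the XR Hands package.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

public class IndexTipPokeDriver : MonoBehaviour
{
    // Assign the poke interactor's attach transform in the Inspector.
    [SerializeField] Transform pokeAttachTransform;

    XRHandSubsystem handSubsystem;

    void Update()
    {
        if (handSubsystem == null)
        {
            // Find a running hand subsystem, if any.
            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            if (subsystems.Count == 0)
                return;
            handSubsystem = subsystems[0];
        }

        // Track the right index fingertip; use leftHand for the other hand.
        XRHandJoint joint = handSubsystem.rightHand.GetJoint(XRHandJointID.IndexTip);
        if (joint.TryGetPose(out Pose pose))
        {
            // Note: joint poses are in session (XR origin) space, so this
            // object should be parented under the XR Origin for the
            // positions to line up.
            pokeAttachTransform.SetPositionAndRotation(pose.position, pose.rotation);
        }
    }
}
```

This is only a sketch of the idea; the exact wiring depends on how your poke interactor is configured in your rig.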

If you want detailed explanations for how input is configured in the XRI sample, I recommend reading the extensive docs I wrote explaining it. It helps illustrate how the whole setup works so you don't have to blindly copy samples.

Hope that helps!


Unfortunately, I'm already using 3.0.6.

Note as well that we recently tracked down some issues with the Input System package update that may be causing input issues on device. A fix is being worked on.

Regarding the pop-up issue, can you share a screenshot so I'm sure we're talking about the same thing?

I have the same setup in a different project, and there the Project Settings window also gets opened, but as a separate window, so it doesn't block the view in the Virtual Client. Weird.