ARKit for visionOS

I have two questions about ARKit support for visionOS in MR mode:
1. Are there any demo projects for ARKit-related features?
2. How can we test ARKit features (planes, meshes, hand tracking) in the visionOS simulator?


Hi there! You can find a scene called Mixed Reality in the PolySpatial package samples (Samples tab for PolySpatial in the Package Manager UI). Unfortunately, the visionOS simulator does not provide AR data, so you will need to test on a Vision Pro device. You can also test AR scenes using the XR Simulation feature in the Editor, or on another AR-capable device.

Dear @mtschoen ,

I would like to check if there is any follow-up on this.

Apparently, when using native SwiftUI in Xcode, it is possible to access AR data from the visionOS simulator and detect planes, walls, etc. So I'm wondering why Unity still says AR data is not available in the simulator scene?

The simulator does not let you test any of the ARKit features, unfortunately. Apple has developer labs available around the world with devices you can test on. Apply via the Apple Developer portal.

Hi @yosun, thanks for your reply. But I'm still confused about “The simulator does not let you test any of the ARKit features”.

In SwiftUI, I was able to detect planes such as walls in the simulator and create objects on the detected plane, like in the snapshot below:

Isn't this using an ARKit feature? Sample SwiftUI code:

let wallAnchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: SIMD2<Float>(0.6, 0.6)))
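
For reference, here is a rough sketch of how this kind of anchor can be used inside a RealityView to place content on the detected wall (the box entity and material are just placeholders for illustration):

import SwiftUI
import RealityKit

struct WallAnchorView: View {
    var body: some View {
        RealityView { content in
            // Anchor to any vertical plane classified as a wall,
            // at least 0.6 m x 0.6 m in size.
            let wallAnchor = AnchorEntity(.plane(.vertical,
                                                 classification: .wall,
                                                 minimumBounds: SIMD2<Float>(0.6, 0.6)))

            // Placeholder content: a small box that shows up once a wall is found.
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            wallAnchor.addChild(box)
            content.add(wallAnchor)
        }
    }
}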

Are you writing your own plugin or using Unity ARFoundation / PolySpatial’s Mixed Reality ARKit Image Tracking support?

Hi @ina, just to be clear: when I use Unity ARFoundation / PolySpatial's Mixed Reality, it doesn't work in the visionOS simulator.

But if I write native SwiftUI code in Xcode (like the sample code in my last post), it can do plane detection in the visionOS simulator.

So it sounds to me like Unity ARFoundation / PolySpatial is compiled to some legacy ARKit API calls that can only access ARKit data on a real device but not in the visionOS simulator?

@tdmowrer can you please fix this? Would love to be able to test Unity PolySpatial in the visionOS Xcode simulator.

@timc-unity

The visionOS simulator actually provides AR data for native apps. Please update/fix Unity's ARKit implementation so that it can interoperate with the Xcode visionOS simulator.


Hi there! Thanks for pointing out the differences here, @iamknew8. I wasn’t aware that AnchorEntity had this behavior in the simulator. I think I see where the confusion is coming from now.

Although AnchorEntity will work with simulated AR data, the ARKit API we use to integrate with Unity and AR Foundation does not. For this to work, Apple would need to provide simulated AR data through this API. I encourage folks to submit this feedback to Apple using the Feedback Assistant.
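
To illustrate the distinction, consuming plane data through that lower-level ARKit API looks roughly like this (a simplified sketch, not our actual integration code):

import ARKit

// Simplified sketch of the data-provider style ARKit API on visionOS.
// Unlike RealityKit's AnchorEntity, these providers do not receive
// simulated data in the visionOS simulator.
func runPlaneDetection() async throws {
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

    try await session.run([planeDetection])
    for await update in planeDetection.anchorUpdates {
        // On device, PlaneAnchor updates stream in here; in the simulator,
        // nothing arrives because no simulated AR data is provided.
        print(update.anchor.classification, update.anchor.originFromAnchorTransform)
    }
}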

It may be possible to create XR subsystems for Unity that leverage AnchorEntity to provide simulated AR data, but you would probably have to make a lot of compromises along the way. Getting this to work with Virtual Reality apps using Metal to render with CompositorServices would be even more challenging. A more direct route would be for us to create a component that you could use with PolySpatial to create AnchorEntities within the hierarchy of replicated Unity GameObjects, but we do not have plans to implement such a feature. Please feel free to suggest this as an idea on our roadmap so that other users can vote on it and give us a signal for how demand for this compares to other potential features.

Hi, thank you very much for your reply.

Yes, AnchorEntity looks like a high-level RealityKit API that uses ARKit under the hood.

One “good” thing I noticed about AnchorEntity detection in the simulator is that it looks very stable, almost locked to the surrounding objects (though I'm not sure whether it is all built-in or hardcoded simulated data), like in the snapshot below:

Meanwhile, the AR plane detection within Unity's simulated environment keeps shifting/drifting. So, as you suggested, if PolySpatial could mimic the AnchorEntity behavior, that would be great (each developer could also have their own algorithm to stabilize the anchors).

Another dumb question: unlike iOS AR projects, where a tap can be captured on the screen and ARRaycastManager can return the hit position on the AR plane, in a visionOS project the AR planes don't seem to respond to a spatial tap even though they have colliders… So I'm confused about how we can have a user tap on a detected AR plane and instantiate a GameObject at the tapped position?

Just checking: are you able to get this mesh tracking to work in the simulator from Unity PolySpatial alone, or by using native code?