VisionOSHoverEffect Not working

I am building a visionOS app. I have visionOS 1.0.3 on a physical device, and I am using PolySpatial 1.3.1.

I have a game object that I am able to pinch to select, following the many code examples out there, so I know that the Vision Pro is “seeing” the object and that pinching works.

I would like to have the object indicate it is selectable when the user looks at it.

I read in the docs that all I should have to do is add a VisionOSHoverEffect script to my GameObject.

I did this.

The GameObject has a MeshRenderer and a CapsuleCollider, which is what the docs say needs to be there in addition to the VisionOSHoverEffect script.
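For reference, this is roughly my setup expressed as a script (a minimal sketch, assuming VisionOSHoverEffect lives in the Unity.PolySpatial namespace; the HoverEffectSetup name is just illustrative, and in my project I actually added the components in the Inspector):

```csharp
using UnityEngine;
using Unity.PolySpatial; // assumed namespace for VisionOSHoverEffect; adjust to your package version

// Ensures the GameObject has the pieces the docs call out:
// a MeshRenderer, a Collider, and the VisionOSHoverEffect component.
[RequireComponent(typeof(MeshRenderer))]
[RequireComponent(typeof(CapsuleCollider))]
public class HoverEffectSetup : MonoBehaviour
{
    void Awake()
    {
        // Add the hover component if it wasn't already added in the Inspector.
        if (GetComponent<VisionOSHoverEffect>() == null)
            gameObject.AddComponent<VisionOSHoverEffect>();
    }
}
```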

Nothing happens when I look at the object. I can still pinch to select it…

The effect is subtle, but you should be able to see it. If you submit a bug report with a repro case and let us know the incident number (IN-#####), we can see what might be going wrong.

I will try to create a simple example. Maybe it is because of the type of material I am using. It is mostly transparent. I notice in the docs the material is a solid blue.


I created a simple project with two blue spheres in it. One has the hover script on it and the other doesn’t. I chose one of the simple built-in color shaders, similar to what I think the docs use. I can’t see any difference at all between the spheres when I look at them.

I have uploaded a zip file with the project in it.

Archive.zip (105.8 KB)

I look forward to finding out what I am doing wrong or missing…

Your project uses VR mode (Project Settings → XR Plug-in Management → Apple visionOS → App Mode), which doesn’t support the hover effect. The hover effect is only supported in Mixed Reality mode. Sorry for the misunderstanding.

My app is VR. Is there any way to know when the user is looking at an object so I can apply some effect myself?

I know that with the pinch input I can select objects using a raycast. I tried using a raycast without the touch input and it didn’t seem to work. Is that intentional?

There is not. visionOS doesn’t provide a way to get gaze data on the CPU; the only way to achieve the effect is through shader graphs in RealityKit (i.e., MR rather than VR mode).

I’m not sure what you’re trying to do with a raycast. If you mean using Physics.Raycast to find intersecting colliders, that should work the same as it does on any other Unity platform, but it won’t help you find where the user is looking (which is simply not information that Apple provides to normal user applications).
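For example, a head-direction raycast along the lines of the sketch below (the HeadRaySelector name and maxDistance value are just illustrative) will tell you what is directly in front of the user’s head, but not what their eyes are focused on:

```csharp
using UnityEngine;

// Minimal sketch: raycast along the main camera's forward direction.
// On visionOS this tracks the user's head pose, not their eye gaze,
// because eye-gaze data is not exposed to apps.
public class HeadRaySelector : MonoBehaviour
{
    [SerializeField] float maxDistance = 10f;

    void Update()
    {
        var cam = Camera.main; // the camera on your XR rig
        if (cam == null)
            return;

        var ray = new Ray(cam.transform.position, cam.transform.forward);
        if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
        {
            // hit.collider is whatever the head is pointed at;
            // apply your own highlight/selection effect here.
            Debug.Log($"Head ray hit {hit.collider.name}");
        }
    }
}
```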

That must be it. I was doing a Physics.Raycast from the main camera attached to my XR rig, but I guess that only hits the object if you actually turn your head so it points straight at it, not just your eyes.

So why does this work in AR but not VR? Apple limitation?

Yes, exactly; I believe they consider it a privacy issue. You can submit feedback via their Feedback Assistant to let them know that you would like a way to access the gaze information.

Ok. Thank you for the quick responses.
