World Canvas Buttons

Posting on behalf of [TrentBurkeRed]

"We’ve found that UI buttons in a world canvas UI aren’t working with VisionOS in editor or in simulator/device. Disabling Polyspatial runtime in Project Settings → Polyspatial → Enable PolySpatial Runtime makes the buttons work again in editor, but we obviously can’t go forward with that. Is interacting with Buttons on Unity canvas something that should work, or will work in the future?

We can get around this by using raycasts and/or by getting SpatialPointerState from EnhancedSpatialPointerSupport, but we would have to rebuild a lot of our UI to do that. Is there another way to make this work currently?"

I’ve found that input for canvas buttons only works if the button is in the centre of a canvas. More info here: Correct setup for Unity UI - #17

For now, I’m using SpatialPointerState as a workaround.

Would you mind elaborating on how you use SpatialPointerState to work around buttons not working in other canvas locations?

We are using SpatialPointerState, mostly following the samples from the PolySpatial package:

    // Requires: using UnityEngine; using Unity.PolySpatial.InputDevices;
    // and Enhanced Touch support enabled (EnhancedTouchSupport.Enable()).
    private void Update()
    {
        var activeTouches = UnityEngine.InputSystem.EnhancedTouch.Touch.activeTouches;

        if (activeTouches.Count > 0)
        {
            // To access PolySpatial (visionOS) specific data, pass an active touch into EnhancedSpatialPointerSupport
            SpatialPointerState primaryTouchData = EnhancedSpatialPointerSupport.GetPointerState(activeTouches[0]);

            SpatialPointerKind interactionKind = primaryTouchData.Kind;
            GameObject objectBeingInteractedWith = primaryTouchData.targetObject;
            Vector3 interactionPosition = primaryTouchData.interactionPosition;
            var phase = primaryTouchData.phase;

            Debug.Log($"Spatial touch detected: {interactionKind} {phase} {objectBeingInteractedWith?.name} {interactionPosition}");

            if (phase == SpatialPointerPhase.Began && primaryTouchData.targetObject != null)
            {
                // btn is guaranteed non-null here if TryGetComponent succeeded
                if (primaryTouchData.targetObject.TryGetComponent(out PolyspatialButton btn))
                {
                    if (btn.onClick != null)
                    {
                        btn.onClick.Invoke();
                    }
                    else
                    {
                        Debug.Log(btn.name + "'s onClick action is null");
                    }
                }
            }
        }
    }

So our “buttons” just have a PolyspatialButton script and a collider attached.

using System;
using UnityEngine;

public class PolyspatialButton : MonoBehaviour
{
    public Action onClick;
}

This works, but it completely bypasses the UI canvas. We have a lot of views already using canvas UI, and if UI Buttons worked it would save us a lot of time, since we wouldn’t have to convert every input from canvas UI into a collider plus an onClick action.
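In case it helps to picture it, here is a minimal sketch (the component name and click handler are just illustrative, not from our actual project) of how one of these collider “buttons” gets its onClick wired up:

using UnityEngine;

// Illustrative only: wires a click handler into the PolyspatialButton on this object.
// Assumes the same GameObject also has a Collider for the spatial pointer to hit.
[RequireComponent(typeof(Collider), typeof(PolyspatialButton))]
public class ExampleClickHandler : MonoBehaviour
{
    void Awake()
    {
        GetComponent<PolyspatialButton>().onClick += () => Debug.Log("Button clicked");
    }
}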

Similar here. We have this somewhere in an update loop:

// ExecuteEvents, BaseEventData and EventSystem come from UnityEngine.EventSystems
var activeTouches = UnityEngine.InputSystem.EnhancedTouch.Touch.activeTouches;
if (activeTouches.Count > 0) {
	var primaryTouchData = Unity.PolySpatial.InputDevices.EnhancedSpatialPointerSupport.GetPointerState(activeTouches[0]);
	if (activeTouches[0].phase == UnityEngine.InputSystem.TouchPhase.Began) {
		var targetObj = primaryTouchData.targetObject;
		if (targetObj) {
			// Fire the submit handler (e.g. Button.onClick) on whatever object the spatial pointer hit
			ExecuteEvents.Execute(targetObj, new BaseEventData(EventSystem.current), ExecuteEvents.submitHandler);
		}
	}
}

It’s pretty janky and suffers from ‘raycastable’ objects blocking buttons (i.e., an image on top of a button), but it will do for now.
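One possible mitigation for the blocking issue (just a sketch, and it only helps when the blocking graphic is a child of the button): instead of executing submit on the exact object that was hit, walk up to the nearest ISubmitHandler first:

var handler = ExecuteEvents.GetEventHandler<ISubmitHandler>(targetObj);
if (handler != null) {
	// Submit on the nearest ancestor that actually handles it (e.g. the Button),
	// rather than on the Image that happened to be hit.
	ExecuteEvents.Execute(handler, new BaseEventData(EventSystem.current), ExecuteEvents.submitHandler);
}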

Canvas buttons work fine for me. However, when I started using the TrackedPoseDriver on the main camera, they stopped working in the simulator. I then figured out that the Canvas also needs a TrackedDeviceGraphicRaycaster component for it to work in the simulator.
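For reference, a minimal sketch (the helper name is made up) of making sure a world-space canvas has that raycaster:

using UnityEngine;
using UnityEngine.InputSystem.UI;

// Illustrative helper: ensures this canvas has a TrackedDeviceGraphicRaycaster
// so tracked-device / spatial pointer input can hit its UI elements.
[RequireComponent(typeof(Canvas))]
public class EnsureTrackedDeviceRaycaster : MonoBehaviour
{
    void Awake()
    {
        if (GetComponent<TrackedDeviceGraphicRaycaster>() == null)
            gameObject.AddComponent<TrackedDeviceGraphicRaycaster>();
    }
}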

Never mind, input isn’t working properly for me. After adding AR Handheld Device input actions for devicePosition and deviceRotation, I started getting the following log and canvas buttons stopped working in simulator. It seems like the AR Handheld Device and Spatial Pointer Device do not work well together or something.

Could not find active control after binding resolution
UnityEngine.InputSystem.InputActionState:RestoreActionStatesAfterReResolvingBindings(UnmanagedMemory, InputControlList`1, Boolean)
UnityEngine.InputSystem.InputActionState:FinishBindingResolution(Boolean, UnmanagedMemory, InputControlList`1, Boolean)
UnityEngine.InputSystem.InputActionMap:ResolveBindings()
UnityEngine.InputSystem.InputActionMap:LazyResolveBindings(Boolean)
UnityEngine.InputSystem.InputActionState:OnDeviceChange(InputDevice, InputDeviceChange)
UnityEngine.InputSystem.InputManager:AddDevice(InputDevice)
UnityEngine.InputSystem.InputManager:AddDevice(String, String, InternedString)
UnityEngine.InputSystem.InputManager:AddDevice(Type, String)
UnityEngine.InputSystem.InputSystem:AddDevice(String)
Unity.PolySpatial.InputDevices.InputUtils:AddDevice(T&)
Unity.PolySpatial.InputDevices.SpatialPointerEventListener:.ctor()
Unity.PolySpatial.Internals.<>c:<.ctor>b__21_0()
System.Lazy`1:ViaFactory(LazyThreadSafetyMode)
System.Lazy`1:ExecutionAndPublication(LazyHelper, Boolean)
System.Lazy`1:CreateValue()
Unity.PolySpatial.Internals.PolySpatialSimulationHostImpl:OnInputEvent(PolySpatialInputType, Int32, Void*)
Unity.PolySpatial.Internals.PolySpatialSimulationHostImpl:OnHostCommand(PolySpatialHostCommand, Int32, Void**, Int32*)
Unity.PolySpatial.Internals.Safe:HandleHostCommand(PolySpatialHostCommand, Int32, Void**, Int32*)
UnitySourceGeneratedAssemblyMonoScriptTypes_v1:.ctor()

I think I finally got input working in the simulator. Instead of using both AR Handheld Device and XR HMD actions in the TrackedPoseDriver component, like in the visionOS sample scene, using only the XR HMD actions seems to work.
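In case it’s useful to anyone else, here is a rough sketch of what I mean (the component name is illustrative; the binding paths assume the standard XRHMD layout), configuring the TrackedPoseDriver with XR HMD actions only:

using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.XR;

// Illustrative only: assign XR HMD bindings to the camera's TrackedPoseDriver,
// instead of the combined AR Handheld Device + XR HMD actions from the sample scene.
[RequireComponent(typeof(TrackedPoseDriver))]
public class HmdOnlyPoseDriverSetup : MonoBehaviour
{
    void Awake()
    {
        var driver = GetComponent<TrackedPoseDriver>();

        var position = new InputAction(binding: "<XRHMD>/centerEyePosition");
        var rotation = new InputAction(binding: "<XRHMD>/centerEyeRotation");

        driver.positionInput = new InputActionProperty(position);
        driver.rotationInput = new InputActionProperty(rotation);

        position.Enable();
        rotation.Enable();
    }
}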
