Correct setup for Unity UI

Oh, snap. I tested it with both (we’re testing next release right now) and must have compressed the wrong version. Let me get that updated/fixed.

Ok, the project at the link is fixed for 0.4.3, and tested to work in the simulator. Sorry about the mix-up.


So I was able to build and run the sample project in the simulator, and I saw the “<<< CLICKED >>>” log. I noticed the sample project was in Bounded mode, so I added an XR Origin via Right Click in Hierarchy → XR → XROrigin (AR), swapped the Volume Camera to Unbounded, parented the Canvas to the camera at a distance of (0, 0, 1) for convenience, and rebuilt/reran. This time there was no clicked log.

I thought maybe it was because of the hierarchy, so I made a script to move it in front of the camera instead. Still no log.
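The mover script was just a basic follow, along these lines (simplified; the field names are only what I used):

    using UnityEngine;

    // Keeps the Canvas positioned in front of the target camera each frame.
    public class FollowCamera : MonoBehaviour
    {
        [SerializeField] Camera targetCamera;
        [SerializeField] float distance = 1f;

        void LateUpdate()
        {
            var t = targetCamera.transform;
            transform.SetPositionAndRotation(
                t.position + t.forward * distance,
                t.rotation);
        }
    }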

Is it possible this has not been tested in Unbounded mode, or in a project built around using AR functionality?

That’s entirely possible, and there may be an issue here I didn’t test against. Can you file a bug with this repro project you created and post the ID here?


So I ended up getting it to work in the various XR samples. I just needed to add a similar EventSystem and Input System setup to the relevant scenes. Note that the Input System UI input module is also disabled by default in the Project Launcher scene of the sample project. Thanks for the help :smiley:
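If it helps anyone else, a minimal sketch of the kind of setup I mean, creating an EventSystem driven by the Input System UI module when the scene lacks one (the component name is mine):

    using UnityEngine;
    using UnityEngine.EventSystems;
    using UnityEngine.InputSystem.UI;

    // Ensures the scene has an EventSystem with the Input System UI module.
    public class EnsureEventSystem : MonoBehaviour
    {
        void Awake()
        {
            if (EventSystem.current == null)
                new GameObject("EventSystem",
                    typeof(EventSystem),
                    typeof(InputSystemUIInputModule));
        }
    }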


I’m having a nightmare trying to get this to work in our own project. I’ve tried copying all the settings from the sample @joejo provided here, but I can’t get button clicks working. Are there any tips for debugging this stuff? I feel like I’m missing something tiny somewhere.

For what it’s worth, I’ve tested the sample using Unity 2022.3.12 and the 0.5.0 PolySpatial packages, and the sample works with those upgrades, both in the simulator and on device. As stated, it doesn’t work in the Editor, but toggling the InputSystemUIInputModule component off and on at runtime fixes it.
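In case it’s useful, that toggle can also be scripted; a rough sketch, assuming a serialized reference to the module:

    using System.Collections;
    using UnityEngine;
    using UnityEngine.InputSystem.UI;

    // Workaround: bounce the UI input module once after startup.
    public class InputModuleKick : MonoBehaviour
    {
        [SerializeField] InputSystemUIInputModule module;

        IEnumerator Start()
        {
            module.enabled = false;
            yield return null; // wait one frame
            module.enabled = true;
        }
    }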

I’ve figured out our problem and can reproduce the issue in the sample project. Basically, we have a canvas and volume camera that are much larger than the sample’s, and buttons are not clickable unless they are at (0, 0) on the canvas.

I’ve proven this by breaking the sample:

  • Change the volume camera to have dimensions of (650, 650, 650)
  • Adjust the canvas size to match (Scale 0.25, Width/Height: 2600)
  • Adjust the button size to width: 600, height: 150
  • Move the button to the bottom of the canvas

Run this in the simulator (or on device): looking at the button will highlight it, but clicking it does not invoke the OnClick event handler.

I’ve submitted a bug report but haven’t gotten confirmation on it yet, so I’ve uploaded the modified sample here.


I traced the issue to GraphicRaycaster.Raycast. When looping through foundGraphics, we hit this test:

if (!RectTransformUtility.RectangleContainsScreenPoint(graphic.rectTransform, pointerPosition, eventCamera, graphic.raycastPadding))
    continue;

and RectTransformUtility.RectangleContainsScreenPoint fails, causing us to skip over each graphic. pointerPosition is always (0,0) and eventCamera is always null.
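For anyone else digging into this, a quick way to see what the raycaster is working with is to swap in a logging subclass (a rough sketch; the class name is mine):

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.EventSystems;
    using UnityEngine.UI;

    // Replaces GraphicRaycaster on the Canvas and logs each raycast attempt.
    public class LoggingRaycaster : GraphicRaycaster
    {
        public override void Raycast(PointerEventData eventData, List<RaycastResult> results)
        {
            int before = results.Count;
            base.Raycast(eventData, results);
            Debug.Log($"pointer={eventData.position} camera={eventCamera} hits={results.Count - before}");
        }
    }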

Is there any update here, @joejo? I’ve made a super simple modification to the sample you provided in this thread, where there is a button in the middle of the screen that is clickable, and a button at the bottom of the screen that is not. Please find the project here.


Sorry for the late reply.

The short story is that you need to move your input camera (in the case of this project, the main camera) back so that all the UI elements are visible to it. That will fix the issue.

The longer story is that we need a camera to do raycast input with, but whether things are visible to that camera is not at all reflected in what is visible to you in the actual world you have set up. This is complicated by the fact that we have no idea what things like camera dimensions or ‘screen size’ are in visionOS, so we can’t really compensate directly for them. We are looking into ways to get around this, but for now this should fix the situation.
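As a rough example, you can pull the camera back just far enough that the canvas fills its vertical field of view; a sketch along these lines (component and field names are only illustrative, and it assumes a centered canvas pivot):

    using UnityEngine;

    // Positions the input camera so the whole canvas fits in its vertical FOV.
    public class FitCanvasInFrustum : MonoBehaviour
    {
        [SerializeField] Camera inputCamera;
        [SerializeField] RectTransform canvasRect;

        void Start()
        {
            // World-space height of the canvas.
            float height = canvasRect.rect.height * canvasRect.lossyScale.y;

            // Distance at which that height exactly fills the vertical FOV.
            float halfFov = inputCamera.fieldOfView * 0.5f * Mathf.Deg2Rad;
            float distance = (height * 0.5f) / Mathf.Tan(halfFov);

            inputCamera.transform.SetPositionAndRotation(
                canvasRect.position - canvasRect.forward * distance,
                canvasRect.rotation);
        }
    }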

Hey @joejo! Can you explain in more detail where to position a Canvas’s EventCamera so that it is able to recognize click events?

We’re still struggling to get our game’s UI to react to clicks. The elements do get the hover effect, but they don’t react when clicked, neither in the simulator nor on device. So I looked at it again in a simple project based on the UnityUI project you sent, but I have a hard time understanding where the EventCamera has to be positioned.

In my experiments it works if the EventCamera is positioned at the edge of the VolumeCamera. E.g. when the VolumeCamera sits at (0,0,0) with Dimensions set to (1,1,1) (with a Canvas and UI Button inside this volume) and I position the EventCamera at (0,0,-0.5), it works as expected. When I move it farther away it gradually stops working, and if I move it along the y-axis only part of the button works, which is kind of expected, but I don’t fully understand how they’re related or where exactly I should position this EventCamera relative to the Canvas in the volume.
Then I even removed the camera from Canvas.EventCamera and the button still reacted to clicks, which doesn’t make sense to me anymore. If there’s no EventCamera set, does Unity use the VolumeCamera as a default for raycasting the UI elements or something?

Here’s the example project I worked with. It will work if you start it as-is, since the Canvas has no EventCamera set. Once you set the camera called UICamera as the Canvas’s EventCamera, the button doesn’t work anymore. If you move that camera to (0,0,-0.5), it works the same way as when it’s not set on the Canvas…

I would like to understand this relationship, so I can better figure out why it doesn’t work in our full project. Any help or explanations would be appreciated! Thanks!


In my testing today I’m pretty certain I found that the Canvas only uses the camera tagged ‘MainCamera’ for raycasting / registering events, and that it doesn’t matter which camera is set as Canvas.EventCamera. This is not how it should be, right?
This means that the camera tagged ‘MainCamera’ has to see the UI, otherwise the UI won’t react to clicks.

No, that is not how it should be. If you look at where it sets the camera, it should set it as the camera on the event system. But the raycast is only part of the equation as there is also the SpatialPointerEventListener and the camera that it uses. This is part of the fix that we are putting out in the next release.

As far as the relationship goes, the raycaster should use the EventSystem camera, as you noted, and that camera will only raycast against things that are within its view frustum. So if the UGUI elements are outside the bounds of what your camera can see, then, regardless of what you can see visually, hit testing will fail.

If you think there is an issue with it using the wrong camera, you can always try to create your own raycaster derived from GraphicRaycaster and add that to the Canvases that you want to use/test with.

Something like this may work for you:

    class MyTestRaycaster : GraphicRaycaster
    {
        // Assign the camera you want hit testing to use.
        public Camera MyCamera;

        public override Camera eventCamera
        {
            get
            {
                if (MyCamera != null)
                    return MyCamera;

                return base.eventCamera;
            }
        }
    }
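To try it, replace the stock GraphicRaycaster on the Canvas with this component and assign the camera you want to test with in the Inspector.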

Thanks for the info. We have worked around this issue by tagging the camera that always has the UI in its frustum with the ‘MainCamera’ tag.
But if I understand you correctly, then the issue that Canvas.EventCamera is ignored and the system always uses the MainCamera for raycasting will be fixed in the next release? Good to know!

And is the SpatialPointerEventListener responsible for the hover effect on visionOS? Because that always worked on our UI elements, but the click events were never registered.

Hi @joejo, we have updated to PolySpatial 0.7.1, and unfortunately this breaks our UI setup again. In 0.6.3 our UI worked when we gave our UI Camera, which is set as the EventCamera of our Canvas, the ‘MainCamera’ tag. But in 0.7.1 this doesn’t work anymore, nor does setting this camera as the EventCamera of our Canvas.
I again compared our test projects to the UnityUI project you shared in here, but couldn’t find out what the differences are. :frowning:

I have filed a bug with a TestProject attached. CASE IN-64089
In the TestProject the button only reacts to clicks when it’s positioned at (0,0,0) and not otherwise.

Wishing you happy holidays and looking forward to your response once you have time to look at this! :christmas_tree:
Best,
Felix

I’m seeing a similar issue with 0.7.1. A setup that works in 0.6.3 does not in 0.7.1. Inputs only seem to work if the UI is within 1 unit from the origin or if the volume camera is set to bounded.

I think I found a good workaround. Seems like syncing the transform of VolumeCamera.BackingCamera with the main camera works.

Thanks for the tip. I hadn’t seen VolumeCamera.BackingCamera before; good to know it exists! Sadly your workaround doesn’t work for my test scene. :frowning: Not sure I understood you correctly, but here’s what I tried in a quick test:

  • We have a camera called “UI Camera” that is set as the EventCamera of our Canvas.
  • On this camera I have a component that sets the position and rotation of this “UI Camera” to the position and rotation of the BackingCamera, like this:

    void Update()
    {
        volumeCamera.BackingCamera.transform.GetPositionAndRotation(out Vector3 vCamPos, out Quaternion vCamRot);
        transform.SetPositionAndRotation(vCamPos, vCamRot);
    }

This did not help in our test project. Is that (kinda) what you’re doing for your workaround? Do you also have a camera set as Canvas.EventCamera? Is that camera also tagged as ‘MainCamera’?

In addition to syncing the position and rotation, I’m also doing the same thing for scale due to how some of the objects are set up in my scene. I’m not sure if you’ll also need to do that.

Canvas.EventCamera isn’t set for me and I’m using the camera at Camera.main to set up the backing camera. I’m basically only using a single camera (tagged as MainCamera) for the project.

Doing the above works both in the 0.7.1 template project (with additional objects for UGUI) and the project I’m working on.
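Putting it together, the sync amounts to something like this (a sketch; _volumeCamera is an assumed serialized reference, and the using for VolumeCamera may differ by PolySpatial version):

    using UnityEngine;
    using Unity.PolySpatial; // adjust to your PolySpatial version if needed

    // Mirrors the main camera's full transform onto the volume's backing camera.
    public class BackingCameraSync : MonoBehaviour
    {
        [SerializeField] VolumeCamera _volumeCamera;

        void LateUpdate()
        {
            var main = Camera.main.transform;
            var backing = _volumeCamera.BackingCamera.transform;
            backing.SetPositionAndRotation(main.position, main.rotation);
            backing.localScale = main.localScale;
        }
    }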

Hey, I was running into the same problem and wanted to try the solution you posted. It seemed backwards to me, and when I ran the following code, it resolved my UI input issues:

public void Update()
{
    if (_mainCamera.transform.hasChanged)
    {
        // Mirror the main camera's pose onto the volume's backing camera.
        _mainCamera.transform.GetPositionAndRotation(out Vector3 pos, out Quaternion rot);
        _volumeCamera.BackingCamera.transform.SetPositionAndRotation(pos, rot);

        // hasChanged is never cleared automatically, so reset it here.
        _mainCamera.transform.hasChanged = false;
    }
}

_mainCamera and _volumeCamera are [SerializeField]s I assigned in my main scene.

Hope that helps :smiley: