Transition between bounded and unbounded mode?

At Unite Amsterdam yesterday, there was a nice example of switching/transitioning between bounded and unbounded mode.
How can that be achieved?
Also, how can you blend passthrough with fully immersive content in PolySpatial mode (supporting the Digital Crown), if that is possible at all?

Hi,

For mixed reality apps using PolySpatial, you can switch between unbounded and bounded mode by creating a Volume Camera Configuration asset for each mode, then assigning it to the Volume Camera’s OutputConfiguration property. You can assign this at runtime - see here for more info.
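In case it helps, here’s a minimal sketch of that runtime assignment, assuming two Volume Camera Configuration assets (one per mode) referenced from a MonoBehaviour; the exact type and property names may differ between PolySpatial package versions, so treat this as illustrative rather than definitive:

```csharp
using Unity.PolySpatial;
using UnityEngine;

// Sketch only: assumes the VolumeCamera component and its OutputConfiguration
// property as referenced in this thread; names may vary by PolySpatial version.
public class VolumeModeSwitcher : MonoBehaviour
{
    [SerializeField] VolumeCamera m_VolumeCamera;
    [SerializeField] VolumeCameraConfiguration m_BoundedConfig;   // asset with Mode = Bounded
    [SerializeField] VolumeCameraConfiguration m_UnboundedConfig; // asset with Mode = Unbounded

    public void SwitchToBounded()
    {
        m_VolumeCamera.OutputConfiguration = m_BoundedConfig;
    }

    public void SwitchToUnbounded()
    {
        m_VolumeCamera.OutputConfiguration = m_UnboundedConfig;
    }
}
```

You could wire these methods to UI buttons or call them from game logic whenever you want to change modes.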

For your second question, PolySpatial is used only in mixed reality - fully immersive content, AKA traditional VR content, does not use PolySpatial. I’ll ask around to see if it’s possible to use the digital crown to control immersion in Unity VR apps.

Edit: it’s not currently possible to control immersion in Unity VR apps, but it is on the roadmap.


@vcheung-unity are you meant to be able to change the OutputConfiguration of a Volume Camera at runtime, i.e. from Bounded to Unbounded?
Not by loading a new scene with a different Volume Camera (this works OK), but by altering the Volume Camera in the scene?

e.g. m_volumeCamera.OutputConfiguration = m_boundedConfig; ← If this is meant to work, then I’m either not doing this right, or I’ve found a bug. (Both Volume Camera Configurations are in /Resources/.)

At a glance, yes, that looks like it should work.

A couple of questions: if you build the scene with the desired bounded volume camera configuration and deploy to the simulator, then build the same scene with an unbounded volume camera configuration, do both builds look right in the simulator? And what version of PolySpatial are you using in these tests?

If things still don’t work, can you please submit a bug report and attach a repro project? Thank you!

Hello, I am developing a project that uses this technique to transition from bounded to unbounded and vice versa. Following your instructions above, everything works great except for one thing: the position of an object in Bounded mode is different from its position in Unbounded mode.
For example, I place a ball in front of the user’s eyes in Bounded mode (position 0, 0, 0). When I switch to Unbounded mode, the ball no longer stays in front of the user’s eyes; instead, it shows up at the user’s feet (I have to look down to find it).
Is there a way to persist the position of an object across modes, or at least to get the delta between the Bounded and Unbounded coordinate spaces?

Many thanks!


This is unfortunately a platform limitation: there is no connection between the coordinate spaces when switching modes (or when opening new volumetric windows). I’d encourage you to submit a Feedback Assistant request with Apple to ask for this.

In a bounded volume, the center of the volume is aligned with the center of the volume camera in Unity’s space. When an unbounded volume is opened, the origin is placed at the user’s feet by ARKit; that position is mapped to the position of the Unbounded volume camera.
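As a possible workaround (not an official API), here is a hypothetical sketch that re-places a piece of content in front of the user’s tracked head pose after you switch to the unbounded configuration. It assumes the main camera’s transform is driven by device head tracking (e.g. via a TrackedPoseDriver) once the unbounded space is open, which may not hold in every setup:

```csharp
using UnityEngine;

// Hypothetical workaround sketch: after switching to the unbounded configuration,
// move a content transform so it sits in front of the user's tracked head pose.
// Assumes Camera.main is driven by device head tracking in unbounded mode.
public class RecenterOnUnbounded : MonoBehaviour
{
    [SerializeField] Transform m_Content;     // object that should appear in front of the user
    [SerializeField] float m_Distance = 1.0f; // metres in front of the head

    // Call this after assigning the unbounded configuration and allowing a
    // frame or two for head tracking to report a valid pose.
    public void PlaceInFrontOfUser()
    {
        Transform head = Camera.main.transform;

        // Flatten the head's forward vector so content stays at eye height
        // rather than tilting up or down with the user's gaze.
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;

        m_Content.position = head.position + forward * m_Distance;
        m_Content.rotation = Quaternion.LookRotation(forward, Vector3.up);
    }
}
```

This doesn’t recover the original bounded-space coordinates (that mapping isn’t exposed by the platform), it just re-anchors the content relative to wherever the user happens to be.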
