Auto rescaling in volume is driving me nuts

this feature is driving me crazy!

as written about in the polyspatial docs
“Currently, Unity content within a bounded volume will expand to fill the actual size of the volume”

why does this happen? why can’t I control it?

stick to square dimensions

It happens even when the dimensions are the same on x, y, z. It does an automatic up-scaling of whatever's in the volume, which is really annoying, as my content should never change scale relative to the real world.

Are you setting the VolumeCamera Dimensions to match the VolumeWindowConfiguration Dimensions?

OK, that does seem to help with the stretching. I'm still confused about how this works, though. I really just want my game objects to always appear at a consistent scale compared to the real world, and I also want the volume to take up less space in the visionOS shared space. How do I do both?

Maybe, I’m not quite picturing what you are trying to accomplish.

If you pick a size for the volume window and match the volume camera dimensions, then whatever you can fit into the volume should be at the same scale in the scene and in the real world. E.g. if you can fit a 1 meter cube into the volume camera bounds in the scene, it should be displayed as a 1 meter cube to the user. However, the OS doesn’t guarantee that it will give you the size of volume window that you asked for. If it gives you a different size, then you would still see content scaling as a result. You can use the volume camera events to react to this possibility.
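To react when the OS hands you a different size than you asked for, you can listen to the volume camera's window events. A minimal sketch, assuming a PolySpatial version where `VolumeCamera` exposes a `WindowStateChanged` event whose `WindowState` carries the actual output dimensions (the exact event and field names vary between PolySpatial releases, so check your installed package):

```csharp
using UnityEngine;
using Unity.PolySpatial; // PolySpatial package; event/field names below are assumptions

public class VolumeSizeLogger : MonoBehaviour
{
    [SerializeField] VolumeCamera volumeCamera;

    void OnEnable()
    {
        // Assumed to fire when the OS opens or resizes the volume window.
        volumeCamera.WindowStateChanged.AddListener(OnWindowState);
    }

    void OnDisable()
    {
        volumeCamera.WindowStateChanged.RemoveListener(OnWindowState);
    }

    void OnWindowState(VolumeCamera.WindowState state)
    {
        // OutputDimensions (assumed name) is the size in meters the OS actually
        // granted, which may differ from the size you requested.
        Debug.Log($"Requested {volumeCamera.Dimensions}, got {state.OutputDimensions}");
    }
}
```

Whatever the event is called in your version, the idea is the same: compare the granted size against the requested size and adjust the camera dimensions from the callback.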

Actually, I'm confused by the two modes for the volume camera resize listener.
Do you have solid examples of each mode? ScaleToFit vs. MatchWindowSize?

The script that defines those options is part of the sample. If I’m reading the code correctly, MatchWindowSize changes the VolumeCamera Dimensions so that it matches the actual VolumeWindow size that the OS created for you. This will either clip or include “extra” space within the volume.

ScaleToFit computes a uniform scale for the VolumeCamera Dimensions based on the smallest dimension of the VolumeCamera object and then resets the VolumeCamera Dimensions to the actual window size times the computed scale factor.

That second algorithm works for the sample because it uses cubic volume cameras, but it might not work for rectangular volumes. Because it is just a sample, you would have to adapt the code to your situation.
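The ScaleToFit math, roughly, under the sample's cubic-volume assumption (a sketch following the description above; the sample's exact arithmetic may differ):

```csharp
using UnityEngine;
using Unity.PolySpatial; // VolumeCamera from the PolySpatial package

static class ScaleToFitSketch
{
    // ScaleToFit: derive a uniform scale factor from the smallest requested
    // camera dimension, then set the camera bounds to the actual window size
    // times that factor, so content shrinks or grows uniformly to fit.
    public static void ScaleToFit(VolumeCamera camera,
                                  Vector3 requestedDimensions,
                                  Vector3 actualWindowSize)
    {
        // With a cubic volume all three components are equal,
        // so "smallest" is unambiguous.
        float requestedMin = Mathf.Min(requestedDimensions.x,
                             Mathf.Min(requestedDimensions.y, requestedDimensions.z));
        float actualMin = Mathf.Min(actualWindowSize.x,
                          Mathf.Min(actualWindowSize.y, actualWindowSize.z));
        float scale = requestedMin / actualMin;
        camera.Dimensions = actualWindowSize * scale;
    }
}
```

For a cubic request of side r and a granted cube of side a, this sets the dimensions back to r, so the content is displayed uniformly scaled by a/r, i.e. shrunk just enough to fit the smaller window.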

Suppose you want to have a life-size replica in a volume. How do you make sure it is to scale? Does choosing either of the modes make the scaling different?

So far I have been fudging smaller things by setting larger volume sizes.

Match window size should maintain the scale of the content.

Scale to fit would scale the content so it fits within the new window size. However, it only works for volumes with the same dimensions on each side; for arbitrary sizes, you would need a better way to select which dimension to use as the basis for scaling.
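For rectangular volumes, one reasonable basis (my own suggestion, not from the sample) is the per-axis window-to-request ratio that fits worst:

```csharp
using UnityEngine;
using Unity.PolySpatial; // VolumeCamera from the PolySpatial package

static class RectangularScaleToFit
{
    // Generalized scale-to-fit: pick the axis whose granted-to-requested ratio
    // is smallest, so content shrinks just enough to fit on every axis
    // without distortion.
    public static void ScaleToFit(VolumeCamera camera,
                                  Vector3 requestedDimensions,
                                  Vector3 actualWindowSize)
    {
        float fit = Mathf.Min(actualWindowSize.x / requestedDimensions.x,
                    Mathf.Min(actualWindowSize.y / requestedDimensions.y,
                              actualWindowSize.z / requestedDimensions.z));
        // Dividing the granted size by the worst-case ratio yields camera
        // bounds whose displayed scale equals 'fit' on every axis, so the
        // content is scaled uniformly and the limiting axis exactly fits.
        camera.Dimensions = actualWindowSize / fit;
    }
}
```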

Found this great post going into detail on the weirdness of volume resizing; it really helps you understand its rather undocumented behavior.


Is there a way to know the absolute min and max sizes that the volume camera can be?

There are two parts to this, the Volume Camera and the Volume Window.

The Volume Camera dimensions don’t have a technical maximum size, but do have a practical limit because the contents are scaled down or cropped to fit in the Volume Window. You will have to determine what works for your app.

The maximum size of the Volume Window is determined by visionOS. I haven’t found any Apple documentation that specifies the limit. I have tested a large volume window configuration in the simulator and noticed that the largest window dimension the OS created was 2 meters (well, 1.99), but I don’t know if that’s definitive.
