I’m testing the volume camera resizing behavior and found that the values of WindowState.OutputDimensions and WindowState.ContentDimensions were different from what I expected. My understanding is based on the documentation here. I’m not sure whether my understanding is incorrect, or whether there is an inconsistency between the documentation and the actual behavior.
I created a bounded volume camera with both input and output dimensions set to 1x1x1, and ScaleContentWithWindow initially enabled. I subscribed to OnWindowEvent using this simple logging function:
public void OnVolumeCameraWindowEvent(VolumeCamera vcamera, VolumeCamera.WindowState state)
{
    if (state.WindowEvent == VolumeCamera.WindowEvent.Resized)
    {
        Debug.Log($"vc.outDim={vcamera.OutputDimensions}, vc.inDim={vcamera.Dimensions}, " +
                  $"state.outDim={state.OutputDimensions}, state.contentDim={state.ContentDimensions}");
    }
}
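
For reference, this is roughly how the handler is attached (a simplified sketch of my setup; the component name and the serialized m_VolumeCamera field are just for illustration, and I’m not certain AddListener/RemoveListener is the right way to subscribe to OnWindowEvent in every PolySpatial version; use += / -= if it’s a plain C# event in yours):

using Unity.PolySpatial;
using UnityEngine;

public class VolumeCameraResizeLogger : MonoBehaviour
{
    // Assigned in the Inspector to the bounded volume camera described above.
    [SerializeField] VolumeCamera m_VolumeCamera;

    void OnEnable()
    {
        // Assumption: OnWindowEvent is UnityEvent-like and supports AddListener/RemoveListener.
        m_VolumeCamera.OnWindowEvent.AddListener(OnVolumeCameraWindowEvent);
    }

    void OnDisable()
    {
        m_VolumeCamera.OnWindowEvent.RemoveListener(OnVolumeCameraWindowEvent);
    }

    public void OnVolumeCameraWindowEvent(VolumeCamera vcamera, VolumeCamera.WindowState state)
    {
        if (state.WindowEvent == VolumeCamera.WindowEvent.Resized)
        {
            Debug.Log($"vc.outDim={vcamera.OutputDimensions}, vc.inDim={vcamera.Dimensions}, " +
                      $"state.outDim={state.OutputDimensions}, state.contentDim={state.ContentDimensions}");
        }
    }
}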
If I resize the volume to 1.5x1.5x1.5 in the simulator, the values I receive are ContentDimensions = 1.5x1.5x1.5 (which is good) and OutputDimensions = 1x1x1 (which is weird). The volume now occupies a 1.5x1.5x1.5 space in the real world, so according to the documentation I expect OutputDimensions to also be 1.5x1.5x1.5.
If I then disable ScaleContentWithWindow and resize the volume further, to 1.3x1.3x1.3 for example, the values I receive are ContentDimensions = 1.3x1.3x1.3 (which is weird) and OutputDimensions = 1x1x1 (which is also weird). From my understanding, all content is unscaled when ScaleContentWithWindow is disabled, so ContentDimensions should be 1x1x1, and the volume occupies a 1.3x1.3x1.3 space in the real world, so OutputDimensions should be 1.3x1.3x1.3.
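
To summarize the observed versus expected values from the Resized event (both input and output dimensions start at 1x1x1):

ScaleContentWithWindow enabled, volume resized to 1.5x1.5x1.5:
    observed: ContentDimensions = 1.5x1.5x1.5, OutputDimensions = 1x1x1
    expected: ContentDimensions = 1.5x1.5x1.5, OutputDimensions = 1.5x1.5x1.5

ScaleContentWithWindow disabled, volume resized to 1.3x1.3x1.3:
    observed: ContentDimensions = 1.3x1.3x1.3, OutputDimensions = 1x1x1
    expected: ContentDimensions = 1x1x1, OutputDimensions = 1.3x1.3x1.3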
Maybe my understanding is incorrect. Can someone help me out? Thanks a lot!!!
Environment: PolySpatial 2.0.0-pre.11 + Unity 6 Preview (6000.0.11f1), Apple Silicon