Is there any way to reconstruct a world space position from depth?

I am currently using URP 14.0.8 and Unity 2022.3.21f1. I understand that in an ordinary rendering pass, UNITY_MATRIX_I_VP, positionCS, and the depth value are enough. In VR mode with Foveated Rendering enabled, however, things are different. The CopyDepthPass does not enable foveated mode, and when my own draw-screen-mesh pass enables Foveated Rendering, positionCS and texcoord are no longer equal. Sampling depth with positionCS and then reconstructing the world-space position with texcoord works fine; but when I switch the RT to half resolution, Vision Pro's viewport is not set correctly, which causes rendering issues. I also tried disabling Foveated Rendering: the viewport was then correct, but reconstructing positionWS with the I_VP matrix still gave wrong results.
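For reference, the non-foveated reconstruction the question describes is the standard inverse-view-projection idiom (in HLSL: `mul(UNITY_MATRIX_I_VP, float4(ndc.xy, deviceDepth, 1))`, then divide by `w`). A minimal C++ sketch of that math, with a hypothetical `invVP` standing in for UNITY_MATRIX_I_VP:

```cpp
#include <cmath>
#include <cstdio>

// Minimal row-major 4x4 math, just enough to show the reconstruction.
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; }; // v' = M * v, rows times column vector

Vec4 mul(const Mat4& M, const Vec4& v) {
    float in[4] = { v.x, v.y, v.z, v.w }, out[4];
    for (int r = 0; r < 4; ++r) {
        out[r] = 0.f;
        for (int c = 0; c < 4; ++c) out[r] += M.m[r][c] * in[c];
    }
    return { out[0], out[1], out[2], out[3] };
}

// Reconstruct a world-space position from NDC xy plus the depth-buffer
// value. `invVP` plays the role of UNITY_MATRIX_I_VP; the perspective
// divide by w after the multiply is what makes this work for
// perspective projections.
Vec4 reconstructWorldPos(const Mat4& invVP,
                         float ndcX, float ndcY, float deviceDepth) {
    Vec4 p = mul(invVP, { ndcX, ndcY, deviceDepth, 1.f });
    return { p.x / p.w, p.y / p.w, p.z / p.w, 1.f };
}
```

The foveated-rendering problem in the thread is exactly that the NDC coordinates fed into this function no longer match the texel you sampled depth from, so the inputs, not the math, are what breaks.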


I've solved this issue. The cause was that the render resolution and the physical resolution were different. The rasterization rate map (Metal's variable rasterization rate feature, analogous to VRS in DX12 and Vulkan) needs a buffer to transform coordinates between those two resolutions.
Disabling Foveated Rendering should work. But if you want better performance, keep Foveated Rendering ENABLED, then do some NativeRenderPlugin work to fetch the buffer called rasterization_rate_map_data. Once you transform the render coordinates through it, everything works correctly.
