We have been working on a post-process script to get human occlusion working in the AR Foundation Samples project. We added all the required scripts to the SimpleAR scene so there is something to occlude, and then wrote the attached C# script and shader.
There are a few issues with the images we get from the ARFoundation script (for instance, the image is always 4:3, which is not the right aspect ratio on the iPad Pro 11"), but even assuming those issues will be fixed, we still have to be able to compare the depth from the depth buffer against the estimated depth from the segmentation images.
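(As an aside, the kind of UV adjustment we expect to need for that aspect mismatch is something like the sketch below; uvMultiplier is a hypothetical value we would compute on the C# side from the screen and texture aspect ratios.)

// Hypothetical UV remap: center-crop the 4:3 segmentation texture so its
// visible region matches the screen aspect. uvMultiplier would come from a
// uniform set on the C# side, e.g. textureAspect / screenAspect.
float2 RemapSegmentationUV(float2 uv, float uvMultiplier)
{
    return float2(uv.x, (uv.y - 0.5) * uvMultiplier + 0.5);
}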
We tried to convert the segmentation image to linear space like this:
float normalisedOcclusionDepth = ((UNITY_MATRIX_P[2][2] * -occlusionDepth) + UNITY_MATRIX_P[2][3]) / ((UNITY_MATRIX_P[3][2] * -occlusionDepth) + UNITY_MATRIX_P[2][3]);
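For reference, here is the full matrix multiply that we believe this per-element line is meant to reproduce (writing it this way also sidesteps any question about which index is which):

// Project a view-space point sitting occlusionDepth metres in front of the
// camera (view space looks down -Z) and perspective-divide to get NDC depth.
float4 viewPos = float4(0.0, 0.0, -occlusionDepth, 1.0);
float4 clipPos = mul(UNITY_MATRIX_P, viewPos);
float normalisedOcclusionDepth = clipPos.z / clipPos.w;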
Then we converted the depth buffer to linear space as well and inverted it so that it also runs from 0 to 1.
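In Unity shader terms that step was along these lines (simplified, so the attached shader may differ slightly):

// Sample the camera depth texture and linearise it to a 0..1 range.
// Linear01Depth reads _ZBufferParams, so it should also account for
// reversed-Z platforms such as Metal.
// sampler2D_float _CameraDepthTexture; // provided by Unity when the camera renders depth
float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
float sceneDepth01 = Linear01Depth(rawDepth);   // ~0 at the camera, 1 at the far plane
sceneDepth01 = 1.0 - sceneDepth01;              // inverted, as described above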
It did not work.
We also transposed UNITY_MATRIX_P, in case this equation (which comes from the Metal examples) assumed a transposed matrix layout. That did not work either.
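(In the full-matrix form above, that transpose attempt corresponds to swapping the mul() operands, since multiplying by a row vector is the same as using the transposed matrix with a column vector:)

// Same conversion with the transposed projection matrix:
float4 viewPos = float4(0.0, 0.0, -occlusionDepth, 1.0);
float4 clipPos = mul(viewPos, UNITY_MATRIX_P);   // == mul(transpose(UNITY_MATRIX_P), viewPos)
float normalisedOcclusionDepth = clipPos.z / clipPos.w;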
We double-checked everything and tried to do all the math ourselves…
Nothing seemed to get the two depth images into a comparable state. Does anyone have any idea what we are doing wrong?
4783715–456545–PeopleOcclusionPostEffect.cs (2.7 KB)
4783715–456548–PeopleOcclusion.shader (3.57 KB)