How to Set Up People Occlusion

I just updated my Unity AR Foundation and iOS to the latest preview versions. I tried to get the fancy new people occlusion feature working but have no idea where to start.
I read the docs and looked through the sample repo but didn't find anything helpful (maybe it's just me). Is there an option I should turn on in the Unity editor, or is this feature similar to light estimation in that it requires external scripts?
Thank you


Check out the “HumanSegmentationImages” sample scene. Right now, it just surfaces the stencil and depth buffers as raw images. We’re working on a better sample, but that should get you started.


I was wondering if you could suggest how to go about using the raw depth and stencil buffers to perform occlusion. Would these have to be used in a custom post-process effect? Can they be applied directly, or do they have to be cropped/rotated to match the screen resolution/orientation?

Thanks!

Some other questions: do UnityARKit_HumanBodyProvider_TryGetHumanDepth and UnityARKit_HumanBodyProvider_TryGetHumanStencil use Apple's generateDilatedDepthFromFrame and generateMatteFromFrame to acquire the buffers? (i.e., are the buffers already scaled and refined using ARKit's API?)

Is there any chance there will be a way to use the human-segmentation depth buffer as the base depth buffer, or will it be necessary to sample it separately inside a shader and then compare it against the fragment depth, or something like that?

You will need to write your own shader that uses both the stencil and depth images.

The values in the depth buffer are in meters with the range [0, infinity) and need to be converted into view space, with the depth value [0, 1] mapped between the near and far clip planes. Additionally, because non-human pixels in the human depth image are 0, you will need to use the stencil buffer to know whether that 0 value means a human-occluding pixel at the near clip plane or a non-human pixel that should get the value of the far clip plane.

Todd
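
For illustration, here is a minimal, untested sketch of that conversion in Cg/HLSL. It assumes the human depth and stencil textures are bound as _HumanDepthTex and _HumanStencilTex (illustrative names; bind whatever your script surfaces) in a shader that includes UnityCG.cginc:

sampler2D _HumanDepthTex;   // metric human depth in meters, R channel
sampler2D _HumanStencilTex; // human stencil: 1 = human, 0 = not

// Inverse of Unity's LinearEyeDepth(): eye-space meters -> raw device depth.
float EyeDepthToRawDepth(float eyeDepth)
{
    return (1.0 / eyeDepth - _ZBufferParams.w) / _ZBufferParams.z;
}

float SampleOcclusionDepth(float2 uv)
{
    float stencil = tex2D(_HumanStencilTex, uv).r;
    float meters = tex2D(_HumanDepthTex, uv).r;

    // Raw depth at the far clip plane is 0 with reversed-Z (Metal/iOS), 1 otherwise.
    #if defined(UNITY_REVERSED_Z)
    float farValue = 0.0;
    #else
    float farValue = 1.0;
    #endif

    // Non-human pixel: treat it as the far plane so it never occludes.
    if (stencil < 0.5)
        return farValue;

    // Clamp to the near clip plane (_ProjectionParams.y) to avoid dividing by zero.
    return EyeDepthToRawDepth(max(meters, _ProjectionParams.y));
}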


Thanks for the extra info! Sorry if this is a dumb question, but would there potentially be some way of blitting the depth texture onto the frame buffer's depth buffer before the whole scene draws (right after the camera clears it)? For scenarios where we don't need the stencil functionality and only want the depth buffer, this would really simplify the workflow. I'm working on adding people occlusion to an AR project, but it uses some rather complex shaders from the Asset Store that are tough to modify because they use all kinds of #include .cginc files and preprocessor directives…

Also, any estimate on when we might see a sample demonstrating the technique you outlined in your previous post?

Thanks!

The sample provides us with the stencil and depth data for people occlusion, but how can we get the original depth map? Is it possible to control how far away people are detected for occlusion, or is that a fixed value?


I feel like I'm a little out of my depth (no pun intended) here, but it seems like drawing a full-screen quad with a shader that only outputs to SV_Depth, before the rest of the scene draws, would be the simplest implementation of basic occlusion.

I’m a bit of a neophyte when it comes to View space… I don’t really understand the range of values derived from something like UnityObjectToViewPos( float4( v.vertex.xyz, 1.0 ) )

Clip space I understand a bit more because the x and y components are (-1 → 1), but I don’t really understand the z and w components.

Anyway, I hope Unity will include an example that demonstrates just writing the depth to SV_Depth, because that seems like a much more user-friendly way of adding this to any project.
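
For what it's worth, here is a rough, untested sketch of what such a full-screen SV_Depth pass might look like, reusing the hypothetical SampleOcclusionDepth helper from the conversion sketch above (same UnityCG.cginc include assumed). The pass would need ZWrite On, ZTest Always, and ColorMask 0 so only depth is written:

struct appdata
{
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
};

struct v2f
{
    float4 vertex : SV_POSITION;
    float2 uv : TEXCOORD0;
};

v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    return o;
}

// Outputs only depth; with ColorMask 0 the color buffer is untouched.
float frag (v2f i) : SV_Depth
{
    return SampleOcclusionDepth(i.uv);
}

Drawn as a full-screen quad (e.g. from a CommandBuffer at CameraEvent.BeforeForwardOpaque) right after the camera clears, this would pre-populate the depth buffer the way the earlier posts describe.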


@todds_unity So if I understand correctly, this custom shader will only be concerned with returning numerical depth data, not necessarily displaying anything?

@todds_unity Is there a reason why the image in the HumanSegmentationImages scene is mirrored? We are attempting to flip it back so we can apply the shader. If you know how to un-invert the output from the HumanSegmentationImages scene, please let us know.

Thank you.

In whatever shader you’re using to sample from the depth/stencil textures, just invert the y coordinate:

uv.y = 1.0 - uv.y;

Also waiting for a working example.

What variable tells us whether the depth or stencil value is 0 or 1? I can only find the humanStencil and humanDepth textures. How can I get the depth data from what we already have?

Is there a reason why the provided sample's camera output is very zoomed in compared to the depth/stencil data output? How can we change the output camera so that it sees what the original camera and the stencil/depth output see? They are not the same outputs.

@todds_unity @tdmowrer

Yes, agreed. Getting anything working took a lot of trial-and-error hackery using blitting, and even then the results are barely acceptable.

https://vimeo.com/343272414


For the people depth value, is there a limit on the distance between the camera and the detected person? Past 1 meter, the people depth value renders as a pure color, meaning there's no more change in value.

It goes beyond 1 meter, but in a fragment shader, 1.0 in the red channel is fully red. If you try dividing the depth by 10, you won't reach full red until 10 meters away. Make sense? It's a floating-point texture, so it can store values beyond 1.0.
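
For example, a quick visualization along those lines (a sketch, using the illustrative _HumanDepthTex name from earlier, with v2f as in a standard image-effect shader) could look like:

fixed4 frag (v2f i) : SV_Target
{
    // Remap 0-10 m of metric depth into the red channel; fully red at 10 m.
    float meters = tex2D(_HumanDepthTex, i.uv).r;
    return fixed4(saturate(meters / 10.0), 0.0, 0.0, 1.0);
}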


Why is the depth/stencil flipped horizontally in landscape mode (which can be fixed in the shader by uv.x = 1.0 - uv.x;), and why doesn't the depth/stencil change accordingly when I hold my phone in portrait mode? Does anyone know how to fix this?
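
One workaround that has come up is to run the human texture UVs through the same kind of display transform the camera background applies for the current orientation. A hedged sketch, where _HumanDisplayTransform is an illustrative matrix you would have to set from script each frame (the ARKit camera background shader does the equivalent with its own display matrix):

float4x4 _HumanDisplayTransform; // illustrative name; set from script per frame

// Rotate/mirror the human texture UVs to match the current screen orientation.
float2 TransformHumanUV(float2 uv)
{
    return mul(float3(uv, 1.0), (float3x3)_HumanDisplayTransform).xy;
}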


I have the same issue.

Can anyone tell me what the problem is? I'm working on masking the AR scene with the stencil texture.

I tried using it as a plain mask texture first, but it didn't display correctly. Next, I tried to flip and scale the texture, but that didn't work either. Please see the attached image; it's a little bit off. I've also attached my shader code. Can you tell me how to fix it?

This shader code is used in a post effect.

v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    return o;
}

sampler2D _MainTex;
sampler2D _DepthTex;
sampler2D _StencilTex;

fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.uv);

    float2 uv = i.uv;
    uv.x = 1.0 - uv.x;

    // The 1.62 here is an attempted aspect-ratio correction:
    // the stencil texture does not have the same aspect ratio as the display.
    uv.y = (uv.y + 0.5) / 1.62;

    float stencil = tex2D(_StencilTex, uv).r;

    return lerp(col, float4(1, 0, 0, 1), stencil);
}

[Attached image: IMG_0110.PNG]