Today I successfully added the following functionality to the Unity ARKit Plugin:
- Create another buffer for the depth information
- Hook everything up on the Unity side
- Create a shader that uses the depth info as a mask
Purpose: keep the video texture as it is, but use the depth data as a mask to automatically cut out a person and reveal what is behind them.
Current state: the depth buffer seems to be read out, can be read on the Unity side, and is masking the video texture.
Unfortunately the depth image is rather garbled, because I don't know which pixel/texture formats to use.
If anybody can shed some light on this, that would be great.
Fetching depth is done in ARSessionNative.mm, the same way as the video buffers:
- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame
{
    // ... at the end of the method, add:
    if (frame.capturedDepthData == nil) {
        NSLog(@"no capturedDepthData");
    } else {
        CVPixelBufferRef pixelBufferDepth = frame.capturedDepthData.depthDataMap;
        if (pixelBufferDepth != NULL) {
            // copy the raw depth bytes out for the Unity side
            if (s_UnityPixelBuffers.bEnable)
            {
                CVPixelBufferLockBaseAddress(pixelBufferDepth, kCVPixelBufferLock_ReadOnly);
                if (s_UnityPixelBuffers.pDepthPixelBytes)
                {
                    unsigned long numBytes = CVPixelBufferGetBytesPerRowOfPlane(pixelBufferDepth, 0) * CVPixelBufferGetHeightOfPlane(pixelBufferDepth, 0);
                    void* baseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBufferDepth, 0);
                    memcpy(s_UnityPixelBuffers.pDepthPixelBytes, baseAddress, numBytes);
                }
                CVPixelBufferUnlockBaseAddress(pixelBufferDepth, kCVPixelBufferLock_ReadOnly);
            }

            // create a Metal texture from the depth buffer
            // (same pattern as for textureY and textureCbCr)
            id<MTLTexture> textureDepth = nil;
            {
                const size_t width = CVPixelBufferGetWidthOfPlane(pixelBufferDepth, 0);
                const size_t height = CVPixelBufferGetHeightOfPlane(pixelBufferDepth, 0);
                // WHAT IS THE CORRECT FORMAT???
                MTLPixelFormat pixelFormat = MTLPixelFormatR8Unorm;
                CVMetalTextureRef texture = NULL;
                CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, _textureCache, pixelBufferDepth, NULL, pixelFormat, width, height, 0, &texture);
                if (status == kCVReturnSuccess)
                {
                    textureDepth = CVMetalTextureGetTexture(texture);
                }
                if (texture != NULL)
                {
                    CFRelease(texture);
                }
            }

            if (textureDepth != nil) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    s_CapturedImageTextureDepth = textureDepth;
                });
            }
        } else {
            // NSLog(@"no depthDataMap");
        }
    }
}
So, what is the correct format here?
And on the Unity side, to create the texture in UnityARVideo:
public void OnPreRender()
{
    // ... existing code ...

    // Texture Depth -- inserted near the end of the method
    if (_videoTextureDepth == null) {
        // Depth size differs from the video texture: width = 640, height = 360
        // What is the correct TextureFormat here???
        _videoTextureDepth = Texture2D.CreateExternalTexture(640, 360,
            TextureFormat.RGBA32, false, false, (System.IntPtr)handles.TextureDepth);
        _videoTextureDepth.filterMode = FilterMode.Bilinear;
        _videoTextureDepth.wrapMode = TextureWrapMode.Repeat;
        m_ClearMaterial.SetTexture("_textureMask", _videoTextureDepth);
    }
    _videoTextureDepth.UpdateExternalTexture(handles.TextureDepth);

    _videoTextureY.UpdateExternalTexture(handles.TextureY);
    _videoTextureCbCr.UpdateExternalTexture(handles.TextureCbCr);
    m_ClearMaterial.SetMatrix("_DisplayTransform", _displayTransform);
}