How to get point cloud in ARKit

I’m trying to get the point cloud generated by the LiDAR sensor on the iPad Pro. What’s the simplest way to access the points?

As far as I know, ARMeshManager can currently only generate meshes.
You can use ARPointCloudManager to get the point cloud.

Is there a way to do that, similar to Frame.PointCloud in ARCore?

Yes, you can look at the official ARPointCloudParticleVisualizer for an example of how to access the point cloud.
The general concept is to subscribe to the ARPointCloudManager.pointCloudsChanged event and listen for changes:
https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@4.0/api/UnityEngine.XR.ARFoundation.ARPointCloudManager.html#UnityEngine_XR_ARFoundation_ARPointCloudManager_pointCloudsChanged
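
A minimal sketch of that pattern (component and field names here are just placeholders):

using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class PointCloudListener : MonoBehaviour
{
    [SerializeField] ARPointCloudManager _pointCloudManager;

    void OnEnable()  => _pointCloudManager.pointCloudsChanged += OnPointCloudsChanged;
    void OnDisable() => _pointCloudManager.pointCloudsChanged -= OnPointCloudsChanged;

    void OnPointCloudsChanged(ARPointCloudChangedEventArgs args)
    {
        foreach (var pointCloud in args.updated)
        {
            // positions is a NativeSlice<Vector3>? and may be null before tracking produces data.
            if (!pointCloud.positions.HasValue)
                continue;

            foreach (var position in pointCloud.positions.Value)
                Debug.Log(position);
        }
    }
}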

The problem is that ARPointCloud contains only feature points and has no colors, so it’s not suitable for reproducing Apple’s dense point cloud example.

The solution I’m trying to work out now is basically to reproduce what Apple does, but in Unity: take the ARCameraManager frame and the AROcclusionManager depth frame, project the color from the camera pixels onto the depth image, and save the result as colored points in a mesh with MeshTopology.Points. So far I’m struggling to get any visually good results, let alone results with good performance. Performance-wise there is ARKitBackground.shader, which seems to contain some calls to native Metal functions; I guess you could try translating the code Apple has in their shader to HLSL.

By Apple’s example I mean this one: https://developer.apple.com/documentation/arkit/visualizing_a_point_cloud_using_scene_depth. I’ll share any code and progress I make, and would be glad if you do the same.

I managed to access the point cloud from ARPointCloudManager, but it doesn’t appear to be using the depth sensor, just feature points like you said. Is there something I’m missing? How come I’m only getting points from optical data and not points from the superior LiDAR sensor?

That’s a question for the Unity devs; probably it’s too much of a hassle. You can try going with the approach I outlined earlier.

@Loui_Studios_1 , I managed to create a point cloud using the image from the iPad camera and the depth image from the LiDAR. All you need to do is acquire the depth image, the camera image, and Unity’s scene camera, then map each pixel of the depth image to a pixel of the camera image (which gives you its color) and to a screen pixel of the scene camera (which, together with the depth value, gives you its world position).

var depthValues = _depthTexture.GetPixels().Select(c => c.r).ToArray();

for (int x = 0; x < DepthWidth; x++)
{
    for (int y = 0; y < DepthHeight; y++)
    {
        // Map the depth pixel to the corresponding pixel in the (higher-resolution) camera image.
        var colX = Mathf.RoundToInt((float)x * _camTexture2D.width / DepthWidth);
        var colY = Mathf.RoundToInt((float)y * _camTexture2D.height / DepthHeight);

        // Map the depth pixel to the corresponding screen pixel of the scene camera.
        var pixelX = Mathf.RoundToInt((float)x * _mainCam.pixelWidth / DepthWidth);
        var pixelY = Mathf.RoundToInt((float)y * _mainCam.pixelHeight / DepthHeight);

        // Depth in meters, stored in the red channel of the depth texture.
        var depth = depthValues[x + y * DepthWidth];

        // Unproject the screen pixel at that depth to get a world-space point.
        var scrToWorld = _mainCam.ScreenToWorldPoint(new Vector3(pixelX, pixelY, depth));

        _colors.Add(_camTexture2D.GetPixel(colX, colY));
        _vertices.Add(scrToWorld);
    }
}
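
To actually see the points I then push those lists into a mesh with point topology, as mentioned above. Roughly like this (untested sketch; the mesh filter’s material needs a shader that handles point topology and vertex colors):

var mesh = new Mesh
{
    // 32-bit indices so the mesh can hold more than ~65k points once several frames accumulate
    // (a single 256x192 depth frame is ~49k points).
    indexFormat = UnityEngine.Rendering.IndexFormat.UInt32
};

mesh.SetVertices(_vertices);
mesh.SetColors(_colors);
mesh.SetIndices(Enumerable.Range(0, _vertices.Count).ToArray(), MeshTopology.Points, 0);

GetComponent<MeshFilter>().mesh = mesh;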

Hi @Misnomer, did you manage to re-create Apple’s entire example project in Unity?

Hey! I copy-pasted their code from the Metal shader into Unity, but it didn’t work correctly out of the box. Basically they do the same thing, only on the GPU and probably faster due to the native calls. If you wish, I can post it here as a starting point.

Unity’s meshing got fixed in the latest release, so we switched to making the scan with that, because drawing the point cloud would entail too much pain with optimization. There is also a big question of how you are going to draw the points on the iPad: all the point cloud visualizers I’ve seen use geometry shaders, which are not supported by Metal. If I understood correctly you can do the same thing with compute shaders, but you will have to write them yourself. The other option is the VFX Graph, but I doubt it will eat the millions of points you get if you scan a living room (256 × 192 per frame, 1,474,560 per second at 30 fps).

Hello, I’m trying to make something similar. I would like to scan a room and save the scan with textures. Could you share more of your code? I can’t seem to get it to work for me.

That was my first thought too: save the generated mesh from the ARMeshManager. But I would like to texture it. Do you know if this is possible?


Yes, we did that, but it’s difficult. You basically have to project a camera image onto your mesh somehow. We did it by saving an image along with the camera position and rotation, and texturing the mesh as a post-processing step using that info. It was done in Blender. Theoretically you could do the same in Unity, but be prepared to spend a lot of time on it.
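
The capture side of it can be as simple as dumping the converted camera texture plus the camera pose every keyframe, roughly like this (sketch only; the file layout and names are just an example):

using System.IO;
using UnityEngine;

public class KeyframeSaver : MonoBehaviour
{
    // Sketch: write the current camera image plus the camera pose to disk so the mesh
    // can be textured later in an external tool. The file layout is just an example.
    public void SaveKeyframe(Texture2D camTexture, Camera cam, int index)
    {
        var dir = Path.Combine(Application.persistentDataPath, "keyframes");
        Directory.CreateDirectory(dir);

        // Color image as PNG.
        File.WriteAllBytes(Path.Combine(dir, $"frame_{index}.png"), camTexture.EncodeToPNG());

        // Camera pose (position + rotation) plus vertical FOV for the projection step.
        var t = cam.transform;
        var line = $"{index};{t.position.x};{t.position.y};{t.position.z};" +
                   $"{t.rotation.x};{t.rotation.y};{t.rotation.z};{t.rotation.w};{cam.fieldOfView}";
        File.AppendAllText(Path.Combine(dir, "poses.csv"), line + "\n");
    }
}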

Hey @Misnomer , thanks a lot for the precious information you’re sharing. I’m trying to texture my mesh (scanned with ARMeshManager) but I don’t know where to begin.

At first I tried to create a point cloud using the image from the iPad camera and the depth image from the LiDAR with your code example, but I’m having issues getting the depth and color data. Do you have a git repo, or could you share your code for this part more precisely? Thanks a lot.

Hey Martin, I don’t have this project on git since it’s a commercial one. But I took all the code used to get the depth and color info from Unity’s examples. The one you need is here: https://github.com/Unity-Technologies/arfoundation-samples/blob/main/Assets/Scripts/CpuImageSample.cs
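
Stripped down, the pattern from that sample looks roughly like this (untested sketch; class and field names are placeholders, it needs AR Foundation 4.1+ with environment depth enabled, and "Allow unsafe code" for the Convert call):

using System;
using Unity.Collections.LowLevel.Unsafe;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class DepthAndColorGrabber : MonoBehaviour
{
    [SerializeField] ARCameraManager _cameraManager;
    [SerializeField] AROcclusionManager _occlusionManager;

    Texture2D _camTexture2D;  // RGBA color image
    Texture2D _depthTexture;  // RFloat, depth in meters

    void Update()
    {
        // Environment depth (LiDAR) as a CPU-side image.
        if (_occlusionManager.TryAcquireEnvironmentDepthCpuImage(out XRCpuImage depthImage))
        {
            using (depthImage)
                UpdateTexture(depthImage, ref _depthTexture, TextureFormat.RFloat);
        }

        // Latest color frame as a CPU-side image.
        if (_cameraManager.TryAcquireLatestCpuImage(out XRCpuImage colorImage))
        {
            using (colorImage)
                UpdateTexture(colorImage, ref _camTexture2D, TextureFormat.RGBA32);
        }
    }

    static unsafe void UpdateTexture(XRCpuImage image, ref Texture2D texture, TextureFormat format)
    {
        if (texture == null || texture.width != image.width || texture.height != image.height)
            texture = new Texture2D(image.width, image.height, format, false);

        var conversionParams = new XRCpuImage.ConversionParams(image, format, XRCpuImage.Transformation.MirrorY);
        var buffer = texture.GetRawTextureData<byte>();
        image.Convert(conversionParams, new IntPtr(buffer.GetUnsafePtr()), buffer.Length);
        texture.Apply();
    }
}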

Thank you Misnomer. For those who may be interested, I’ve found a great GitHub project to visualize a point cloud using depth in Unity3D: https://github.com/cdmvision/arfoundation-densepointcloud (visualizing a point cloud using scene depth in Unity, similar to the WWDC20 demo).


Thanks for the code snippet! How do you access the _mainCam?

Hey, it depends on your use case, but usually you get it via Camera.main.
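
For example (assuming the AR camera in your scene is tagged MainCamera):

Camera _mainCam;

void Awake()
{
    // Camera.main returns the first enabled camera tagged "MainCamera";
    // in an AR Foundation scene that's normally the AR Camera under the AR Session Origin.
    _mainCam = Camera.main;
}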

Did anyone test this?

I tried, but it doesn’t seem to work properly. Is it just me?

Happening with me as well: the RGB values are messed up. R and G are good, but the B value turns it into greyscale.