With the Meta Quest 3, I want to place a 3D object on a real table. I’m using Unity and the Meta SDK.
I know Meta provides MRUK (Mixed Reality Utility Kit), which allows room scanning and semantic labeling of real-world objects, but I'd like to achieve this without relying on that step.
What I've built so far lets the user define the table's four corners by pinching their index finger and thumb at each vertex; my 3D object then spawns at the centroid of those four points (roughly like the sketch below).
It works, but it's hard to place the points precisely, and any error shifts where the object ends up. I'd love to hear ideas for improving this process.
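For context, here's a minimal sketch of the corner-capture logic, assuming the Meta XR Core SDK's `OVRHand`/`OVRSkeleton` hand-tracking components are already set up in the scene (class and field names like `TableCornerPlacer` and `objectPrefab` are just placeholders):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class TableCornerPlacer : MonoBehaviour
{
    [SerializeField] private OVRHand hand;          // tracked hand (e.g. right)
    [SerializeField] private OVRSkeleton skeleton;  // skeleton for the same hand
    [SerializeField] private GameObject objectPrefab;

    private readonly List<Vector3> corners = new List<Vector3>();
    private bool wasPinching;

    private void Update()
    {
        if (!hand.IsTracked) return;

        bool isPinching = hand.GetFingerIsPinching(OVRHand.HandFinger.Index);

        // Capture a point only on the rising edge of the pinch gesture,
        // so holding the pinch doesn't record duplicates.
        if (isPinching && !wasPinching && corners.Count < 4)
        {
            corners.Add(GetIndexTipPosition());
            if (corners.Count == 4) SpawnAtCentroid();
        }
        wasPinching = isPinching;
    }

    private Vector3 GetIndexTipPosition()
    {
        foreach (var bone in skeleton.Bones)
        {
            if (bone.Id == OVRSkeleton.BoneId.Hand_IndexTip)
                return bone.Transform.position;
        }
        return hand.PointerPose.position; // fallback if the bone isn't found
    }

    private void SpawnAtCentroid()
    {
        // Average the four captured corners and spawn the object there.
        Vector3 centroid = Vector3.zero;
        foreach (var p in corners) centroid += p;
        centroid /= corners.Count;

        Instantiate(objectPrefab, centroid, Quaternion.identity);
    }
}
```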
Does anyone have a cool idea for making this more precise, or a reference/sample I could look at?