Hello everyone. I will try to convey the idea clearly; I hope someone can follow me.
I had an idea to build an augmented reality application. It is a test app, so its functionality is simple: place primitives (cubes) on a surface. The flow is standard: after launch you scan the space, the app detects planes, and you place objects on them. Building that part is no problem. But…
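For context, this is roughly the part that already works. A minimal sketch, assuming ARKit + SceneKit on iOS (the same idea applies on ARCore or Unity AR Foundation):

```swift
import UIKit
import ARKit
import SceneKit

// Minimal controller: detects horizontal planes and drops a cube where the user taps.
class ARCubeViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]   // "scan the space" to get planes
        sceneView.session.run(config)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        // Raycast from the tap onto a detected plane.
        guard let query = sceneView.raycastQuery(from: point,
                                                 allowing: .existingPlaneGeometry,
                                                 alignment: .horizontal),
              let result = sceneView.session.raycast(query).first else { return }

        // Place a 10 cm cube at the hit position, resting on the plane.
        let cube = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                            length: 0.1, chamferRadius: 0))
        cube.simdTransform = result.worldTransform
        cube.simdPosition.y += 0.05
        sceneView.scene.rootNode.addChildNode(cube)
    }
}
```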
Is it possible to improve the application so that I don't have to scan the space every time to get the planes? Say I launch this application only at home (in my room) and nowhere else. I have an OBJ file of my room: if you load it into the scene, you can see the room and "walk" around it. Can this file be used to skip the scanning step entirely? And one more thing: ideally, everything should work so that the application can be launched from anywhere in the room. You start scanning (not sure that is the right word here) the space, and the app "understands" where you are, i.e. it matches what the camera sees against the room from the file, and you can immediately place primitives.
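To make the goal concrete, here is a sketch of the "scan once, relocalize later" flow I am imagining. It assumes ARKit's ARWorldMap, which is a serialized scan in ARKit's own format, not an OBJ; whether my OBJ model can play this role (or be converted into something that can) is exactly what I am asking:

```swift
import ARKit

// Save the current scan to disk after the first session.
func saveWorldMap(session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("No world map yet: \(error?.localizedDescription ?? "unknown")")
            return
        }
        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true)
            try data.write(to: url)
        } catch {
            print("Failed to save map: \(error)")
        }
    }
}

// On a later launch, start the session with the saved map; ARKit then
// relocalizes against it from wherever you are standing in the room.
func restoreSession(session: ARSession, from url: URL) throws {
    let data = try Data(contentsOf: url)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else {
        throw CocoaError(.coderReadCorrupt)
    }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map   // previously detected planes/anchors come back
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```

As I understand it, if relocalization succeeds, the saved anchors (and hence the planes) come back without rescanning from scratch, although the camera still has to see some of the previously mapped space first.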
I tried to search for information on this, asked neural networks, but found little. Maybe someone has encountered this? I am not asking for a ready-made solution (although that would be nice); even a pointer to documentation or an article would help, or ideas from anyone who is trying or has tried to do something similar. I would be glad of any help.
Thank you