I’m curious if anyone has explored remote multiplayer + MARS. The complexity of merging two physical environments so the experience works for each player individually, while keeping avatars in roughly correct locations, hurts my head. Has anyone begun to tackle this? I’d love to bounce some ideas.
I’ve played around with this a little bit. Mostly, what I found works is having a MARS-located object and localizing the network experience around that object. The networked positions and orientations of the players are sent relative to the orientation of the proxy. This does present some challenges (for example, one player might be standing in a large room 5 meters away from the proxy while another has a small room that is only 2x2 meters), but it does help to orient/center a common area of focus.
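To make the proxy-relative idea concrete, here’s a minimal sketch (my own helper names, not a MARS or Unity API) that expresses a player’s pose relative to a shared proxy before sending it, then reconstructs a world pose around the receiver’s own proxy. It assumes yaw-only rotation and represents a pose as (x, z, yaw in radians) for simplicity.

```python
import math

def world_to_proxy(pose, proxy):
    """Express `pose` relative to `proxy` (both in the sender's room)."""
    px, pz, pyaw = proxy
    x, z, yaw = pose
    dx, dz = x - px, z - pz
    c, s = math.cos(-pyaw), math.sin(-pyaw)
    # Rotate the positional offset into the proxy's frame.
    return (c * dx - s * dz, s * dx + c * dz, yaw - pyaw)

def proxy_to_world(rel, proxy):
    """Place a received relative pose into the receiver's room."""
    px, pz, pyaw = proxy
    rx, rz, ryaw = rel
    c, s = math.cos(pyaw), math.sin(pyaw)
    return (px + c * rx - s * rz, pz + s * rx + c * rz, ryaw + pyaw)

# Sender's proxy sits at (2, 3) facing yaw 0; the player stands 2 m in
# front of it. The receiver's proxy is at (10, 10) facing yaw pi/2.
rel = world_to_proxy((4, 3, 0.0), (2, 3, 0.0))      # -> (2, 0, 0)
remote = proxy_to_world(rel, (10, 10, math.pi / 2))  # -> (10, 12, pi/2)
```

Only `rel` crosses the network, so each client is free to have its proxy matched to a completely different spot in its own room.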
I imagine this would work well for tabletop AR experiences, such as a chess game, where navigation away from a specific proxy (point of interest) isn’t part of the game mechanics. If navigation between two points of interest is necessary, consider a teleport when the player approaches the second proxy, so that orientation and position aren’t relevant between the two points of interest.
Another solution is to create a proxy group and express progress or gameplay as a percentage between the two proxies. An example would be a tightrope from a table to a chair: each tap could advance a character 1% across the wire. Regardless of the distance between the table and chair in each environment, the progress would be normalized to each environment.
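The tightrope example above can be sketched in a few lines. This is just an illustration of the normalization idea (the class and names are mine): the only replicated state is a value t in [0, 1], and each client maps t onto the segment between its own two local anchors, however far apart they happen to be.

```python
def lerp(a, b, t):
    """Linearly interpolate between points a and b by fraction t."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

class TightropeState:
    def __init__(self):
        self.t = 0.0  # shared, replicated progress: 0 = table, 1 = chair

    def tap(self, step=0.01):
        self.t = min(1.0, self.t + step)  # each tap advances 1%

# Client A's table and chair are 4 m apart; client B's only 1.5 m apart.
state = TightropeState()
for _ in range(50):
    state.tap()                                   # 50 taps -> t = 0.5
pos_a = lerp((0.0, 0.0, 0.0), (4.0, 0.0, 0.0), state.t)   # halfway on A
pos_b = lerp((0.0, 0.0, 0.0), (1.5, 0.0, 0.0), state.t)   # halfway on B
```

Both clients agree the character is halfway across, even though “halfway” is 2 m in one room and 0.75 m in the other.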
Lots of fun solutions to think through, but my advice would be to think about players relative to a proxy, and to find ways to normalize progress or game mechanics so they ignore room scale and layout.
We do need multiple points of interest, which is where things begin to get tricky. I really like the proxy idea. I was thinking along the lines of the percentages you mentioned between points of interest… which works fairly well until you add more than two points of interest and the locations get scrambled. I hadn’t thought of teleporting… that could potentially solve some of this, but it may also result in a lot of jumping around.
Let’s say you have an item on a table and another on a wall. You are standing by the table, correlating data between what is on the table and what is on the wall. It seems jarring to have the avatar continuously jump around; it would be nicer to just look from the table to the wall. The trick is that the wall may be located in a different spot in each person’s room, so the avatars could be looking in the wrong direction. I was thinking you could make the avatars adjust their head pose/gaze as they look over at the object, but I feel that could also result in a lot of jarring motion… though maybe less than teleporting.
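One way to sketch that gaze adjustment (my own helper names, not an engine API): instead of replicating a raw head rotation, replicate which shared object the player is looking at, and have each client aim the avatar’s head at that object’s position in its own room, turning at a capped rate to avoid jarring snaps.

```python
import math

def yaw_toward(from_pos, to_pos):
    """Yaw (radians) that points from `from_pos` at `to_pos` on the x/z plane."""
    return math.atan2(to_pos[1] - from_pos[1], to_pos[0] - from_pos[0])

def smooth_yaw(current, target, max_step):
    """Turn `current` toward `target` by at most `max_step` radians,
    taking the shortest way around the circle."""
    diff = (target - current + math.pi) % (2 * math.pi) - math.pi
    return current + max(-max_step, min(max_step, diff))

# Remote player reports "looking at the wall object". In *this* room the
# wall object sits at (3, 4) and their avatar stands at (0, 0).
target = yaw_toward((0.0, 0.0), (3.0, 4.0))
head = smooth_yaw(0.0, target, max_step=0.1)  # one frame of turning
```

Because the target is resolved per-room, both avatars end up looking at the right object even when the wall is in completely different places, and the rate cap keeps the motion from reading as a teleport of the head.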
It’s all pencil and paper at this point. Perhaps I’m making this overly complicated and it won’t feel so bad in practice.