Using MARS for well-defined AR experiences

We were kind of hoping MARS would be somewhat different from what it turned out to be - but perhaps I'm missing the point right now.

As far as I can see, most of the recognition and tracking logic is outside of MARS's scope, pushed down to AR Foundation or lower levels. So if we wanted to create something like environment or scene tracking based on pre-scanned data (point clouds or meshed geometry), where the AR triggers and experiences are pre-positioned and pre-defined in advance, we would be out of luck right now: we would need to use a third-party framework, which probably doesn't plug into MARS (for now), and/or MARS would give us little to no benefit.

Or am I missing something basic here?


Hey @Amyd80 , there are a few things in MARS to help you do persistent / location-based AR like you’re describing, and some more coming down the pipe.

If you have a model / scan / other representation of your known space, you can configure that as a simulation environment, and then be working with that space in MARS in the Simulation view. Setting up a simulation environment is covered in this doc: Simulation environments | MARS | 1.0.1
To relocalize to your space (get your device lined up with the physical space), today the best approach is using image markers: if you’re able to put an image in your space, or better yet use an existing poster or such, then you can configure a simulated version of that marker in your simulation as well. This page covers working with image marker proxies, and setting up simulated markers in your environment: Image Markers | MARS | 1.0.1

Looking ahead, we will be adding support for persistent anchors, at which point you'll be able to relocalize to your known space without the need for image markers. We have the foundation of this support, but currently don't ship providers for this functionality. It's on our radar as a high-priority ask, and we're building support now.
We’ll also be shipping the MARS Companion Apps later this year, which are phone & HMD apps specifically built for capturing this kind of environment data to bring into the Editor and make this workflow more straightforward.

Thank you for the feedback, please keep it coming :slight_smile:


Thanks, that does sound quite a bit more promising.

Just two questions: when using scan data in the Simulation view right now, I assume it doesn't "automagically" work with third-party frameworks like Vuforia or Wikitude, right? So you can't actually test the external recognition of those frameworks within the MARS environment? I at least couldn't figure out a way to do that on a cursory try. I guess this is something they need to implement on their side of things, to plug into MARS?

Secondly: can you talk a bit more about the persistent anchor functionality? That does sound like something that would be highly interesting for our current projects. Would this be as "simple" as converting an existing point cloud or mesh to a format that the MARS provider can work with (or alternatively using your companion apps to scan/record), or will it be more involved?

Vuforia and Wikitude don’t currently have provider integrations into MARS, so for example an image marker proxy set up in MARS won’t work automagically using those services at runtime. But, in the Editor, we use ‘simulated’ providers instead of the actual runtime provider anyway, so you can simulate generic image marker tracking in Editor, and then use Vuforia or Wikitude to do your runtime tracking (which would be the typical Vuforia or Wikitude object setup, not a MARS proxy).
Writing providers for either of those frameworks would simplify this, because a MARS-style image marker proxy would then also work at runtime and not require a separate setup. In some cases, you or we can write these providers; in others, it does require some adjustment from the framework developer. This page gets into writing providers (under ‘Providers’): Software development guide | MARS | 1.0.1
Let us know if you're interested in digging in and we can provide guidance; otherwise, it's good to know which providers you'd like to see next :slight_smile:

About persistent anchors, that functionality is platform-specific and requires capturing the space on the platform - you can’t convert an existing point cloud or mesh into a persistent anchor. So yep, that’s what that feature of our Companion Apps is about: with the app, you scan/record the anchors of your space, and can bring those into your project to relocalize against. Now that our initial version of the MARS Editor extension is out, we’re working on getting these Companion Apps to you as soon as possible (later this year).


Hi Jono_Unity,

You wrote this:
If you have a model / scan / other representation of your known space, you can configure that as a simulation environment, and then be working with that space in MARS in the Simulation view. Setting up a simulation environment is covered in this doc: Simulation environments | MARS | 1.0.1
To relocalize to your space (get your device lined up with the physical space), today the best approach is using image markers: if you’re able to put an image in your space, or better yet use an existing poster or such, then you can configure a simulated version of that marker in your simulation as well. This page covers working with image marker proxies, and setting up simulated markers in your environment: https://docs.unity3d.com/Packages/com.unity.mars@1.0/manual/Markers.html

Can you please explain some more on how to do the relocalize with an image?

Hey MOlanders, sure thing - here's how we do it in our location-based demo. We have a photogrammetry scan we made of San Francisco City Hall, which we set up as a simulation environment following the process in that doc link. Then we've added a synthetic image marker to that environment (most easily done via Window → MARS → MARS Panel, then 'Synthetic Image Marker' under the Create / Simulated headers, then configuring it to the desired image in the Inspector, same as for an image marker proxy). We have the marker laid out in a central location in the sim:


The implication here is that this marker really is at that exact spot in the real location - this is why I suggested using an existing poster or other such 'permanent' marker.

Now that we have the simulated setup, the other side of the coin is authoring the proxy that you’ll actually deploy in your scene/app. Here’s the scene we use in conjunction with the sim environment above, with a bunch of content under this marker proxy, so we can position things absolutely around it:

That gets you most of the way there – you should now be able to lay content out relative to the space by positioning it relative to the marker.

What's missing now is any occlusion from the real environment. This may or may not be important depending on the nature and scale of your app, but we solved it via that child you see in the hierarchy above, 'MemorialCourt_Reference'. This is the actual photogrammetry scan again, positioned so that it lines up with the real location, which means we can then either 1) render a stylized version of the real space over itself, or 2) apply an occlusion shader (write depth only, not color) so virtual objects don't render through real buildings. In a smaller space we'd recommend using a Plane Visualizer to do this occlusion, but plane finding isn't viable at large outdoor scales.
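For reference, a depth-only occluder like that can be just a few lines of shader code. Here's a minimal ShaderLab sketch for the built-in render pipeline (not the exact shader from the demo, and the name is illustrative; URP/HDRP setups differ) - put it on a material and assign that material to the reference scan:

```
Shader "Custom/DepthOnlyOccluder"
{
    SubShader
    {
        // Render slightly before other opaque geometry so the occluder
        // fills the depth buffer first.
        Tags { "Queue" = "Geometry-10" "RenderType" = "Opaque" }

        // Write depth only, not color: virtual objects behind this mesh
        // fail the depth test and disappear, while the camera feed stays
        // visible where the occluder is.
        ZWrite On
        ColorMask 0

        Pass {}
    }
}
```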

The Rules (MARS 1.1) webinar from a bit ago covers this use case (among others) in the context of the Rules feature - I think I've hit the main points here, but if you prefer video, give it a look :slight_smile: Unity

Let me know if this makes sense & works for you – thanks!

Hi.

I’m glad I found this post as it may offer a solution to a problem I have creating an AR trail during lockdown.

If a 3D environment is loaded in using the image marker method, how consistent will the line-up be once the user moves away from the image?

Is it possible to bring ARWorldMap data into MARS? Or if not, if I triggered an ARWorldMap when the image marker is recognised, would that help stabilise and anchor the 3D environment?

In a nutshell, I have an accurate scan of a church interior that I want to load in place when the user scans an image marker. Then, as they move freely around the physical space, the virtual space remains aligned.

Any advice or confirmation would be appreciated!

Thanks
Darius

hello @dariuspowell

Image markers will work well while the app recognizes them. The thing is, if you get far enough away, the app will stop recognizing them and things might not work.

To compensate for this, you could use several image markers placed across the environment; if you know beforehand where each marker sits relative to the environment, you can still calculate the position and alignment of the environment.
The important part would be to have at least one image marker recognized at all times.
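To make that concrete, here's a minimal C# sketch of the math (this is not a MARS API - the component, the `MarkerDefinition` struct, and how you author the marker poses are all illustrative). Given a marker's known pose relative to the scanned environment and the world pose your tracking provider reports for it, you can solve for the environment root:

```csharp
using UnityEngine;

// Aligns a pre-scanned environment to the physical space using whichever
// image marker is currently recognized. Assumes markersInEnvironment holds
// each marker's pose *relative to the environment root*, authored against
// the scan in the Editor. Names are illustrative, not a MARS API.
public class MarkerRelocalizer : MonoBehaviour
{
    [Tooltip("Root of the scanned/virtual environment to align.")]
    public Transform environmentRoot;

    [System.Serializable]
    public struct MarkerDefinition
    {
        public string imageName;        // matches the tracked image's reference name
        public Vector3 localPosition;   // marker pose in environment-root space
        public Quaternion localRotation;
    }

    public MarkerDefinition[] markersInEnvironment;

    // Call this whenever a marker is (re)detected, passing its world pose as
    // reported by the tracking provider (AR Foundation, Vuforia, ...).
    public void OnMarkerTracked(string imageName, Pose detectedWorldPose)
    {
        foreach (var marker in markersInEnvironment)
        {
            if (marker.imageName != imageName)
                continue;

            // Solve for the environment root so that the authored marker pose
            // lands exactly on the detected pose:
            //   detectedRotation = rootRotation * marker.localRotation
            //   detectedPosition = rootPosition + rootRotation * marker.localPosition
            var rootRotation = detectedWorldPose.rotation * Quaternion.Inverse(marker.localRotation);
            var rootPosition = detectedWorldPose.position - rootRotation * marker.localPosition;

            environmentRoot.SetPositionAndRotation(rootPosition, rootRotation);
            return;
        }
    }
}
```

Calling something like `OnMarkerTracked` every time any of the markers is (re)detected is also what gives you the "re-anchoring" behaviour mentioned further down.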

With regards to ARWorldMaps: MARS will work with AR Foundation; you might want to check the AR Foundation samples (GitHub - Unity-Technologies/arfoundation-samples: Example content for Unity projects based on AR Foundation), specifically ARWorldMapController.cs, which performs that logic in the example.

Do bear in mind that ARWorldMap is an ARKit-specific feature.
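If it helps, the core of that sample roughly boils down to a save/load pair like the following sketch (condensed and simplified from ARWorldMapController.cs; the class name, file path, and error handling here are illustrative, and it only compiles for iOS):

```csharp
#if UNITY_IOS
using System.Collections;
using System.IO;
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARKit;

// Capture the current ARWorldMap, write it to disk, and apply it again later
// to relocalize the session. ARKit/iOS only.
public class WorldMapPersistence : MonoBehaviour
{
    public ARSession session;

    string savePath => Path.Combine(Application.persistentDataPath, "session.worldmap");

    // Run with StartCoroutine(SaveWorldMap()) once tracking is established.
    public IEnumerator SaveWorldMap()
    {
        var subsystem = session.subsystem as ARKitSessionSubsystem;
        if (subsystem == null || !ARKitSessionSubsystem.worldMapSupported)
            yield break;

        var request = subsystem.GetARWorldMapAsync();
        while (!request.status.IsDone())
            yield return null;

        if (request.status.IsError())
        {
            request.Dispose();
            yield break;
        }

        var worldMap = request.GetWorldMap();
        request.Dispose();

        var bytes = worldMap.Serialize(Allocator.Temp);
        File.WriteAllBytes(savePath, bytes.ToArray());
        bytes.Dispose();
        worldMap.Dispose();
    }

    public void LoadWorldMap()
    {
        var subsystem = session.subsystem as ARKitSessionSubsystem;
        if (subsystem == null || !File.Exists(savePath))
            return;

        var data = new NativeArray<byte>(File.ReadAllBytes(savePath), Allocator.Temp);
        if (ARWorldMap.TryDeserialize(data, out ARWorldMap worldMap))
            subsystem.ApplyWorldMap(worldMap);   // the session relocalizes against the saved map
        data.Dispose();
    }
}
#endif
```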

thanks @jmunozarUTech for the reply.

That's what I thought would be the case, but I've also seen what @DanMillerU3D prototyped a few years back: x.com

This seems to show that you could use an image target to launch the AR content, and once tracking of the image is lost, world tracking would kick in?

Also, @Jono_Unity's example above shows an image marker as the trigger. The image wouldn't be in view while the user moves around the real environment.

I feel I’m close to coming up with a solution. Any support would be appreciated.

thanks

Ahhhh ok ok, now I understand what you meant.

Yes, you could definitely do it with that approach: get the initial anchor with an image target, place the augmented content based on that anchor, and start your experience.
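In plain AR Foundation terms (outside of MARS proxies), that flow could look roughly like this sketch - the reference image name "ChurchInterior" and the prefab reference are placeholders:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Use the image target only to place the content once, attach an anchor at
// that pose, and let world tracking keep it there after the image leaves view.
public class ImageTriggeredPlacement : MonoBehaviour
{
    public ARTrackedImageManager imageManager;
    public GameObject contentPrefab;   // e.g. the scanned church interior + AR content

    GameObject m_Content;

    void OnEnable()  { imageManager.trackedImagesChanged += OnTrackedImagesChanged; }
    void OnDisable() { imageManager.trackedImagesChanged -= OnTrackedImagesChanged; }

    void OnTrackedImagesChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var image in args.added)
        {
            if (m_Content != null || image.referenceImage.name != "ChurchInterior")
                continue;

            // Instantiate the content at the marker's pose and anchor it there.
            // The ARAnchor component asks the session to keep this pose stable
            // even after the image is no longer recognized.
            m_Content = Instantiate(contentPrefab, image.transform.position, image.transform.rotation);
            m_Content.AddComponent<ARAnchor>();
        }
    }
}
```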

The thing is that you might get some drift depending on the device you use and how long the session runs. This happens because the SLAM (positioning) algorithms used in ARKit / ARCore are not closed-loop (for performance reasons).

Hence the reasoning in my first post, where I mentioned that several images would "re-anchor" your content in case some drift happens.

That being said, try it out. I don't see why it would fail, but do keep in mind that there might be a little bit of drift :).

For more info about closing the loop, check this out: https://blog.rabimba.com/2018/10/arcore-and-arkit-SLAM.html - it's a little bit old, but worth the read if you want to understand what's happening under the hood.

Sorry for the dumb question, but what exactly is this demo showing? How is it supposed to be used? More importantly: does it show us how to do something that can actually be done?

The theory seems to be that a real person in this real version of this environment could walk up and scan a marker that is in the real world. But in the example, the marker is the size of a swimming pool for some reason. Who is supposed to be scanning this code and where are they supposed to stand? Or does this actually only work in the Unity editor player under perfect conditions?

Assuming a person were to get the code scanned somehow, we’d hope they can walk some yards away to a light pole and see a virtual light correctly aligned to it. But it sounds like from what I’m reading here that this is not possible and you’ll need to actually align everything to nearby image anchors. So MARS is providing a slightly better interface for working with all of this, but it doesn’t seem like it’s actually doing anything more than image anchors under the hood?

It would be immensely helpful to have a demo of this system working for this use case. This would inform us a bit better than the demo which seems to be more theoretical than functional.

Hey samgarfield,
You're right that the settings on the marker here are not what you'd really want for deploying this experience - yep, for starters you would want to use a reasonable marker size; it's blown up here mostly for readability in the demo.

You've hit on something that we've been planning to improve about this demo: for a space as large as the one shown here, you really wouldn't want to rely on a single marker, but instead use a few markers spaced around the location and use each marker you find to refine the pose of the matched scene, rather than the one-to-one matching it's set up with right now. As you pointed out, using a single marker in a large space would inevitably lead to large drift in the content as you move farther away from that marker.

It sounds like you’re working on things which would really benefit from the improvements we’ve been working towards for location-based experiences – if you’re open to it, we’d be happy to jump on a call and talk through your project and how we can nail supporting what you need in our next updates :slight_smile:


Hi!

Is it possible to replicate the functionality of Vuforia's Area Targets? With Area Targets, we can simply import a Matterport scan of a room into Unity and place 3D objects in it. Once the user is inside the room, the tracking works really well and the 3D objects are positioned according to where they were placed in Unity, with no need to scan a marker or anything. The only downside is… the price. It's something like $24k per year.

Hey CreepyInpu, alas, we don’t support Vuforia Area Targets out of the box in MARS, though if you’re particularly motivated, it would be possible to write a provider (Software development guide | Unity MARS | 1.3.1) for it.
In a future release (can’t give a specific timeframe at the moment), we intend to provide a general purpose persistent anchor workflow which would give you a similar authoring experience to what you’re describing.
To help us understand what you & others need:

  • Would an intermediate solution which is not cross-platform be helpful to you? In other words, in your project, are you targeting one platform or multiple?
  • Are you currently using Vuforia Model Targets or another solution (Apple WorldMaps, Azure Spatial Anchors, etc)?

Thanks!


Hi,
Thank you very much for the information. A question: have persistent anchors been implemented in the current version of MARS?

Hello @EVASNGULAR, they have not. The best option would be to implement your own provider for it.

Hi @jmunozarUTech , speaking of updates, is MARS still in active development? In other words, does Unity have some kind of public Roadmap or vision, with planned features to be released in the coming months?

The reason I ask is that I just signed up for a year of MARS, and I'm pretty underwhelmed with the lack of information and tutorials/videos available online. It seems that only a few videos (mostly overviews) were released in Summer 2020, and there hasn't been major news since, beyond version 1.3 coming out and a Companion app that's still in Beta.

Will there ever be a way to use MARS with Microsoft Mixed Reality Toolkit (MRTK) or Azure Spatial Anchors, or have a world locking coordinate system, like MRTK has now?


Hey @diodedreams , yes! MARS is still in active development, with 1.4 closing out QA now ahead of its release very soon. That update will add major new meshing functionality, along with a ton of smaller improvements and fixes. The general Unity XR roadmap is here - XR roadmap | Unity - and does have a MARS tab showing 1.4/meshing, though I’m seeing now that we haven’t publicly stated next major features after 1.4. I can’t say too much in this venue, but to your question and the topic of this thread, yes, persistent / spatial anchors are a huge priority for us going forward!


Hello guys, I've been excited about where MARS could/would be heading for quite some time, and I've been following your progress closely.
The fact that Azure Spatial Anchors and Vuforia Area Targets have both offered solutions for large-scale user positioning while offering tenfold tracking improvements makes me wonder if MARS might just be missing the boat.
With Lighthouse SRDK more recently offering yet another alternative to these approaches, one might wonder what the hold-up is on your end and how many competing solutions you're willing to see take the lead…
@anon_79074062 I think you're right to have made persistent/spatial anchors in MARS a huge priority, but unless it comes out in the next couple of months I'm afraid all that hard work might have been for nothing. I would truly be sorry to see that happen, as I think MARS had some great features from the beginning.

@stereocorp3d I completely agree with your sentiments. I have buyer's remorse for committing myself to a year of MARS, given that Microsoft's Azure Spatial Anchors and Mixed Reality Toolkit (MRTK), as well as Vuforia's Area Targets, are all robust solutions that are not addressed by MARS.

@Jono_Unity On November 1st, you said that “persistent / spatial anchors are a huge priority for us going forward”. Any update on this? The Roadmap hasn’t seen any love since then: XR roadmap | Unity