AR upload platform based on Unity 2020 LTS URP

The AR tree has a lot of branches. The branch I worked in enhanced ads with interactive 3D product displays. When we were tasked with creating these award-winning AR print ads at high volume, there was not much available: no viable "YouTube for AR" that provided a universal viewer and allowed us to use our 3D software of choice, Unity, and Vuforia to fulfill the need.
So we built an Augmented Reality Browser and the cloud-based upload platform that does this. Import 3D into Unity, use the event system and event triggers to build interactivity, and upload AssetBundles. The AR experiences are live in a minute, and Reality Browser supports marker-based, marker-less, deep-links and display at real-world scale. There aren't a lot of tutorials on code-less interactivity, so we continue to post How-Tos to help creators onboard quickly.
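For Unity artists who haven't built AssetBundles before, the build step looks roughly like this. This is a minimal Editor-script sketch, not ARConnex's actual tooling; the menu path and output folder names are placeholders I made up:

```csharp
// Editor-only sketch: build the iOS and Android AssetBundles an upload
// form like this would expect. Menu path and folders are illustrative.
using System.IO;
using UnityEditor;

public static class BundleBuilder
{
    [MenuItem("ARConnex/Build AssetBundles (iOS + Android)")]
    public static void BuildBoth()
    {
        Build("AssetBundles/iOS", BuildTarget.iOS);
        Build("AssetBundles/Android", BuildTarget.Android);
    }

    static void Build(string outputPath, BuildTarget target)
    {
        Directory.CreateDirectory(outputPath);
        // Builds every bundle defined in the project's AssetBundle settings.
        BuildPipeline.BuildAssetBundles(
            outputPath,
            BuildAssetBundleOptions.ChunkBasedCompression,
            target);
    }
}
```

Drop a script like this in an Editor folder, assign your prefab or scene to an AssetBundle name in the Inspector, and the two per-platform bundles land in the output folders ready for the upload form.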

It’s not a new branch on the AR tree, but more like some 2x4s nailed into the trunk to help advertisers and marketers utilize AR, and to bring Unity artists an alternative, end-to-end path for building AR client work.

We just launched and are open to adding any AR experiences that Unity artists create into the Reality Browser's AR Showcase, to promote your work. Thoughts, comments or questions are always welcome.

Some live screen captures from Reality Browser.

The current version of the Reality Browser is built in 2020.3.11. We just ran an array of tests uploading prefab AssetBundles created in 2020.3.23, and all work as expected. So any LTS version from 2020.3.11 to 2020.3.23 is viable for uploads.

We also decided to kill the 30-day trial and make the ARConnex platform free for creating AR.

We will build a Reality Browser in 2020.3.23 to ensure compatibility works both ways before updating, but avoiding the "must use version .xx" requirement to create uploadable AssetBundles is a win. Portal shaders, standard and transparent video triggered in 3D bundles, and a ShadowCatcher created in Shader Graph 10.5 all work fine.

I have new tutorials on the way highlighting some of the 14 drag-and-drop helper components and AR shaders in the starter package, but here's a code-less interactivity tutorial highlighting use of the event system and Animator.

Hi all,
We updated the Platform and Reality Browser to 2020.3.23, and AssetBundles built from 2020.3.11 through 2020.3.23 are supported.
With that, we have added Unity's Live Capture package (version 2.0.0-pre.3) to the Reality Browser and the ARConnex upload platform.

Record in Unity using Live Capture, add the take to a Timeline, save the scene as an AssetBundle and upload it to your account. Playback works on both the Android and iOS versions of the Reality Browser. Here's a quick test with the package's sample character, but any character with the ARKit blendshapes would work.

Just added a new tutorial to our How To page highlighting the reusable helper components that are included in the ARConnex Starter package. We're also making this basketball shooter example prefab available as a .unitypackage.

Drag-and-drop components for runtime parenting of GameObjects to the Reality Browser's AR camera, prefab instancing, and dynamic collision/trigger event listeners help speed up AR production and fill in some useful gamification extensions.
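As a rough idea of what a "runtime parenting" helper of this kind might look like, here is a hypothetical sketch. It is not the actual ARConnex component; the class name, tag lookup, and fields are all my own assumptions:

```csharp
using UnityEngine;

// Hypothetical sketch of a drag-and-drop helper that reparents its
// GameObject under the AR camera at runtime. The real component's name
// and lookup method may differ; finding the camera by tag is an assumption.
public class ParentToARCamera : MonoBehaviour
{
    [SerializeField] string cameraTag = "MainCamera"; // assumed tag
    [SerializeField] bool keepWorldPosition = false;

    void Start()
    {
        GameObject cam = GameObject.FindWithTag(cameraTag);
        if (cam != null)
            transform.SetParent(cam.transform, keepWorldPosition);
    }
}
```

The appeal of components like this is that the creator only drags them onto a GameObject in the Editor; no code is written, and the wiring happens when the bundle is loaded on device.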

Just wondering, regarding the AR 3D experience type: are you going to add support for file formats other than unitypackages?
FBX and OBJ formats, for example.
Myself, I made a similar app recently, as a small-scale project for a small user group of architecture students. My users are able to upload both unitypackages and FBX files together with bitmaps, configure them in a WebGL player, and then download to Android phones for the AR experience.
The unitypackage format gives me the possibility to bake lighting easily using the Bakery plugin.

Hi tomekkie2,

Actually, we don't support .unitypackage. The upload platform accepts iOS and Android versions of either prefab AssetBundles or scene AssetBundles. This way they are properly compressed and optimized for the device they are viewed on. The Reality Browser automatically detects which type of AssetBundle is delivered from our cloud and processes it accordingly at runtime. 95% of the AR experiences I have made for the Reality Browser, regardless of complexity, have been prefab AssetBundles. The only difference with scene AssetBundles is that creators can use Unity's baked lighting and Timeline as needed.
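For the curious, Unity itself makes that scene-vs-prefab detection straightforward via the AssetBundle.isStreamedSceneAssetBundle property. Here's a minimal sketch of the branching; the file path and the first-asset lookup are illustrative, not the Reality Browser's actual code:

```csharp
using System.IO;
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: after a bundle is downloaded, branch on its type.
// Scene bundles are loaded additively; prefab bundles are instantiated.
public static class BundleLoader
{
    public static void LoadDownloadedBundle(string localPath)
    {
        AssetBundle bundle = AssetBundle.LoadFromFile(localPath);
        if (bundle == null) return;

        if (bundle.isStreamedSceneAssetBundle)
        {
            // Scene bundle: load its first scene additively by name.
            string scenePath = bundle.GetAllScenePaths()[0];
            SceneManager.LoadScene(
                Path.GetFileNameWithoutExtension(scenePath),
                LoadSceneMode.Additive);
        }
        else
        {
            // Prefab bundle: instantiate the first prefab it contains.
            string assetName = bundle.GetAllAssetNames()[0];
            GameObject prefab = bundle.LoadAsset<GameObject>(assetName);
            Object.Instantiate(prefab);
        }
    }
}
```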

The reasons I don't see the ARConnex platform adding upload support for FBX and OBJ with WebGL layout capabilities, at least in this product:
A. The loss of device-specific optimizations that happen when the AssetBundles are built. I had a test experience with 1.4 GB of recorded keyframe data that resulted in a 30 MB AssetBundle. (golf claps)
B. The loss of ease when creating AR experiences designed to be at a specific size or 1:1 life-size.
C. But mostly, the loss of a ton of features that help creators build more engaging, interactive AR experiences.
D. Redundancy, as Unity supports FBX and OBJ, as well as compatibility with URP and Shader Graph.

With Unity's code-less workflow capabilities and our video tutorials that quickly onboard new CAD and 3D artists with Unity, they can use their 3D tool of choice to model and animate, while utilizing Unity to add engaging interactivity and build optimized AR experiences without knowing how to program or optimize.

Thanks and all the best,


:sweat_smile: Of course I meant AssetBundles, not .unitypackages, in my previous post (everywhere I said .unitypackages :)). My apologies!!!

I tried to set up a test experience with an ARConnex account.
I was interested in Android only, but it would not let me proceed until I uploaded just anything for the iOS bundle.
I also had to upload the image target, but I wanted to test it on an AR plane only.
My AssetBundle scene appeared over the image, but I could not manage to make it appear over a plane, like the chair example.
For some reason I could not make my scene appear over the image a second time; the progress indicator kept circulating and circulating and I had to restart to make it go again.

It works with shaders from the URP package, but it would be interesting to know whether it would also work with custom shaders, like a cross-section shader for example.

The .obj and .fbx formats are common and give a user of modelling software the option to set up a predefined AR experience with no need to open Unity and create an AssetBundle.

Cool and happy to help.

RE: I was interested in Android only, but it did not let me go until uploading just anything for the iOS bundle.
For testing a single platform you can just upload the same bundle. For production, adding both will ensure users on both platforms get a good user experience.

RE: I had also to upload the image target, but I wanted to test it on an AR plane.
Yes, this is to support both possible AR distribution methods: print or digital.
For targeting GroundPlane, you can upload any random photo. Eventually we will add a checkbox to bypass this and auto-generate a random image marker.

To place on the floor or a tabletop, switch to GroundPlane mode in the Reality Browser menu, and the last experience loaded is ready to be placed. Seeing it once helps; jump to 40 seconds in this video.

Reality Browser supports image recognition via CloudScan, as you have seen, but direct loading into GroundPlane via the mobile deeplink, Showcase or AR Communicator is also available. A tap on a deeplink will open the Reality Browser, jump to GroundPlane mode and load the experience, ready to be placed.

The Showcase and AR Communicator work in a similar fashion.
The difference is that Showcase is public, while AR Communicator uses authentication, so it's private.
When logging into the Reality Browser with an ARConnex account, the AR Communicator will show your ARGroup(s) and any ARGroup(s) you have been invited to follow.

Each ARGroup has its own unique team list, and you can assign member roles of Editor/Publisher to give access to upload/publish. The default role is "Audience Member", who can load from AR Communicator.

Say you invited a group of students to your ARGroup. They are the only ones that see or can load the AR experience you have set to "Push To Communicator".

The toggle for "Push To Communicator" is on the home screen for your ARGroup, and inviting members is in the settings for your ARGroup, under membership. You can change the experience that is pushed to Communicator anytime, but only one can be pushed at any given time.

Communicator was originally designed to support prelaunch proofing and approval for brands, creators and agencies, yet with deeplinks (which can connect experiences together) the use cases have been broad...
-performers delivering one-on-one home performances using transparent video
-teachers delivering AR to select student/faculty groups
-businesses delivering proprietary 3D visuals to partners
-product manufacturers that use the deeplink helper component to build 3D launch menus and AR homepages

If you're on a mobile device, you can try the deeplink examples on the Reality Browser AR Samples page.
They are for mobile distribution or linking AR to AR, and work with HTML "href" tags just like any http:// link.

Thanks again, and happy to help anytime.


Hi tomekkie2,

Thanks for testing, I have added some views and experiences to your ARGroup.

Also a note from the QuickStart.pdf when using "Scene" Assetbundles.
IMPORTANT: Prior to building scene-based AssetBundles, deactivate the Event System, Physics Raycaster and Audio Listener components, as they are already included on the ARCamera in the Reality Browser application.

We will be removing those items automatically in the next Reality Browser update.
For now just deactivate them, then increment your scene name and build a new AssetBundle with the same name.
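If you'd rather not hunt those components down by hand each time, a small helper like the following could do the deactivation before the build. This is a sketch, not part of the official package; the class and method names are my own:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Sketch of the pre-build cleanup step: disable the components that would
// duplicate the ones already on the viewer app's ARCamera. Call this
// (from an Editor script or menu item) before building a scene AssetBundle.
public static class SceneBundleCleanup
{
    public static void DisableDuplicates()
    {
        foreach (var es in Object.FindObjectsOfType<EventSystem>())
            es.gameObject.SetActive(false);
        foreach (var rc in Object.FindObjectsOfType<PhysicsRaycaster>())
            rc.enabled = false;
        foreach (var al in Object.FindObjectsOfType<AudioListener>())
            al.enabled = false;
    }
}
```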

All of our tutorials have shown prefab AssetBundles, but a tutorial specific to scene-based AssetBundles will be up soon and will cover these points more clearly, so apologies.

We are also completing tutorials on the "touch action" input seen on the creation forms. They support feature extensions, currently allowing video uploaded to the platform to stream onto a quad.



Thanks, it worked.


Would tagging the ARCamera with the "EditorOnly" tag, to prevent it from getting into the AssetBundle, work? I guess this camera is needed only to test the interaction in the editor.

My scene from the AssetBundle appears over the target image, but for some reason I didn’t manage to place it on a plane. I am getting the “AR experience placed” message, but the progress indicator keeps on rotating and I couldn’t see anything placed.

Do you have any examples of placing large-scale objects (in the range of, say, 10x10 m or more) over ARKit or ARCore planes? I have looked at the exhibition stand example and I did not see a possibility to scale it up to natural scale, so a client could walk around and check whether the shelves or posters on the stand are at the proper height, for example.
I see a user can fire a rotating or flip animation on the stand.
Can the user move the stand once it is placed, to fine-tune its position in relation to orientation points on the ground?
I have just noticed in the pdf file that the helper components include “move with slider”, but it would be easier to operate if these components were more like the AR Foundation Samples, where they work on touches.

Yes, three methods will work. (The progress indicator loops when more than one AudioListener or EventSystem exists.)

-Tag the camera as "EditorOnly"
-Disable the AudioListener and EventSystem
-Or disable the camera that has the AudioListener and EventSystem

Iterate your scene name and the matching AssetBundle name so Unity does not pull from its cache, and you should be good.

As for scale, yes.
There are two methods, depending on whether you would like the 3D scene displayed at a smaller scale when using a target image or not.

For GroundPlane:
Size the scene in Unity based on 1 unit equals 1 meter.
Example target of a full-scale trade booth: scan it and switch to GroundPlane to place.

Also, in GroundPlane the Reality Browser has a scale slider to allow the end user to scale from 0.2x to 10x. If you want to ensure the scene is always viewed at 1:1 scale, you can override this capability and hide the slider using the "HideStandard_UI_Elements" helper component. Just add it to any GameObject and choose which UI elements to hide (scale, spin, mute, or flip).

To support both CloudScan and GroundPlane sizing:
We create a toggle button that fires an Animator event trigger.
Start at a scale that fits on your target image, and when the toggle is tapped the animation plays and scales it up to full size. We duplicate the clip/state in the Animator and set its speed to -1 to play in reverse, and use the same event trigger to scale back down.
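The toggle described above can be sketched roughly like this. It's a hypothetical illustration, not the actual helper component; the state names "ScaleUp" and "ScaleDown" are my own, and the assumption is that "ScaleDown" is the duplicated state with its speed set to -1:

```csharp
using UnityEngine;

// Sketch of a scale toggle: one "ScaleUp" state, plus a duplicate state
// ("ScaleDown") whose speed is set to -1 in the Animator. Wire Toggle()
// to a UI button's OnClick event.
public class ScaleToggle : MonoBehaviour
{
    [SerializeField] Animator animator;
    bool atFullScale;

    public void Toggle()
    {
        if (atFullScale)
            animator.Play("ScaleDown", 0, 1f); // speed -1 state: start at end, run backward
        else
            animator.Play("ScaleUp", 0, 0f);   // normal state: play forward from start
        atFullScale = !atFullScale;
    }
}
```

Reusing the same clip in a reversed duplicate state keeps the up and down motions perfectly symmetric without authoring a second animation.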

Example target of an MRI machine using this technique.

Here is an alternative version of the trade exhibit that does not hide the GroundPlane scale slider.

Does the AssetBundle manifest file get uploaded as well when the AssetBundle file gets selected?

But the progress indicator also loops after a pure prefab (not a scene) has appeared over the image.

And then I can’t manage to place that prefab on the plane.

No, the manifest file is not needed.

Can you check that the prefab name and AssetBundle name match,
and that the ARCamera is not inside the prefab?

If you can, export your prefab or scene as a .unitypackage and I will take a look and resolve it.
Depending on size, you can email it to us or upload it into the iOS AssetBundle slot of one of your experiences.

The manifest files are always generated by default to help manage the caching, so an option to use them instead of bundle renaming would be nice.

I enclose a prefab package which I can’t get to work.

I guess you can include a number of prefabs in a single AssetBundle, and make that number of experiences from it, with various prefab names.

An option to skip the prefab name in the form would also be helpful; the first prefab would then get selected.

7798476–985260–prefab_abc00102.unitypackage (1.73 MB)

Yes, I agree re: the manifest. The incremental naming came from workflow research and discussions with agencies, to better keep track of master .unitypackages and the associated AssetBundles artists produce. We're looking at a hashing solution to work with both.
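For context, Unity's own web-request API already supports hash-based caching, which is roughly the direction a hashing solution could take. A sketch under my own assumptions (the URL and where the hash comes from are placeholders; the real platform's approach may differ):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: download an AssetBundle with a manifest hash so Unity's cache
// decides whether to re-download, with no bundle renaming needed.
public class HashedBundleDownload : MonoBehaviour
{
    public IEnumerator Download(string url, Hash128 manifestHash)
    {
        // A cached copy with a matching hash is served from disk;
        // a changed hash forces a fresh download.
        using (var req = UnityWebRequestAssetBundle.GetAssetBundle(url, manifestHash, 0))
        {
            yield return req.SendWebRequest();
            if (req.result == UnityWebRequest.Result.Success)
            {
                AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(req);
                // ... hand the bundle off to the viewer ...
            }
        }
    }
}
```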

The package sent looked perfect.

I deleted any existing AssetBundles in the AssetBundle Browser,
incremented the name of the prefab to "prefab_abc00103",
created an AssetBundle called "prefab_abc00103",
and built iOS and Android bundles using 2020.3.23 (but have tested 2020.3.11 through 2020.3.23).

In your ARGroup you will see a new experience called Prefab Table.
Also updated the "Table" experience.

Here are screen caps from iOS and Android, as well as the package and bundles.

7800420--985659--RealityMoment64612.jpg 7800420--985662--RealityMoment40670.jpg 7800420--985665--RealityMoment72280.jpg (2.97 MB)
7800420--985653--prefab_abc00103.unitypackage (1.78 MB)

Guessing at possible causes...
Note: when I mentioned that I deleted any existing AssetBundles in the AssetBundle Browser,
I was referring to their "reference", not the built bundle files. This avoids the possibility of warnings which may cause a build issue.

7800474--985668--Screen Shot 2022-01-11 at 4.42.33 PM.png 7800474--985671--Screen Shot 2022-01-11 at 4.43.04 PM.png

It's also possible that prefab_abc00102 was uploaded earlier and the older version was trying to load.

Or, possibly, a change made to the prefab in the editor was not applied using the override.

Whenever anything like this occurs, renaming the prefab or scene along with the AssetBundle to increment them, and updating the "3D AssetBundle Name" input field to match, will resolve it.

And as long as the AudioListener and EventSystem are not active, scenes will work, but scene bundles are only really needed if baked lighting, Timeline or Face Capture are being used.

Hope this helps a bit, and the table looks great, btw!


Yes, it is working.
Maybe this table was not a good example, because of its shape. I might not notice it when being too close to the screen, or when sitting on the chair. I have duplicated the prefab and replaced the table with a stool, and that works well on both the floor as well as the image.

Can depth and light estimation from AR Foundation be used? When depth is working, the real objects in the foreground should hide the instantiated objects. With light estimation enabled, the lighting should come from the main light directions. The example prefab uses lights included inside the prefab.

Using depth, the stool on the image should get hidden by the roll standing in the foreground.

My client has requested that the buildings he instantiates should get hidden by cars parked on the pavement.


Great, glad I could help!

Integration of AR Foundation feature sets is on our roadmap, but there is no firm release date yet. When it does release, it will be component-based, so creators can decide which features to add to their experiences as needed. The Reality Browser currently supports a wider array of devices than ARCore and ARKit, so with those features enabled the supported device range narrows a bit.


That means I am working (as my side job) on another AR branch, not duplicating someone else’s work. A common feature is that the user, using a mobile app, places content previously uploaded to the server via a webpage form on their desktop computer.