We’d like to show our vision-based AR tracking library, String™, in action.
The library is cross-platform, yet highly optimised for mobile platforms in general, and iPhone in particular. A typical video frame on an iPhone 3G, with one marker in view, takes around 15 ms to analyse. The tracker allocates only about 120 KB of memory at startup (plus about 1 KB per tracked image) and never allocates memory after startup. Capturing and displaying video frames takes up some additional resources.
The markers are simply images you drag and drop into the Unity project. They do need to have a high contrast outline for tracking, but other than that they can be pretty much whatever you want. The reason we use rectangular image markers is that it’s unbeatable for performance. Our tracker leaves most of the CPU and GPU time for your app.
For each discovered marker image, the API tells you its transform, which image it is and its relative colour, allowing you to mimic real-world lighting conditions in your scene.
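A rough sketch of the kind of per-marker data you get back each frame and how you might apply it in Unity (the struct and field names here are illustrative only, not the final String API):

    using UnityEngine;

    // Hypothetical stand-in for the per-marker data the tracker reports each frame;
    // the actual String API names will differ.
    public struct MarkerInfo
    {
        public int imageIndex;        // which of your loaded marker images was found
        public Matrix4x4 pose;        // camera-relative transform of the marker
        public Color relativeColour;  // apparent brightness/tint of the marker
    }

    public class MarkerFollower : MonoBehaviour
    {
        public Light sceneLight;      // optional light used to mimic real-world lighting

        // Call this once per video frame for each detected marker.
        public void ApplyMarker(MarkerInfo marker)
        {
            // Place and orient this object on the marker.
            transform.position = marker.pose.GetColumn(3);
            transform.rotation = Quaternion.LookRotation(
                marker.pose.GetColumn(2),   // marker forward axis
                marker.pose.GetColumn(1));  // marker up axis

            // Tint the scene light to match the marker's apparent colour.
            if (sceneLight != null)
                sceneLight.color = marker.relativeColour;
        }
    }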
The demos below run at the camera’s maximum framerate on both the iPhone 3G and the iPhone 4 (15 FPS and 30 FPS respectively). You can even track many markers at once while maintaining good framerates.
Unity has been our test bed for end-user app development from the start; however, we’re also targeting other platforms such as openFrameworks. The library is very easily integrated into new and existing Unity projects for iPhone and desktop. If you can use Unity, you can use our library. Android support is coming soon.
Here’s our 3D augmented reality drawing app called Scrawl™:
Amazing! It’s nice to finally see a licensed version available for Unity iPhone. There’s another company out there that will also soon offer it to Unity developers, but so far only for Android.
Jojan has updated his post above. As such, the questions below have been answered:
Could you provide a little more detail on using fiducials? This might sound dumb, but I assume that in the demo above it only searches for the black border shape with the white border around it? The design within was just for aesthetics, right?
Also, does it allow for custom fiducials, a provided set, or is everything just optimised for the one shown in the demo? And if it supports different ones, can it support multiple per app? As an example, imagine an A3 or A2 poster with four smaller (and all different) fiducials, one in each corner, each one a reference point in the virtual 3D world (NW, NE, SW, SE). So if the device is close to the poster and only the north-west fiducial is being captured, it knows where to display my 3D overlaid object “centred” on the poster (and we’ll only see it partially, of course). Basically, the idea is that the user can move in close, and therefore it’ll need multiple different reference points.
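To sketch what I mean in Unity terms (a rough, hypothetical snippet; the marker-index and pose parameters are made-up names, and the offsets assume an A2 poster):

    using UnityEngine;

    // Hypothetical sketch: whichever corner fiducial is currently in view,
    // place the content at the poster's centre by applying a known offset.
    public class PosterAnchor : MonoBehaviour
    {
        public Transform content;   // the 3D object to centre on the poster

        // Offset from each corner marker to the poster centre, in the marker's
        // local space (metres). Order: NW, NE, SW, SE; values here assume an
        // A2 poster (420 x 594 mm) and are only examples.
        public Vector3[] cornerToCentre =
        {
            new Vector3( 0.21f, 0f, -0.297f),  // NW
            new Vector3(-0.21f, 0f, -0.297f),  // NE
            new Vector3( 0.21f, 0f,  0.297f),  // SW
            new Vector3(-0.21f, 0f,  0.297f),  // SE
        };

        // markerIndex says which corner was detected; markerPose is the
        // camera-relative pose the tracker reports for it (made-up names).
        public void OnCornerDetected(int markerIndex, Matrix4x4 markerPose)
        {
            content.position = markerPose.MultiplyPoint3x4(cornerToCentre[markerIndex]);
            content.rotation = Quaternion.LookRotation(
                markerPose.GetColumn(2), markerPose.GetColumn(1));
        }
    }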
But very exciting, should be interesting to see what people make with this.
This is really interesting, but why does the link go to Facebook? Can’t you provide a direct link for people who don’t want to use Facebook?
(I’m on Facebook, btw, but I am becoming increasingly pissed off at their attitude towards people’s privacy concerns, so I find myself using it less.)
Looks awesome. I know that my daydreams about expressive applications of AR have often led to communal graffiti and primitive building applications. Kudos on executing on it, and keep up the good work!
AdriaanZA: The markers are simply images you drag and drop into the Unity project. They do need to have a high contrast outline for tracking, but other than that they can be pretty much whatever you want. The reason we use rectangular image markers is that it’s unbeatable for performance. Our tracker leaves most of the CPU and GPU time for your app.
For each discovered marker image, the API tells you its transform, which image it is and its relative colour, allowing you to mimic real-world lighting conditions in your scene.
sonicviz: We understand, but for the next couple of weeks while we finish our new website, Facebook and Twitter remain the best options for connecting with us and for following news updates and announcements.
Here are some recent projects I’ve been working on with String™ over the past few weeks, which show the unbelievable power of this package. Considering we’ve been able to pull all of this off in just a few weeks, that should give everyone an idea of just how easy this tool is to use. Enjoy:
Hi there!
Good luck with the release and everything
I’ve been working on AR with both String and Qualcomm’s SDK. Qualcomm needs you to upload your target to be analyzed and then produces a file for you to use in your project! Yours, on the other hand, is much easier and faster to work with: your markers are simply .png files with simple rules to follow! (You rocked it there.)
I want to give the user the ability to take a picture of something and then use that picture as a marker, instead of printing some special markers!
In this case:
Qualcomm doesn’t have any rules for borders (near-white surround, black outline, etc.), so you could use any picture you want, but the problem is that you’d have to upload the image to be analyzed first!
Yours could easily work here, as I could simply take a photo of whatever I want, load it into Unity as a String AR marker, and voilà! But it’s not that easy! The image has to follow your rules (the black border and white border thing!).
So in theory yours seems more workable, but I’m not sure whether we could forget about the border rules and let the user take a photo of whatever he/she wants?
@zipper Because you can modify the reference image to add the black and white border… but you still have to detect the black and white border in the real-world image you are getting from the camera at runtime. If it weren’t necessary to have the black and white border, it would be possible to do what I was suggesting in my previous topic… I think it isn’t possible, but I just wanted to ask.
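For what it’s worth, the digital half of that is easy enough: you could composite the white surround and black outline onto the captured photo before loading it as a marker. Here is a rough Unity sketch with made-up border sizes; the real-world target in front of the camera would still need the same border, which is the sticking point.

    using UnityEngine;

    // Rough sketch: frame a user-captured photo with a white surround and a black
    // outline before loading it as a marker image. Border sizes are arbitrary.
    // This only fixes the digital reference; the printed/real-world target still
    // needs the same border for the tracker to find it.
    public static class MarkerBorder
    {
        public static Texture2D AddBorder(Texture2D photo, int whitePx = 40, int blackPx = 20)
        {
            int pad = whitePx + blackPx;
            var result = new Texture2D(photo.width + 2 * pad, photo.height + 2 * pad,
                                       TextureFormat.RGB24, false);

            // Fill the whole texture white first.
            var fill = new Color[result.width * result.height];
            for (int i = 0; i < fill.Length; i++) fill[i] = Color.white;
            result.SetPixels(fill);

            // Draw the black outline band between the white surround and the photo area.
            for (int y = whitePx; y < result.height - whitePx; y++)
                for (int x = whitePx; x < result.width - whitePx; x++)
                {
                    bool insidePhotoArea = x >= pad && x < result.width - pad &&
                                           y >= pad && y < result.height - pad;
                    if (!insidePhotoArea)
                        result.SetPixel(x, y, Color.black);
                }

            // Copy the photo into the middle and upload the changes to the GPU.
            result.SetPixels(pad, pad, photo.width, photo.height, photo.GetPixels());
            result.Apply();
            return result;
        }
    }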