Starting AR development help using Unity3D

Hi,
I’ve just started using Unity and I’m new to AR.
My goal is to overlay a video capture of a person onto the real-world environment. So not a 3D construct or a cartoon, but video of an actual person. The idea is that you could walk into a room and someone would already be “there” (a historic museum experience).
My question is: what is the best way to get to that point? What are the key concepts of Unity to master? Are there any external libraries I should investigate?
The user platform is flexible. The application is just for a small museum, so it would most likely loan headsets to visitors.
Please be gentle…

@rainChu Oops! It’s not a commission or a job; it’s just a goal. Sorry if that wasn’t clear. I’m just an engineer who’s interested in learning how to develop that ‘type’ of AR experience. I see different SDK plug-ins available for the Unity framework, like Vuforia (built in) or Wikitude. Do you have any experience with either of these? Can you point me towards anything that could help me start off in the right direction (so I don’t spend time learning stuff that won’t help me get to my end goal)?

As for AR, I haven’t toyed with it since ARToolKit and the EyeToy, so no luck there, but as far as tracking an HMD goes, you should definitely look at OpenVR. I’d personally download a few competing libraries and dive right in. Unity is such a rapid prototyping tool that, during the prototyping phase at least, it really isn’t much trouble to try things out before you decide.

As far as learning Unity itself goes, the best way is to jump into the deep end and try to swim. There are so many YouTube videos, many of them official, but I’ve seen some go on for an hour or more without really teaching anything interesting. Case in point: I needed to learn the Shuriken particle system in a hurry, found an hour-long video posted on an official blog (I can’t remember exactly where, but I’m sure it would show up in Google), decided to skip the video and figure it out myself, and 15 minutes later I had a finished particle effect. I lacked a lot of knowledge that first time, and the first particle effect suffered for it, but now, eight effects later, I’m proficient. That’s really what I recommend, though everyone learns differently, so… your mileage may vary.


However, this leads me to the more important question: as far as your AR setup goes, how do you want to present this experience to the user? I’m not quite sure I fully understand what the experience you’re trying to make will be like. You say the background is to be virtual and the video of the person is to be photographically captured, correct? In that case, you’re not actually looking for an AR library. Vuforia and related libraries do the opposite: they take a virtual object and put it in the real world. They provide code that finds the 3D coordinate offset from a camera, which you can then use in-experience to position objects and make them look like they belong.
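To make that concrete, here’s a rough sketch of what “use the offset to position an object” looks like in plain Unity C#. The method and field names here are made up for illustration; each SDK has its own callbacks and pose types, but they all boil down to ordinary transform math like this:

```csharp
using UnityEngine;

// Hypothetical sketch: once an AR library (Vuforia, Wikitude, etc.) reports a
// tracked pose relative to the camera, anchoring content is plain transform work.
public class PlaceOnTrackedPose : MonoBehaviour
{
    public Transform arCamera;     // the camera the SDK is tracking
    public Transform virtualActor; // the object to anchor in the real world

    // Assumed entry point: call this with whatever camera-relative pose
    // your AR SDK reports each frame.
    public void OnPoseUpdated(Vector3 positionOffset, Quaternion rotationOffset)
    {
        // Convert the camera-relative offset into world space, then pin the
        // virtual object there so it appears to sit in the real room.
        virtualActor.SetPositionAndRotation(
            arCamera.TransformPoint(positionOffset),
            arCamera.rotation * rotationOffset);
    }
}
```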

To put a physical person in a non-physical room, you should look into what’s lately been called MR, or Mixed Reality. It’s kind of a confused term, but it’s the name that stuck for compositing live video onto a virtual environment.

To do MR recording, you make your experience aware of the physical camera’s position, and then place a separate, in-game camera at that same position. The real-world camera captures the person in front of a green screen. You can also cut the background out with background-replacement tech or depth tracking, as with an Xbox Kinect. When the two images are overlaid, it looks like you’re actually there. Look into the Liv SDK for this, it’s incredibly easy to do!
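The camera-matching step above can be sketched like this (a minimal illustration in Unity C#; the field names are assumptions, and in practice the Liv SDK handles this alignment and the compositing for you):

```csharp
using UnityEngine;

// Hypothetical sketch of MR camera matching: keep an in-game camera locked to
// the measured pose of the physical camera so the green-screened video and the
// virtual render line up when composited.
public class MixedRealityCamera : MonoBehaviour
{
    public Camera inGameCamera;          // renders the virtual background
    public Transform physicalCameraPose; // calibrated real-world camera pose,
                                         // e.g. from a tracker mounted on it

    void LateUpdate()
    {
        // Mirror the physical camera exactly, every frame, after all other
        // movement has been applied.
        inGameCamera.transform.SetPositionAndRotation(
            physicalCameraPose.position,
            physicalCameraPose.rotation);
    }
}
```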