Any way to add position tracking to Cardboard?

Playing around with Cardboard (on iOS) today, I’ve noticed that in both the off-the-shelf apps I’ve tried (including the Cardboard demo app) and the demo I just built myself, the device tracks orientation very well, but doesn’t track position at all.

That means, for example, that you can’t lean or move sideways to get a better view of something occluded by a foreground object. The whole world just appears to move with you.

It seems to me that some positional tracking might be possible by:

  • Using the accelerometers and dead reckoning (a naive sketch of this is below), or
  • Using the camera to compute visual flow. (Obviously this would require a phone/viewer setup that does not block the camera.)
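
For the dead-reckoning idea, here’s the kind of thing I mean: a naive sketch that double-integrates Unity’s gravity-removed accelerometer signal. (The class name, the g-to-m/s² scaling, and the zero-velocity clamp are all my guesses; drift is the real problem with this approach, not the integration.)

        using UnityEngine;

        // Naive dead-reckoning sketch: double-integrate the gravity-removed
        // accelerometer reading. Without real drift correction this wanders
        // off within seconds; the integration is the easy part.
        public class DeadReckoningSketch : MonoBehaviour
        {
            Vector3 velocity = Vector3.zero;

            void Start()
            {
                Input.gyro.enabled = true;  // required for userAcceleration
            }

            void Update()
            {
                // userAcceleration is in the device frame with gravity removed
                // (in g, I believe, like Input.acceleration); rotate it into
                // world space, assuming this transform mirrors the device.
                Vector3 accel = transform.rotation * (Input.gyro.userAcceleration * 9.81f);
                velocity += accel * Time.deltaTime;                    // integrate once: velocity
                transform.localPosition += velocity * Time.deltaTime;  // twice: position

                // Crude zero-velocity clamp while the IMU is quiet, to limit drift.
                if (Input.gyro.userAcceleration.sqrMagnitude < 1e-4f)
                    velocity = Vector3.zero;
            }
        }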

I suppose I could also do a cheap but limited trick based on tilt — assume the user is tilting the whole upper body, and calculate the position change that would go with the measured amount of tilt. That might be good enough for looking around a cockpit, for example, when the spaceship you’re trying to see has just flown behind the frame between the windows.
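
Something like this, maybe (a sketch only: the TiltLean name, seatPivot, and torsoLength are all made up, and it assumes the rotation-tracked camera sits at the local origin of an unrotated parent rig):

        using UnityEngine;

        // Cheap "tilt as lean" trick: pretend all measured head tilt comes
        // from the whole upper body pivoting about the seat, so the head sits
        // at the end of a rigid torso of fixed length. Attach to the camera's
        // parent rig; the rotation-tracked camera is its child at the origin.
        public class TiltLean : MonoBehaviour
        {
            public Transform head;                    // the rotation-tracked camera
            public Vector3 seatPivot = Vector3.zero;  // seat position in rig-parent space
            public float torsoLength = 0.6f;          // assumed seated torso length (m)

            void LateUpdate()
            {
                // Where the "torso" points, if the head's tilt were all body lean.
                Vector3 torsoUp = head.localRotation * Vector3.up;
                // Move the rig so the camera lands at the end of the tilted torso.
                transform.localPosition = seatPivot + torsoUp * torsoLength;
            }
        }

Of course this also shifts you forward when you merely look down at your feet, but for peeking around a cockpit frame, that might be good enough.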

Anyway. Does anyone have any experience with these, or other tricks I maybe haven’t thought of yet? Looking around is nice, but being able to actually move — at least enough to see around obstacles — seems like a really nice enhancement.


Well holy cows — I guess another option is to use ARKit (which is basically my option 2, but with Apple doing all the hard work). This guy managed it, and it looks awesome:

The blog post with directions and code is here; an English auto-translation is here.

I may have to try this! Though apparently it means that I’ll have to update my phone to iOS 11 (I guess that was bound to happen sooner or later anyway). This is much more like “real” VR to me than being stuck in one spot, only able to look around.

Any thoughts, opinions, or other ideas are very welcome!


Ha, nice find! (I came to the thread to suggest it might be possible, but, yep, the Internet always impresses: someone’s already got it working!)

In case any Android users are wondering, the same blog has it working on Android too, here! Heading off to try it now.

What a shame Daydream (v1 at least) doesn’t have the camera slot Cardboard did!


I’ve just tried it… It works! The rotation tracking is just as good as with the GoogleVR kit (or maybe it’s Unity that was doing that — I’m not sure). And the position tracking is amazing too!

At first I thought it wasn’t working, because I misjudged how big the room was (I’m using the CubeRoom prefab from the GoogleVR SDK, even though in this scene I’m not using anything else from that SDK). So it seemed like the walls were swimming away from me as I approached them.

So then I plopped a meter cube down near the origin, to see if I could walk around it. I can! And then I realized how big the room was in comparison, and shrunk it down to something closer to the size of my office. And lo and behold, everything is rock-solid now! Well, OK, maybe not rock solid — there’s occasionally a bit of wiggling. But on the whole it’s very good. I can squat down and examine the underside of things, stand tall and peek over the top, see exactly where the virtual cubes sticking out of the wall should be in relation to my real office, etc. (The urge to reach out and try to touch them was irresistible a few times.)

I even found the real-world office door, and stepped through the virtual wall to look at the scene from the outside. Perfect!

I couldn’t be happier. This is how VR should be!

OK, now I’ve played with it some more, and I could be happier. 🙂 I think there’s just more I have to learn.

It turns out to be very important which orientation you hold the phone in when first launching the app. If you start out pointed the wrong way, then things really do recede from you as you approach them. And let me tell you, that is very disorienting!

I’m sure this has something to do with the interaction between the camera (whose orientation is set by Unity) and the camera’s container object, whose position is set from the AR session. Basically I think we’ve got two different coordinate systems here — one made up by ARKit, whose orientation seems to depend on the first direction the camera sees, and the other chosen by Unity, perhaps based on the magnetometer.
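
Concretely, I believe the position half of this marriage is just applying ARKit’s pose to the container every frame, something like this (a sketch using the Unity ARKit plugin’s UnityARMatrixOps helper; m_session and vrCamera are the same names I use in the snippet further down):

        // Each frame: copy ARKit's reported position onto the camera's
        // container, while Unity keeps driving the camera's rotation itself.
        Matrix4x4 pose = m_session.GetCameraPose();
        vrCamera.transform.localPosition = UnityARMatrixOps.GetPosition(pose);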

I’m sure there is some way to ensure these two coordinate systems are in agreement. I just need to learn a lot more about how all this stuff works. Please feel free to correct me if I’m laboring under any misconceptions (which is still quite likely!).

More on the marriage of ARKit and Cardboard… Here’s what I’ve learned.

It appears that both ARKit and the Unity XR stuff define “forward” as whatever direction the rear-facing camera is facing when they are initialized. As long as the phone is not lying flat, they come up with the same answer. But when the phone is lying flat, they deal with this in different ways, and come up with different notions of which way is forward. That results in the wonkiness described above — as you move around in the real world, your position in the virtual world may appear to move the wrong way.

I find I’m able to get them back in sync by rotating the camera (actually, its parent, since you can’t directly change the transform of an XR camera) so that its world heading matches the heading reported by ARKit:

        // m_session is the ARKit session interface; vrCamera is the camera's
        // parent object (you can't set the XR camera's transform directly).
        Matrix4x4 matrix = m_session.GetCameraPose();
        Quaternion arRot = UnityARMatrixOps.GetRotation(matrix);
        Quaternion xrRot = vrCamera.GetComponentInChildren<Camera>().transform.localRotation;
        // Compute a yaw-only rotation that will cancel out the difference, making it
        // so that the final camera rotation matches our AR rotation.
        float ydiff = arRot.eulerAngles.y - xrRot.eulerAngles.y;
        Quaternion diff = Quaternion.Euler(0, ydiff, 0);
        vrCamera.transform.localRotation = diff;

I do this only in Y, of course; you don’t want to muck with the X and Z rotations. I have this hooked up to happen on MouseButtonDown, so when you press the Cardboard 2.0 button, it resyncs the rotations, and life is good.

Life doesn’t always stay good, though. I notice as I walk around that sometimes ARKit seems to struggle, and causes the world to wiggle or jump around. And after that, the rotations may be out of sync again. I should probably just monitor that difference and resync whenever it gets too big — or, perhaps, simply sync them on every frame. I’m still experimenting.
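
In case it helps anyone, here’s roughly what that monitoring would look like (a sketch only, using the same m_session and vrCamera as above; the resyncThreshold value is a number I picked out of thin air, and summing the two Y angles is only an approximation when the head is pitched or rolled):

        const float resyncThreshold = 5f;  // degrees; made-up tolerance

        void Update()
        {
            Quaternion arRot = UnityARMatrixOps.GetRotation(m_session.GetCameraPose());
            Transform cam = vrCamera.GetComponentInChildren<Camera>().transform;

            // World heading of the camera = parent yaw plus XR-driven local yaw.
            float worldYaw = vrCamera.transform.localRotation.eulerAngles.y
                           + cam.localRotation.eulerAngles.y;
            float drift = Mathf.DeltaAngle(worldYaw, arRot.eulerAngles.y);

            if (Mathf.Abs(drift) > resyncThreshold)
            {
                // Same resync as on the button press above.
                float ydiff = arRot.eulerAngles.y - cam.localRotation.eulerAngles.y;
                vrCamera.transform.localRotation = Quaternion.Euler(0, ydiff, 0);
            }
        }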

This wouldn’t give you true positional tracking, but using the camera to orbit around your position can let you look around objects; you only need the accelerometer input and head tilt to update the camera’s position. See my tutorial for Cardboard.

I was thinking about that — particularly for seated VR, it might be enough to use the tilt to orbit in a limited range around the seat (i.e., assume the head is attached to the seat by a typical torso length).

Where is your Cardboard tutorial?