Google Cardboard Toolkit

Hey guys!

I picked up a Google Cardboard recently and tried to build something in Unity for it. I was surprised to find that there was no definitive package for this. Durovis Dive deals with the gyroscope well enough, but Andrew Whyte’s click-detecting script only scratches the surface and there are no great examples for learning. It should be dead simple to make something cool for this platform.

Cardboard SDK for Unity packages everything together and improves on it:

  • Delegates for discrete magnet events, not just clicking
  • Methods to grab data such as how long the magnet has been held for
  • Tech demo and small game examples

Best of all is that it’s completely free on Github. I’m working on an Asset Store entry as a donation platform but the latest will always be free and open on Github.

So why is it here in WIP? I have work left to do and I’d like to find the audience for it. Let me know if you work with Unity and have a Cardboard. Better still, download my Cardboard SDK and let me know what you think.

I’ll keep posting in here as I get time to check things off on the roadmap.


So, exactly what is this for?

Creating VR games for the Cardboard. The bulk of the work is interpreting data from the accelerometer and compass into something meaningful and useful for creating games.

For example, the included sample lets you walk around and interact with a VR world with just the magnet trigger. Cardboard SDK exposes all the input events for that and sets you up with a Dive camera to handle the head-mounted display. You could make an interesting VR point-and-click adventure around that.
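To give a feel for it, subscribing to a magnet event might look something like this. This is a sketch only: the `CardboardManager` object name, the `OnMagnetClicked` delegate, and the handler signature are my guesses at the shape of the API, not its exact surface.

```csharp
using UnityEngine;

// Sketch only: CardboardManager, OnMagnetClicked, and the handler signature
// are illustrative names, not necessarily the toolkit's exact API.
public class ClickToInteract : MonoBehaviour {
    private CardboardManager cardboard;

    void Start() {
        // Grab the single manager object that lives in the scene
        cardboard = GameObject.Find("CardboardManager").GetComponent<CardboardManager>();
        cardboard.OnMagnetClicked += HandleClick;
    }

    void HandleClick() {
        // React to a full down-then-up magnet pull, e.g. interact with
        // whatever the player's gaze is pointing at
        Debug.Log("Magnet clicked");
    }
}
```

The point is that your game script never touches raw sensor data; it just reacts to named events.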


This is wonderful. Thanks so much for sharing. I’ll contribute if I can improve on it. I was seriously disappointed with the Durovis Dive Unity code when I tried it a week ago, as it would constantly jitter greatly even if I placed the phone on the ground. I’m excited about trying out another approach. It’s great you’re putting this out there. Again, thanks a lot.

This is very awesome!

Tips Hat

Thanks for the kind words! It was a bit hard to know if anyone would care as Cardboard still feels like a bit of a niche. I think there’s a lot of opportunity here.

Quick update:
As I built the examples I realized that it didn’t make much sense to have multiple instances of CardboardInput, and its main functions were being called on Start and Update anyway. The last few commits change CardboardInput to inherit from MonoBehaviour and run from a manager object. It ends up simplifying a lot of code if you can stand to have another manager script in your scene.

You can also trigger the magnet with Space (or whatever’s bound to your “Jump” axis). It helps with debugging, and now you can play even when you don’t have a Cardboard on hand.
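Conceptually the fallback is just a poll of Unity’s input axis feeding the same event path as the magnet. Something like this, as an illustration rather than the toolkit’s actual code:

```csharp
using UnityEngine;

// Illustrative sketch of the debug fallback, not the toolkit's actual code:
// polling Unity's "Jump" button and faking a magnet pull through the same
// path the real sensor data takes.
public class DebugMagnet : MonoBehaviour {
    void Update() {
        if (Input.GetButtonDown("Jump")) {
            // Simulate magnet down, so held-time tracking still works
        }
        if (Input.GetButtonUp("Jump")) {
            // Simulate magnet up, which completes a click
        }
    }
}
```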

Let me know how it goes. I use Dive to handle the gyroscope bits but maybe something I did helps mitigate that jitter. I’d love to replace Dive with something more open if I could.

I took some time to play Google’s official apps and noticed a major thing I was missing: vibration. When you click, it vibrates the device to give you feedback that your click was registered. It seems like a valuable tool from a user experience standpoint, so I spent a while getting it working.

Once I did, it became very apparent when a click was registering, and I started noticing a lot of false detections. I took the last couple of days to build better debugging tools and hunt down these rogue clicks. No one likes being told they clicked when they just rotated the device quickly.

It left the code a bit of a mess though so my goal now is to clean that up. After that I can tag it 2.0 under semantic versioning as it technically hasn’t been backwards compatible since I moved to a MonoBehaviour scheme.

Once I have these basics down, I want to prototype a rail shooter like Time Crisis or Link’s Crossbow Training. I’m not sure exactly what it will look like, but it’ll help flesh out the documentation and, I hope, make for some cool promo screenshots.


Wow, this is cool. I just tried Cardboard and was surprised by the (kinda) smooth experience, even with the Unity demo from Dive out there. It’s way easier to show and demonstrate stuff to people than setting up my Oculus DK.

Hoping to create small prototypes later, thanks for sharing this.

Yeah, I love the pick-up-and-go nature of the Cardboard. Hopefully this helps to accentuate that and enable developers like you.

I tagged 2.0 yesterday: Release v2.0 · JScott/cardboard-controls · GitHub. You should post your prototype progress here as well! I would love to see how you guys use the toolkit, and I bet you’ll come up with way cooler things than I can think of :slight_smile:

I’m going to start working on my own prototype now to help showcase the project. As much as I’d love to start adding fun new features like camera and shake input, it’ll be good to use my own toolkit and see what works and what doesn’t. I’ll keep posting updates in here like I just encouraged all of you to do.

Boring dev stuff ahead:

Hopefully the code speaks for itself but I also wanted to dev blog on the technical architecture of the kit a bit because I think it’s interesting. I was warned when I started that going beyond what Google does for magnet input in their apps – just detecting the click – was tricky because the phone’s sensors can be pretty touchy: I can go into the next room and potentially change the magnetic field.

The trick is to treat the magnet readings relatively, which the original scripts did, but I had to take it a step further. I detect and rely on the change in magnetic field from one moment to the next. This helps remove false positives from just walking around. However, I didn’t really save state between the discrete events of up, down, and click. Click was a product of the magnet coming up, but until recently it didn’t check that the magnet went down first. This is important because the imprecise nature of the device means the phone might suddenly report an “up” out of nowhere. So now I have a collection of discrete events, from CardboardInput.cs, and stringing them together triggers your delegates, from CardboardManager.cs.
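A stripped-down sketch of that idea, with made-up thresholds and names (the real CardboardInput.cs is more involved):

```csharp
using UnityEngine;

// Relative magnet detection, heavily simplified. Thresholds and names are
// invented for illustration; the real CardboardInput.cs is more involved.
public class MagnetSketch : MonoBehaviour {
    private float lastMagnitude;
    private bool magnetIsDown;                   // state kept between discrete events
    private const float PullThreshold = 40f;     // arbitrary tuning values
    private const float ReleaseThreshold = 40f;

    void Update() {
        float magnitude = Input.compass.rawVector.magnitude;
        float delta = magnitude - lastMagnitude; // change, not absolute field
        lastMagnitude = magnitude;

        if (!magnetIsDown && delta > PullThreshold) {
            magnetIsDown = true;                 // discrete "down" event
        } else if (magnetIsDown && delta < -ReleaseThreshold) {
            magnetIsDown = false;                // discrete "up" event
            // A click only fires because we know "down" happened first,
            // which filters out spurious "up" readings.
            Debug.Log("click");
        }
    }
}
```

Because only the frame-to-frame delta matters, walking into a room with a different ambient field shifts the baseline without firing anything.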

I also can’t stress enough the importance of extremely clear, readable code. The single best thing for fixing these false detections was to take a step back and restructure the code. Keep your methods short, abstract, and self-documenting without comments. It’s still far from perfect, but it brought a lot of weird things to light. In understanding what I was doing, it became obvious what I should be doing. If you’re inclined to take a look at my code, let me know what you think.

I just spent some time testing my idea with the plugin! The code from GitHub needs a little set-up because some references are gone (missing script, missing reference in the prefab), so maybe you’ll need to force-include the meta files to make them visible? Would that help? After a little tinkering it works nicely though, and the code is also easy to understand.

Anyway, I tried to create a manga/comic reader prototype for VR. Watching movies in VR is already awesome and I want to create a good environment for reading too. It’s simple for now: move right or left, and use the magnet event to bring the next/previous page to the front. I want to complete a few more features before going fancy haha.

I also tried to recreate the “rotate to portrait for back / reset” gesture from the Cardboard app for the plugin. Apparently Unity can’t detect a screen orientation that’s excluded in the build settings. That means I can’t use Screen.orientation to detect portrait if I haven’t enabled it (I’m using auto-rotate landscape). Another solution is using the gyroscope/accelerometer to detect the orientation manually, but I haven’t got to that yet.

Keep up the good work!

Ah jeez, I thought I cleaned up the prefab. Thanks for letting me know, I’ll be sure to fix it up when I get a moment.

Have you tried Input.deviceOrientation (http://docs.unity3d.com/ScriptReference/Input-deviceOrientation.html) instead of Screen.orientation? My first thought was using the gyroscope, but it seems like that would get complicated quickly. Be sure to make a pull request or post the code here when you get the gesture working reliably so I can integrate it for other people.
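A minimal check might be as simple as this, assuming you poll every frame:

```csharp
using UnityEngine;

// Minimal sketch: Input.deviceOrientation reports the physical orientation
// of the device even when portrait is excluded from the build's allowed
// screen orientations, unlike Screen.orientation.
public class TiltToReset : MonoBehaviour {
    void Update() {
        if (Input.deviceOrientation == DeviceOrientation.Portrait) {
            // Treat the tilt-to-portrait gesture as "back" or "reset"
            Debug.Log("Tilt gesture detected");
        }
    }
}
```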

Also, cool idea! It’s always uncomfortable for me to read comics on phone screens so this is a really smart use of the benefits of VR space.

Ah nice, Input.deviceOrientation works. I guess Screen.orientation only updates when the screen has actually rotated, and it can’t change while portrait is disabled (because the screen never actually rotates). Thanks for the info!

I also created a pull request. I haven’t tested it extensively, but it seems to work well enough. I rechecked with the Cardboard app and realized that we only deal with Landscape Left and Portrait (for reset), so I used that as the reference for detecting the orientation.

There’s a hole in the back for the camera as well which makes for a fairly intuitive “correct” positioning.

I pulled it into the ‘rukanishino-orientation-reset’ branch and cleaned up the code a little to fall more in line with the conventions so far. I just want to mix it into the tech demo scene for the sake of documentation before I make a master release for it.

Edit: latest master has OnOrientationTilt. Thanks again for kicking that off, rukanishino! There’s a lot of value in the tilt gesture.

Someone on Reddit was kind enough to point out that the Durovis Dive license prevents redistributing their SDK. That means I can’t include Dive in my SDK for your convenience. It seems silly, but rules are rules.

Because of this I quickly threw together v2.2:

  • Ripped out the Dive SDK and Castle example to avoid legal issues
  • Added instructions on including the Dive camera
  • Added debug controls so you can test it out without Dive

I also threw together a compiled APK of the Tech Demo scene, which the Dive license does allow, and a .unitypackage file for easier integration. Check out the release on Github!

At this point I’ll probably keep adding features when the mood strikes but I feel it’s good enough to start advertising elsewhere. Can you think of any other forums or communities that might be interested in this toolkit?

The initiative behind this is great! Cheers for the work. I’ll have a look a bit later, but I think it’s just the thing I needed to sort out one aspect of something I’m working on. I am, however, a little disappointed by the use of the Dive plugin; while it’s been very useful for working with the Cardboard so far, its license makes using it restrictive, and I’m looking for an alternative (after trying a few myself, including self-written ones).

I’ll have a go, and I’ll have a root around in the Oculus mobile SDK for how they grab their orientation information, with a mind to moving it over to a Unity-friendly form if suitable. Ideally this should work with any suitable mobile device. I haven’t looked, but I’m assuming their SDK isn’t dependent on the Gear VR for sensor information; if it is, then maybe the Unity API can be used in tandem to produce equivalent results (although obviously average mobile sensors aren’t going to be as accurate as the Samsung peripheral).

Oh, and while I’m not really a Reddit user, I find a lot of useful information there and there’s a decent amount of discussion (so you’ll probably get some notice).

I would also love to have an alternative to Dive to use with this. As you can tell by my last post, I’m no fan of their bizarre license either.

If for some reason you’re not making progress with the Rift SDK, there seems to be a little bit of information around on how Android’s Cardboard SDK does it. HeadTransform gives you the rotation angles you need to set the camera, and someone decompiled the .jar so you can see what the class is doing here. I can’t for the life of me figure out how mHeadView really gets set though.

Edit: HeadTransform was the wrong place to look. HeadTracker is probably what you want to deconstruct. I gotta stop looking at this stuff now or I’ll be up all night :slight_smile:

I think Google may have beaten us to the punch! https://developers.google.com/cardboard/unity/ Ah well, it’s a nice feature list and should make VR-ing on phones a bunch more fun.

It’s a nice feature list but there are a few problems with it at first glance.

First, it requires Unity Pro. If there’s a way around this then it isn’t well documented.

Second, the event system bugs me. If you look at Teleport.cs you’ll see that you poll Cardboard.SDK.CardboardTriggered. Delegates instead of polled booleans ensure you don’t miss an event, and Google only commits to supporting a click, not whether the magnet goes up or down or how long it’s been held. That data enables certain control schemes for games.
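Roughly, the difference is polling versus subscribing. Cardboard.SDK.CardboardTriggered is Google’s flag; the delegate names below are hypothetical:

```csharp
using UnityEngine;

public class InputStyleContrast : MonoBehaviour {
    void Update() {
        // Google's style: poll a boolean each frame. Skip a frame or check
        // it in the wrong order and the event is silently lost.
        // if (Cardboard.SDK.CardboardTriggered) { Debug.Log("clicked"); }
    }

    void Start() {
        // Delegate style (hypothetical names): events are pushed to you,
        // so none are missed, and richer data like down, up, and time held
        // can ride along.
        // manager.OnMagnetDown += () => Debug.Log("down");
        // manager.OnMagnetClicked += () => Debug.Log("clicked after down+up");
    }
}
```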

Third, the documentation is non-existent and the code doesn’t explain itself very well. Maybe it’ll be fixed over time but given what we see for the Android SDK code I’m not so sure.

What I’m saying is that if we develop an alternative to Dive that works with Unity Free then we have a legitimate competitor or companion to the Google-provided SDK.

Well, fair enough points, but it’s probably worth grabbing a Cardboard with the strip rather than the magnet and seeing how that alters an approach to input as well. I plan to have a look around the Oculus mobile SDK anyway (not so sure about the Android SDK) in case there are some useful, clever things in there; if I have luck I’ll post here.