How best to test VR when not always using a VR headset?

Hi all,

Just got myself an Oculus Go and can really see the potential of this thing. I have successfully deployed a sample app to the device, but it has occurred to me that if I have to deploy to the device every time I test while developing a game, the development/iteration loop is going to be painfully slow. Everything I have read so far talks about deploying to the device for testing.

I would prefer to develop and test just within Unity, but obviously, while the head-mounted display can be simulated via the mouse and a first-person controller, the single hand controller you get with the Oculus Go/Gear VR/Google Daydream is another matter. Is there an easy way to simulate that within the Unity editor?

How do others in here develop for VR headsets? Do you really deploy each time to the device or do you simulate the device and hand controller somehow?

Daydream allowed you to use a phone connected to the editor to simulate the Daydream controller. On the Gear VR/Go side, nothing like that was ever created. There was a workaround using Oculus Touch controllers, but that isn’t apples to apples since they are 6DOF, and most people don’t have them since they’re on a separate system.

I did add keyboard/mouse emulation to my product in my signature (not free) to do what you describe (ctrl+drag moves the controller and alt+drag moves the HMD), but as far as I know nothing official was ever built by Oculus. It does speed up development/testing quite a bit.
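Roughly, the emulation boils down to something like the sketch below. This is not the actual asset code, just the general idea; headAnchor and controllerAnchor are placeholder fields for whatever transforms drive your HMD and controller in the rig.

using UnityEngine;

// Editor-only emulation sketch: ctrl+drag rotates the controller anchor,
// alt+drag rotates the head anchor. Field names here are placeholders.
public class EditorDragEmulation : MonoBehaviour {
    public Transform headAnchor;        // stand-in for the HMD transform
    public Transform controllerAnchor;  // stand-in for the 3DOF controller transform
    public float degreesPerUnit = 2f;   // mouse-delta-to-rotation scale

    void Update() {
        #if UNITY_EDITOR
        float dx = Input.GetAxis("Mouse X") * degreesPerUnit;
        float dy = Input.GetAxis("Mouse Y") * degreesPerUnit;

        if (Input.GetKey(KeyCode.LeftControl)) {
            controllerAnchor.Rotate(-dy, dx, 0f, Space.Self);  // ctrl+drag: pitch/yaw the controller
        } else if (Input.GetKey(KeyCode.LeftAlt)) {
            headAnchor.Rotate(-dy, dx, 0f, Space.Self);        // alt+drag: pitch/yaw the head
        }
        #endif
    }
}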

I’m dealing with this issue right now, in fact (project open in the other window!). Here’s how I’m handling it:

I have two camera rigs in the scene, “DesktopCameraRig” and “OVRCameraRig”. And I have a little script called CrossVRManager that enables one of these, depending on the platform:

public class CrossVRManager : MonoBehaviour {
    public GameObject desktopCameraRig;
    public GameObject ovrCameraRig;

    void Awake() {
        // Enable whichever rig matches the platform we're running on.
        #if UNITY_STANDALONE_OSX || UNITY_STANDALONE_WIN || UNITY_EDITOR
        desktopCameraRig.SetActive(true);
        ovrCameraRig.SetActive(false);
        Cursor.lockState = CursorLockMode.Locked;   // capture the mouse for mouse-look
        #elif UNITY_ANDROID
        ovrCameraRig.SetActive(true);
        desktopCameraRig.SetActive(false);
        #endif
    }
}

Each of them has an object called LaserPointer. For the desktop rig this is under the CenterEyeAnchor, which also happens to be the main camera. For the OVR (Oculus VR) camera rig, there are actually two of these: one for each hand. (Only one of these will be activated by the Oculus SDK, depending on which hand the user uses for the Go controller.)
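If you ever needed to pick the active one yourself, a sketch like this would do it. It relies on OVRInput.GetActiveController(); the two pointer fields are just placeholder references to the LaserPointer objects under each hand anchor.

using UnityEngine;

// Sketch only: enable the LaserPointer under whichever hand anchor is active.
public class ActiveHandSelector : MonoBehaviour {
    public GameObject leftLaserPointer;   // LaserPointer under LeftHandAnchor
    public GameObject rightLaserPointer;  // LaserPointer under RightHandAnchor

    void Update() {
        // On Go/Gear VR the controller reports as L/RTrackedRemote, matching
        // the handedness the user picked in the Oculus settings.
        OVRInput.Controller active = OVRInput.GetActiveController();
        leftLaserPointer.SetActive((active & OVRInput.Controller.LTrackedRemote) != 0);
        rightLaserPointer.SetActive((active & OVRInput.Controller.RTrackedRemote) != 0);
    }
}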

The LaserPointer objects have a simple “PositionReticle” class that does just that:

public class PositionReticle : MonoBehaviour {

    public Reticle reticle;
    public float maxReach = 30;
    public LayerMask layerMask;

    void LateUpdate() {
        // Cast a ray out from the pointer; park the reticle on whatever it hits,
        // or float it out along the ray if nothing is within reach.
        Ray ray = new Ray(transform.position, transform.forward);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, maxReach, layerMask)) {
            reticle.SetPosition(transform, hit);
        } else {
            reticle.SetPosition(transform, ray.direction);
        }
    }
}

And note that in the desktop case, the LaserPointer object is not at the same position as the camera; I put it a little down and to the right, so that I can actually see the beam. Oh yes, and my Reticle class draws a beam (using a LineRenderer).
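The Reticle class itself isn’t anything fancy. I won’t paste the real one here, but a minimal sketch compatible with the PositionReticle calls above would look something like this (the LineRenderer setup and the defaultBeamLength field are just assumptions about how you might do it):

using UnityEngine;

// Minimal Reticle sketch: positions a reticle object and stretches a
// two-point LineRenderer beam from the pointer to it.
public class Reticle : MonoBehaviour {
    public LineRenderer beam;            // beam from the pointer to the reticle
    public float defaultBeamLength = 30; // used when the ray hits nothing

    // The pointer hit something: park the reticle on the surface.
    public void SetPosition(Transform from, RaycastHit hit) {
        transform.position = hit.point;
        transform.forward = hit.normal;
        DrawBeam(from.position, hit.point);
    }

    // The pointer hit nothing: float the reticle out along the ray.
    public void SetPosition(Transform from, Vector3 direction) {
        transform.position = from.position + direction.normalized * defaultBeamLength;
        transform.forward = -direction;
        DrawBeam(from.position, transform.position);
    }

    void DrawBeam(Vector3 start, Vector3 end) {
        beam.positionCount = 2;
        beam.SetPosition(0, start);
        beam.SetPosition(1, end);
    }
}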

The desktop rig has a script that just captures the mouse cursor and moves the camera with the mouse (to substitute for moving your head in OVR). My various components that need to know where the laser is pointing simply FindObjectOfType() it and ask. Oh yes, and to abstract away the trigger, I have this code:

    bool CheckTrigger() {
        #if UNITY_ANDROID && !UNITY_EDITOR
        return OVRInput.Get(OVRInput.Button.PrimaryIndexTrigger);
        #else
        return Input.GetMouseButton(0);
        #endif
    }

So on desktop, you click the mouse button; on Oculus, you press the trigger. This gives me a great point-laser-and-click interface that works on both desktop (mainly for testing within Unity) and in VR.
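For example, a typical interactive object might look something like this rough sketch (not actual project code; ClickTarget, PointedAtMe, and OnClicked are made-up names): it finds the active PositionReticle once and then polls the trigger.

using UnityEngine;

// Sketch of a clickable object: find the shared pointer, then react when the
// trigger fires while the pointer's ray is on this object.
public class ClickTarget : MonoBehaviour {
    PositionReticle pointer;

    void Start() {
        // Only one rig (and thus one LaserPointer) is active at a time.
        pointer = FindObjectOfType<PositionReticle>();
    }

    void Update() {
        if (CheckTrigger() && PointedAtMe()) OnClicked();
    }

    bool PointedAtMe() {
        // Re-cast the pointer's ray and see whether it lands on this object.
        Ray ray = new Ray(pointer.transform.position, pointer.transform.forward);
        RaycastHit hit;
        return Physics.Raycast(ray, out hit, pointer.maxReach, pointer.layerMask)
            && hit.transform == transform;
    }

    void OnClicked() {
        Debug.Log(name + " was clicked");   // placeholder reaction
    }

    bool CheckTrigger() {
        #if UNITY_ANDROID && !UNITY_EDITOR
        return OVRInput.Get(OVRInput.Button.PrimaryIndexTrigger);
        #else
        return Input.GetMouseButton(0);
        #endif
    }
}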

Any reason you didn’t actually make a hand for the editor testing? For a laser pointer, having it be gaze based is probably good enough, but for things like grabbing, teleporting, etc., I’ve found it much more handy to actually simulate the whole thing in the editor. You’re probably more than halfway there, just wondering why you went with that approximation. It would also let you keep the rigs more or less the same.

The Oculus Go doesn’t have a hand; it has a 3DOF controller, which is typically used as a sort of laser pointer. Since that’s what I need to test on the device, what would be the point of having something else in the editor?

How do you do that? What custom input devices, or what usage of mouse and keyboard, do you use to emulate a full 6DOF grabby-hand controller (which is what I assume you’re talking about)?

For all intents and purposes, it’s a hand. Oculus even runs it through IK to simulate what would be a natural position for an average body frame based on the 3DOF orientation, and gives you the option to select left- or right-handed. Yes, it’s not full 6DOF and you can’t reach, but it doesn’t stay locked to your hip and just spin in space either. I’m not talking about anything different than what is on device. Having it anchored to the HMD in the editor is different than on device.

No, not 6DOF, but there are numerous games where you can grab in 3DOF. Most just limit the grab to within reach of the hand and then snap (others just use the laser pointer to grab). The part that would be harder to do off the HMD is the grabbing games that let you rotate the grabbed object based on rotation changes after grabbing; your neck doesn’t rotate the same way your hand does. Emulating the hand gives you a more apples-to-apples setup for the Go/Gear VR, while letting you test in the editor without needing to deploy or wear the HMD, just using the editor like normal.
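As a rough sketch of what I mean (not code from any shipping game; handAnchor, grabRadius, and the snap behavior are just assumptions): hold the trigger near an object to snap it to the hand anchor, and from then on the controller’s rotation carries the object with it.

using UnityEngine;

// 3DOF grab sketch: parent a nearby object to the hand anchor while the
// trigger is held, so the controller's 3DOF rotation rotates the object.
public class SimpleGrab : MonoBehaviour {
    public Transform handAnchor;      // the controller anchor objects snap to
    public float grabRadius = 0.3f;   // how close an object must be to grab

    Transform held;

    void Update() {
        bool trigger = CheckTrigger();
        if (trigger && held == null) TryGrab();
        if (!trigger && held != null) Release();
    }

    void TryGrab() {
        Collider[] nearby = Physics.OverlapSphere(handAnchor.position, grabRadius);
        if (nearby.Length == 0) return;
        held = nearby[0].transform;
        held.SetParent(handAnchor, worldPositionStays: false);  // snap to the hand
    }

    void Release() {
        held.SetParent(null);
        held = null;
    }

    bool CheckTrigger() {
        #if UNITY_ANDROID && !UNITY_EDITOR
        return OVRInput.Get(OVRInput.Button.PrimaryIndexTrigger);
        #else
        return Input.GetMouseButton(0);
        #endif
    }
}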

Sure, that’s true enough. So what exactly would you propose?

That’s true. My current prototype has this: you can rotate your hand to rotate the grabbed object, and I don’t have any way to do that in the desktop app. But this is a minor (and simple) enough function that it hasn’t been a problem.

If you have ideas about how to cleanly simulate that in the editor, I’m all ears.

I think you mistook my original comment. You clearly spent some time getting some editor functionality in play; I was just curious why you went with the head instead of a hand.

The feedback that I’ve gotten on my mouse/keyboard emulation has mostly been positive. It isn’t perfect, especially since a controller has 3 axes of control but a mouse only has 2, but without a phone app like Daydream’s it’s about as close as you can get in the editor. In general roll is used the least, so left ctrl + drag, which is mapped to pitch and yaw, gets used the most; production games that use it have said it sped up their workflow quite a bit. I did leave right ctrl + drag for roll in case someone has a knob or something else they’re trying to turn, but something had to give, so that’s why it’s on a different binding.

I was just curious whether there was some need in your game for having two separate rigs instead of directly emulating the normal rig.

I’m not offended or upset. I just literally have no idea what you’re suggesting.

Right, in fact the VR app has a total of 6 DOF (3 on the head plus 3 on the controller), which we’re trying to simulate with 2 DOF (a mouse). So the question is: how do we do that? What do we give up?

I begin with the observation that you would rarely need/want to point the laser pointer somewhere you can’t see (since, if you did, you couldn’t see what you’re doing). So I don’t bother to support that in the simulator at all. Now what’s the next most important thing? Well, looking around and pointing at things are both important; I don’t want to give either one up, nor do I really want to use a modifier key for either one. So fine, we’ll do both at once: look and point. (And I’ve also given up roll on both head and controller, for now.) Works fine for what I need, at least. And as a bonus, I’ve basically implemented a gaze pointer, which may be handy if I decide to also support something like Cardboard again.

I did consider using some modifier keys to switch between “turning the head” and “moving the pointer” (which might be what you’re suggesting). But that seems needlessly fiddly to me. 90% of the time I have no need to move the head and pointer independently; the important thing is that I can manipulate (point at) stuff in the environment, and that I can see what I’m doing. I get that painlessly by doing both at once.

I suppose if the need ever arises, I could add a modifier key to separate these functions only when needed (e.g., to rotate the pointer but not the camera while the key is held). That’s just a need that hasn’t come up yet.

The separate rig is there expressly to emulate the normal rig. It just seemed a clean way to do it. The alternative would be to go through the rig hierarchy and enable/disable individual components, based on… what exactly, I’m not sure (there is no way to tag individual components AFAIK). Once set up, I rarely have to touch the separate rigs, but having them is nice on those rare occasions when I do want to tweak something (such as moving the laser pointer off-center a bit in the simulator so that I can see the beam).

You don’t give them up entirely; the only thing you give up is being able to do them on the same binding (in other words, at the same time). My current keyboard emulation has those 6 axes mapped to separate keys (left ctrl, left alt, right ctrl, and right alt), so you can do all the head or hand movements that you can with the real device.

Any differences between device and editor introduce things that aren’t easy to test in the quicker environment. Whether they come into play depends on the game. It’s just a way to use the entire rig in the editor without needing to touch it, so it’s completely apples to apples.

Ah, I see.

OK then, we’re not really so different. It’s just that I prefer to use the mouse for both pointing and looking, as those are things that (in VR) are both done with speed and precision — something you can match with a mouse, but not with keyboard inputs.

If I do ever have the need to control head and pointer separately, I’ll certainly add that. But I’ll do it by having a modifier key that separates them — the default behavior should (in my use case at least) always be to control them together.

JoeStrout, I’ve just come across this thread, and I’m also new to Unity/VR. What object is the DesktopCameraRig? I can’t find it, so I’m guessing it might just be another instance of the OVRCameraRig?

What you have written is of enormous help, but I’m just not sure where to attach each of the scripts.

Would you mind helping a little more?

No, DesktopCameraRig is just a GameObject with a script attached for rotating the transform with the mouse, and checking the mouse button as “trigger.”
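Something along these lines (just a sketch of the idea, not my exact script; the field names are placeholders):

using UnityEngine;

// Desktop stand-in for head movement: mouse motion rotates the rig (and with
// it the CenterEyeAnchor camera and the LaserPointer), and the left mouse
// button stands in for the Go controller's trigger.
public class DesktopMouseLook : MonoBehaviour {
    public float degreesPerUnit = 2f;   // mouse-delta-to-rotation scale

    float yaw, pitch;

    void Update() {
        yaw   += Input.GetAxis("Mouse X") * degreesPerUnit;
        pitch -= Input.GetAxis("Mouse Y") * degreesPerUnit;
        pitch = Mathf.Clamp(pitch, -89f, 89f);
        transform.localRotation = Quaternion.Euler(pitch, yaw, 0f);
    }

    public bool Trigger() {
        return Input.GetMouseButton(0);   // mouse button stands in for the trigger
    }
}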

I know, I really need to write all this up in a blog post… it’s a bit much to explain here. I’ll try to do that this week.

That would be brilliant Joe.

Hi,
would it be possible to share the link to this blog with me…?
Thanks in advance.

I haven’t written it up yet. This semester has got me spread pretty thin… But thank you for the nudge, I’ll try to get to it soon.

🙂
oh okay… Thanks Joe

Any updates on when this will come out?

Hey Joe! I just wanted to let you know you’re doing an amazing job! All the support you are giving is so awesome! I understand that you are helping where you can while juggling your personal life, and for that, kudos to you! 🙂