My only other advice is, don’t feel you need to use all the Oculus components and objects. For the most part, it’s way more complicated than it needs to be (and half of it is buggy too). You’re a very competent Unity dev, so I think you’ll do well to start with just a camera in a scene standing over a plane. Build & run, and boom, you’re looking around in VR. Build your way up from there step by step.
Brilliant, thank you! So just use the supplied power cable and connect it to a PC USB port. I don’t have the correct port, so I’ll buy a different cable or adaptor, I suppose!
Great. I’m really surprised about the Ctrl+B. I guess I need to set up an application or something first on the Oculus side, then it should be plain sailing?
It really is that easy. All you have to do on the Quest side is put your device in developer mode (google will tell you how).
And yeah, I had to buy a USB-C or whatever it is adapter cable (got mine for a few bucks at Best Buy). But then yes, you just plug that into the power port on your headset and off you go. Get a cable long enough that you can put your headset on without disconnecting the cable from your computer; this will save you a lot of time in testing.
Cable with ADB is definitely the easiest deployment option.
DO:
Use stylized art assets, the display is nicer than the Go but you will still lose detail.
Use the lightweight/universal render pipeline for performance.
Use OVR or Unity’s new XR input (see the sketch after this list). The legacy stuff is a bit of a chore.
DON’T:
Use HDR. Due to the binned rendering architecture it doubles the workload of the GPU.
Go crazy with transparency (layered transparency and overdraw are especially painful).
Neglect memory; 11MB of cache and ~2.75GB of available shared memory can be tight.
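On the input point in the DO list, here’s a minimal sketch of what polling a controller through Unity’s new XR input looks like. It’s purely illustrative (the class name and thresholds are my own), not code from the original post:

```csharp
using UnityEngine;
using UnityEngine.XR;

// Hypothetical example of Unity's newer XR input path: poll the right-hand
// controller each frame instead of going through the legacy input manager.
public class XRInputExample : MonoBehaviour
{
    void Update()
    {
        InputDevice rightHand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        if (!rightHand.isValid)
            return;

        // Analog trigger (0..1).
        if (rightHand.TryGetFeatureValue(CommonUsages.trigger, out float triggerValue)
            && triggerValue > 0.1f)
        {
            Debug.Log($"Right trigger: {triggerValue:F2}");
        }

        // Primary button (A on the right Touch controller).
        if (rightHand.TryGetFeatureValue(CommonUsages.primaryButton, out bool pressed) && pressed)
        {
            Debug.Log("Primary button pressed");
        }
    }
}
```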
Good luck with that. Since the Quest came out (actually, slightly before), Oculus has gotten very difficult to work with. They don’t care about indie devs much. I used to be a huge Oculus fan, and I still dig their tech, but as a company, well… maybe you’ll have better luck than me.
I think their attention is probably just split in too many directions. On my last project we had a large tech client that was partnering with a large entertainment franchise, both billion dollar companies, and there were still some communication issues with Oculus.
All is going well with the old Oculus stuff, but Unity’s new XR stuff fails to get the floor height sorted for some reason… And I’m not supposed to use the XRNode stuff according to dev comments, so I’m just stuck manually adjusting things to get the correct floor height. Seems a bit odd!
OVR handles that stuff pretty easily, though parts of OVR are definitely a pain with Unity. Using the XR API, you should be able to get the device’s TrackingOriginMode and TrackingSpaceType, as well as the devicePosition relative to the origin mode. From there you can calculate your desired offset based on whether the user is stationary or room-scale.
It’s definitely cumbersome compared to OVR, but I suppose it does give a little more control over how to handle all possible states.
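For anyone following along, here’s a rough sketch of the query side described above, assuming Unity’s built-in XR namespace (not code from the thread): it just logs the current tracking space type and the head position reported relative to that origin.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Illustrative only: log the tracking space and the head position reported
// relative to that origin, so you can see where the floor actually is.
public class TrackingStateProbe : MonoBehaviour
{
    void Update()
    {
        // RoomScale => origin on the guardian floor; Stationary => origin at the last recenter.
        TrackingSpaceType space = XRDevice.GetTrackingSpaceType();

        InputDevice head = InputDevices.GetDeviceAtXRNode(XRNode.Head);
        if (head.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 headPos))
        {
            Debug.Log($"Tracking space: {space}, head at {headPos} relative to origin");
        }
    }
}
```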
Sorry to be a bother, but can you give a hint how I should be using TrackingOriginMode? I’m having trouble with that.
It’s really confusing why it doesn’t just work. I have a char controller going and expected that the camera inside would simply treat local 0,0,0 as floor - so if I put the controller down on the floor, it would be at my feet. Instead it’s either above or below ground, depending on the headset’s starting height when it is run.
(char controller is offset so base is at feet)
Any tips would be really helpful. I am going to play with devicePosition next, though that’s confusing too!
TrackingOriginMode lets you determine the origin of the relative position reported by devicePosition. If the TrackingOriginMode is Floor (usually the case in room-scale mode), the floor should be at world origin (0, 0, 0). If yours is changing between sessions, my guess is that you might be using a stationary guardian, in which case your TrackingOriginMode would be Device. This puts your world origin relative to some previous moment in time, usually a display recenter event.
You can try to force the device to use roomscale by calling SetTrackingSpaceType(TrackingSpaceType.RoomScale), and if your guardian is set up for roomscale, it should use the floor as world origin.
If you want to handle both stationary and room-scale experiences (sitting and standing, for example), you would have to do some additional work to get the height right, involving estimating or asking for the player’s height.
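Put together, that advice could look something like the sketch below; the class, the cameraOffset transform and the assumed eye height are all placeholders of mine, not anything from OVR or this thread:

```csharp
using UnityEngine;
using UnityEngine.XR;

// Placeholder implementation of the approach above; attach to the rig root.
// "cameraOffset" is assumed to be the parent transform of the tracked camera.
public class RigOriginSetup : MonoBehaviour
{
    [SerializeField] Transform cameraOffset;          // parent of the tracked camera
    [SerializeField] float assumedEyeHeight = 1.6f;   // only used in the stationary fallback

    void Start()
    {
        // Returns false if the guardian isn't set up for room-scale.
        bool roomScale = XRDevice.SetTrackingSpaceType(TrackingSpaceType.RoomScale);

        if (roomScale)
        {
            // Floor mode: the headset already reports its real height above the
            // floor, so the rig needs no extra offset.
            cameraOffset.localPosition = Vector3.zero;
        }
        else
        {
            // Stationary/device mode: the head is reported near (0,0,0) after a
            // recenter, so raise the camera's parent by an estimated (or
            // asked-for) eye height to put the eyes above the rig's floor.
            cameraOffset.localPosition = new Vector3(0f, assumedEyeHeight, 0f);
        }
    }
}
```

The room-scale branch needs no offset because the headset reports its real height above the floor; the fallback only matters when SetTrackingSpaceType refuses RoomScale.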
@hippocoder oh man, I feel your pain. I made a really simple VR sandbox to play with some of Unity’s XR stuff. Part of that was copying out a script from the now-missing VR template they used to have. You can see the code here…
That manages the height for you, sets the tracking space, etc.; it worked for me on a Rift S at the time.
If you grab the project you can also see how I set up an XR rig and so on, and I’d just started to implement some really basic locomotion; no idea if that’s the best or ‘right’ way.
I’ve since changed over to a Quest. I had just purchased the Rift S and was able to return it, since I figured that with the upcoming Oculus Link, and confirmation that Unity and Unreal will support the play-preview feature, we get the best of both worlds.
After finishing my first VR project in Unity I’m playing with Unreal again right now to see which is the lesser evil, so I’ve not tried my template with the Quest, just so you’re aware.
Unreal went pretty well; they have a solid setup to be fair, and things just work, but it can’t beat Unity’s productivity. Blueprints didn’t feel as fun as they looked compared to C# code.
Going to have to roll up the sleeves and battle through with Unity and wait for the Unity cavalry to come and resolve a few pressing things; hoping for that sooner rather than later!
There are some genuine outstanding issues with UniversalRP; I would not recommend anyone use it for Quest/Go development just yet. See this information coming straight from the Unity devs here too:
If you have a look into the Oculus scripts provided in the Oculus Integration package, there are some bits of code unfinished, some bits broken/buggy, and lots of places where they have commented ‘to fix later’ or ‘it should be using this XR API but we must support Unity 5.x.x’ and similar. I would strongly encourage, like Joe said, just starting simple: use the Unity XR Legacy Input Helpers with tracked pose drivers and get a simple rig going.
I started looking at how the OVR Player controllers were written and very quickly was able to make my own version that was nowhere near as buggy and also more bespoke to the kind of app I am working on. If you need any tips, feel free to drop me a message.
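To illustrate the “simple rig with tracked pose drivers” suggestion above, here’s a hedged sketch assuming the XR Legacy Input Helpers package is installed; the component can just as easily be added by hand in the Inspector, and the class name is my own:

```csharp
using UnityEngine;
using UnityEngine.SpatialTracking; // TrackedPoseDriver, from the XR Legacy Input Helpers package

// Illustrative setup script: give the camera a TrackedPoseDriver so the headset
// drives its position and rotation, which is all a minimal rig needs.
public class SimpleXRRigSetup : MonoBehaviour
{
    void Awake()
    {
        Camera cam = GetComponentInChildren<Camera>();
        var driver = cam.gameObject.AddComponent<TrackedPoseDriver>();

        // Track the HMD's centre-eye pose.
        driver.SetPoseSource(TrackedPoseDriver.DeviceType.GenericXRDevice,
                             TrackedPoseDriver.TrackedPose.Center);
        driver.trackingType = TrackedPoseDriver.TrackingType.RotationAndPosition;
        driver.updateType = TrackedPoseDriver.UpdateType.UpdateAndBeforeRender;
    }
}
```

Controller anchors can get their own drivers the same way, using DeviceType.GenericXRController with TrackedPose.LeftPose / RightPose.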
From what I see, we need to wait for the Universal Render Pipeline team to implement the following:
MSAA
FFR
Vulkan
Single pass (multiview is still slower I guess, until Vulkan)
So that’s a huge amount of work. But the Quest is the hottest-selling VR unit, and sales are accelerating; it looks like it will actually go past Facebook’s projection of 1.2m devices per year. I guess VR + mobile (Adreno) will need a lot of TLC / priority put behind it, as it magnifies the problems everyone is facing.
Any ETA on the XR graphics slide? I want to do some serious dev. Thanks!
I know this isn’t productive but… I’m REALLY surprised this isn’t sorted out. Does Unity have a tiny team on this or something??? If so, that’s confusing, isn’t it?
No ETA from them, but I was there at Unite Copenhagen asking everyone I could about it, just to make the issues clear to people on the teams. It seems they know about it themselves.