Different content in each eye

Hi all,

We’ve been using the standard Oculus plugin for a while, and aside from the juddering problems caused by the most recent Oculus runtimes, we’ve had no issues with it.

We’re keen to move to Unity 5.1, but one thing we’ve come up against while looking to move is that there are no longer two cameras in the editor view. As such, setups where we play different content to each eye (stereo video, for example) no longer work.

Is there an obvious way to get around this that we’re missing at the moment? There must still be a way to talk to the different eye cameras independently?

Thanks in advance for any help!

SB

Hey,

We are aware that there is currently no way to draw different things to each eye. We have a shader-level solution that we’re working on: we will set a specific uniform shader variable to the eye index (if that variable exists in the shader), and your shader can then draw differently based on its value. We’d like to know whether this is sufficient for your purposes, or whether you need something higher-level?
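For concreteness, a fragment shader branching on such an eye-index uniform might look like the sketch below. The uniform name `_EyeIndex` and the 0/1 convention are assumptions for illustration only; the actual built-in variable name hasn’t been announced.

```hlsl
// Hypothetical: branch on a per-eye uniform to sample the correct
// half of a top/bottom stereo video texture.
sampler2D _MainTex;
uniform float _EyeIndex;   // assumed convention: 0 = left eye, 1 = right eye

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

fixed4 frag (v2f i) : SV_Target
{
    // Left eye samples the top half, right eye the bottom half.
    float2 uv = i.uv;
    uv.y = 0.5 * uv.y + (_EyeIndex < 0.5 ? 0.5 : 0.0);
    return tex2D(_MainTex, uv);
}
```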

I think it would be very useful if this was available as a Unity function, so that we could detect in OnPostRender which eye was actually rendering.

@thep3000 So in the case of stereo video, would we create two shaders, one for the left eye and one for the right? And in the shader code we would essentially turn rendering on or off based on the uniform constant? I guess there are also other approaches depending on the details; I’m just trying to understand how we would use this shader-level approach. Currently I just use layers and set the culling mask on each camera, as described here…
http://bernieroehl.com/360stereoinunity/

Hey thep3000,

Maybe…

We produce cross-platform applications on Oculus, iOS and Android (in various forms), a chunk of which are stereo-rendered movies. The format of these movies is L/R eyes in a top/bottom split. We then map these onto spheres in our scene. With the old Oculus SDK, we simply changed the layers each camera could see, so that each eye saw the right section of the video on the appropriate sphere.

Would the shader approach allow us to do something similar? We really don’t want to have to hack through all our old code to switch to a completely different methodology…

We need higher-level. We need layers with Left and Right; this is how we do stereo movies. We have one video file with side-by-side frames, and two objects in the same place, one for each eye, with offset UVs.
We use different video decoding shaders across 4 platforms. We are not going to hack 4 different 3rd-party shaders to support stereo rendering flags.
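For what it’s worth, the side-by-side offset-UV setup described above can be sketched without touching the decoding shaders at all, by adjusting material tiling/offset from C#. The field names and layer assignments here are illustrative, not from any actual project:

```csharp
using UnityEngine;

// Sketch: show the left and right halves of a side-by-side stereo
// video texture on two co-located quads, one per eye.
public class SideBySideStereo : MonoBehaviour
{
    public Renderer leftEyeQuad;   // on a layer only the left-eye camera renders
    public Renderer rightEyeQuad;  // on a layer only the right-eye camera renders

    void Start()
    {
        // Each quad samples half of the texture horizontally.
        leftEyeQuad.material.mainTextureScale   = new Vector2(0.5f, 1f);
        leftEyeQuad.material.mainTextureOffset  = new Vector2(0f, 0f);

        rightEyeQuad.material.mainTextureScale  = new Vector2(0.5f, 1f);
        rightEyeQuad.material.mainTextureOffset = new Vector2(0.5f, 0f);
    }
}
```

Note this only shifts UVs via the default material texture transform; shaders that ignore the main texture scale/offset would still need their own handling.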

2 Likes

I also need something higher-level: some way to draw different elements to each eye, whether by moving their position for each eye, or a layer, or something else. I did try just changing an object’s position in the pre-render, but this did not seem to create a difference between the two eyes.

1 Like

@thep3000 Reiterating what others here have said. Our approach is also to use separate layers for each eye. So being able to set the culling mask for the different eye cameras would be ideal. Using shaders would only be partially successful for us.

Hi,

We will bring back an option to have multiple cameras and target each one at a separate eye. We will also support the shader eye index as mentioned. I’m aware this is high priority and will see if we can get multiple camera support into a patch release.

9 Likes

Today I tried to port our main VR application to Unity 5.1. I couldn’t find the render-mask option for each eye and found this topic via Google.

I hope this can be integrated really soon. We are one of the biggest companies in the Netherlands delivering a VR application for real estate companies, and our application can’t be used with 5.1 since it relies mostly on a culling mask per eye. For now I’ll keep using 4.6 until a render mask per eye is possible in the 5.x cycle.
But it’s good to see that you guys have noticed it and are working on a solution.

@JDMulti , You can still do this with 5.x … you just have to stick with your third party integration instead of using the new built-in VR support.

Yes, I know, but the whole point of moving to 5.x was the built-in VR support. You’re right though, 5.x is still usable, just with the SDKs. :wink:

Except that, according to Oculus, the old Unity integration is now legacy and will be discontinued at some point in the future. So workarounds like that won’t keep working for long.

This situation is exactly what I feared with ‘first party’ VR support being integrated into Unity. It’s a completely closed-off system, meaning that without using individual product SDKs and writing your own Unity wrappers, we are now at the whims of the Unity release/patch schedule to fix oversights such as not supporting stereoscopic video. Worse, with VR being such a fast-paced and mostly non-standardized field, I foresee Unity VR easily lagging behind hardware and software updates, not to mention it being impossible for developers to address bugs themselves (for example, last year’s mess with chromatic offsets in one of Oculus’s releases, where I believe people eventually hex-edited DLLs to fix it).

This is one area where I wish UT had made the first-party support a pure plugin and released the source on BitBucket, like they’ve done with the Unity UI. That would at least allow developers some ability to fix things themselves.

Before that happens, I hope they implement support for culling masks per eye. At the moment we really rely on this feature for our VR application, which sells like crazy, and I can’t bear to think about losing it before support for the SDKs drops. It would be a total disaster. The crisis in Europe hit us hard and VR is pulling us out of it; sounds strange, but it really is. Without it… no idea.

Is it true that features in patch releases are not on the Unity roadmap? It would be nice to have a tab for patches as well, besides the major releases. Just an idea.

I expect it will work long enough to hold people over until Unity gets support for this built in. But my point was that you don’t have to switch back to 4.6 as the same old integration still works in 5.x.

1 Like

Oh, I agree with your point, but mine was that it’s not going to be long before the legacy integration is too outdated itself to be practical as a workaround for issues like this, hence wishing that Unity’s VR support would be opened up.

We had a strategy for rendering one view to the Oculus and another view to the main monitor. We achieved this by rendering an ultra-wide view, so that half of it appears on your desktop and half on the VR headset. It appears you can no longer do this with the Unity VR integration, since it takes over all aspects of your camera. Can we get support for achieving our goal? Here’s an article I wrote after GGJ2015 which explains our setup and how we achieved it.

Another great use case that an increasing number of games and experiences are exploring. While it could be argued that in some cases networked machines (or even just networked Unity instances on the same machine) would solve this, in the example shown, where so much content/data is clearly shared by both views, it would definitely be nice to get native support for such a feature so it can run on a single machine.

However, I’d argue that rather than supporting the old style (though extended-monitor usage is still quite important), perhaps a better solution would be to leverage the already-supported `VR.VRSettings.showDeviceView`. Instead of showing the left-eye view, the function could be enhanced/overloaded to accept a camera reference or RenderTexture, thus allowing developers to display any content on the monitor screen?

Regarding the original issue: L/R rendering masks are coming in 5.1.2p1 this week. I’ll update with more details when it lands.

Regarding the discussion on displaying different content on the main screen: we have been investigating this. Many of our platforms support some form of “multi-view”, and a standard, cross-platform way of doing this in Unity is being worked on now. This scenario is planned to be supported. I don’t have a timeline, but will update when we know more.

6 Likes

5.1.2p1 was released today, but there was an issue with stereoscopic rendering that was caught too late to patch. So for now it is recommended that users stay on 5.1.2f1.

Rendering different content in each eye is supported in 5.1.2p1 though. If you want to get a jump start using this feature, given the known monoscopic rendering issue, here is a sample project that works with 5.1.2p1+. p2 will officially support this feature, and we’re aiming to have some documentation merged by 5.1.3f1’s release.

Details:

  • Camera now has a drop down for Target Eye: Both, Left, Right
  • If you want to render different objects to different eyes, use two identical cameras, but set the Target Eye of one to Left and the other to Right. You can then use different culling masks on each camera.
  • See attached project for an example.
  • Note that using two separate cameras with different target eyes like this is only recommended for special use cases like those outlined in this thread. You’ll be giving up optimizations by rendering like this.
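Putting the bullet points together, the per-eye culling setup might be wired up like this. The layer names are assumptions, and setting the Target Eye from script is not confirmed here, so the inspector drop-down is treated as the source of truth:

```csharp
using UnityEngine;

// Sketch of the two-camera, per-eye culling setup described above.
// Assumes two user-defined layers, "LeftEyeOnly" and "RightEyeOnly",
// and two identical cameras whose Target Eye drop-downs are set to
// Left and Right respectively in the inspector.
public class LeftRightCulling : MonoBehaviour
{
    public Camera leftEye;   // Target Eye: Left
    public Camera rightEye;  // Target Eye: Right

    void Start()
    {
        // Each camera culls away the layer meant for the other eye.
        leftEye.cullingMask  &= ~(1 << LayerMask.NameToLayer("RightEyeOnly"));
        rightEye.cullingMask &= ~(1 << LayerMask.NameToLayer("LeftEyeOnly"));
    }
}
```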

2216451–147509–VRLeftRightCulling.zip (450 KB)

4 Likes