Hi guys, I am trying to take a stereo video that we shot against a greenscreen and drop it into a 3D scene for mobile VR. We have one video track for the left eye and another for the right eye.
Because we shot against a greenscreen, we need an alpha layer, and we have to apply it with a shader since Unity's video playback can't handle transparency natively.
The first way we tried was simply playing one video track per eye. To avoid any syncing issues, we rendered everything into a single video containing, in quadrants, the left video, left alpha, right video, and right alpha. The video ended up looking like this: http://i.imgur.com/VJYLBWN.png. When played in the scene it would look like this: http://i.imgur.com/B9WLokn.png.
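For reference, this is roughly the kind of unlit shader we use to recombine the quadrant-packed frame. This is a minimal sketch, not our exact shader: the quadrant layout (left-eye color top-left, left-eye alpha top-right, right eye in the bottom half) and the `_EyeIndex` property are assumptions you'd adjust to match your own packing.

```
Shader "Unlit/QuadPackedStereoVideo"
{
    Properties
    {
        _MainTex ("Packed Video (color + alpha quadrants)", 2D) = "white" {}
        _EyeIndex ("Eye (0 = left, 1 = right)", Float) = 0
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float _EyeIndex;

            struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord.xy;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Assumed packing: top half = left eye, bottom half = right eye;
                // left half of each row = color, right half = alpha matte.
                float2 quadUV = i.uv * 0.5;
                float rowOffset = (_EyeIndex < 0.5) ? 0.5 : 0.0; // left eye lives in the top half
                fixed3 color = tex2D(_MainTex, quadUV + float2(0.0, rowOffset)).rgb;
                fixed  alpha = tex2D(_MainTex, quadUV + float2(0.5, rowOffset)).r;
                return fixed4(color, alpha);
            }
            ENDCG
        }
    }
}
```

One material per eye (with `_EyeIndex` set to 0 or 1) on two per-eye layers reproduces the effect in the second screenshot.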
As you can see, doing it this way requires the equivalent of two high-res video tracks and two alpha masks playing at once (in practice, one massive video containing all four). Since we are targeting mobile, we figured there must be a better way, both file-size-wise and performance-wise.
My research led me to disparity maps, and to generating a depth map we could use to simulate depth from a single video track instead. Here is an example of what I am talking about: http://3dstereophoto.blogspot.com/. Basically, I would like to use software to generate a depth map from the disparity between the left-eye and right-eye videos; this can be done in Nuke or a similar program. That would give us a depth video track that looks something like this: http://i.imgur.com/8C4mfEp.jpg.
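For anyone following along, the relationship the disparity software exploits is (assuming a rectified, parallel stereo rig, with f = focal length in pixels and B = the interaxial distance between the two cameras):

```
depth = (f * B) / disparity
```

In other words, depth is inversely proportional to disparity: nearby subjects shift a lot between the two eye views, distant ones barely at all, which is why the resulting greyscale map encodes nearness as brightness.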
My question is: how do I take a greyscale depth-map video and play it over another video in real time, so that the footage is given depth in the scene, instead of cheating the effect by playing one video track per eye? It looks like somebody successfully did this with a 3D photosphere (the only example I found is a YouTube video), but I have not been able to find any documentation on how it was accomplished. It seems the best approach is to apply the depth map to a plane in the scene that the other video is playing on, but I'm not sure how to go about this.
On top of this, is it even possible to apply a depth map and an alpha mask at the same time, so that the background of our depth-mapped subject stays transparent? And would this method actually be any more efficient in-engine than the left-eye/right-eye method? As I said above, I've really struggled to find any resources explaining this. Any help would be hugely appreciated! Thanks!
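In case it helps anyone answer, here is the kind of thing I imagine: a displacement shader that samples the depth quadrant in the vertex stage and pushes the plane's vertices out along their normals, while the fragment stage applies the alpha mask. This is only a sketch of the idea, not a working solution — the packing (color top-left, alpha top-right, depth bottom-left) and the `_DepthScale` property are my own assumptions, and the plane would need to be densely tessellated for the displacement to be visible at all.

```
Shader "Unlit/DepthAlphaVideo"
{
    Properties
    {
        _MainTex ("Packed Video (color / alpha / depth quadrants)", 2D) = "white" {}
        _DepthScale ("Depth Displacement", Float) = 0.3
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 3.0   // tex2Dlod in the vertex stage needs SM 3.0
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float _DepthScale;

            struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

            v2f vert (appdata_base v)
            {
                v2f o;
                // Assumed packing: depth map in the bottom-left quadrant.
                float2 depthUV = v.texcoord.xy * 0.5;
                float depth = tex2Dlod(_MainTex, float4(depthUV, 0, 0)).r;
                // Push each vertex along its normal, scaled by the depth value.
                v.vertex.xyz += v.normal * depth * _DepthScale;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord.xy;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                float2 quadUV = i.uv * 0.5;
                fixed3 color = tex2D(_MainTex, quadUV + float2(0.0, 0.5)).rgb; // top-left quadrant
                fixed  alpha = tex2D(_MainTex, quadUV + float2(0.5, 0.5)).r;   // top-right quadrant
                return fixed4(color, alpha);
            }
            ENDCG
        }
    }
}
```

If something like this works, it would cut the payload to one color track plus one greyscale depth track and one matte (shared by both eyes), instead of two full-resolution color tracks plus two mattes — which is exactly the file-size/performance win I'm after. Does anyone know if this is viable on mobile?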