Hey folks!
I’ve been using Unity 2.6 (haven’t upgraded yet) with a 120 Hz projector to create some pretty good looking 3D effects. The projector uses an active shutter 3D technology, so I’m essentially running a build out of Unity with two separate cameras that alternate every 1/120th of a second. I figured I’d share my current setup (and a strange problem) and see if anyone has suggestions for a better method.
I’ve never done much work getting Unity to run at a constant framerate (especially not 120 fps), so instead I put the camera switch function in FixedUpdate and set the physics timestep to 0.00833334 (which is as close to 1/120th of a second as rounded decimals get me). The cameras don’t actually enable and disable: the left camera is always on, and the right camera renders on top with a normalized viewport rect that toggles between the entire screen and none of the screen.
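To make that concrete, here’s a rough sketch of the switching logic (this is an illustration, not my exact code; the class and field names are placeholders, and I’m assuming the script has references to both cameras):

```csharp
using UnityEngine;

public class ShutterSwitch : MonoBehaviour
{
    public Camera leftCam;   // always enabled, renders the full screen
    public Camera rightCam;  // renders on top; viewport toggled each step

    private bool showRight = false;

    void Start()
    {
        // 1/120 s, rounded: same value as the physics timestep in the
        // project settings.
        Time.fixedDeltaTime = 0.00833334f;
        // Make sure the right camera draws over the left one.
        rightCam.depth = leftCam.depth + 1;
    }

    void FixedUpdate()
    {
        showRight = !showRight;
        // Full-screen normalized viewport on "right eye" steps,
        // zero-size viewport otherwise.
        rightCam.rect = showRight
            ? new Rect(0f, 0f, 1f, 1f)
            : new Rect(0f, 0f, 0f, 0f);
    }
}
```

The catch, as described below, is that FixedUpdate runs on the physics clock, not the render clock, so nothing guarantees the viewport flip lines up with a finished frame.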
On a small scale, the setup seems to work pretty well. Without 3D glasses, the image on the projector looks like a double image, but with the shutter glasses the illusion is pretty nice. However, it only stays smooth when I’m running at really low resolutions (640 x 480 is pretty consistent). The higher the resolution, the quicker the image starts tearing; at lower resolutions the same tearing eventually appears, just more slowly. My best guess is that the camera doesn’t fully finish rendering the scene before switching back to the previous camera, so half of the screen shows the left camera’s view while the other half shows the right camera’s view. The effect starts to show up after the build has been running for a short time and slowly gets worse. I initially thought it was a performance issue, so I added a 15k-polygon object with a 2048 texture to the otherwise very simple scene, with a hotkey to show and hide it. When there isn’t any tearing, showing the object sometimes starts to create tearing, while hiding it removes the tearing again. From this test, it seemed like I was adding too much to the scene for the camera to keep up.
However, I noticed something that confused me. When the build sits for a bit with nothing major in the scene and tearing starts to occur, showing the complex model actually resolves it and removes any weird graphical oddities. Hiding the object then introduces the tearing again. It almost seems like the camera rendering is falling out of sync with the camera switching (like FixedUpdate is switching cameras in the middle of the camera render), and changing objects in the scene changes the timing of the renderer.
I’m not very knowledgeable about the inner workings of Unity’s renderer, but maybe someone has a few ideas on how to improve what I have, or a method for resolving the graphical tearing. Ideally, I’d like to take the camera change out of FixedUpdate so I can drop this solution into a project that uses physics without overtaxing it, but for now I’m not sure how consistent another method would be.