Precise camera specifications

I’m new to Unity so please forgive me if this is documented somewhere obvious (I haven’t found it yet).
The first thing I thought I would do is set up a stereoscopic camera. I use dual-display, polaroid-based stereoscopic projection: two projectors, each attached to one port of a dual-display graphics card. So all one needs to do is draw the left eye to one half of an 8:3 (2 x 4:3 aspect) display and the right eye to the other half of the double-width display.

There are two ways of doing this: the slightly wrong “toe-in” approach, where the cameras are rotated inwards to converge at zero parallax, and the correct off-axis frustum. I’ve implemented both, and they work in principle but not in the details; the problem arises from my seeming inability to create a precise camera frustum.
In the toe-in method I simply have two cameras attached to my first person controller, separated and angled inwards by an appropriate amount. I have a 60 degree perspective camera; is that vertical or horizontal aperture, by the way? Whichever it is, how do I set the other aperture? I need to be able to reliably set the aspect ratio. I have also totally failed to adjust the normalised viewport rectangle to center each camera frustum on its half of an 8:3 aspect window. Any hints welcome.
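In case a concrete example helps, here is roughly what my toe-in rig looks like in script form. This is only a sketch of my own setup, not anything canonical: the component and field names are mine, and the separation and angle values are arbitrary placeholders.

```csharp
using UnityEngine;

// Rough sketch of the toe-in rig described above. The eye separation and
// toe-in angle are placeholder values; fieldOfView is assumed to be the
// vertical aperture (see the reply below).
public class ToeInStereoRig : MonoBehaviour
{
    public Camera leftEye;
    public Camera rightEye;
    public float eyeSeparation = 0.065f;  // world units between the two cameras
    public float toeInAngle = 1.5f;       // degrees each camera is rotated inwards

    void Start()
    {
        SetupEye(leftEye,  -0.5f * eyeSeparation,  toeInAngle, new Rect(0f,   0f, 0.5f, 1f));
        SetupEye(rightEye,  0.5f * eyeSeparation, -toeInAngle, new Rect(0.5f, 0f, 0.5f, 1f));
    }

    void SetupEye(Camera eye, float xOffset, float yaw, Rect viewport)
    {
        eye.transform.localPosition = new Vector3(xOffset, 0f, 0f);
        eye.transform.localRotation = Quaternion.Euler(0f, yaw, 0f);

        eye.fieldOfView = 60f;  // aperture in degrees
        eye.aspect = 4f / 3f;   // each half of the 8:3 window should be 4:3
        eye.rect = viewport;    // normalised viewport: left half / right half
    }
}
```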
In the parallel camera approach I render to textures from a similar dual-camera FP controller, then set up two more orthographic cameras, each looking at a plane carrying one of the two textures. By adjusting the horizontal position of these two orthographic cameras one can achieve the same effect as an off-axis frustum. Anyway, I seem to have the same issues: I can’t get the two images correctly aligned on each half. Part of the problem in this case is the meaning of orthographic size, which I thought meant that a 1-unit-wide object would fill the orthographic camera viewport if the orthographic size were 1 … it seems to be more complicated than that.
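For what it’s worth, my current (possibly wrong) understanding is that orthographic size is half the vertical extent of the view volume in world units, with the horizontal extent being that size multiplied by the aspect ratio. The sketch below is the sort of thing I am trying; the plane dimensions and horizontal offset are placeholders of my own.

```csharp
using UnityEngine;

// Sketch of sizing an orthographic camera so a textured quad exactly fills
// its viewport, assuming orthographicSize is half the *vertical* height of
// the view volume in world units. All field values are placeholders.
public class FitOrthoCameraToPlane : MonoBehaviour
{
    public Camera orthoEye;
    public float planeWidth = 1.3333f;   // world-space width of the textured quad
    public float planeHeight = 1f;       // world-space height of the textured quad
    public float horizontalOffset = 0f;  // sideways shift standing in for the off-axis frustum

    void Start()
    {
        orthoEye.orthographic = true;

        // Half the vertical extent, in world units, so the quad fills the view top to bottom.
        orthoEye.orthographicSize = 0.5f * planeHeight;

        // Match the aspect so the quad also fills the view left to right
        // (the camera sees 2 * orthographicSize * aspect world units horizontally).
        orthoEye.aspect = planeWidth / planeHeight;

        // Sliding the camera sideways shifts the image within its half of the
        // window, which is what stands in for the off-axis frustum here.
        orthoEye.transform.localPosition += new Vector3(horizontalOffset, 0f, 0f);
    }
}
```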

Vertical one.

It’s not exposed through the GUI, but from script you can assign camera.aspect or a full projection matrix (camera.projectionMatrix).
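For example, something along these lines. The frustum bounds here are placeholders you would compute from your physical screen geometry and eye position; the off-center helper is just the standard asymmetric-frustum (glFrustum-style) math written out by hand.

```csharp
using UnityEngine;

// Driving the camera from script: fix the aspect and/or hand it a complete
// off-axis (off-center) projection matrix. The frustum bounds below are
// placeholders; derive them from your screen and eye geometry.
public class OffAxisProjection : MonoBehaviour
{
    public Camera cam;
    public float left = -0.2f, right = 0.2f;
    public float bottom = -0.15f, top = 0.15f;
    public float near = 0.3f, far = 1000f;

    void LateUpdate()
    {
        cam.aspect = (right - left) / (top - bottom);
        cam.projectionMatrix = PerspectiveOffCenter(left, right, bottom, top, near, far);
    }

    // Standard asymmetric-frustum projection matrix (same maths as glFrustum).
    static Matrix4x4 PerspectiveOffCenter(float l, float r, float b, float t, float n, float f)
    {
        Matrix4x4 m = new Matrix4x4();  // starts as all zeros
        m[0, 0] = 2f * n / (r - l);
        m[0, 2] = (r + l) / (r - l);
        m[1, 1] = 2f * n / (t - b);
        m[1, 2] = (t + b) / (t - b);
        m[2, 2] = -(f + n) / (f - n);
        m[2, 3] = -2f * f * n / (f - n);
        m[3, 2] = -1f;
        return m;
    }
}
```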

I have a question regarding the specification of the orthographic camera size parameter. I have searched the docs and the forum, and pbourke’s post seems to come the closest to my inquiry; so at the risk of being rude, I thought it best to piggy-back my question here.

The question being: exactly what does the size parameter for the orthographic camera specify?

Is it a sort of internal reference relative to itself, like the Rigidbody mass parameter, or is it based on some external reference?

The reason I am asking is that our game system uses an overhead projector to display the virtual world on a large projection screen (typically 3.5m x 4.5m), and then uses our MicroSight tracking cameras to measure the velocity, point of contact, etc. of a projectile fired at the screen. (The basic implementation is golf simulation; we are now expanding into video gaming that is not necessarily sports simulation.) For the perspective camera, we have a trigonometric formula that lets us say that an object in the virtual world which is, for example, 1m x 1m will show up on the screen as 1m x 1m. However, the perspective camera introduces an apparent distortion as objects move away from the screen in the virtual world: an object traveling along the camera’s Z axis (away from the screen), but not at the very center of the screen, appears to migrate toward the center of the screen. This is the practical effect of the vanishing point on objects traveling away from the screen, a bit like the refraction of images through a tank of water, all relative to the vanishing point. Because of this distortion we have begun experimenting with the orthographic camera.
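For reference, the kind of relationship I mean, written out for an idealized symmetric perspective frustum; this is my own back-of-the-envelope version with placeholder names, not anything from the Unity API.

```csharp
using UnityEngine;

public static class PerspectiveScale
{
    // For a symmetric perspective frustum, the world-space height visible at
    // distance d from the camera is 2 * d * tan(fov / 2). An object h world
    // units tall at that distance therefore appears on a physical screen of
    // height H meters as h * H / (2 * d * tan(fov / 2)), so the mapping is
    // exactly 1:1 (with 1 world unit == 1 meter) when d = H / (2 * tan(fov / 2)).
    public static float OneToOneDistance(float screenHeightMeters, float verticalFovDegrees)
    {
        return screenHeightMeters / (2f * Mathf.Tan(0.5f * verticalFovDegrees * Mathf.Deg2Rad));
    }
}
```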

To maintain that virtual-world size = screen size relationship, I would like to develop a formula that says, “if the projection screen is x meters high, then the orthographic camera should be of size y, so that an object that measures 1m x 1m in the virtual world also measures 1m x 1m on screen.” I am not asking for the formula; I am just asking whether the orthographic camera size is an internal reference, or whether it references some real-world values.
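To make the question concrete, this is the relationship I am hoping holds, on the assumption that the size parameter is half the vertical extent of the view in world units; the screen height value is just an example.

```csharp
using UnityEngine;

// Sketch of the mapping I am after, assuming orthographicSize is half the
// vertical height of the view volume in world units (the camera would see
// 2 * size world units top to bottom, stretched over the full screen).
public class MatchOrthoToScreen : MonoBehaviour
{
    public Camera orthoCam;
    public float screenHeightMeters = 3.5f;  // physical projection screen height

    void Start()
    {
        orthoCam.orthographic = true;

        // With 1 world unit == 1 meter, a screen x meters high would need
        // size = x / 2 for a 1m x 1m virtual object to show up as 1m x 1m.
        orthoCam.orthographicSize = 0.5f * screenHeightMeters;
    }
}
```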

Thanks in advance.