Let’s say I have a camera set up in Unity at a specific location and I have the intrinsic parameters of this camera. How do I go about converting this intrinsic camera matrix into a unity projection matrix?
(I found some opengl references to the camera frustum but the references were inconsistent and didn’t achieve what I wanted anyway.)
What exact intrinsic parameters do you have? Note that a virtual camera doesn’t really have a physical focal length, and it has no lens distortion at all; everything is just a matter of scale. The focal length is often interpreted as the near clipping plane distance, which can be used to calculate the FOV angle (or the other way round). In my matrix crash course I explain the different values of a projection matrix.
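As a sketch of that focal-length-to-FOV relationship: if your calibration gives a focal length fy in pixel units, the vertical FOV only depends on the ratio of fy to the image height (the names and the pinhole assumption here are mine, for illustration):

```python
import math

def vertical_fov_degrees(fy: float, image_height: float) -> float:
    """Vertical field of view implied by a pixel-unit focal length fy.

    Assumes an ideal pinhole model: half the image height and fy form a
    right triangle whose angle is half the vertical FOV.
    """
    return math.degrees(2.0 * math.atan(image_height / (2.0 * fy)))

# Example: fy = 1000 px on a 720 px tall image -> roughly 39.6 degrees
print(vertical_fov_degrees(1000.0, 720.0))
```

This is the same number you would put into Unity’s `Camera.fieldOfView`, which also expects the vertical angle in degrees.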
Keep in mind that a virtual camera is bound to the screen size. In most cases you specify the desired FOV angle (usually the vertical angle) as well as your desired near and far clipping planes. Changing the near and far planes does not change the actual view of the camera, just where the frustum clips the scene. Note that a near clipping plane that is too small, or a far clipping plane that is too large, will destroy your depth buffer resolution. So make the near plane as large as possible and only as small as necessary, and the far plane as small as possible and only as large as necessary.
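A quick sketch of why the near plane matters so much, using the standard OpenGL-style depth mapping (which a perspective projection matrix produces; the function and values here are illustrative):

```python
def ndc_depth(d: float, n: float, f: float) -> float:
    """Map an eye-space distance d (n <= d <= f) to OpenGL NDC depth in [-1, 1].

    This is the third row of a standard perspective projection after the
    perspective divide: ndc = (f + n)/(f - n) - 2*f*n/((f - n) * d).
    """
    return (f + n) / (f - n) - 2.0 * f * n / ((f - n) * d)

# Tiny near plane: almost the whole scene is squeezed into a sliver near 1.0
print(ndc_depth(10.0, 0.01, 1000.0))   # ~0.998
print(ndc_depth(100.0, 0.01, 1000.0))  # ~0.9998
# A larger near plane spreads the depth values out much better
print(ndc_depth(10.0, 1.0, 1000.0))    # ~0.80
```

With near = 0.01 everything beyond 10 units has to share a fraction of a percent of the depth range, which is exactly the z-fighting scenario described above.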
Note that a virtual camera is a linear projection of the 3D scene. Non-linear parameters such as lens distortion cannot be applied through a matrix. If you need any further help you have to be much more specific about your case: what exact values do you have, and what exactly do you want to achieve? Unity’s world space is a linear space, and a virtual camera does a linear projection onto the screen (either orthographic or perspective).
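To tie this back to the original question: for an ideal, distortion-free pinhole calibration, the intrinsics can be turned into an OpenGL-style projection matrix, which is the convention Unity’s `Camera.projectionMatrix` uses. The sketch below (in Python purely for illustration) assumes fx, fy, cx, cy are in pixels, w and h are the image size in pixels, and one common OpenCV-to-OpenGL axis convention; the signs of the principal-point terms depend on how your calibration defines its image axes:

```python
def projection_from_intrinsics(fx, fy, cx, cy, w, h, near, far):
    """OpenGL-style 4x4 projection matrix from pinhole intrinsics.

    Camera looks down -Z, NDC ranges over [-1, 1] on all axes.
    The principal-point offsets (cx, cy) shift the frustum off-center;
    a centered calibration (cx = w/2, cy = h/2) gives zeros there.
    """
    return [
        [2*fx/w, 0.0,     1.0 - 2*cx/w,               0.0],
        [0.0,    2*fy/h,  2*cy/h - 1.0,               0.0],
        [0.0,    0.0,    -(far + near)/(far - near), -2*far*near/(far - near)],
        [0.0,    0.0,    -1.0,                        0.0],
    ]

def project(m, x, y, z):
    """Apply m to the eye-space point (x, y, z, 1) and perspective-divide."""
    clip = [sum(row[i] * v for i, v in enumerate((x, y, z, 1.0))) for row in m]
    return [c / clip[3] for c in clip[:3]]

m = projection_from_intrinsics(1000.0, 1000.0, 640.0, 360.0, 1280.0, 720.0, 0.1, 1000.0)
print(project(m, 0.0, 0.0, -0.1))  # on-axis point on the near plane -> NDC z close to -1
```

In Unity you would copy these 16 values into a `Matrix4x4` and assign it to `Camera.projectionMatrix`; since this is a plain linear projection, any lens distortion from the calibration has to be ignored or handled in a post-effect.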