I’ve done it slightly differently to jvo3dc, but I think the result is the same. To illustrate the math, here’s a masterpiece:

The first thing to do is to simplify the object of interest to a sphere. The easiest way to get a suitable radius is bounds.extents.magnitude, which is the distance from the centre of the bounding box to one of its corners.
The angle, theta, is half of the camera’s field of view. r is the radius of the sphere. Note how, by fitting the sphere tightly to the frustum, the tangent to the sphere is also the edge of the frustum and so is perpendicular to a line drawn from the sphere centre to the contact point, which gives us a right triangle. We then just need to remember SOHCAHTOA. Here we want to find h, the hypotenuse, and we have r, the opposite side, so we need the SOH part. So:
sin(theta) = o/h
sin(fov/2) = r/h
h = r / sin(fov / 2)
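As a quick numeric sanity check of that formula, outside Unity (the function name and the sample values here are mine):

```python
import math

def min_distance(radius, fov_deg):
    # h = r / sin(fov / 2), converting the FoV to radians first
    return radius / math.sin(math.radians(fov_deg) / 2.0)

# Unit sphere, 60 degree FoV: sin(30 deg) = 0.5, so h = 2.0
print(round(min_distance(1.0, 60.0), 6))  # → 2.0
```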
Don’t forget that the Mathf trig functions work in radians, but Unity’s fieldOfView is in degrees (internally it’ll use radians, I expect).
The code I have for this then is:
const float margin = 1.1f;              // 10% of breathing space around the object
float maxExtent = b.extents.magnitude;  // radius of the sphere enclosing bounds b
float minDistance = (maxExtent * margin) / Mathf.Sin(Mathf.Deg2Rad * Camera.main.fieldOfView / 2.0f);
Camera.main.transform.position = Vector3.back * minDistance;
Here margin gives us a bit of breathing space, and maxExtent is the radius of the sphere that encloses the object’s bounding box, b. I just place the camera along the negative z axis, far enough back to fit the object in view.
A few things to note: the field of view is the vertical FoV by default, so if you have a portrait view you’ll need to use the horizontal FoV instead. You also need to make sure your near clip plane doesn’t clip the object. I’m using an orbit camera (hence the variable name minDistance), so I can set my near clip plane to:
Camera.main.nearClipPlane = minDistance - maxExtent;
Which gives me maximum precision in my depth buffer by not allowing the camera any closer than minDistance.
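The whole recipe boils down to a couple of lines of arithmetic; here’s a sketch in plain Python mirroring the Unity code above (the function name and sample values are mine):

```python
import math

def fit_sphere(radius, fov_deg, margin=1.1):
    """Camera distance needed to fit a sphere in view, plus a safe near clip plane."""
    # Distance from the sphere centre: h = (r * margin) / sin(fov / 2)
    min_distance = (radius * margin) / math.sin(math.radians(fov_deg) / 2.0)
    # The sphere's surface never gets closer to the camera than this
    near_clip = min_distance - radius
    return min_distance, near_clip

d, near = fit_sphere(1.0, 60.0)  # sin(30 deg) = 0.5: d = 2.2, near = 1.2
```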