Manipulate position of follow transform in extension

For our game we needed a camera that follows our player, looking down onto the player at a fixed angle and distance. So we are using a CinemachineVirtualCamera with a Follow target and the Framing Transposer, following our player at an exact distance via the Camera Distance field. There is no LookAt target, so we can define the exact angle of the camera (45° downward) through the rotation of its transform.

We had an additional requirement that for some sections of our game the camera should be locked to either the X or Z axis. E.g. along a long hallway the camera should continue to follow the player as they walk along it from left to right, but be fixed on the Z axis so it always looks at the center of the hallway. For that I wrote a CinemachineExtension that, in PostPipelineStageCallback(), calculates backwards from the Follow target where the final "targetPosition" of the vcam should be, and then during the Body stage modifies state.RawPosition, fixing either the x or z position to that desired "targetPosition". This seemed to work fine.

// Reconstruct where the vcam should end up: start from the Follow position
// with the fixed axis applied, then step backwards along the camera's forward.
var toCameraVector = -vcam.transform.forward * framingTransposer.m_CameraDistance;
var targetPosition = axisFixedFollowPosition + toCameraVector;

// Override only the fixed axis of the vcam's position.
var position = state.RawPosition;
switch (axis)
{
    case Axis.X:
        position.x = targetPosition.x;
        break;
    case Axis.Y:
        position.y = targetPosition.y;
        break;
    case Axis.Z:
        position.z = targetPosition.z;
        break;
    default:
        // InvalidEnumArgumentException requires using System.ComponentModel;
        throw new InvalidEnumArgumentException(nameof(axis), (int)axis, typeof(Axis));
}
state.RawPosition = position;

But now some additional requirements came up, like also keeping the vcam within a certain range along the remaining free axis. E.g. the vcam should be fixed along the Z axis but should also keep its X position between x1 and x2, so as not to go beyond a certain point along the hallway in the example above.
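Concretely, on top of the axis fix from the code above, this new requirement would amount to something like the following sketch (clampMin and clampMax are hypothetical fields of our extension, not Cinemachine API):

    // Fix the Z axis as before, and additionally clamp the free X axis.
    var position = state.RawPosition;
    position.z = targetPosition.z;                             // fixed axis, as before
    position.x = Mathf.Clamp(position.x, clampMin, clampMax);  // new range requirement
    state.RawPosition = position;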

Additionally, my team also requires a different CinemachineExtension that confines the camera to a rectangle (X and Z axes simultaneously), so that it can be used to follow the player inside a room (with black void beyond the walls of the room) without showing too much of that void. E.g. to avoid the situation where the player moves to the right wall of the room and the entire right side of the screen is black void.

I know there is a Confiner extension, but it confines the actual virtual camera to a box collider, not the Follow position the camera looks down onto.

With the current approach shown above, these requirements seem very hard to implement. The way I do it already seems rather backwards, as with toCameraVector I essentially redo part of the calculation the vcam already performs anyway. And it only works because the position so far had to be fixed on exactly one axis, so I could completely override the RawPosition of the moving vcam on that axis. With the new requirements I can't just clamp vcam.Follow.position to a rect, calculate the "targetPosition" backwards and set it as the RawPosition, because this would completely override any damping, dead zone and soft zone of the Framing Transposer.

state.RawPosition = targetPosition;

What I essentially need is access to the Vector3 that is used as the reference for where the camera wants to look (i.e. the player, which is the Follow target, i.e. the yellow dot when previewing the Framing Transposer), so I can manipulate it beforehand in PrePipelineMutateCameraStateCallback(), similar to what already happens with the Tracked Object Offset of the Framing Transposer. This would let me very easily clamp the position the vcam uses for the player to a single axis, or to a defined rect on the floor of a room.
I thought I had found what I need in state.ReferenceLookAt, but this seems to be valid only if the vcam has a LookAt. Given the name, I guess this makes sense, but it sadly doesn't help me, because I can't have a LookAt: I need the fixed 45° downward angle onto the player.

So is there any other way to achieve what I want?

It sounds to me as though you are trying to re-implement work that CM already does.

Have you tried the CinemachineConfiner2D? It will work with a perspective camera, and if you define the confining polygon correctly, it will not allow the camera to pass beyond the zone it defines, even if the follow target does. Is that not what you need to do?

Not exactly. What I need is basically to not allow the Target to leave a certain zone, not the camera, but without actually confining the real Target GameObject to the zone, just the Vector3 the vcam reads from the target.

I actually did find a workaround by manipulating the Framing Transposer's m_TrackedObjectOffset so that the final position the vcam tries to track lies inside my defined zone. This was the only way I found to manipulate the target position Vector3 between reading it from the Target and passing it into the Framing Transposer.

Here is a video and screenshot where we used this to lock the camera to one axis (though this approach generalizes to locking/confining in multiple axes as well).


In essence I do this:

public override void PrePipelineMutateCameraStateCallback(
    CinemachineVirtualCameraBase vcam, ref CameraState state, float deltaTime)
{
    // The region the camera should try to keep its tracking point inside
    Bounds zone = new Bounds(Vector3.zero, Vector3.one);

    Vector3 originalTargetPosition = vcam.Follow.position;
    Vector3 newTargetPosition = zone.ClosestPoint(originalTargetPosition);

    var transposer = ((CinemachineVirtualCamera)vcam)
        .GetCinemachineComponent<CinemachineFramingTransposer>();
    transposer.m_TrackedObjectOffset = newTargetPosition - originalTargetPosition;
}

While I think this is an adequate solution, I somewhat lose the ability to define a fixed Tracked Object Offset (e.g. moving the tracking point slightly up so the middle of the player is tracked rather than their feet). So a more official way to manipulate the target position would be nice to have. We worked around this by adding a new originalOffset field to our extension, which is added to m_TrackedObjectOffset in the last line of the example code above.
We first tried to simply read out m_TrackedObjectOffset in Awake() and save it in originalOffset, so the user could still use the "real" field in the inspector. But we noticed that even with "Save During Play" disabled on the vcam, the value in m_TrackedObjectOffset did not reset after leaving Play mode. So we ended up exposing originalOffset in the inspector, with a warning not to use Tracked Object Offset on the Framing Transposer itself.
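A sketch of that variant, reusing the names from the code above (originalOffset is our own exposed field, not Cinemachine API):

    // Exposed in the inspector; replaces the Framing Transposer's own
    // Tracked Object Offset, which we overwrite every frame.
    public Vector3 originalOffset;

    public override void PrePipelineMutateCameraStateCallback(
        CinemachineVirtualCameraBase vcam, ref CameraState state, float deltaTime)
    {
        Bounds zone = new Bounds(Vector3.zero, Vector3.one);
        Vector3 originalTargetPosition = vcam.Follow.position;
        Vector3 newTargetPosition = zone.ClosestPoint(originalTargetPosition);

        var transposer = ((CinemachineVirtualCamera)vcam)
            .GetCinemachineComponent<CinemachineFramingTransposer>();
        // Re-add the user-defined offset that would otherwise be lost.
        transposer.m_TrackedObjectOffset =
            newTargetPosition - originalTargetPosition + originalOffset;
    }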

I’m failing to see the difference between what you’re doing and simply confining the camera’s position to some axis. The end result is that you want the camera to remain on one axis, no matter how the target moves. You’re doing it by adding all sorts of complicated logic to fake the target position. Why can’t you do it by confining the camera position, which is a feature built into CM?

If you really want to fake the target position, perhaps an easier way would be to create an invisible game object with a custom script to position itself in relation to your follow target, but with whatever constraints you impose. Then use that invisible object as a follow target for the vcam. That keeps everything nice and simple.
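A minimal version of such a proxy script might look like this (ConfinedFollowProxy and its fields are illustrative names, not an existing Cinemachine component):

    using UnityEngine;

    // Invisible proxy that shadows the real target but stays inside a zone.
    // Assign this object as the vcam's Follow target instead of the player.
    public class ConfinedFollowProxy : MonoBehaviour
    {
        public Transform realTarget;   // the player
        public Bounds zone = new Bounds(Vector3.zero, Vector3.one);

        void LateUpdate()
        {
            // Follow the real target, clamped to the allowed region.
            transform.position = zone.ClosestPoint(realTarget.position);
        }
    }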

The problem I have with the existing confinement extensions is that they confine the camera position, which is problematic in our case because the camera looks down at the player at an angle and is not where the player is. This means any collider for the confinement needs to be offset in the air, at an angle and distance from the level. This comes with two major problems:

  • It's hard to manually place and adjust these colliders so that they fit the underlying level, without calculating the position of the box collider (either manually or through a script) so it sits at the correct angle and distance from the level floor.
  • It locks the camera in place in terms of distance and angle, so it doesn't allow us to change the camera's angle or distance to the Follow target at runtime without separately moving the box collider as well, which imo would result in more "doing things CM already does".

Here is a quick, shoddy thing in Paint to illustrate my problems, with both an isometric and a sideways viewpoint onto a room of our level: the box collider confinement for the camera is in green, and the zone we want to confine the camera's target to is in blue.

We thought about this, but it actually kept things less simple on our end, as it would require an additional script, not on the vcam with the rest of the camera behaviour, but on every potential target of our vcams, which would have to update its position based on the vcam currently looking at it. Which also raises the question: what happens when multiple active vcams look at the same target, e.g. during blending?

Our extension would have to constantly look at the Target transform, try to find this additional script, then write the current confinement parameters into it, and hope that no other active vcam (e.g. during blending) writes its own parameters into the same target script afterwards.

To me this sounds a lot more complicated and also does not properly separate responsibilities between the camera and its target.

With our current approach, the complete logic sits inside the vcam responsible for a single room/hallway/etc., and the target/player does not need any knowledge of it, though I admit our solution is not the best. Hence I would prefer a way to overwrite the Vector3 the vcam receives from the Target inside PrePipelineMutateCameraStateCallback(), e.g. a state.ReferenceTarget, similar to what is already possible for the LookAt target via state.ReferenceLookAt.

Thank you for the description. I think I understand the issues a little better.

You could make a customized Framing Transposer that uses the LookAt target instead of the Follow target. Just copy CinemachineFramingTransposer.cs, rename it, and wherever it accesses the Follow target, make it access the LookAt target instead. Then you could use state.ReferenceLookAt to manipulate the tracking point.