Hey thanks for taking the time to write this up. I’m not sure I entirely understand your remarks, though.
Multiple dispatch: CM doesn’t send the same event multiple times, or at least not exactly. When a camera gets activated, an event is sent out, but it is sent out to 3 different sets of listeners:
Global
Per-brain in which the camera is live
The camera itself
If the handlers of those events want to manipulate the listener lists, they can do that, but they need to take care. I’m not convinced that it’s up to CM to prevent or manage this.
Or, are you suggesting that the events should be collected and sorted, so that all the global events are sent, then all the per-brain ones, then all the per-camera ones?
I can think of a couple of other possible changes to make:
Have only global events. Let the handlers filter by brain or camera if they want to. The downside is that it makes it harder to set up prefabs with their events pre-wired.
Send out events only for cameras that are actually live (i.e. contributing to the main camera’s final output). No events if a blend happens for cameras that are mixed out at some intermediate level (e.g. by a mixing camera or state-driven camera or timeline).
All good suggestions. I know some of your use-cases (I’ve built and maintained multi-camera setups both in Unity and in other engines), so I can make some educated guesses - but my feedback here is necessarily restricted to the use-cases I’m aware of. I don’t want to accidentally exclude someone else’s cases!
With that in mind, though, I can make some general comments:
Global-only events should be sufficient for almost everything, provided they include the context of which Brain fired them, and provided there's no performance hit. For CM use-cases, the number of events will always be tiny compared to Unity's overall per-frame workload, so a 'performance hit' should never be a concern here (if you're doing more than a small number of camera changes per frame, the player would probably get motion sickness anyway ;))
Per-brain events should be trivial for us developers to simulate given the above: it's easy to have our callback methods do a 'switch' (or even an 'if') on the Brain that was passed in as a param, for the cases where we actually want to limit it. So there shouldn't, in general, be a need for separate per-brain events (I think, but modulo my point at the start: I might be missing a use-case where this isn't true)
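To illustrate the 'switch/if on the Brain' idea: a minimal sketch, assuming a hypothetical global activation event that passes the firing brain along with the cameras (the handler signature and field names here are illustrative, not an existing CM API):

```csharp
using Cinemachine;
using UnityEngine;

public class PerBrainFilter : MonoBehaviour
{
    // The one brain we care about; events from all other brains are ignored.
    [SerializeField] CinemachineBrain targetBrain;

    // Hypothetical handler for a global event that includes the firing brain.
    void OnCameraActivated(CinemachineBrain brain,
                           ICinemachineCamera newCam, ICinemachineCamera prevCam)
    {
        if (brain != targetBrain)
            return; // a single 'if' simulates a per-brain event

        Debug.Log($"Camera activated on {brain.name}: {newCam.Name}");
    }
}
```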
The correct granularity is definitely one of the above - it's either per-brain, or global - because Vcams in general are many in number, and are (by design) NOT responsible for / aware of what else is happening in the scene. That means in general that no individual camera will ever have enough context for the callbacks to work well (except for simple/trivial cases)
… the prefabs issue would be a reason for ‘additionally’ having per-camera, with a note in the docs to say ‘in general use the global/per-brain ones instead’ … but the way CM’s core design works we (as end-users) should be avoiding doing anything coupled too tightly to the individual vcams.
… I would like to see what some of the known use-cases are for these prefabs - what kind of thing are people pre-wiring to a specific vcam? It sounds like the kind of thing I should be avoiding (see above point), so … when is it a good idea?
A simple param (or macro-like method) for 'is this camera contributing to the blend right now?' is necessary in general. This is true for ALL features of CM that are 'dynamic/evaluated/calculated': in general, we're happy to have a 'Black Box' of functionality (e.g. we want CM to handle all the blending automagically, using the info we gave on custom vcam/vcam blends, the current set of prioritized vcams, etc etc) – but we often need to know 'OK, fine – but RIGHT NOW IN THIS FRAME what's the current output / outcome / status of that blackbox?'
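In CM2 that query already exists as CinemachineCore.Instance.IsLive (mentioned later in this thread). A minimal polling sketch, with the vcam field name being illustrative:

```csharp
using Cinemachine;
using UnityEngine;

public class LiveStatusProbe : MonoBehaviour
{
    [SerializeField] CinemachineVirtualCameraBase vcam;

    void LateUpdate()
    {
        // True while the vcam is contributing to a brain's output this frame,
        // including while it is mid-blend.
        if (CinemachineCore.Instance.IsLive(vcam))
            Debug.Log($"{vcam.Name} is live this frame");
    }
}
```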
If you want to avoid polling every frame in Update, use a Coroutine instead. But don't forget that blends can be interrupted halfway through, so it's important to account for that scenario as well.
Here is a simple script that you can attach directly to the CinemachineBrain and modify to suit your needs:
using Cinemachine;
using System.Collections;
using UnityEngine;
using UnityEngine.Events;

[RequireComponent(typeof(CinemachineBrain))]
public class CameraBrainEventsHandler : MonoBehaviour
{
    public event UnityAction<ICinemachineCamera> OnBlendStarted;
    public event UnityAction<ICinemachineCamera> OnBlendFinished;

    CinemachineBrain _cmBrain;
    Coroutine _trackingBlend;

    void Awake()
    {
        _cmBrain = GetComponent<CinemachineBrain>();
        _cmBrain.m_CameraActivatedEvent.AddListener(OnCameraActivated);
    }

    void OnDestroy()
    {
        _cmBrain.m_CameraActivatedEvent.RemoveListener(OnCameraActivated);
    }

    /// <summary>
    /// Called by the <see cref="CinemachineBrain"/> when a camera blend starts.
    /// </summary>
    /// <param name="newCamera">The Cinemachine camera the brain is blending to.</param>
    /// <param name="previousCamera">The Cinemachine camera the brain is blending from
    /// (may be null on the very first activation).</param>
    void OnCameraActivated(ICinemachineCamera newCamera, ICinemachineCamera previousCamera)
    {
        Debug.Log($"Blending from {previousCamera?.Name ?? "(none)"} to {newCamera.Name}");

        // A new activation interrupts any blend we were still tracking.
        if (_trackingBlend != null)
            StopCoroutine(_trackingBlend);

        OnBlendStarted?.Invoke(previousCamera);
        _trackingBlend = StartCoroutine(WaitForBlendCompletion());

        IEnumerator WaitForBlendCompletion()
        {
            while (_cmBrain.IsBlending)
                yield return null;

            OnBlendFinished?.Invoke(newCamera);
            _trackingBlend = null;
        }
    }
}
@a436t4ataf Thanks for your great feedback. I think CM3’s approach to events - to be published in the next pre-release - will address pretty much all of your issues. There are global CameraActivated/Deactivated events, as well as BlendFinished events. There are also optional “events” behaviours that can be added to cameras or brains, which monitor the global events and filter for the attached object, making it easy to get per-brain or per-camera events if desired. Appropriate context is provided in all cases.
Currently in CM2 you have CinemachineCore.Instance.IsLive(vcam), which answers the question: is this vcam contributing to a camera output right now.
Just got this as my first result on a Google search for how to be notified when a camera blend ends. The following works for me; it should behave correctly as long as only one virtual-camera transition is going on at a time.
Create a CameraBlendMonitor.cs script and paste in the following. Attach the script to the same GameObject as the CinemachineBrain (commonly the main camera):
using UnityEngine;
using Cinemachine;

public class CameraBlendMonitor : MonoBehaviour
{
    CinemachineBrain _brain;
    bool _isBlending;

    void Start()
    {
        _brain = GetComponent<CinemachineBrain>();
    }

    void Update()
    {
        if (_brain.IsBlending)
        {
            _isBlending = true;
        }
        else if (_isBlending) // was blending last frame, so the blend just finished
        {
            _isBlending = false;
            Debug.Log("Blend completed");
            // Call your function or trigger your event here
        }
    }
}
I am also looking for a solution for blending the Main Camera's culling mask when a virtual camera is enabled. Just like the perspective override, a Cinemachine virtual camera should also have a Culling Mask override, and if possible a Clear Flags override too; that would make this easy for us.
These settings aren't available on a Cinemachine vcam; there should be a way to override them, just like the perspective override here:
You can do it with a custom script that overrides whatever you like in OnEnable and puts it back in OnDisable. Note that this particular setting won’t blend, so you’ll have to work with that limitation.
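A minimal sketch of that OnEnable/OnDisable approach (field names are illustrative; it assumes the script is enabled/disabled together with the vcam and is pointed at the brain's Unity camera):

```csharp
using UnityEngine;

public class CullingMaskOverride : MonoBehaviour
{
    [SerializeField] Camera outputCamera;    // the Unity camera driven by the brain
    [SerializeField] LayerMask overrideMask; // mask to apply while this vcam is active
    [SerializeField] CameraClearFlags overrideClearFlags = CameraClearFlags.Skybox;

    int _savedMask;
    CameraClearFlags _savedClearFlags;

    void OnEnable()
    {
        // Save the current settings, then apply the overrides.
        _savedMask = outputCamera.cullingMask;
        _savedClearFlags = outputCamera.clearFlags;
        outputCamera.cullingMask = overrideMask;
        outputCamera.clearFlags = overrideClearFlags;
    }

    void OnDisable()
    {
        // Restore the saved settings. Note: this snaps, it does not blend.
        outputCamera.cullingMask = _savedMask;
        outputCamera.clearFlags = _savedClearFlags;
    }
}
```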
Not sure I'm getting this right: shouldn't IsBlending return true at this specific moment, with ActiveBlend populated?
Also what is delta time here ?
public class CinemachineExtensionMaterialFade : CinemachineExtension
{
    [SerializeField] CinemachineBrain myBrain;

    public override bool OnTransitionFromCamera(
        ICinemachineCamera fromCam, Vector3 worldUp, float deltaTime)
    {
        // Always false here, although there is a blend in progress.
        Debug.Log(myBrain.IsBlending);
        return false;
    }
    // (rest of the extension omitted)
}
Edit : @Gregoryl
Could it be related to the fact that we use ManualUpdate on LateUpdate instead of SmartUpdate ?
I see that the ActiveBlend value is calculated in this method. So maybe the Transition triggers right before our late update ?
I’ll try adding a frame delay and see.
Edit 2 :
Or could it be because the “Active Camera” which is a State Driven Camera does not change ? Transition is triggered from a state change.
In the end we query the state-driven camera's blend manually in the OnTransitionFromCamera callback (myStateDrivenVcam.m_DefaultBlend); IsBlending is always false and ActiveBlend is always empty.
We needed the transition time on camera start blend, as it was suggested as an answer to OP, it felt related to all the issues at hand. We then do our additional blends during the transition.
It’s possible that in this case OnTransitionFromCamera is called before the StateDrivenCamera has set up its blend. It might make sense to introduce a callback that is invoked from CinemachineCore.CameraUpdatedEvent.
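A sketch of that workaround in CM2, deferring the check until after the brain has processed its cameras for the frame (the exact firing order of CinemachineCore.CameraUpdatedEvent relative to your update mode may vary, so treat this as an assumption to verify):

```csharp
using Cinemachine;
using UnityEngine;

public class PostUpdateBlendCheck : MonoBehaviour
{
    void OnEnable()  => CinemachineCore.CameraUpdatedEvent.AddListener(OnCameraUpdated);
    void OnDisable() => CinemachineCore.CameraUpdatedEvent.RemoveListener(OnCameraUpdated);

    // Fires after the brain has updated, so by this point
    // IsBlending/ActiveBlend should reflect this frame's result.
    void OnCameraUpdated(CinemachineBrain brain)
    {
        if (brain.IsBlending)
            Debug.Log($"Blending: {brain.ActiveBlend}");
    }
}
```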
This sort of issue is one of the reasons why the events system was completely refactored for CM3, which has BlendCompleted events built-in. Would upgrading to CM3 be a possibility for your project? Be warned that for projects making heavy use of the CM API, this might be a nontrivial task.