OnCameraTransition / OnBlendComplete event

Is there an event or other function I can grab that will tell me when a camera transition has completed? This would greatly help me ping-pong or otherwise make quick transitions between my cameras. All of the functionality I’ve found so far notifies me when a transition or blend begins; I would ideally like to know when it ends. How am I meant to do this by default?

2 Likes

There currently is no notification that a transition has ended.
You could perhaps do it this way:
When you get notification that a blend has begun, query the Brain for the active blend, check its duration and send your own notification at the end of that time.
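For example (just a sketch, assuming the CM 2.x API here - the brain’s m_CameraActivatedEvent and ActiveBlend property - with class and method names purely for illustration; check that ActiveBlend is already set when the event fires in your version):

using System.Collections;
using UnityEngine;
using Cinemachine;

[RequireComponent(typeof(CinemachineBrain))]
public class BlendEndTimer : MonoBehaviour
{
    CinemachineBrain brain;

    void Awake()
    {
        brain = GetComponent<CinemachineBrain>();
        // Fires whenever a vcam becomes live, i.e. when a blend (or cut) begins
        brain.m_CameraActivatedEvent.AddListener(OnCameraActivated);
    }

    void OnCameraActivated(ICinemachineCamera incoming, ICinemachineCamera outgoing)
    {
        // ActiveBlend is null on a hard cut
        var blend = brain.ActiveBlend;
        float duration = (blend != null) ? blend.Duration : 0f;
        StartCoroutine(NotifyAfter(duration, incoming));
    }

    IEnumerator NotifyAfter(float seconds, ICinemachineCamera cam)
    {
        yield return new WaitForSeconds(seconds);
        // Replace this with whatever notification your game needs
        Debug.Log("Blend to " + cam.Name + " should be finished now");
    }
}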

That’s basically where I am right now. I catch the m_CameraActivatedEvent and then set a timer based on the current blend length. It works, but emotionally I sort of hate this solution. Is there a central location in the brain responsible for running the blends where I could add my own event, or am I stuck with this?

Thanks so much for the quick response!

2 Likes

You’re welcome to mess around in CinemachineBrain.cs - look for where the current active blend gets set to null. However, if I were you I wouldn’t mod the Brain. Just add your callback in a little standalone script next to the brain that listens for the camera activated event and broadcasts another one on the Brain’s behalf.

I know this thread is a bit old, but I wanted to pop in and add a +1 for an OnBlendComplete event.

I’m curious how the majority of people are handling the use case of knowing when one camera has fully transitioned to another.

2 Likes

Yes, an OnTransitionEnd or OnBlendComplete event would be really nice, and it seems it would not take a lot of resources, so why isn’t it implemented already?

It’s very easy to do as an add-on script. Here is one. Add it to your vcam. It will fire an event when the vcam has just become live and the blend is finished. Note that for this implementation a cut counts as a very short blend.

using UnityEngine;
using Cinemachine;
using UnityEngine.Events;
using System;

public class CmBlendFinishedNotifier : MonoBehaviour
{
    CinemachineVirtualCameraBase vcamBase;

    [Serializable] public class BlendFinishedEvent : UnityEvent<CinemachineVirtualCameraBase> {}
    public BlendFinishedEvent OnBlendFinished;

    void Start()
    {
        vcamBase = GetComponent<CinemachineVirtualCameraBase>();
        ConnectToVcam(true);
        enabled = false; // sleep until OnCameraLive wakes us up
    }

    void ConnectToVcam(bool connect)
    {
        var vcam = vcamBase as CinemachineVirtualCamera;
        if (vcam != null)
        {
            vcam.m_Transitions.m_OnCameraLive.RemoveListener(OnCameraLive);
            if (connect)
                vcam.m_Transitions.m_OnCameraLive.AddListener(OnCameraLive);
        }
        var freeLook = vcamBase as CinemachineFreeLook;
        if (freeLook != null)
        {
            freeLook.m_Transitions.m_OnCameraLive.RemoveListener(OnCameraLive);
            if (connect)
                freeLook.m_Transitions.m_OnCameraLive.AddListener(OnCameraLive);
        }
    }

    void OnCameraLive(ICinemachineCamera vcamIn, ICinemachineCamera vcamOut)
    {
        enabled = true;
    }

    // Poll the brain each frame until the blend involving this vcam has finished
    void Update()
    {
        var brain = CinemachineCore.Instance.FindPotentialTargetBrain(vcamBase);
        if (brain == null)
            enabled = false;
        else if (!brain.IsBlending)
        {
            if (brain.IsLive(vcamBase))
                OnBlendFinished.Invoke(vcamBase);
            enabled = false;
        }
    }
}

You can use a similar strategy to put a companion script on the Brain that wakes up when a blend is started and polls until the blend is finished, at which time it fires an event and goes back to sleep. That way you have a global event fired instead of a vcam-specific one.

8 Likes

If anyone is looking for a shortish snippet you can use in existing code, this works with default blends:

    public Cinemachine.CinemachineVirtualCamera[] cameras;
    private int cameraCurrent;

...

        // FindPotentialTargetBrain returns the CinemachineBrain that would display this vcam
        var brain = Cinemachine.CinemachineCore.Instance.FindPotentialTargetBrain(cameras[cameraCurrent]);
        float blendTime = brain.m_DefaultBlend.m_Time;

        if (cameraCurrent == 1) {
            Invoke("ThatFunction", blendTime);
        }

        if (cameraCurrent == 3 || cameraCurrent == 5) {
            Invoke("TheOtherFunction", blendTime);
        }

..

    private void ThatFunction()
    {
        // Do Stuff
    }

    private void TheOtherFunction()
    {
        // Do Stuff
    }
1 Like

Another option that fires a global event when the Brain starts and finishes a blend. Put the script on the same GameObject as the CinemachineBrain component.

using UnityEngine;
using Cinemachine;

[RequireComponent(typeof(CinemachineBrain))]
public class CheckForCameraBlending : MonoBehaviour
{
    public delegate void CameraBlendStarted();
    public static event CameraBlendStarted onCameraBlendStarted;

    public delegate void CameraBlendFinished();
    public static event CameraBlendFinished onCameraBlendFinished;


    private CinemachineBrain cineMachineBrain;

    private bool wasBlendingLastFrame;

    void Awake()
    {
        cineMachineBrain = GetComponent<CinemachineBrain>();
    }
    void Start()
    {
        wasBlendingLastFrame = false;
    }

    void Update()
    {
        if (cineMachineBrain.IsBlending)
        {
            if (!wasBlendingLastFrame)
            {
                if (onCameraBlendStarted != null)
                {
                    onCameraBlendStarted();
                }
            }

            wasBlendingLastFrame = true;
        }
        else
        {
            if (wasBlendingLastFrame)
            {
                if (onCameraBlendFinished != null)
                {
                    onCameraBlendFinished();
                }
                wasBlendingLastFrame = false;
            }
        }
    }
}
10 Likes

Thanks for this!

I mean, is it so hard to implement this as a core feature of the package, especially after all these years??

14 Likes

+1 for this. I need to do a “slow mo” effect by decreasing the timescale when the blend completes. I can’t do it before then, otherwise the blend will slow down…

1 Like

CinemachineBrain has an Ignore Time Scale option. When it’s set, blends will happen in realtime, regardless of time scale.
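If you want to set it from code, I believe the corresponding field on the brain is m_IgnoreTimeScale (in CM 2.x), e.g.:

var brain = GetComponent<Cinemachine.CinemachineBrain>();
brain.m_IgnoreTimeScale = true;  // blends progress in unscaled (real) time
Time.timeScale = 0.25f;          // slow-mo for gameplay; the blend is unaffected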

1 Like

Hello there, in a version of Cinemachine I found an OnCameraLive event which fires as soon as the blend to the new vcam starts.
But we need a special situation where the event fires only when the transition has completely finished and the user has full control of the current vcam.
I would appreciate it if you could help me with this one.

Thank you! This is fantastic.

You have the answer in the post above: https://discussions.unity.com/t/693562/2
When your script has detected that the blend is complete, fire a new event.

If blending is a behaviour of Cinemachine, Cinemachine itself should notify on the start and end of a blend. Currently it notifies on start, but not on end. If notifying for blend end is as straightforward as “start polling for blend end on the target brain when a blend starts”, why is this event not provided by Cinemachine by default? Seems to me like we are forced to implement an API that should already be there to begin with.

5 Likes

This really needs fixing in Cinemachine. Getting these events to work is a bunch of trial and error and hacking around Cinemachine to add what should be there from the start, as @gabagpereira pointed out.

e.g. I needed:

  • Camera has finished blending in
  • Camera will start blending out

To get that, I had to:

  • Take @Gregoryl ’s script (to get ‘finished blend in’)
  • Try to add code to hook the brain’s m_CameraActivatedEvent and use it to detect when this vcam is being replaced (since there’s no direct equivalent to Gregoryl’s call for the vcam going away, only for it coming in)
  • …Discover that this doesn’t work; it seems that current Cinemachine has a nasty behaviour in event ordering: when you’re reacting to the vcam’s m_OnCameraLive and then additionally register for the brain’s m_CameraActivatedEvent … the brain event (which triggered the vcam event) falsely triggers AFTER the vcam event (events shouldn’t be re-entrant like this, that’s nasty design)
  • Modify it so that it waits until blending in (Gregoryl’s script) has finished, and only at that moment, in that frame, registers with the brain to find out when the vcam changes again

All of which should have been built into Cinemachine, and would have avoided wasting time discovering internal details of CM’s event ordering (which IMHO is wrongly ordered right now - a handler registered from inside an event should not be re-triggered by that same event in the same frame).

i.e. this code shouldn’t go wrong:

void OnCameraLive(ICinemachineCamera vcamIn, ICinemachineCamera vcamOut)
{
    Debug.Log("My cam started blend, changing: " + vcamOut + " -> " + vcamIn);
    enabled = true;
    OnAnimateInStarted.Invoke(vcamIn as CinemachineVirtualCameraBase);

    var brain = CinemachineCore.Instance.FindPotentialTargetBrain(vcamIn as CinemachineVirtualCameraBase);
    brain.m_CameraActivatedEvent.AddListener(CameraWillAnimateOut);
    // the above event SHOULD NOT FIRE in this same frame, but it does - even though it's already fired
}

NB: I might have made some stupid mistake in the above - if so: even more reason this should be implemented inside Cinemachine! - but once I made the change I described above it worked fine, so in principle it seems correct.

@a436t4ataf I feel your frustration. The reason CM doesn’t implement these events as a core feature is that blending can get complicated: blends can nest, and they can be generated from places other than the brain. We’d have to support all the strange ways that people might be using CM. This thread is about a straightforward situation: notifications for top-level blend in and out on the brain. Since it is so easy to implement this, we leave it to the user rather than having to design for and support all sorts of weird cases.

As for the event ordering, yes it might seem unexpected that the global camera change event is fired after the camera-local one, but the events have to come in some order. I don’t see a strong reason for it to be one way rather than the other. For the record, the events are not re-entrant; they are fired one after the other.

For your specific use case, I would implement a global script that monitors brain blends, as @bruceweir1 did above. Let it expose the desired events, e.g.:

  • OnBlendStarted(brain, incomingCam, outgoingCam)
  • OnBlendFinished(brain, incomingCam, outgoingCam)

Your vcam-specific script can then register handlers for those, and check whether one of the cams is itself, and react accordingly.
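
Roughly something like this (just a sketch - the event signatures follow the list above, and I’m assuming the active blend’s CamA/CamB fields identify the outgoing and incoming cams):

using System;
using UnityEngine;
using UnityEngine.Events;
using Cinemachine;

[RequireComponent(typeof(CinemachineBrain))]
public class BrainBlendEvents : MonoBehaviour
{
    [Serializable]
    public class BlendEvent : UnityEvent<CinemachineBrain, ICinemachineCamera, ICinemachineCamera> {}

    public BlendEvent OnBlendStarted = new BlendEvent();
    public BlendEvent OnBlendFinished = new BlendEvent();

    CinemachineBrain brain;
    ICinemachineCamera incoming, outgoing;
    bool wasBlending;

    void Awake() { brain = GetComponent<CinemachineBrain>(); }

    void LateUpdate()
    {
        bool isBlending = brain.IsBlending;
        if (isBlending && !wasBlending)
        {
            // CamA is the outgoing cam, CamB the incoming one
            var blend = brain.ActiveBlend;
            outgoing = blend.CamA;
            incoming = blend.CamB;
            OnBlendStarted.Invoke(brain, incoming, outgoing);
        }
        else if (!isBlending && wasBlending)
        {
            OnBlendFinished.Invoke(brain, incoming, outgoing);
        }
        wasBlending = isBlending;
    }
}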

Sure, but that complexity is why it needs to be provided by CM. CM’s architecture defines what is/isn’t possible in blending - if it’s too complex for CM to implement consistent events then that’s a strong sign that CM’s definition of blending is incomplete, is missing some expressivity that would enable CM to programmatically determine what is going on and what it means.

TL;DR: if CM can’t confidently know what’s happening, how can it confidently implement it correctly?

Re: events - I would say that the standard for UnityEvents is to isolate dispatch, so that if you plan to dispatch the same event twice you freeze the listener lists. In basic cases no one should be double-dispatching an event, so .Invoke(…) works fine and we don’t need to worry about it. But if you have a design reason to double-dispatch (as here in CM), then the burden falls on CM to do this properly and make sure that the set of listeners receiving the second copy of the event cannot be modified by the handlers of the first copy.

Although if it were me, I’d probably instead invest the time to improve the design of these events inside CM so that this never happens (and in the rare cases where it does, I’d log a warning or even throw an exception - ConcurrentModificationException for instance, which is lazy: it’s not really an error, but at least it signals ‘CM should handle this but we didn’t write the code to handle it, so until we change the code, this is effectively an error’). The API for local-camera events is screamingly obviously only partially written, and if it had the obvious simple essential calls this wouldn’t be an issue in practice.