When I use Cinemachine in a two-player game, I can't figure out how to use two cameras with the Cinemachine Brain.
Do you have a way to use two cameras?
Just create a second camera and add its own Cinemachine Brain.
I’m sorry I didn’t make it clear enough.
I want to use two cameras, each with a different virtual camera.
I added a second Cinemachine Brain, but both cameras followed the same virtual camera.
@Adam_Myhill Any way to set it up?
From the user docs:
You can set up a multi-camera split-screen with Cinemachine 2.0:
- Make 2 Unity Cameras, let’s call them Camera A and Camera B. Give each one its own CinemachineBrain, and set up their viewports.
- Now make 2 virtual cameras to follow the players. Assign those virtual cameras to different layers. We’ll call them layer A and layer B.
- Go back to the two Unity cameras, and set their culling masks so that Camera A excludes layer B and Camera B excludes layer A.
- That’s it! Camera A will be driven by virtual cameras on layer A, and camera B will be driven by virtual cameras on layer B. They will do their blending etc independently.
- Extend this idea to as many layers and cameras as you like.
Note: For CM3, instead of using layers, use Cinemachine Channels. It’s the same strategy, but doesn’t use up layers.
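If you prefer to wire this up from a script, here is a rough sketch of the same steps in code. It assumes Cinemachine 2.x (the Cinemachine namespace and CinemachineVirtualCamera), and the "P1_vcam" / "P2_vcam" layer names are placeholders you would create yourself in Tags & Layers:

```csharp
using UnityEngine;
using Cinemachine;

// Sketch of the layer-based split-screen setup described above (CM 2.x).
// Layer names are placeholders -- create and rename them as you like.
public class SplitScreenSetup : MonoBehaviour
{
    public Camera cameraA;                    // has its own CinemachineBrain
    public Camera cameraB;                    // has its own CinemachineBrain
    public CinemachineVirtualCamera vcamA;    // follows player 1
    public CinemachineVirtualCamera vcamB;    // follows player 2

    void Start()
    {
        int layerA = LayerMask.NameToLayer("P1_vcam");
        int layerB = LayerMask.NameToLayer("P2_vcam");

        // Put each vcam on its own layer.
        vcamA.gameObject.layer = layerA;
        vcamB.gameObject.layer = layerB;

        // Side-by-side viewports.
        cameraA.rect = new Rect(0f, 0f, 0.5f, 1f);
        cameraB.rect = new Rect(0.5f, 0f, 0.5f, 1f);

        // Each Unity camera sees everything except the other one's vcam layer,
        // so each Brain is driven only by the vcams on its own layer.
        cameraA.cullingMask = ~(1 << layerB);
        cameraB.cullingMask = ~(1 << layerA);
    }
}
```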
This problem was resolved thanks to your support!
Thank you for your help!
Does this work for showing the same scene from different angles on a split-screen?
Yes.
Tried that, but the cameras only show whatever exists in their assigned layer. I must be missing something, but I don't know what it is.
Make new layers for the vcams. Have your Cameras see everything except the other guy’s vcam layer.
Wow! Thanks. I can’t believe how good Cinemachine is and I’m probably using less than 10% of what it can do.
PS. Do you think
"Go back to the two Unity cameras, and set their culling masks so that one camera sees layer A but not layer B… "
should say
“Go back to the two Unity cameras, and set their culling masks so that one camera excludes layer B …”
Quick follow-up. Do you have any suggestions for a video-recording asset to add to Unity Cameras? The Recorder by Unity (Unity Asset Store - The Best Assets for Game Making) is nice, but seems to be limited to one per scene.
Does the job. I hope they will add more options like output directory, etc.
@Gregoryl Wow, that is unexpectedly easy. It sure would be great, though, if a single vcam could be used on several layers. Perhaps moving the check to a layer mask field on the virtual camera and/or the Brain would allow a single virtual camera to be used by Brains on separate layers.
Example:
- vcams 1 and 2, where vcam 1 applies to all layers and vcam 2 applies to only one; vcam 2 is inactive
- two Unity cameras assigned to the corresponding layers, each with a Brain component attached
- vcam 2 is activated: the Brain on layer 2 transitions to vcam 2, while the Brain on layer 1 remains unchanged
As of now, the only way to achieve this effect seems to be duplicating one vcam across two layers; removing that duplication would make it easier to tweak and maintain camera angles.
@iwaldrop_1 I think you can get what you’re looking for by using a third vcam layer:
- Brain 1 sees vcam layers 1 and 3
- Brain 2 sees vcam layers 2 and 3
- vcam 1 is on layer 3, vcam 2 is on layer 2
- Initially vcam 1 is active, both brains see it
- When vcam 2 is activated, only Brain 2 will transition
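Not from the docs, but a minimal sketch of the switching part under that three-layer setup (CM 2.x, with the layer assignments and culling masks done as above). Only the Brain whose culling mask includes vcam 2's layer will blend to it:

```csharp
using UnityEngine;
using Cinemachine;

// vcam1 lives on the shared layer (layer 3 above, seen by both Brains);
// vcam2 lives on layer 2 (seen only by Brain 2) and starts inactive.
public class SelectiveTransition : MonoBehaviour
{
    public CinemachineVirtualCamera vcam1;
    public CinemachineVirtualCamera vcam2;

    void Start()
    {
        vcam2.gameObject.SetActive(false);        // both Brains start on vcam 1
    }

    public void ActivateSecondCamera()
    {
        vcam2.Priority = vcam1.Priority + 1;      // make vcam 2 win where it's visible
        vcam2.gameObject.SetActive(true);         // Brain 2 blends to it; Brain 1 can't see it
    }
}
```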
Hi guys, hi @Gregoryl. I have a setup similar to what is asked here: a "security room" with multiple screens, each showing the RenderTexture assigned to a different camera. Basically you can see multiple cameras rendering at the same time, which is similar to split-screen rendering.
I have two problems with the "put opposite cameras on different, non-intersecting layers" approach:
The first one is the obvious: once you get past 4-5 cameras, managing the layer inclusions and exclusions starts to become a mess. Cam A should exclude B, C, D, E but not exclude other important layers, cam B should exclude A, C, D… etc., you get what I mean.
The second problem is that this approach is limited by the small number of layers Unity has: there are fewer than 32 layers available, and some of them are already taken for physics exclusions, lighting, rendering, etc., so this doesn't really scale.
Would it be too hard to add an option to switch from the current way the Brains determine which virtual cameras affect them to an approach where you explicitly set a list on each Brain, in the inspector or through code? (I imagine this as an optional boolean that switches between the two algorithms.)
Hmmm… no immediate plans for this. However, I think there is a way with the current code.
The CM brain has an API to override its vcam selection mechanism. This is what timeline uses. You can write a script that has its own vcam selection logic (possibly based on a list of vcams), and uses the brain’s override API to drive it.
Thanks, I think that's a good starting point for me. Is this API documented, or is it internal? I had a look at the docs but didn't find anything at first sight. Maybe you can point me in the direction of this feature? Thanks again.
Yeah it’s marked internal and not documented, although it does have XML doc in the sources.
Will look at promoting it to public for next release.
You need to check out CinemachineBrain.SetCameraOverride() and CinemachineBrain.ReleaseCameraOverride().
For a usage example see CinemachineMixer.cs.
If you go this route, you’ll have to manage your own blending, the way timeline does.
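Not an official example, but a minimal sketch of that idea: a Brain driven from an explicit vcam list via the override API. The SetCameraOverride/ReleaseCameraOverride signatures below are an assumption based on the 2.x sources; they may differ (or still be internal) in your version, so check CinemachineMixer.cs in the installed package for the real usage:

```csharp
using UnityEngine;
using Cinemachine;

// Drives one CinemachineBrain from an explicit list of vcams instead of the
// layer-based discovery. Blending is not handled here; you would manage your
// own weights the way Timeline does.
public class BrainVcamList : MonoBehaviour
{
    public CinemachineBrain brain;
    public CinemachineVirtualCamera[] vcams;   // explicit list
    public int activeIndex;

    int m_OverrideId = -1;

    void Update()
    {
        if (brain == null || vcams == null || vcams.Length == 0)
            return;
        ICinemachineCamera vcam = vcams[Mathf.Clamp(activeIndex, 0, vcams.Length - 1)];

        // Re-apply the override every frame; weightB = 1 means fully on vcam.
        // Assumed signature: SetCameraOverride(overrideId, camA, camB, weightB, deltaTime).
        m_OverrideId = brain.SetCameraOverride(m_OverrideId, null, vcam, 1f, Time.deltaTime);
    }

    void OnDisable()
    {
        if (brain != null && m_OverrideId >= 0)
            brain.ReleaseCameraOverride(m_OverrideId);
        m_OverrideId = -1;
    }
}
```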