I just started using Cinemachine and I really like it. But I am having a problem with post processing. This video here seems to be obsolete because the way Unity handles post processing changed, right?
What I want to do is have several virtual cameras in a scene (with overlapping paths) that have different Depth of Field values. When I use the workflow that is described in the video, nothing happens.
I do have a global volume set up the way it is in the HDRP sample scene in Unity 2019.3. The effects there work, and I intentionally did not add Depth of Field to it. I then tried adding post processing the way it is shown in the video but, as I said, nothing happens. My guess is that something is being overridden by the standard Unity PostProcessing approach?
I also tried using the standard volumes from Unity, making them really small and a child of the virtual camera. That does not work either, because as soon as the virtual camera starts moving, the volume moves differently, so it's not really following the camera as it should.
Any ideas? Is there an updated workflow for Depth of Field for that?
If you're using the latest HDRP (7.x) then things have changed. Post processing is now built in, so you should uninstall the PostProcessing stack. Instead, use the standard Volumes that are built into HDRP. Within a Volume's settings you'll find a section for post processing; use that instead of the old PostProcessing, and you don't need a PP layer on the camera.
In the Cinemachine vcam, use the VolumeSettings extension instead of the PostProcessing extension.
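In case it helps to see it from script, here is a minimal sketch of that setup, assuming Cinemachine 2.4+ with HDRP 7.x; CinemachineVolumeSettings lives in the Cinemachine.PostFX namespace, and the m_Profile field name may differ between versions:

```csharp
using Cinemachine;
using Cinemachine.PostFX;                     // CinemachineVolumeSettings
using UnityEngine;
using UnityEngine.Rendering;                  // VolumeProfile

public class VcamVolumeSetup : MonoBehaviour
{
    public CinemachineVirtualCamera vcam;     // assign in the Inspector
    public VolumeProfile profile;             // a profile asset with a Depth Of Field override

    void Start()
    {
        // Normally you add the extension in the vcam Inspector ("Add Extension").
        // Doing the same from code: add the component and point it at the profile.
        var volumeSettings = vcam.gameObject.AddComponent<CinemachineVolumeSettings>();
        volumeSettings.m_Profile = profile;   // field name assumed; check your Cinemachine version
    }
}
```

The usual workflow is to do all of this in the Inspector; the script just shows which pieces are involved.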
When using the VolumeSettings extension on 2 vCams blended in Cinemachine, the Depth of Field doesn't seem to blend between the two. Is there a way to have the blending driven by the active Cinemachine blend instead of by a local volume collider? I was expecting the weight to blend according to Cinemachine, but it doesn't seem to blend with how I've set it up.
@BigRookGames The depth of field does indeed blend along with the rest of the postprocessing. It’s possible that your settings are such that it doesn’t show. Can you post images of the DOF section of the profiles that you’re trying to blend?
It’s because your resolution in the Depth of Field profile is set to Half, which is not sufficient for focus pulling. Change it in your profiles like this:
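If you'd rather flip that switch from script, here is a rough sketch against the HDRP 7.x API (the resolution field and the DepthOfFieldResolution enum names are my assumption of the 7.x API; in the Inspector the Resolution field only shows when the Quality dropdown is set to Custom):

```csharp
using UnityEngine.Rendering;                  // VolumeProfile
using UnityEngine.Rendering.HighDefinition;   // DepthOfField, DepthOfFieldResolution

public static class DofResolutionFix
{
    // Raise the Depth of Field resolution in a profile from Half to Full,
    // so focus pulls are computed at full resolution instead of half.
    public static void UseFullResolution(VolumeProfile profile)
    {
        if (profile.TryGet<DepthOfField>(out var dof))
        {
            dof.resolution.overrideState = true;
            dof.resolution.value = DepthOfFieldResolution.Full;
        }
    }
}
```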
Thanks for the update. I changed it, made sure the volume was enabled on the vCam, set the DOF on the second blended cam to nothing, and set the first vCam to high DOF, but it seems that whichever volume I enabled most recently is the one whose values stick. After doing that, the first shot has no DOF applied when running:
If you want the vcam's volume settings to blend along with the vcam, you have to use the CinemachineVolumeSettings extension instead. Delete the Volume, and replace it with this:
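As a rough sketch of that setup in code (field names like m_Profile and nearMaxBlur are assumptions based on Cinemachine 2.x and HDRP 7.x, not verified against a specific version), the idea is to give each blended vCam its own CinemachineVolumeSettings extension and its own profile, so the CinemachineBrain's blend weight drives the DOF blend rather than a local Volume:

```csharp
using Cinemachine;
using Cinemachine.PostFX;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class BlendedDofSetup : MonoBehaviour
{
    public CinemachineVirtualCamera vcamA;   // shot with heavy DOF
    public CinemachineVirtualCamera vcamB;   // shot with no near blur

    void Start()
    {
        AttachDof(vcamA, nearMaxBlur: 8f);
        AttachDof(vcamB, nearMaxBlur: 0f);
    }

    // Hypothetical helper: builds a runtime profile with a Depth Of Field
    // override and attaches it to the vcam via the VolumeSettings extension.
    static void AttachDof(CinemachineVirtualCamera vcam, float nearMaxBlur)
    {
        var profile = ScriptableObject.CreateInstance<VolumeProfile>();
        var dof = profile.Add<DepthOfField>(true);     // true = enable all overrides
        dof.focusMode.value = DepthOfFieldMode.Manual; // set focus ranges as needed
        dof.nearMaxBlur.value = nearMaxBlur;

        var settings = vcam.gameObject.AddComponent<CinemachineVolumeSettings>();
        settings.m_Profile = profile;                  // field name assumed; check your version
    }
}
```

In practice you would create the two profiles as assets and assign them in the Inspector; the point is one profile per vCam, no scene Volume, and Cinemachine does the blending.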
Is this the one that should be removed?
When I removed it, it removed Cinemachine along with it, and adding Cinemachine back afterwards didn't seem to help; all the Cinemachine stuff is disabled now.
Cinemachine should still be there - it’s not connected to PostProcessing.
Make sure to delete the CinemachinePostProcessingV2 folder from your assets - maybe that's the problem.
If that fails, try creating a new HDRP project and adding Cinemachine. That will show you what you’re supposed to have.
Why does the sprite not get near and far depth of field blur? Only 3D objects get it. If the sprite is in front of a 3D object and I adjust the 3D object, the sprite shows depth of field too, for example: