Correct me if I’m wrong, but according to previous questions on here, and the documentation on camera depth, if you put a post effect on a camera, it should, in theory, affect that camera and every camera below it in depth, right?
In my scene I have three cameras: one for the UI, one rendering objects the way the gun is rendered in first-person shooters, and one rendering the scene. I’m attempting to get Depth of Field running on the camera rendering the scene, but whenever the other cameras are active the effect seems to disappear, and I don’t know why.
I’m using the Depth of Field 3.4 that comes with Unity Pro, so nothing special on that end, and my UI and first-person cameras are both set to clear depth only and render only their respective layers (through Layer Masks). I guess I’m just wondering whether this is the way it’s supposed to function, or if something is going wrong?
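In case it helps, here’s roughly how my three cameras are set up; a minimal C# sketch where the field names and layer names (“Gun”, “UI”) are just placeholders for my project, not anything standard:

```csharp
using UnityEngine;

public class CameraSetup : MonoBehaviour
{
    // Placeholder fields; in my project these are assigned in the inspector.
    public Camera sceneCamera;
    public Camera gunCamera;
    public Camera uiCamera;

    void Start()
    {
        // Scene camera renders first and clears the whole frame.
        sceneCamera.depth = 0f;
        sceneCamera.clearFlags = CameraClearFlags.Skybox;

        // Gun camera renders on top, clearing only the depth buffer
        // and drawing only its own layer.
        gunCamera.depth = 1f;
        gunCamera.clearFlags = CameraClearFlags.Depth;
        gunCamera.cullingMask = 1 << LayerMask.NameToLayer("Gun");

        // UI camera renders last, also clearing only the depth buffer.
        uiCamera.depth = 2f;
        uiCamera.clearFlags = CameraClearFlags.Depth;
        uiCamera.cullingMask = 1 << LayerMask.NameToLayer("UI");
    }
}
```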
I’ve searched, but if this question has already been asked, please redirect me.
So, by asking around and talking with people, I think the problem has been pinpointed. According to This, setting a camera’s clear flags to Depth Only will “…discard all information about where each object exists in 3-D space.” That means that when the camera rendering the gun renders over the one rendering the scene, it, in theory, discards all of the scene’s depth information.
I am still lacking a solution, but I have a few ideas. I know nothing about shader scripting, so how hard would it be to make either a post effect that pulls the alpha channels of multiple cameras and adds them together to use as a mask on ANOTHER camera, on top of everything, that has DoF on it, OR one that passes depth information from one camera to another higher in the depth order?
It was, like most problems, a much simpler fix than I had anticipated, and a friend of mine stumbled across it. The camera at the top of the depth order, in my case the UI camera, simply needed to be set to Vertex Lit under the Rendering Path drop-down on the camera. I have no idea why this matters at all, which is probably why I never found it myself, but that’s it!
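For anyone who wants to apply the same fix from script instead of the inspector, something like this should be equivalent (a sketch; the component name is my own, and it assumes it’s attached to the top-most camera):

```csharp
using UnityEngine;

// Attach to the camera highest in the depth order (the UI camera in my case).
public class VertexLitFix : MonoBehaviour
{
    void Start()
    {
        // Forcing the Vertex Lit rendering path on the top camera is what
        // restored Depth of Field on the scene camera below it.
        GetComponent<Camera>().renderingPath = RenderingPath.VertexLit;
    }
}
```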