Hi.
I’ve been playing with an idea for a more realistic depth of field than what can be achieved with the multiple-camera setup, and wanted to get some input on the approach before I dive in.
The plan is to use a depth map as input to the PRO BlurEffect script and have it drive the amount of blur for different parts of the screen.
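Roughly what I have in mind for the integration side, just as a sketch (the _DepthTex property and the material hookup are placeholders I made up, not the actual BlurEffect interface, and it assumes Graphics.Blit is available):

using UnityEngine;

// Sketch: hand a depth render texture to a blur material so the
// shader can scale its blur radius per pixel.
public class DepthBlur : MonoBehaviour
{
    public RenderTexture depthMap;   // filled in elsewhere, see below
    public Material blurMaterial;    // the modified blur material

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        // "_DepthTex" is a hypothetical property the blur shader would read
        blurMaterial.SetTexture("_DepthTex", depthMap);
        Graphics.Blit(src, dest, blurMaterial);
    }
}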
Assuming this would work, I’m faced with the task of generating the depth map, and this is where I hope someone might shed some light on the subject (I’ll worry about integrating with the BlurEffect script later). My first thought was to have some fancy shader generate it, but as I’m a bit stuck in my effort to learn shaders, I was scouring the net for a starting point and came across another approach: render the scene into a render texture using a second camera, with all objects white and black fog. This would produce an image where objects get brighter the closer they are to the camera, just like a depth map.
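My rough sketch of that second camera looks like this (the all-white shader is hypothetical, and the fog toggling in OnPreRender/OnPostRender is just to keep the main camera unaffected):

using UnityEngine;

// Fog-based depth camera: render everything with a plain white
// replacement shader while black fog is active, into a render texture.
public class FogDepthCamera : MonoBehaviour
{
    public Shader whiteShader;       // hypothetical shader that outputs plain white
    public RenderTexture depthMap;

    bool savedFog;
    Color savedFogColor;

    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.targetTexture = depthMap;
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = Color.black;          // far away = black
        cam.SetReplacementShader(whiteShader, "");  // render everything white
    }

    void OnPreRender()   // enable black fog only for this camera
    {
        savedFog = RenderSettings.fog;
        savedFogColor = RenderSettings.fogColor;
        RenderSettings.fog = true;
        RenderSettings.fogColor = Color.black;
    }

    void OnPostRender()  // restore the global fog settings
    {
        RenderSettings.fog = savedFog;
        RenderSettings.fogColor = savedFogColor;
    }
}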
So, any thoughts or ideas on which way to go? Other approaches? Maybe I can access an existing depth buffer directly?
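If the depth buffer is exposed, I imagine it would be something like this (depthTextureMode is what I found mentioned for newer Pro versions; I haven’t verified it works in mine, and the shader side would then sample _CameraDepthTexture):

using UnityEngine;

// Ask the camera to render a depth texture that shaders can sample.
public class EnableDepthTexture : MonoBehaviour
{
    void Start()
    {
        GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
    }
}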
Also, I’ve been generating some fog-based depth maps but feel I have too little control. I would like to set up a linear falloff from full fog at a custom far point (possibly the far clip plane) to no fog at a custom near point (possibly the near clip plane), but the controls are inadequate. Is there any way of setting the fog range?
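For reference, the kind of control I’m after would look something like this, assuming linear fog with start/end distances is scriptable (RenderSettings in newer versions seems to expose these; I haven’t confirmed it in mine):

using UnityEngine;

// Linear fog spanning exactly the camera's clip range.
public class LinearFogRange : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        RenderSettings.fogMode = FogMode.Linear;
        RenderSettings.fogStartDistance = cam.nearClipPlane; // no fog here
        RenderSettings.fogEndDistance = cam.farClipPlane;    // full fog here
    }
}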
thanks,
Patrik