Depth of field revisited

Hi.

I’ve been playing with an idea for a more realistic depth of field than can be achieved with the multiple-camera setup, and wanted to get some input on the approach before I dive in.

The plan is to use a depth map as input to the PRO BlurEffect script and have that set the amount of blur for different parts of the screen.

Assuming this would work I’m faced with the task of generating the depth map, and this is where I hope someone might shed some light on the subject (I’ll worry about the problem of integrating with the BlurEffect script later). My first thought was to have some fancy shader generate it, but as I’m a bit stuck in my effort to learn shaders I was scouring the net for a starting point when I came across another approach. There it was suggested to render the scene into a render texture using a second camera, with all objects white and black fog. This would generate an image where objects become brighter the closer they are to the camera, just like a depth map.

So, any thoughts / ideas on what way to go? Other approaches? Maybe I can access an existing depth buffer directly?

Also, I’ve been generating some fog-based depth maps but feel I have too little control. I would like to be able to set it up to have a linear falloff from full fog at a custom distant point (possibly the far clip plane) to no fog at a custom near point (possibly the near clip plane), but the controls are inadequate. Is there any way of setting fog range?

thanks,

Patrik

Yes, generating some sort of “depth texture” and then doing a variable-sized blur is one way to do it. Or generating a depth texture, blurring everything by different amounts and then using depth to interpolate between several blurred versions.

The built-in fog in Unity is exponential squared, but in custom shaders you can set fog to other modes. Like:

Fog { Mode Linear Range 0.1, 1000 }

Accessing the depth buffer from shaders is not possible with current graphics hardware. We’ll have to wait a bit until DX10-level hardware becomes widespread :)

Thanks for the fog info, that gives me a good depth texture. Guess that was the easy part :)

Let’s see if I can hack something together from here.

/P

Some progress:

Not the cleanest setup but it works OK. Using 3 cameras: Main, Blurred and Depth. Blurred and Depth render into Render Textures. The Main Camera has a script that passes the textures to a shader that interpolates between Main and Blurred using the Depth Texture (thanks Aras, a variable-sized blur would have killed me).

To render the depth map I have set up a separate layer with duplicates of all objects in the scene. The duplicates all have the same shader that renders them white and applies a custom fog.

Fog {
     Color [_FogColor]
     Mode Linear Range 0.1, 20
}

The only thing needed here is to pass Material Properties into the Range so I can control the blur with the camera clip planes. Probably simple, but I couldn’t figure it out. Mode Linear Range _Start, _End doesn’t work. Help?

I tried to avoid the duplicates by swapping the materials on all objects in OnPreRender and back in OnPostRender for the depth camera, but that didn’t work. Looks like OnPreRender is called on all cameras before any OnRenderImage? Help here would be appreciated; right now it’s not very convenient to use with animated objects or physics.

The Image Effect script on the main camera has public properties for the 2 render textures (blurred and depthMap), which are passed to the custom shader in OnRenderImage (the GL stuff is beyond me).

// inside OnRenderImage: composite the clean, blurred and depth textures
RenderTexture.active = destination;

material.SetTexture("_MainTex", clean);     // unblurred scene
material.SetTexture("_BlurTex", blurred);   // blurred render texture
material.SetTexture("_DepthMap", depthMap); // fog-based depth texture

GL.PushMatrix ();
GL.LoadOrtho ();

for (int i = 0; i < material.passCount; i++) {
     material.SetPass (i);
     ImageEffects.DrawGrid(subdivisions, subdivisions);
}

GL.PopMatrix ();

A step towards making it more user friendly would be to get rid of the multiple camera setup, but I have no idea on how to do that. Also the duplicate depth map objects would be nice to avoid.

I’ll post a package as soon as I’ve cleaned it up a bit, and hopefully someone can fiddle with it to make it as user friendly as everything else in Unity :)

/P

Awesome.

The easiest way to make setup simpler would probably be to create the extra cameras automatically from a script in Awake.
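
Something like this rough, untested sketch would do it (the render texture reference and the layer index are placeholders you’d fill in yourself):

using UnityEngine;

public class DepthOfFieldSetup : MonoBehaviour
{
    public RenderTexture depthTexture;   // the depth render texture asset
    public int depthMapLayer = 8;        // layer containing the depth-map objects

    void Awake()
    {
        Camera mainCam = GetComponent(typeof(Camera)) as Camera;

        GameObject go = new GameObject("DepthCam");
        go.transform.parent = transform;                 // follow the main camera
        go.transform.localPosition = Vector3.zero;
        go.transform.localRotation = Quaternion.identity;

        Camera depthCam = go.AddComponent(typeof(Camera)) as Camera;
        depthCam.CopyFrom(mainCam);                      // same projection and clip planes
        depthCam.cullingMask = 1 << depthMapLayer;       // only render the depth-map objects
        depthCam.targetTexture = depthTexture;           // render into the depth map
        depthCam.depth = mainCam.depth - 1;              // render before the main camera
    }
}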

You could just use some names in the shader (like _MyStart, _MyEnd) and somewhere from a script set up global float properties (Shader.SetGlobalFloat). No need to put them into each material.
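
For example (just a sketch; _MyStart and _MyEnd are whatever names your depth shader reads, here driven by the camera clip planes):

using UnityEngine;

public class SetDepthFogRange : MonoBehaviour
{
    void Update()
    {
        Camera cam = GetComponent(typeof(Camera)) as Camera;
        // global shader properties, no per-material setup needed
        Shader.SetGlobalFloat("_MyStart", cam.nearClipPlane);
        Shader.SetGlobalFloat("_MyEnd", cam.farClipPlane);
    }
}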

That’s one caveat of the current rendering system - it’s not easy to change object materials per-camera. We have refactored that part for Unity 2.0, so there it will be much more intuitive.

Anyway, you’ve got really cool stuff here!

Here’s the package.

Cleaned it up and added a script that handles the setup. Make sure you have the Pro Assets installed, then just import the package.

Open the scene in the DepthOfField folder and try it out. It should work as it is.

To set up your own scene with DepthOfField,

Add the script DepthOfField to your Camera and set it up:

Near & Far determine where the blur starts and ends (everything closer than Near has no blur and everything further away than Far is fully blurred).
Depth Map Layer Index - the layer where all duplicated depth objects end up. This layer only renders to the DepthCam. Create a new layer and set this variable to its index.
Ignore Duplicate Layer Index - anything in this layer will not be duplicated as depth map objects. Create a layer and add stuff you don’t need blurred (e.g. the FPS Controller).
Then there’s the DepthMaterial and the 2 render textures (Blur & Depth). There are matching Assets in the package.
Finally, the DepthOfField shader is located in the Shaders folder.

That should do it. Let me know how it runs and I’ll add it to the wiki when it’s working properly.

Notes on things that could be improved:
The duplication of objects for use in rendering the depth map is a bit unclean. It will probably mess up Physics/Game AI etc. Should do this differently…

Should probably expose the variables controlling the BlurEffect so you can change the maximum blur.

Also, I guess there are areas that could be optimized, my shader/OpenGL knowledge is limited :)

32552–1184–$depthoffieldunitypackage_199.zip (29.6 KB)

That looks nice. A possible improvement would be to modify the blur so it ignores pixels that are closer to the camera than the current one, to avoid having sharp objects bleed into the background (note the red ghosting around the red balls).

Another feature would be the ability to have close-up objects blurred as well. (Imagine focusing on an object 10 meters away; everything close up would then be out of focus as well.)

Hmmm… maybe it’s time for me to learn something about shaders – this looks fun.

Have you tried clearing the depth texture to white? If you did that, the background would be completely unblurred, which you could counter by using a pre-blurred skybox. It would prevent objects from leaking onto the background. They might still leak onto each other, but it wouldn’t be as visible…
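
On the depth camera that would just be something like this (assuming white reads as “no blur” in your blend shader; depthCam is whatever reference you have to that camera):

// clear the depth camera to plain white instead of the skybox
depthCam.clearFlags = CameraClearFlags.SolidColor;
depthCam.backgroundColor = Color.white;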

I had to try this out. It kind of works; the only problem is that blurred objects have sharp outlines against the skybox, which looks wrong.

That’s one caveat of the current rendering system - it’s not easy to change object materials per-camera. We have refactored that part for Unity 2.0, so there it will be much more intuitive.

Just looking into this again. We have reached 2.0. Is there a workflow for easily changing materials per camera?

I want to remove the duplicate object parts of this code, so it would be great to turn the objects all black and render a white fog, or something like that.

Yes. You can change the material from OnPreRender() and set it back from OnPostRender(). It’s not exactly “easy”, but it works predictably at least :)
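
A rough, untested sketch of what I mean, attached to the depth camera (depthMaterial would be your white-with-fog material):

using UnityEngine;

public class DepthMaterialSwap : MonoBehaviour
{
    public Material depthMaterial;   // white objects + custom fog shader
    Renderer[] renderers;
    Material[][] saved;

    void OnPreRender()
    {
        // you'd probably want to cache this instead of searching every frame
        renderers = FindObjectsOfType(typeof(Renderer)) as Renderer[];
        saved = new Material[renderers.Length][];
        for (int i = 0; i < renderers.Length; i++)
        {
            saved[i] = renderers[i].sharedMaterials;
            Material[] swapped = new Material[saved[i].Length];
            for (int j = 0; j < swapped.Length; j++)
                swapped[j] = depthMaterial;
            renderers[i].sharedMaterials = swapped;   // everything renders as depth
        }
    }

    void OnPostRender()
    {
        // restore the original materials so the other cameras render normally
        for (int i = 0; i < renderers.Length; i++)
            renderers[i].sharedMaterials = saved[i];
    }
}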

Nicholas suggested using a projector for the depth map rendering. It’s simple and efficient; the only drawback is that it doesn’t work with transparent objects. I turn it on and off in OnPreCull for the different cameras. Works great.

Lately I’ve been thinking it can all be done with one camera - rendering depth in the alpha (projector only doing alpha) and storing an unblurred copy in a rendertexture before blurring it. Then do the blending. Haven’t had time to test it yet.
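
Roughly what I have in mind, completely untested (blurMaterial and combineMaterial are placeholders):

void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // keep an unblurred copy of the frame (depth sits in source's alpha via the projector)
    RenderTexture clean = RenderTexture.GetTemporary(source.width, source.height);
    Graphics.Blit(source, clean);

    // blur a downsampled copy
    RenderTexture blurred = RenderTexture.GetTemporary(source.width / 2, source.height / 2);
    Graphics.Blit(source, blurred, blurMaterial);

    // blend sharp and blurred using the depth stored in the alpha channel
    combineMaterial.SetTexture("_BlurTex", blurred);
    Graphics.Blit(clean, destination, combineMaterial);

    RenderTexture.ReleaseTemporary(clean);
    RenderTexture.ReleaseTemporary(blurred);
}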

Any cool ideas on the transparent objects issue would be much appreciated :)

/Patrik

Hi Patrik

I spent a few hours today modifying your original DoF setup to use the suggested projector trick instead of duplicating geometry. It works really well. I tried it with some transparent objects and, as you wrote, the result looks really weird.

I tried to turn my projector on/off using OnPreCull as you suggested, but could not get it to work. How did you do that?

Your one camera idea sounds really cool, right now I use 3 and my Draw Calls are getting seriously high.

I have a script that generates everything and adds scripts with OnPreCull to all three cameras. The active state of the projector is set to false for the normal and blur cameras and true for the depth map camera. So for the blur cam, it looks like this:

void OnPreCull(){
     Dof_2_0.depthProjector.active = false;    // this camera never sees the depth projector
}
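
…and the script on the depth map camera does the opposite:

void OnPreCull(){
     Dof_2_0.depthProjector.active = true;     // only the depth map camera renders the projector
}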

Since the projector method won’t do transparent objects correctly I’m looking for an approach that can fix it. Before going at it I’d like to bounce off some thoughts here to get input on how to best do this.

One idea is to have two materials for rendering the depth map, one for non-transparent objects and one for transparent objects. Then check the shader used by an object and apply the correct material in OnPreRender() and back in OnPostRender(). This would require multiple cameras and a lot of material swapping each frame. I’d like to do the whole setup once, in Awake or Start and use only one camera.

One solution is to duplicate all objects with code in Start() (the original DoF uses duplicate objects, created and set up in the editor), but I imagine there will be issues with syncing animated objects and complex physics setups, so I’d rather try to solve it with shaders/materials. Also, this requires multiple cameras.

Now, it would be cool if I could look through all shaders used in the scene and simply add an extra pass for drawing depth (in the alpha channel) to them, checking whether the original shader is transparent or not. Then I could do the entire setup in Start and only use one camera. Per-object glow uses an extra pass for drawing a ‘glow alpha channel’, so that part seems to work, but editing shaders at runtime is not possible, right? Maybe there’s an alternative way of doing this, without the need to modify the shader?

Any input appreciated :)

/Patrik

I have been working on using two materials for all the transparent objects in my scene. My depth camera then switches between those two materials using OnPreRender and OnPostRender, so now the depth map looks right again.

But strangely it doesn’t seem to change the final output.

Objects in the back still become very clear and sharp when objects move in front of them. See the picture above.

I spent a few hours last night trying to track it down, and I think the problem is in the DepthOfField shader. I’m not that much into writing shaders, but I think the culprit is this line:

float4 output = float4( col1.x * (1-b)+col0.x*b, col1.y * (1-b)+col0.y*b, col1.z * (1-b)+col0.z*b, 1);

Maybe this is the time to learn about shaders

/ bjerre

Nice screens. I’m really looking forward to seeing how your project turns out.

I think that ‘effect’ is impossible to get around with the way we’re doing things right now. You could try using transparent materials for the depth rendering and see how that looks, although I’m pretty sure that will make transparent objects look blurred all the time.

Perhaps there’s a solution with a GrabPass for transparent objects, grabbing whatever is behind the object and applying blur before rendering the object itself. But how to get the blur amount for the current object if it’s not in the depth map… hmm… Maybe a per-object depth value stored in the shader?

I think it’s time to look into how Depth of field is done in other engines.

/Patrik

Looking pretty good guys, despite transparency.

I’ve read a little bit about this.

It seemed to me the best solution would be another render pass in all materials.

I think the only way to solve your transparency would be to somehow do the blur at render time as a shader process, rather than as a post process. However, I don’t know if that’s possible.

Anyway, one of these days when I’m not putting out fires, I’d love to try to optimize this thing for real use. I have a game on the back burner I’d love to use it in.

I’m sure you’ve looked at the ATI article.

http://64.233.167.104/search?q=cache:ya_CZNPbsfUJ:ati.amd.com/developer/shaderx/ShaderX2_Real-TimeDepthOfFieldSimulation.pdf+depth+of+field+real+time&hl=en&ct=clnk&cd=1&gl=us&client=firefox-a

You guys might want to check out the latest GPU gems book, it’s got a good article in it on how CoD4 handled DoF in their game.

At work, we do a 4-iteration downsampling of the framebuffer, recursively running it through a downsample, then doing a Gaussian, and repeating the process.

We do a coarse Gaussian at the first two steps (we don’t rotate the kernel, which would be a small improvement I think), and then two piecewise (discrete-axis) Gaussians on the x and y axes only to get a broader amount of blur on the lower-resolution buffers (since we’ve got fewer pixels to deal with, we can use a broader kernel).
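
In rough Unity-ish pseudo-code (not our actual pipeline; downsampleMaterial and blurMaterial are placeholders), the chain looks something like this:

// 4-step downsample + gaussian chain, run inside an image effect
RenderTexture current = source;
for (int i = 0; i < 4; i++) {
     RenderTexture down = RenderTexture.GetTemporary(current.width / 2, current.height / 2);
     Graphics.Blit(current, down, downsampleMaterial);   // halve the resolution
     RenderTexture blurred = RenderTexture.GetTemporary(down.width, down.height);
     Graphics.Blit(down, blurred, blurMaterial);         // gaussian at this level
     RenderTexture.ReleaseTemporary(down);
     if (current != source) RenderTexture.ReleaseTemporary(current);
     current = blurred;
}
// 'current' now holds the broadest, lowest-resolution blur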

You’re likely not going to be able to find a solution to your alpha issues (I know we’ve tended to work around them by compromising on the data side). If you implement your DoF as a post pass, you don’t have depth information available for every alpha pixel you’ve drawn (potentially with multiple planes of overdraw). Conversely, if you try to deal with it in the shader, you’ll kill your framerate: alpha is typically already stressing your fill performance, and adding multiple levels of Gaussian lookups for each alpha pixel you want to draw is going to kill you.

We tend to limit our alpha use, in situations where we’re using DoF, to 1-bit punch-through alpha, and just live with it.

Good luck!