mobile LINEAR depth of field - possible?

Hi all,

I was thinking about a really simple and limited depth of field effect - it doesn’t even need to be that good - think of Wind Waker on the GameCube. They must have had some method or other to achieve it very quickly on such limited hardware.

Do you know how they did it? The way I see it, there are a couple of ideas to follow:

  1. doing multiple camera renders sliced by distance - probably looks a bit fugly where the joins are, and of course there are the transformation costs to deal with.

  2. a simple shader approach: render the entire scene at a quarter of the size, blur it (post FX), then draw it back over the main scene with more transparency for anything closer up… in theory this would give a sort-of-decent DOF? (Rough sketch below.)
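
Roughly what I have in mind for option 2, as a completely untested sketch (GLSL ES 2.0-style fragment shader; the texture and uniform names are just placeholders I made up):

```glsl
// Blend a pre-blurred, low-res copy of the frame over the sharp frame,
// using a linear ramp on depth as the "amount of blur".
precision mediump float;

uniform sampler2D _SharpTex;   // full-res scene colour
uniform sampler2D _BlurTex;    // quarter-res blurred copy (bilinearly upscaled when sampled)
uniform sampler2D _DepthTex;   // linear depth stored as 0..1
uniform float _FocusStart;     // depth where the blur starts fading in
uniform float _FocusEnd;       // depth where the blur is fully in

varying vec2 v_uv;

void main()
{
    vec3 sharp   = texture2D(_SharpTex, v_uv).rgb;
    vec3 blurred = texture2D(_BlurTex,  v_uv).rgb;
    float depth  = texture2D(_DepthTex, v_uv).r;

    // Linear "DOF": no blur in front of _FocusStart, full blur beyond _FocusEnd.
    float blurAmount = clamp((depth - _FocusStart) / (_FocusEnd - _FocusStart), 0.0, 1.0);

    gl_FragColor = vec4(mix(sharp, blurred, blurAmount), 1.0);
}
```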

Accuracy isn’t important but speed is. I was wondering what ideas you guys had before embarking further on this concept. The game I am working on would really benefit visually from this, as it would make the foreground pop and reduce visual confusion.

Thanks guys!

I don’t know a whole lot about shaders but this is an interesting problem. I think I would go with option 2, but rather than rendering at a quarter size I would make the blurred version by applying a blur filter to the original render. If you use a small texture and upscale it, that would be the opposite of AA, so it would add aliasing.

So I guess the steps are: render the main camera normally, make a blurred copy of that, generate a depth map, and then use that as an alpha channel for the crisp version rendered on top of the blurred version.
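
I don’t know the exact shader syntax, so treat this as a rough guess, but the blurred copy could probably come from a cheap box-blur pass over a low-res copy of the frame, something like this (GLSL ES, all names made up):

```glsl
// 3x3 box blur over a low-res copy of the scene - cheap and soft enough
// for a background blur.
precision mediump float;

uniform sampler2D _SceneTex;   // low-res copy of the rendered frame
uniform vec2 _TexelSize;       // 1.0 / texture resolution

varying vec2 v_uv;

void main()
{
    vec3 sum = vec3(0.0);
    for (int x = -1; x <= 1; x++)
    {
        for (int y = -1; y <= 1; y++)
        {
            sum += texture2D(_SceneTex, v_uv + vec2(float(x), float(y)) * _TexelSize).rgb;
        }
    }
    gl_FragColor = vec4(sum / 9.0, 1.0);
}
```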

I’ve got no idea how to actually make all of that happen, but the theory sounds solid :smile:

Could you not use frustum culling with image effects on multiple composited cameras?

EDIT: Sorry, this was kind of your option 1 above.

I’m guessing an image-effect blur would be incredibly performance-hungry on mobile though. Another option would be some sort of vertex shader that pushes verts outwards more towards the distance… This would be your fast CoC (circle of confusion). But it’s only theory.

Option 2 is most likely. Get a half-res screen buffer and blend it in based on distance. Lower res for more blur.

Basically screen-grab, down-res it to half then up-res it back to full. Then down-res to quarter and up-res to full, etc…
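
Each down-res pass is basically free if bilinear filtering is on - something like this, rendered into a target half the size of the source (names are placeholders, just a sketch):

```glsl
// One down-res pass: the half-size pixel centre sits exactly between a 2x2
// block of source pixels, so a single bilinear fetch is the 2x2 average.
precision mediump float;

uniform sampler2D _SourceTex;  // previous (finer) level

varying vec2 v_uv;             // pixel centre of the half-size target

void main()
{
    gl_FragColor = texture2D(_SourceTex, v_uv);
}
```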

It’s how bloom and DOF work in Shadow of the Colossus on PS2; guessing Wind Waker’s the same…

Interesting! How does the quarter-res thing work, if you could elaborate on that? I got a little lost there! :slight_smile:

Well, essentially the lower res you downsize it to, the more blurred it becomes when you res it back up.

It’s a cheap blurring function :stuck_out_tongue:

So if you need more blurry bits, you can blend in the down-resed-more version.
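
As a rough sketch of the final combine (uniform names made up, and the blurred copies are assumed to already exist as separate textures):

```glsl
// Pick "how blurry" from depth, then blend between the sharp frame and two
// blur levels (half-res and quarter-res copies, upscaled when sampled).
precision mediump float;

uniform sampler2D _SharpTex;     // full-res frame
uniform sampler2D _BlurHalf;     // half-res blurred copy
uniform sampler2D _BlurQuarter;  // quarter-res blurred copy
uniform sampler2D _DepthTex;     // linear depth, 0..1
uniform float _FocusStart;
uniform float _FocusEnd;

varying vec2 v_uv;

void main()
{
    float depth = texture2D(_DepthTex, v_uv).r;
    float blur  = clamp((depth - _FocusStart) / (_FocusEnd - _FocusStart), 0.0, 1.0);

    vec3 sharp      = texture2D(_SharpTex,    v_uv).rgb;
    vec3 halfRes    = texture2D(_BlurHalf,    v_uv).rgb;
    vec3 quarterRes = texture2D(_BlurQuarter, v_uv).rgb;

    // blur 0..0.5 fades sharp -> half-res, blur 0.5..1 fades half-res -> quarter-res.
    vec3 nearMix = mix(sharp, halfRes, clamp(blur * 2.0, 0.0, 1.0));
    vec3 colour  = mix(nearMix, quarterRes, clamp(blur * 2.0 - 1.0, 0.0, 1.0));

    gl_FragColor = vec4(colour, 1.0);
}
```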

Some info:

The downscaling works by replacing each group of 2x2 pixels with one pixel in the lower resolution that has the averaged color of the 4 finer pixels in the higher resolution. (If you take the average of 4x4 pixels instead of 2x2 pixels, the quality can be improved a lot.)
Averaging 2x2 pixels is a single(!) bilinear texture read at the position in between the 2x2 pixels; thus, this is very efficient. Moreover, the resulting downscaled image is only a quarter of the size, so the image filter has to process only 1/4 of the number of pixels of the original image.
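
One cheap way to do the 4x4 average (when downscaling by 4 in each direction) is four bilinear taps, each sitting between a different 2x2 block of source pixels - a sketch with made-up names:

```glsl
// Downscale by 4 per axis: four bilinear taps together average a 4x4 block.
precision mediump float;

uniform sampler2D _SourceTex;   // finer level being downscaled
uniform vec2 _SourceTexelSize;  // 1.0 / source resolution

varying vec2 v_uv;              // destination pixel centre in source UV space

void main()
{
    vec2 o = _SourceTexelSize;
    vec3 sum = texture2D(_SourceTex, v_uv + vec2(-o.x, -o.y)).rgb
             + texture2D(_SourceTex, v_uv + vec2( o.x, -o.y)).rgb
             + texture2D(_SourceTex, v_uv + vec2(-o.x,  o.y)).rgb
             + texture2D(_SourceTex, v_uv + vec2( o.x,  o.y)).rgb;
    gl_FragColor = vec4(sum * 0.25, 1.0);
}
```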

You can continue downscaling to even coarser levels. (In the end you would create all the levels of a mipmap and in fact this is an operation that is integrated very efficiently in many OpenGL drivers as automatic mipmap generation.)

The simplest interpolation of pixels from any of the coarse image levels is just a single texture lookup (with bilinear interpolation); thus, this is also extremely efficient.
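
So the whole per-pixel DOF lookup can end up as a single fetch from the mip chain. A sketch in GLSL ES 1.00, assuming the scene texture has mipmaps with trilinear filtering enabled, and using the LOD bias argument of texture2D (on a 1:1 full-screen pass the base level is roughly 0, so the bias effectively selects the level; names are placeholders):

```glsl
// Reverse-mapped style lookup: the blur amount picks how far down the mip
// chain of the scene texture to sample; trilinear filtering blends levels.
precision mediump float;

uniform sampler2D _SceneTex;   // scene colour with a full mip chain
uniform sampler2D _DepthTex;   // linear depth, 0..1
uniform float _FocusStart;
uniform float _FocusEnd;
uniform float _MaxMipLevel;    // coarsest level to reach at full blur

varying vec2 v_uv;

void main()
{
    float depth = texture2D(_DepthTex, v_uv).r;
    float blur  = clamp((depth - _FocusStart) / (_FocusEnd - _FocusStart), 0.0, 1.0);

    // Third argument is an LOD bias; on a 1:1 full-screen quad it acts like
    // an explicit mip level select.
    vec3 colour = texture2D(_SceneTex, v_uv, blur * _MaxMipLevel).rgb;

    gl_FragColor = vec4(colour, 1.0);
}
```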

The full algorithm is described in GPU Gems, Section 23.6 “Reverse-Mapped Z-Buffer Depth of Field”: http://http.developer.nvidia.com/GPUGems/gpugems_ch23.html ; by the way, NVIDIA also holds a patent on this technique, but I haven’t heard of them asking anyone to license it.

That is very cool, thanks for the link!

Very interesting information, thank you :slight_smile:

Basically, if we render to a texture on iOS, do we get the mipmaps as well “for free” from these GPUs, or is that not supported?

Hi hippo,
I tried using trilinear sampling with hardware mipmaps when we did our DOF. It worked on PC but gave no result on iOS. It’s a pity, because it looked great on the PC; I think it gave a better look than my final manual sampling. For those that are interested, I’m covering how we did it here.

Kind regards,
Brn