Full Dynamic Lighting - Shadow Maps vs. Shadow Volumes

Personally, I’ve noticed that games using shadow volumes seem to have more shadow-casting lights than games using dynamic shadow mapping. Could it be that shadow volumes share a few buffers (depth and stencil) across every light, rather than needing a separate buffer for each shadow-casting light the way shadow mapping does? I’m not sure whether current shadow-mapping implementations can share buffers like that; I assume they can’t, but I may be mistaken.
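For what it’s worth, here’s a minimal sketch of how a stencil shadow volume frame is usually structured (depth-fail, a.k.a. “Carmack’s reverse”). The `render*` functions are hypothetical stubs, not a real graphics API; the point is just that every light reuses the same depth and stencil buffers, whereas a shadow-mapping renderer typically renders a separate depth texture per shadow-casting light.

```cpp
// Sketch only: structure of a stencil shadow volume frame. All render* calls
// are empty stand-ins for real GPU work so the example compiles on its own.
#include <vector>

struct Light { /* position, colour, range, ... */ };
struct Mesh  { /* vertex/index data */ };

void renderDepthOnly(const std::vector<Mesh>&) {}                 // fill depth buffer
void clearStencil() {}
void renderVolumeBackFaces(const Mesh&, const Light&)  {}         // stencil += 1 on depth fail
void renderVolumeFrontFaces(const Mesh&, const Light&) {}         // stencil -= 1 on depth fail
void renderLitWhereStencilZero(const std::vector<Mesh>&, const Light&) {} // additive lighting

void renderFrame(const std::vector<Mesh>& scene, const std::vector<Light>& lights)
{
    renderDepthOnly(scene);               // 1. lay down scene depth once
    for (const Light& light : lights)     // 2. every shadow-casting light reuses
    {                                     //    the same depth + stencil buffers
        clearStencil();
        for (const Mesh& caster : scene)
        {
            renderVolumeBackFaces(caster, light);
            renderVolumeFrontFaces(caster, light);
        }
        renderLitWhereStencilZero(scene, light); // 3. light only the unshadowed pixels
    }
}

int main() { renderFrame({}, {}); }
```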

There are pros and cons to both: shadow volumes aren’t well suited to high-polygon meshes, but shadow mapping has its own artifacts, whereas shadow volumes are almost pixel-perfect.
(I’ve seen some artifacts around spheres, though.)

Say you want soft shadows: shadow volumes fundamentally produce sharp, crisp edges, and a blur pass on the shadow/stencil buffer would introduce a few artifacts of its own. I haven’t seen soft-shadow artifacts in games with dynamic shadow mapping, just shadow-map bias artifacts, where shadows come up short in corners.
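For reference, this is roughly where shadow maps get both their softness and their bias artifacts: a minimal CPU-side sketch of percentage-closer filtering (PCF) with a depth bias. The `ShadowMap` struct, the 3x3 kernel and the bias handling are my own assumptions for illustration, not any particular engine’s implementation; push the bias too high and you get exactly the “shadow pulls away from the corner” artifact mentioned above.

```cpp
// Sketch only: PCF over a light-space depth buffer, with a bias to avoid acne.
#include <algorithm>
#include <vector>

struct ShadowMap {
    int size = 0;
    std::vector<float> depth;                    // depth as seen from the light
    float sample(int x, int y) const {
        x = std::clamp(x, 0, size - 1);
        y = std::clamp(y, 0, size - 1);
        return depth[y * size + x];
    }
};

// Returns a shadow factor in [0,1]: 0 = fully shadowed, 1 = fully lit.
float pcfShadow(const ShadowMap& map, int x, int y, float fragmentDepth, float bias)
{
    int lit = 0, taps = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx, ++taps)
            if (fragmentDepth - bias <= map.sample(x + dx, y + dy))
                ++lit;                           // this tap is not occluded
    return static_cast<float>(lit) / static_cast<float>(taps);
}
```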

Shadow mapping has performance drawbacks at high resolutions, so you usually see games with blurry, low-resolution shadows, maybe with an extra blur pass to compensate for the low resolution.
You could also have a “Shadow Distance” setting like in Unity; perhaps the same could be done with shadow volumes, toggling whether a caster gets extruded into a shadow volume at all based on distance (rough sketch below).
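Something like this trivial distance cut-off is all I have in mind; the function and parameter names are made up for the example:

```cpp
// Sketch only: skip extruding shadow volumes for casters beyond a chosen range.
#include <cmath>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

bool shouldExtrudeShadowVolume(const Vec3& casterPos, const Vec3& cameraPos,
                               float shadowDistance)
{
    // Beyond the cut-off we save the extrusion and the fill-rate cost entirely.
    return distance(casterPos, cameraPos) <= shadowDistance;
}
```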

I’ve seen older games use static shadow volumes as a sort of pre-baked lighting technique: the lights don’t move and the shadows are very crisp. I don’t think dynamic shadow mappers can do this. There’s always lightmapping, of course… it sure takes forever to bake, though.

Dynamic shadow volumes looked great back in 2003 and 2004!

And in 2009…
Chronicles of Riddick: Escape from Butcher Bay (Assault on Dark Athena Remastered version)

Games could even use them in their mechanics, like hiding in the shadows. (Thief: Deadly Shadows, Chronicles of Riddick: Escape from Butcher Bay, etc.)

You could probably pull that off with more modern shadow mappers (have a camera point straight down at a textureless grey plane and average out several pixels, roughly as sketched below), but it would probably be more of a pain than it’s worth, and any shadow-mapping artifacts could get you spotted as well!
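Just to make the idea concrete, here is a small sketch of the read-back-and-average part; the probe setup, luminance weights and threshold are assumptions for illustration, not how Thief or Riddick actually implemented it:

```cpp
// Sketch only: average the luminance of a small "light probe" readback and
// compare it to a threshold to decide whether the player counts as hidden.
#include <cstdint>
#include <vector>

struct RGBA8 { std::uint8_t r, g, b, a; };

// Average perceptual luminance of a readback buffer, in [0,1].
float averageLuminance(const std::vector<RGBA8>& pixels)
{
    if (pixels.empty()) return 0.0f;
    double sum = 0.0;
    for (const RGBA8& p : pixels)
        sum += (0.2126 * p.r + 0.7152 * p.g + 0.0722 * p.b) / 255.0;
    return static_cast<float>(sum / pixels.size());
}

bool playerIsHidden(const std::vector<RGBA8>& probePixels, float threshold = 0.15f)
{
    // Any shadow-mapping artifacts (acne, peter-panning) leak straight into this
    // average, which is the gameplay risk mentioned above.
    return averageLuminance(probePixels) < threshold;
}
```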

So what do you guys think?
I think you could fake volumetric lighting by applying a soft sun-ray texture to the extruded polygons created by dynamic shadow volumes, and maybe give them a slight soft-particle effect.
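By “soft-particle effect” I mean fading the shaft where its geometry approaches the scene depth, something like the sketch below; the function and its fadeDistance parameter are made up for illustration:

```cpp
// Sketch only: soft-particle style depth fade for additive light-shaft polygons.
#include <algorithm>

// sceneDepth and shaftDepth are linear view-space depths for the same pixel.
float softShaftFade(float sceneDepth, float shaftDepth, float fadeDistance)
{
    // 0 where the shaft polygon touches solid geometry, 1 once it is at least
    // fadeDistance in front of it, so there is no hard intersection line.
    return std::clamp((sceneDepth - shaftDepth) / fadeDistance, 0.0f, 1.0f);
}
```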

The difference between shadow maps and shadow volumes is roughly like the difference between Doom’s “3D” engine and a true 3D engine like Quake’s. The former is faster and can run on lower-end hardware, but has certain limitations and doesn’t look as good. The latter requires beefier hardware, but the effect is worth it.

Anyway, I think volumetric lighting and shadowing techniques are the future.

Unrelated note: I’d love future GPUs to support Bezier/NURBS 3D objects so we won’t have to use polygons where they don’t make sense (basically any curved or round parts such as spheres, cylinders, bottles, etc.).

Not to continue the unrelatedness too much, but this is something I’ve thought about before. I concluded that it would probably stall hardware development far too significantly for it to make any sense for manufacturers or developers to move GPUs away from the traditional render pipeline.
Think about how much more complex it is to grab a colour from a texture mapped onto a shape defined in 3D space by a mathematical equation than it is to do the same for an easily triangulated polygon. Also, how do you UV map a Bezier-extruded shape without using approximated volume mapping, which will never work for all cases?

Considering the added complexity would reduce performance on hardware of the same power, it makes no sense for manufacturers or developers to push for tech that is actually worse, save for maybe a slight benefit of using a Bezier to define a shape.

A far better solution would be a pipeline that converts Bezier extrusions into polygonal objects and tessellates them, which has been done, though I’ve not seen anything like it for Unity (hint hint, Unity devs). A rough sketch of the idea is below the link.
Interesting investigation done by Intel: https://software.intel.com/en-us/articles/using-nurbs-surfaces-in-real-time-applications
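Here is a minimal sketch of that conversion step for a single bicubic Bezier patch; a uniform tessellation is enough to show the idea, and the parametric (u, v) can serve directly as texture coordinates. All names are illustrative, and a real pipeline would pick the tessellation density adaptively per patch:

```cpp
// Sketch only: uniformly tessellate one bicubic Bezier patch into a triangle grid.
#include <array>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// Cubic Bezier curve via de Casteljau's algorithm.
Vec3 bezier(const std::array<Vec3, 4>& p, float t)
{
    Vec3 a = lerp(p[0], p[1], t), b = lerp(p[1], p[2], t), c = lerp(p[2], p[3], t);
    return lerp(lerp(a, b, t), lerp(b, c, t), t);
}

// Evaluate a 4x4 control-point patch at (u, v).
Vec3 bezierPatch(const std::array<std::array<Vec3, 4>, 4>& cp, float u, float v)
{
    std::array<Vec3, 4> rows;
    for (int i = 0; i < 4; ++i) rows[i] = bezier(cp[i], u);
    return bezier(rows, v);
}

struct Vertex { Vec3 position; float u, v; };  // (u, v) doubles as the UV

// Uniform tessellation into a (steps x steps) quad grid, two triangles per quad.
void tessellate(const std::array<std::array<Vec3, 4>, 4>& cp, int steps,
                std::vector<Vertex>& vertices, std::vector<int>& indices)
{
    for (int j = 0; j <= steps; ++j)
        for (int i = 0; i <= steps; ++i)
        {
            float u = float(i) / steps, v = float(j) / steps;
            vertices.push_back({ bezierPatch(cp, u, v), u, v });
        }
    for (int j = 0; j < steps; ++j)
        for (int i = 0; i < steps; ++i)
        {
            int a = j * (steps + 1) + i, b = a + 1, c = a + steps + 1, d = c + 1;
            indices.insert(indices.end(), { a, b, c, b, d, c });
        }
}
```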

Anyway… back to shadows… :smile:


Thanks, interesting white paper. I don’t understand half of it, but maybe someone at Unity does and could implement it, especially since Blender and (AFAIK) many other 3D packages support NURBS surfaces.