Shadows in volumetric light

I’m working on some volumetric light now, based on an NVIDIA paper I think. I have the light working, but I’m having trouble computing shadows.

I render out a depth buffer from the light’s point of view. How would I tell if an object is obstructing my light? I tried computing a depth value and comparing it to the depth buffer, but the shadow has a strange offset. Can someone tell me how to compute a position in the frustum to compare against the depth buffer? As you can see, the shadowing is incorrect.

Here is what I have now

You need to transform your vertex using the same transform you used to generate the depth buffer. Typically this means passing the light’s view and projection matrices to your shader.

Basically:

`vertexInLightSpace = mul(objectToLight, v.vertex)`, then compare `vertexInLightSpace.z` with whatever is stored in your depth buffer. If `vertexInLightSpace.z` is closer to the light than the stored depth, the pixel is lit; otherwise it’s in shadow.
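A minimal Cg/HLSL sketch of that comparison, assuming you upload a combined light view-projection matrix and the light’s depth texture yourself (the names `_LightViewProjection`, `_LightDepthTexture`, and `_ShadowBias` are placeholders, not built-ins):

```hlsl
// Assumed uniforms: set these from script, e.g. with material.SetMatrix().
float4x4 _LightViewProjection;  // light's view * projection matrix
sampler2D _LightDepthTexture;   // depth buffer rendered from the light
float _ShadowBias;              // small offset to avoid self-shadow "acne"

// worldPos is the world-space position of the point being shaded.
// Returns 1 if lit, 0 if shadowed.
float ShadowFactor(float3 worldPos)
{
    // Transform with the SAME matrices used when rendering the depth buffer.
    float4 lightClip = mul(_LightViewProjection, float4(worldPos, 1.0));

    // Perspective divide, then remap xy from [-1,1] to [0,1] for sampling.
    float3 ndc = lightClip.xyz / lightClip.w;
    float2 uv = ndc.xy * 0.5 + 0.5;

    // Depth stored in the light's depth buffer at this position.
    float storedDepth = tex2D(_LightDepthTexture, uv).r;

    // This fragment's depth in the same space, pulled toward the light
    // slightly so depth-precision noise doesn't self-shadow the surface.
    float fragDepth = ndc.z - _ShadowBias;

    return fragDepth <= storedDepth ? 1.0 : 0.0;
}
```

Note that depth conventions differ between platforms (e.g. the post-divide z range and whether the texture needs a y-flip), so a constant offset in the shadow usually means the matrices used here don’t exactly match the ones used to render the depth buffer, or the bias is missing.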

Thanks for the reply.
Yeah that seems like what I was doing. I actually looked into it a bit more.

I think the actual problem is that the _CameraDepthTexture is coming out wrong.

I think it might be related to
http://forum.unity3d.com/threads/60524-Rendering-depth?highlight=depth+texture

I noticed that swapping between deferred and forward outputs different depth textures.

Is anyone else having these problems?

That’s expected; they’re different because forward doesn’t do the full thing (that’s why it’s forward and not deferred: if it rendered all the buffers as well, it would be as slow as deferred, since a major part of deferred’s performance hit comes from rendering those buffers).

From what I understand, deferred renders G-buffers for depth, normals, and specular. So there is that overhead, but it comes with the advantage that the final lighting computation is cheaper because it is done using the G-buffers.

But why would the depth textures be different?

Base pass renders each object once. View space normals and specular power are rendered into single ARGB32 Render Texture (normals in RGB channels, specular power in A). If platform hardware supports reading Z buffer as a texture, then depth is not explicitly rendered. If Z buffer can’t be accessed as a texture, then depth is rendered in additional rendering pass, using shader replacement.
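For reference, reading the camera depth texture in a shader usually goes through Unity’s helpers in UnityCG.cginc, which hide the platform differences described above (the `v2f` struct with a `screenUV` member is assumed here; exact macro names can vary between Unity versions):

```hlsl
#include "UnityCG.cginc"

// Declared automatically when the camera renders a depth texture.
sampler2D _CameraDepthTexture;

float4 frag(v2f i) : COLOR
{
    // Raw, non-linear depth at this screen position.
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.screenUV);

    // Convert to linear 0..1 depth (0 at the camera, 1 at the far plane).
    float linearDepth = Linear01Depth(rawDepth);

    return float4(linearDepth, linearDepth, linearDepth, 1);
}
```

Comparing forward and deferred on the linearized value should give matching results even when the raw texture contents differ, since the raw values depend on how each path filled the texture.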

I don’t think DirectX offers access to the Z buffer as a texture. So the depth has to be rendered via shader replacement. Maybe I’m just not getting something.

thanks

Hey, really cool. How about you hook us up with this thing when you’re done? :slight_smile: