I render out a depth buffer from the light's point of view. How would I tell if an object is obstructing my light? I tried computing a depth value and comparing it to the depth buffer, but the shadow has a strange offset. Can someone tell me how to compute a position in the frustum to compare against the depth buffer? As you can see, the shadowing is incorrect.
You need to transform your vertex using the same transform you used to generate the depth buffer. Typically this means passing the light's view and projection matrices to your shader.
Basically:
vertexInLightSpace = mul(objectToLight, v.vertex), then compare vertexInLightSpace.z with whatever is stored in your depth buffer. If vertexInLightSpace.z is closer to the light than what is stored in the depth buffer, the pixel is lit.
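The comparison above can be sketched in plain Python. This is a minimal illustration, not Unity or HLSL code: it assumes an orthographic (directional) light so no perspective divide is needed, and the names light_view_proj, depth_at, and BIAS are made up for the example. The small depth bias is the usual fix for the "strange offset" style of self-shadowing artifacts (shadow acne), and the remap from NDC to [0,1] is where incorrect shadow lookups often come from.

```python
def mat_vec(m, v):
    # 4x4 row-major matrix times a 4-component vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Illustrative orthographic light transform: maps x,y in [-10,10] to
# [-1,1] NDC and z (distance from the light) in [0,100] to [0,1].
light_view_proj = [
    [0.1, 0.0, 0.0,  0.0],
    [0.0, 0.1, 0.0,  0.0],
    [0.0, 0.0, 0.01, 0.0],
    [0.0, 0.0, 0.0,  1.0],
]

BIAS = 0.005  # offsets the test away from self-shadowing "acne"

def lit(world_pos, depth_at):
    """depth_at(u, v) stands in for a shadow-map lookup in [0,1] coords."""
    x, y, z, _ = mat_vec(light_view_proj, [*world_pos, 1.0])
    u, v = x * 0.5 + 0.5, y * 0.5 + 0.5   # NDC [-1,1] -> texcoords [0,1]
    return z - BIAS <= depth_at(u, v)      # closer than stored depth => lit

# A blocker stored at light-space depth 0.3 shadows anything farther away:
blocker = lambda u, v: 0.3
print(lit((0.0, 0.0, 20.0), blocker))  # depth 0.2, nearer than 0.3 -> True
print(lit((0.0, 0.0, 50.0), blocker))  # depth 0.5, behind blocker -> False
```

In a real shader the same steps happen with the fragment's world position and a shadow-map sampler; for a perspective (spot) light you would also divide x, y, z by w before the remap.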
It's expected that they are different: forward doesn't do the full thing (that's why it's forward and not deferred; if it did, it would be as slow as deferred, since a major part of the performance hit comes from rendering the buffers).
From what I understand, deferred renders G-buffers for depth, normals, and specular. So there is that overhead, but the advantage is that the final lighting computation is cheaper because it is done using the G-buffers.
But why would the depth textures be different?
Base pass renders each object once. View space normals and specular power are rendered into single ARGB32 Render Texture (normals in RGB channels, specular power in A). If platform hardware supports reading Z buffer as a texture, then depth is not explicitly rendered. If Z buffer can’t be accessed as a texture, then depth is rendered in additional rendering pass, using shader replacement.
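To make the ARGB32 layout described above concrete, here is a hypothetical sketch of that packing in plain Python, not actual engine code: a view-space normal in [-1,1] is remapped to [0,1] and quantized into the 8-bit RGB channels, and specular power (scaled by an assumed maximum, max_spec) goes into A.

```python
def pack_gbuffer(normal, spec_power, max_spec=128.0):
    # Remap each normal component from [-1,1] to [0,1], quantize to 8 bits.
    r, g, b = (int((n * 0.5 + 0.5) * 255 + 0.5) for n in normal)
    # Store specular power as a fraction of an assumed maximum.
    a = int(spec_power / max_spec * 255 + 0.5)
    return (r, g, b, a)

def unpack_gbuffer(pixel, max_spec=128.0):
    # Inverse of pack_gbuffer; precision is limited to 8 bits per channel.
    r, g, b, a = pixel
    normal = tuple(c / 255 * 2 - 1 for c in (r, g, b))
    return normal, a / 255 * max_spec

print(pack_gbuffer((0.0, 0.0, 1.0), 64.0))  # -> (128, 128, 255, 128)
```

The 8-bit quantization is why deferred normals in this format are lower precision than the interpolated normals a forward shader sees.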
I don’t think DirectX offers access to the Z buffer, so the depth has to be rendered via shader replacement. Maybe I’m just not getting something.