I have a system in my game where I take a depth image from the top at startup, to be used as “baked” shadows. I just put an orthographic camera at the top, hit render, and turn that render texture into a plain Texture2D.
I’ve achieved decent results with this method. However, there’s this annoying thing where the shadow tears when the view angle is steeper; see the next picture.
You’re experiencing the basic problems all shadow maps have. In this case it’s an edge case of both shadow acne and peter-panning, caused by the shadow map texture’s “grid” aliasing against the actual surface positions and exposing “holes” where the texture’s limited resolution means it’s missing some details.
Here’s a visual representation of the problem: an extreme “depth map” and a line that represents a vertical wall down the center. You can see how some parts of the shadow map cover the wall and some don’t. This would lead to vertical stripes appearing on the wall.
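To make that concrete, here’s a tiny standalone Python sketch (all names and numbers are made up for illustration, not your actual setup): a point-sampled 1D “depth map” with a tall block on the right, and fragments on a wall sitting exactly on a texel boundary. A tiny per-fragment wobble flips the lookup between the high and low texel, so the fragments alternate between lit and shadowed:

```python
# Made-up minimal setup: a point-sampled 1D "depth map" storing one
# height per texel. Ground at height 0 on the left, a 5-unit-tall
# block on the right. Light comes straight down.
depth_map = [0.0, 0.0, 5.0, 5.0]
TEXEL_SIZE = 1.0

def sample_height(x):
    # Nearest-texel lookup, like point-filtered shadow map sampling.
    texel = int(x / TEXEL_SIZE)
    return depth_map[max(0, min(texel, len(depth_map) - 1))]

# Fragments halfway up a wall that sits at x = 2.0, exactly on the
# texel edge. A tiny per-fragment wobble (interpolation/precision)
# flips the lookup between the high and low texel.
for i in range(8):
    x = 2.0 + ((-1) ** i) * 1e-4
    shadowed = 2.5 < sample_height(x)  # fragment height vs. occluder height
    print(f"fragment {i}: x = {x:.4f} -> {'shadowed' if shadowed else 'lit'}")
```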
Normal biasing exists to try to avoid this problem by pushing the walls in or out by some arbitrary amount so they fall inside or outside of the “jagged edge” of the texture. Unfortunately the “perfect” bias amount isn’t as obvious to calculate as it might seem in 3D space, especially for walls that aren’t perfectly vertical.
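As a rough sketch of the idea (same made-up setup as above): offsetting the lookup along the surface normal puts every fragment on the same side of the texel boundary, at the cost of an arbitrary constant that can swallow details smaller than the bias:

```python
# Same made-up setup, now with a normal bias: the lookup point is
# pushed along the wall's normal before sampling, so every fragment
# lands on the same side of the texel boundary.
depth_map = [0.0, 0.0, 5.0, 5.0]

def sample_height(x):
    return depth_map[max(0, min(int(x), len(depth_map) - 1))]

NORMAL_BIAS = 0.01     # arbitrary; too large and it swallows thin details
wall_normal_x = -1.0   # the wall faces the low ground on the left

for i in range(8):
    x = 2.0 + ((-1) ** i) * 1e-4              # same jittered fragments
    x_biased = x + wall_normal_x * NORMAL_BIAS
    shadowed = 2.5 < sample_height(x_biased)
    print(f"fragment {i}: {'shadowed' if shadowed else 'lit'}")  # all lit now
```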
Thanks, I was hoping you’d come around. So basically I just have to amp up the bias and live with it.
I’m now working on a blob shadow system that similarly projects a circle mesh to hug a “surface” depth texture, and I was wondering: is there any theoretical limit to how many of these depth layers (or ground patches at different levels) a 32-bit float texture can represent, and what is the maximum distance while retaining accuracy? My world is now at 8 × 32 max Y bounds, and it seems to work well on all platforms, but what is the maximum, and how would you go about calculating it?
I like the solution so far as it doesn’t need CPU raycasting and works well on slopes or slope seams.
edit: I read somewhere that a float can represent values accurately up to about 7 significant digits. Does this mean the height/depth range stays accurate until we reach that point?
Depth textures are stored as a value between 0.0 and 1.0, so the limit depends on which texture format you’re using to store the depth image and the range you’re trying to cover. If you’re using an RFloat, for example, that’s a 32-bit float with 23 mantissa bits, which means your worst-case precision is roughly (depth range / 2^23). So if the range between the highest and lowest point is 256 units, you have a minimum precision of 0.000030517578125 units. And because of the nature of floating point values, it’s only that “bad” for depth values between 0.5 and 1.0; the lower half of the range is progressively more precise. So you can get quite a lot of layers and still be fine.

The bigger problem is the artifacts from aliasing, which are harder to avoid, and arbitrary biasing of vertices along their normals can cause the shadow surfaces to intersect in weird ways if you go too extreme and have detail that’s smaller than the bias.
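If you want to sanity-check those numbers yourself, here’s a small standalone Python sketch (NumPy’s `np.spacing` gives the gap between a float and the next representable one; `world_range` is just the 256-unit example from above):

```python
import numpy as np

# Worst-case float32 spacing inside [0, 1] is just below 1.0.
# For a normalized depth d covering `world_range` world units,
# one ULP of error corresponds to spacing(d) * world_range units.

world_range = 256.0  # highest minus lowest point, in world units

for d in (0.1, 0.25, 0.5, 0.75, 0.9999999):
    ulp = np.spacing(np.float32(d))  # gap to the next representable float32
    print(f"depth {d:>9}: ulp = {ulp:.3e}, world error = {ulp * world_range:.3e} units")

# The conservative rule of thumb from above:
print("range / 2^23 =", world_range / 2**23)
```

Plugging in a 32-unit range (if I’m reading your Y bounds right) gives roughly 32 / 2^23 ≈ 0.0000038 units of worst-case error, so float precision is nowhere near your limiting factor.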