Using Directional Lights at small scales

I am working with very small meshes for Physics reasons, and am having some trouble with lighting.

In the first example, the sphere mesh is at x1 (default scale) and there is a directional light from above. The shadow looks correct.

In the second example, the sphere is x0.01 scale, and if the sphere gets too close to another mesh, the shadow becomes very small and then disappears (as shown).

I’ve played around with the quality settings and the light settings (bias etc.), but nothing seems to affect it.

What’s best practice when using lights at these sorts of scales?



EasygoingMagnificentBarb

Real-time shadows work by taking a picture of the world from the light’s perspective, so the size and resolution of that picture define the “coverage” of the shadow (i.e. how much world space a shadow-map pixel covers at a given distance). That’s also why shadows have a range: there is a limit beyond which they are no longer rendered.

Since this image is a depth image, we compute the fragment’s distance from the light, sample the texture, and check whether the fragment is behind the stored value. That means the precision of the depth matters, i.e. how reliably a fragment is classified as in front of or behind the stored depth. Shadow acne is an artifact of this precision problem.
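The depth compare can be sketched in a few lines. This is a toy 1D model, not Unity’s implementation: a slanted surface seen from the light, and a shadow map that only stores depth at discrete texel centers.

```python
# Toy 1D shadow-map test (illustrative only, not Unity's implementation).

TEXEL_SIZE = 0.1  # world-space width one shadow-map texel covers

def surface_depth(x):
    """Depth of a slanted surface as seen from the light."""
    return 1.0 + 0.5 * x

def stored_depth(x):
    """The shadow map only stores depth at discrete texel centers."""
    snapped = round(x / TEXEL_SIZE) * TEXEL_SIZE
    return surface_depth(snapped)

def lit(x, bias=0.0):
    """A fragment is lit if it isn't behind the stored depth (within bias)."""
    return surface_depth(x) - bias <= stored_depth(x)

# Without bias, fragments on the far side of each texel fail the test
# and shade themselves, even though they all lie exactly on the surface:
acne = [x for x in (i * 0.01 for i in range(100)) if not lit(x)]
```

Every fragment here sits exactly on the surface, yet many fail the test purely because the stored depth was sampled on a coarser grid than the fragments; that mismatch is the acne, and a small bias makes the comparison tolerant of it.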

What does that mean for multiscale shading? Well, you use different shadowing techniques to handle the various scales.

  1. It’s possible to have a separate shadow texture for each scale; that’s the idea behind cascaded shadow maps, but this is generally applied to large scenes.
  2. You can have a per-object shadow texture, generally used for cinematic shots of characters, but that’s probably too expensive.
  3. Use a contact-shadow style post process, which works against the depth buffer and handles small scales well, but not meso or large scales. It’s generally a good complement to Unity’s shadow technique.

Hi NeoShaman, thanks very much for your informative reply. It looks like I have much more to learn on shadows…

I wish some parts weren’t dropped by the browser when hitting post lol

Strictly speaking, shadow acne is a product of aliasing from sampling two grids that don’t match in orientation (the two grids being the screen fragments’ world positions and the shadow map’s texels).
https://computergraphics.stackexchange.com/questions/2192/cause-of-shadow-acne

The solution for shadow acne is biasing, which is implemented as some form of offset. This is usually done on the shadow geometry itself during the shadow map rendering, pushing vertices inward along their vertex normal (aka “normal bias”) or away from the light (aka “bias”). Or it is done as an offset in the shadow map sampling position, sampling the shadow map from a position pushed slightly away from the screen fragment’s world position along its surface normal, which is called receiver plane biasing. The latter option isn’t used by Unity, so all you should really have to care about is the bias and normal bias settings on the light.
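The two caster-side offsets can be sketched like this. The helper names are made up for illustration; this isn’t Unity’s actual shader code, just the geometric idea of each setting.

```python
# Toy vertex offsets applied while rendering the shadow map.
# Function names and signatures are illustrative, not Unity's shaders.

def apply_depth_bias(vertex, light_dir, bias):
    """'Bias': push the shadow-caster vertex along the light direction,
    i.e. away from the light, before its depth is written."""
    return tuple(v + bias * d for v, d in zip(vertex, light_dir))

def apply_normal_bias(vertex, normal, normal_bias):
    """'Normal bias': shrink the caster inward along its surface normal."""
    return tuple(v - normal_bias * n for v, n in zip(vertex, normal))
```

With a light shining straight down (direction (0, -1, 0)), a vertex on top of the sphere gets pushed downward by the bias amount and pulled toward the sphere’s centre by the normal bias amount, which is exactly why an offset larger than the object erases it from the shadow map.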

Here’s a 0.01 scaled sphere with a directional light using a bias and normal bias of 0.0.
[attachment: upload_2019-4-12_14-5-34.png]
Notice the small marks on the sphere itself; that’s shadow acne. A very small amount of bias (0.001 in this case) can fix that.
[attachment: upload_2019-4-12_14-6-57.png]

But the default bias of 0.05 is 5 times larger than the sphere, which is why the shadow disappears.
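The arithmetic is worth spelling out. The 0.01 figure assumes Unity’s default sphere, which is roughly 1 unit across, at the x0.01 scale from the original question:

```python
# Illustrative numbers: why a fixed bias swallows a tiny caster's shadow.
sphere_size = 1.0 * 0.01   # default ~1-unit sphere at x0.01 scale
default_bias = 0.05        # default directional-light bias

# The bias effectively pushes the caster this far away from the light
# before the depth compare, so a caster thinner than the bias stops
# occluding anything and its shadow shrinks away.
ratio = default_bias / sphere_size
```

Hence a bias on the order of the object’s size or smaller (the 0.001 used above) clears the acne while still leaving the sphere thick enough, from the light’s point of view, to cast a shadow.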


I was simplifying; I know the artifact is caused by the “staircase” poking through the surface. I didn’t think it was necessary to expand on that to explain the various solutions and trade-offs.

Thanks for the super helpful example @bgolus !