Hi, I am struggling to balance edge detection using both normal differences and depth differences.
The problem is that I'm rendering flat orthographic views. If I use sensitive depth-difference detection (to catch thin objects on a wall, like a picture), rounded objects pick up thick dark areas near their silhouettes, because depth changes rapidly across their curved surfaces.
So: how can you detect the edges of thin boxes sitting on surfaces without over-detecting edges on rounded objects like balls?
The solution is to not use depth or normals. Not by themselves at least.
You need more information. Either assign a random per-object ID and render it out for every object, or render a per-pixel "roundness" value that modifies the depth offset required to trigger an edge (scaled by distance and the pixel's normal). See how Obra Dinn or Mars First Logistics do their outlines for inspiration on what it takes to get good results.
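To illustrate the first option, here is a minimal sketch of the ID-buffer idea using NumPy as a stand-in for a shader pass: a pixel is an outline pixel whenever its object ID differs from a neighbor's, which fires on the thin picture's silhouette but never inside a smoothly curved surface (since every pixel of the ball shares one ID). The function name and toy buffer are hypothetical, not from any engine.

```python
import numpy as np

def id_buffer_edges(ids: np.ndarray) -> np.ndarray:
    """Mark a pixel as an edge when its object ID differs from the
    pixel to its right or below. (Hypothetical sketch; a real shader
    would sample the ID texture at neighboring texels instead.)"""
    edges = np.zeros(ids.shape, dtype=bool)
    edges[:, :-1] |= ids[:, :-1] != ids[:, 1:]  # compare with right neighbor
    edges[:-1, :] |= ids[:-1, :] != ids[1:, :]  # compare with lower neighbor
    return edges

# Toy ID buffer: a thin "picture" (ID 2) hanging on a wall (ID 1).
ids = np.ones((4, 6), dtype=np.int32)
ids[1, 2:5] = 2
print(id_buffer_edges(ids).astype(int))
```

Because the test compares IDs rather than depths, the detection threshold question disappears entirely for object-versus-object boundaries; the depth/roundness trick is only needed for creases within a single object.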