Been working on an intersection glow shader in Amplify Shader Editor (they said they now do support on the Unity forums, so that's why we're here; if there's a more appropriate spot, kindly share a link!), mostly by replicating and dumbing down water shader tutorials.
It's basically there and working, but it's not showing the glow at certain angles. That makes sense, because we're creating the glow effect from the camera's view, using screen depth from the objects behind it. But it means that if there's no object behind the glow shader, there's no glow, which looks janky at certain angles. The question is: is there some way to draw the glow onto the shader's mesh where it's intersecting another mesh, instead of only where there's an object behind it? Example images below.
What you’re seeing is as good as it will ever get.
The “intersection” glow is really only checking the camera depth texture at the current pixel and comparing it with the depth of the current surface. If the two depths are close, it starts to glow; if not, it doesn't. It's not checking any other pixels nearby, and it knows nothing about the objects it's “intersecting” with, so if the object isn't visible behind it, it won't glow. If you're viewing an object at an angle, the glow fades out faster because the depth behind each pixel increases faster.
That’s it.
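To make that concrete, here's a CPU-side Python sketch of the per-pixel math described above (illustrative only, not actual shader code; the function and parameter names are made up):

```python
# Sketch of a depth-fade intersection glow: compare this surface's depth
# against the scene depth buffer at the same pixel.

def intersection_glow(surface_depth, scene_depth, glow_distance):
    """Glow factor in [0, 1]: 1 right at the intersection, fading to 0
    as the opaque scene behind this pixel gets farther away."""
    diff = scene_depth - surface_depth  # how far behind us the scene is
    if diff < 0:
        return 0.0  # scene is in front; this pixel is occluded anyway
    # Linear falloff over glow_distance, clamped (like a saturate()).
    return max(0.0, 1.0 - diff / glow_distance)

# Wall just behind this pixel -> strong glow.
print(intersection_glow(10.0, 10.25, 0.5))  # 0.5
# Nothing near behind this pixel -> no glow, even if a wall
# intersects the mesh just off to the side.
print(intersection_glow(10.0, 50.0, 0.5))  # 0.0
```

The key limitation is visible in the second call: the decision uses only the one depth sample directly behind the pixel, so intersections that aren't visible behind the surface contribute nothing.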
It's a cheap approximation that most people don't notice isn't remotely accurate. Look at these kinds of glow or water shaders in 99% of games out there and you'll see the same issue.
The problem is that the “correct” way to do it is not cheap. You could try sampling all pixels within some world-scaled radius around that pixel, but that won't help if the surface you want isn't visible. And if you get too close, suddenly you're sampling the entire screen and your performance goes in the toilet. This is the same problem SSAO has, and it's why SSAO is often so noisy or so blurry, disappears when something passes in front of something else, and often shrinks when you get too close.
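A quick Python sketch of why the radius-sampling idea blows up (made-up helper, simple pinhole projection): a fixed world-space radius covers more and more pixels as the camera approaches.

```python
import math

def radius_in_pixels(world_radius, pixel_depth, screen_height, fov_y_deg):
    """Approximate on-screen pixel radius of a world-space radius
    at a given view depth, for a pinhole camera."""
    half_fov = math.radians(fov_y_deg) / 2.0
    # World-space height of the view frustum at this depth:
    frustum_height = 2.0 * pixel_depth * math.tan(half_fov)
    return world_radius / frustum_height * screen_height

# A 0.5m glow radius seen from 10m away: a small, cheap neighborhood.
print(round(radius_in_pixels(0.5, 10.0, 1080, 60.0)))   # 47
# The same radius seen from 0.2m away: thousands of pixels per loop.
print(round(radius_in_pixels(0.5, 0.2, 1080, 60.0)))    # 2338
```

That quadratic-in-radius sample count per pixel is exactly the cost cliff SSAO implementations fight with blur and noise.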
The “real” solution is to have a 3D SDF volume of your scene and have your intersection shader sample that to see if it's close to anything. Unity doesn't have any built-in tools for that, and the third-party assets that do it are crazy expensive performance-wise (several are free on GitHub, money-wise) and not designed for this specific use case.
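Here's a minimal sketch of the SDF-volume idea in Python (everything here is made up for illustration): bake signed distances to the scene into a 3D grid, then each glow pixel looks up the distance at its world position instead of peeking at the depth buffer.

```python
# Bake a scene SDF into a grid, then glow wherever the stored distance is small.

def bake_sdf_grid(n, scene_sdf, bounds=1.0):
    """Sample an analytic scene SDF into an n*n*n grid over [-bounds, bounds]^3."""
    step = 2.0 * bounds / (n - 1)
    return [[[scene_sdf(-bounds + x * step, -bounds + y * step, -bounds + z * step)
              for z in range(n)] for y in range(n)] for x in range(n)]

def glow_from_grid(grid, ix, iy, iz, glow_distance):
    """Glow factor from the nearest-cell distance (a real shader would
    trilinearly filter a 3D texture instead of indexing a cell)."""
    d = abs(grid[ix][iy][iz])
    return max(0.0, 1.0 - d / glow_distance)

# Scene: a sphere of radius 0.5 at the origin.
sphere = lambda x, y, z: (x * x + y * y + z * z) ** 0.5 - 0.5
grid = bake_sdf_grid(9, sphere)
# Grid center (0,0,0) is 0.5 deep inside the sphere -> no glow.
print(glow_from_grid(grid, 4, 4, 4, 0.25))  # 0.0
# A cell right on the sphere's surface -> full glow,
# regardless of what the camera can see.
print(glow_from_grid(grid, 6, 4, 4, 0.25))  # 1.0
```

Because the lookup is by world position, the glow no longer depends on the intersecting object being visible behind the surface, which is exactly what the depth-buffer approach can't do.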
Alternatively, if you're only ever going to be intersecting with cubes and spheres, you could pass information about the shapes you care about to your intersection material and do it all analytically. I suspect your example above is just a test case and your real use case isn't that simple, though. That said, this is the approach a lot of AAA studios use, sometimes just to augment the approximated depth-texture-based approach.
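For the analytic version, the idea is to feed the sphere/cube parameters to the material (as shader uniforms) and compute the distance directly. A hedged Python stand-in, with all names invented for illustration:

```python
# Analytic intersection glow: exact distance to known primitives,
# no depth buffer involved at all.

def sphere_distance(px, py, pz, cx, cy, cz, radius):
    """Signed distance from point p to a sphere's surface."""
    dx, dy, dz = px - cx, py - cy, pz - cz
    return (dx * dx + dy * dy + dz * dz) ** 0.5 - radius

def analytic_glow(point, spheres, glow_distance):
    """Glow from the closest of several analytic spheres, independent of
    what the camera can see."""
    d = min(abs(sphere_distance(*point, *center, radius))
            for (center, radius) in spheres)
    return max(0.0, 1.0 - d / glow_distance)

spheres = [((0.0, 0.0, 0.0), 1.0)]  # one unit sphere at the origin
# A point 0.25 units outside the sphere glows even when the sphere
# is completely hidden behind the glow mesh from the camera's view.
print(analytic_glow((1.25, 0.0, 0.0), spheres, 0.5))  # 0.5
```

The per-pixel cost is a handful of distance formulas per tracked shape, which is why it scales only to scenes where the intersecting objects are a few known primitives.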
Ok thanks, I figured it would be something like this. Really appreciate the extra pointers and details; they've been keeping me busy with research. The glow/intersect shader material will currently only ever be on a cube/sphere, but the intersection will be with complex/arbitrary geometry. I'll probably keep it as is for now, and once I can grok something like your last suggestion better, I'll circle back to see whether it fits the use case. I don't understand all of the elements at play there beyond the high-level concept.