This is an odd question, but I’m trying to change the color of an object based on its distance from another object (and ultimately many other objects).
I have been able to update a texture based on the depth buffer, but only with objects BEHIND the target.
I’m trying to change the color within a certain distance, like this but inverted. So if the cube is close to the plane, the pixels within range will start to change colors…
Essentially I’d like a group of pixels to start glowing the closer an object gets to the plane.
I’m also trying to do it using the _CameraDepthTexture. Is there a way to calculate distances between the plane (where the shader is attached) and the objects using that depth texture?
The camera depth texture is useful for calculating the depth between a surface and the closest opaque object behind it … at that single pixel on screen. In a lot of cases that’s a good enough approximation of how close an object is to the surface, but it’s not actually the same thing. However, the vast majority of games you’ve seen with any kind of force field glow or water foam effect, where that force field or water surface is touching another object, are doing exactly this and nothing else, because it’s fairly cheap.
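If it helps, here’s a minimal sketch of that depth-intersection approach for the built-in render pipeline. The property names (_GlowColor, _GlowDistance) are made up for this example, and it assumes the camera is actually generating the depth texture (e.g. camera.depthTextureMode |= DepthTextureMode.Depth).

```
Shader "Custom/DepthIntersectionGlow"
{
    Properties
    {
        _GlowColor ("Glow Color", Color) = (0, 1, 1, 1)
        _GlowDistance ("Glow Distance", Float) = 0.5
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _CameraDepthTexture;
            float4 _GlowColor;
            float _GlowDistance;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 screenPos : TEXCOORD0;
                float eyeDepth : TEXCOORD1;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.screenPos = ComputeScreenPos(o.pos);
                COMPUTE_EYEDEPTH(o.eyeDepth); // linear eye depth of this surface
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // depth of the closest opaque object behind this pixel
                float sceneDepth = LinearEyeDepth(
                    SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
                // how far behind the surface that object is, along the view direction
                float diff = sceneDepth - i.eyeDepth;
                // glow ramps up as the object approaches the surface
                float glow = 1.0 - saturate(diff / _GlowDistance);
                return fixed4(_GlowColor.rgb, _GlowColor.a * glow);
            }
            ENDCG
        }
    }
}
```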
To get the actual distance to the closest object you’d have to sample every texel in the depth texture in a radius around the current pixel, reconstruct the world position from the depth for each sample, and take the distance. That is very expensive, which is why almost no one does it.
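For reference, a rough, unoptimized sketch of what that brute-force search might look like, assuming a hypothetical _InvViewProj matrix set from a script (the inverse of the camera’s GPU projection times view matrix) and glossing over platform differences in depth and UV conventions:

```
// Needs: #include "UnityCG.cginc"
sampler2D _CameraDepthTexture;
float4 _CameraDepthTexture_TexelSize; // Unity fills this in automatically
float4x4 _InvViewProj; // hypothetical, set from a C# script each frame

// Reconstruct a world position from the depth texture at a given screen UV.
// Assumes a D3D-style [0,1] depth range; OpenGL would need rawDepth * 2 - 1,
// and some platforms need the UV y flipped (see UNITY_UV_STARTS_AT_TOP).
float3 WorldPosFromDepth(float2 uv)
{
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
    float4 clipPos = float4(uv * 2.0 - 1.0, rawDepth, 1.0);
    float4 worldPos = mul(_InvViewProj, clipPos);
    return worldPos.xyz / worldPos.w;
}

// Brute-force search: cost grows with the square of the radius,
// which is why almost no one does this.
float ClosestDepthDistance(float2 screenUV, float3 surfaceWorldPos)
{
    float minDist = 1e6;
    const int radius = 8; // in texels
    for (int y = -radius; y <= radius; y++)
    {
        for (int x = -radius; x <= radius; x++)
        {
            float2 uv = screenUV + float2(x, y) * _CameraDepthTexture_TexelSize.xy;
            minDist = min(minDist, distance(WorldPosFromDepth(uv), surfaceWorldPos));
        }
    }
    return minDist;
}
```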
And even then it’d only really work on objects behind the surface. For objects that are in front of the surface, the object is occluding that pixel, so you can’t see it. But also you can only see the opaque faces closest to the camera; you have no idea what’s on the back of that opaque object, because the camera can’t see it, and thus neither can the camera depth texture. So you’d be getting how far the object’s camera-facing faces are from the surface, not how close the object itself is to the surface.
The real solution is signed distance fields, or SDFs. You’d need to calculate an SDF for the entire scene, or the specific objects you care about, or at least pass in the position and some approximation of those objects’ shape(s) to the shader. A common method is to use spheres: pass in a list of world space sphere positions and radii that represent the objects you care about and use those to get the “distance” to those objects. This is relatively inexpensive, and is similar to what the Portal games did for their force fields. Some people today use pre-computed SDFs in low resolution 3D textures for more complex shapes, or even generate the SDF 3D texture at runtime for animated objects.
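As a rough sketch of the sphere version, assuming you pass the spheres in from a C# script (the names _ObjectSpheres, _ObjectCount, and _GlowDistance are made up for this example):

```
#define MAX_SPHERES 8
float4 _ObjectSpheres[MAX_SPHERES]; // xyz = world space position, w = radius
int _ObjectCount;
float _GlowDistance;

// surfaceWorldPos is the plane's world position for this fragment,
// e.g. mul(unity_ObjectToWorld, v.vertex).xyz passed down from the vertex shader.
float GlowFromSpheres(float3 surfaceWorldPos)
{
    float minDist = 1e6;
    for (int i = 0; i < _ObjectCount; i++)
    {
        // signed distance from this point to the sphere's surface
        float d = distance(surfaceWorldPos, _ObjectSpheres[i].xyz) - _ObjectSpheres[i].w;
        minDist = min(minDist, d);
    }
    // 1 at contact, fading to 0 at _GlowDistance, and it works on both sides of the plane
    return 1.0 - saturate(minDist / _GlowDistance);
}
```

On the C# side you’d update those every frame with something like material.SetVectorArray("_ObjectSpheres", spheres) and material.SetInt("_ObjectCount", count).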
But these are the kinds of things you’d have to do to get something more accurate.
@bgolus Thank you so much for the response, I appreciate the detailed explanation. I hadn’t considered (though I should have) occlusion being one of my big problems.
Thank you for the recommendation of SDFs; I’ll be trying that for sure.