I’m currently working on a VR project. I want to colour the areas of the hand that are touching an object in a different colour, so the user knows how much of, and which parts of, their hand are colliding with other objects.
From what I have read, the best way to do this would be using a shader, but I have no clue how or where to start looking.
Any help would be greatly appreciated.
This is perhaps a larger question than you realize. Going by the question, my guess is that you have no experience writing shader code, in which case this is going to be a fairly significant and deep dive before you get anywhere useful. Not trying to dissuade you, just to warn you.
So, at a high level, there’s a handful of ways this kind of problem could be approached.
Camera Depth Texture
The most common method would be using the camera depth texture. This is a screen space texture that Unity can generate, storing the closest depth of opaque objects at each visible pixel. If you search online, you can find examples of shorelines, force fields, and all sorts of similar “is something close to something else” effects, so it might seem like this would be a quick and easy way to handle the problem. But there are a few problems.
The depth texture contains all opaque objects in the scene, which likely includes your hands. Your hands must be transparent, or at least rendered as part of the transparent material queue, to keep them out of the camera depth texture. If your hands are opaque, it’s not hard to change the material queue to the transparent queue, but it also means they can’t receive shadows.
If your hands are already transparent, then this might be okay. But the basic examples out there work by comparing the depth of the mesh being rendered against the depth in the camera depth texture at the same pixel position. That isn’t actually how “close” it is to something; it only tells you the distance along the view ray to the nearest opaque surface behind that pixel. And since only the closest depth is stored, there’s no volume. So if you can’t see what you’re holding (like the side or back of an object), there’s no way for this kind of effect to know that, because the camera depth texture doesn’t know anything about those surfaces.
There’s also the problem that, unless you’re only making the areas actually close to a surface glow, there’s no way to color something like the back or sides of the fingers / hand without a large range check. That might mean parts of your hand are colored even when they aren’t actually touching. You could take multiple samples in an area around each pixel, but that gets costly quickly, and it’d be noisy; neither of which is particularly good for VR.
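To make the comparison concrete, here is a minimal Python sketch of the per-pixel math these depth texture effects do. In a real project this would be a few lines of HLSL in the fragment shader; the function name and fade range here are made up for illustration:

```python
def intersection_glow(scene_depth, frag_depth, fade_range=0.05):
    """Glow factor for one pixel: 1.0 where the hand fragment sits on
    the opaque surface from the depth texture, fading to 0.0 as the
    fragment moves fade_range units in front of it.

    scene_depth: linear eye depth sampled from the camera depth texture
    frag_depth:  linear eye depth of the hand fragment being shaded
    """
    diff = scene_depth - frag_depth  # how far the hand is in front of the surface
    t = 1.0 - diff / fade_range      # 1.0 at contact, 0.0 at fade_range away
    return max(0.0, min(1.0, t))     # saturate to 0..1, as a shader would
```

Note that `diff` is measured along the view ray, which is exactly the limitation described above: it only says how close the hand pixel is to whatever opaque surface happens to be directly behind it.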
SDFs
The next step, and this is a big step, is to use SDFs, or signed distance fields. This isn’t something Unity has built-in support for, but there are several projects out there that show how to generate SDFs for arbitrary geometry, and there are analytical SDFs for basic shapes like spheres, capsules, and boxes. This is also potentially quite expensive. And while it will give you the distance to the closest surface much more accurately, it doesn’t actually solve the problem of changing the color only when touching something. Like the camera depth texture approach, you can make the hand glow when it’s close, but you can’t guarantee it’s touching. So this probably isn’t the answer either.
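For reference, the analytical SDFs for a sphere and a capsule are only a few lines each. Here is the math sketched in Python (in a shader this would be HLSL, with `p` being the fragment’s world position):

```python
import math

def sd_sphere(p, center, radius):
    """Signed distance from point p to a sphere: negative inside,
    zero on the surface, positive outside."""
    d = (p[0] - center[0], p[1] - center[1], p[2] - center[2])
    return math.sqrt(d[0]**2 + d[1]**2 + d[2]**2) - radius

def sd_capsule(p, a, b, radius):
    """Signed distance to a capsule with endpoints a and b -- a decent
    stand-in for a finger segment."""
    pa = (p[0] - a[0], p[1] - a[1], p[2] - a[2])
    ba = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    # Project p onto the segment a-b, clamping to the endpoints
    h = sum(x * y for x, y in zip(pa, ba)) / sum(x * x for x in ba)
    h = max(0.0, min(1.0, h))
    d = (pa[0] - ba[0] * h, pa[1] - ba[1] * h, pa[2] - ba[2] * h)
    return math.sqrt(d[0]**2 + d[1]**2 + d[2]**2) - radius
```

A distance of zero or less means the point is on or inside the shape, so in principle `sd_sphere(p, c, r) <= 0` is a genuine touch test; the catch is that evaluating SDFs of arbitrary scene geometry per pixel is what makes this approach expensive.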
So those are the two major ways to approach the problem with mostly shader based solutions, and honestly neither is really useful here. So are shaders not the solution?
They still are, but not for detecting when you’re touching something. For that you’ll likely need to handle the calculations on the CPU: use physics objects and calculate the collisions / intersections with the hand (probably approximated with spheres or capsules), then have a shader that takes a bunch of “is x spot touching” properties as inputs and colors the hand appropriately. That means the shader doesn’t really do any of the hard work; it’s just a lot of manually set up masks done with textures or vertex colors in the content, and a shader that makes the tip of the finger change color because _Finger1TipTouching == 1.0.
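A minimal Python sketch of that CPU side (the region and property names are made up; in Unity you’d push these values to the hand material with `Material.SetFloat` each frame):

```python
# Hypothetical per-region shader properties, one per hand area that
# physics tracks (e.g. each approximated by a sphere or capsule collider).
HAND_REGIONS = ["_Finger1TipTouching", "_Finger2TipTouching", "_PalmTouching"]

def touch_properties(touching):
    """Turn the set of region names the physics system reported as
    colliding this frame into the 0/1 float properties the shader reads
    to blend in the contact color."""
    return {name: (1.0 if name in touching else 0.0) for name in HAND_REGIONS}
```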
Could you come up with a solution?
I thought this might be easy, by assigning shaders to two objects (the first object is the hand, the second is the object your hand can collide with).
What objects do you interact with? If it’s simple geometry (spheres, boxes) you can do it by passing the implicit geometry as uniforms to your shader and doing a per-pixel proximity test against the geometric primitive.
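For a single sphere that per-pixel test is cheap. Sketching the idea in Python (in the actual shader, `center` and `radius` would be the uniforms and `world_pos` the interpolated fragment position; the tolerance value is an arbitrary example):

```python
import math

def touches_sphere(world_pos, center, radius, tolerance=0.005):
    """True if this hand pixel is within `tolerance` of the sphere's
    surface (or inside it), i.e. it should be tinted the contact color."""
    d = (world_pos[0] - center[0],
         world_pos[1] - center[1],
         world_pos[2] - center[2])
    dist = math.sqrt(d[0]**2 + d[1]**2 + d[2]**2) - radius
    return dist <= tolerance
```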
If you want to do it for arbitrary meshes, it might be easier to render in multiple passes and use the depth buffer of everything rendered before the hands to deduce an approximation of the proximity / intersection.