I wrote a simple ghost shader (based on the rim lighting sample), and I get good results, but not perfect. The glaringly obvious thing is that it’s all alpha blended, so you can see parts of the model through other parts (see the shoulder pads on the “ghost statue”):
Is there any way to enable z-buffering and still get the correct effect? Maybe render it to a texture first, and then use that texture as the base for the ghost shader? Any comments and/or suggestions are welcome.
Hmmm… could be. But the thought of extra passes sounds intriguing. I wonder if I can create two passes: write to the z-buffer only during the first pass, then set the second pass's depth test to less/equal…
edit: Yes, that was the trick. Thanks for nudging me in the right direction:
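In case anyone else wants the same trick, here is a minimal sketch of the two-pass setup in D3D9-style HLSL effect syntax. The technique, pass, and shader names (`Ghost`, `GhostVS`, `GhostPS`) are placeholders, not the original poster's code:

```hlsl
// Sketch only — pass/state/shader names are illustrative assumptions.
technique Ghost
{
    // Pass 1: lay down depth only, so interior surfaces get occluded later.
    pass DepthOnly
    {
        ZEnable = true;
        ZWriteEnable = true;
        ZFunc = Less;
        ColorWriteEnable = 0;   // write no color yet, depth only
        VertexShader = compile vs_2_0 GhostVS();
        PixelShader  = compile ps_2_0 GhostPS();
    }

    // Pass 2: alpha-blend the ghost/rim shading, but only where the
    // depth matches the front-most surface written in pass 1.
    pass Shaded
    {
        ZEnable = true;
        ZWriteEnable = false;   // depth is already correct
        ZFunc = LessEqual;      // re-draw exactly the nearest surfaces
        AlphaBlendEnable = true;
        SrcBlend  = SrcAlpha;
        DestBlend = InvSrcAlpha;
        VertexShader = compile vs_2_0 GhostVS();
        PixelShader  = compile ps_2_0 GhostPS();
    }
}
```

The key is the `LessEqual` test in the second pass: only fragments that exactly match the depth laid down in the first pass survive, so back-facing parts like the far side of the shoulder pads never blend through.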
This sounds neat. Could you sketch out how to do this? I am curious how to write a volumetric fog shader (I know the math, but not sure how to determine/store the entry/exit positions…)
So far I’ve only tried the simplest kind of volumetric fog. It uses a depth texture to find the distance, along the view (z) direction, from the surface currently being rendered to the nearest surface already in the depth texture (either the default depth buffer or a custom one). It then blends whatever color has already been rendered at that pixel with a fog color, based on that distance.
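Roughly, the pixel shader for the fog surface could look like the sketch below. All the uniform names (`SceneDepth`, `FogColor`, `FogDensity`, `ScreenSize`) and the interpolated `depthIn` are my assumptions; the point is just the depth difference driving the blend:

```hlsl
// Sketch only — names and the exponential falloff are assumptions.
texture SceneDepth;         // depth of the scene rendered before the fog volume
sampler DepthSampler = sampler_state { Texture = <SceneDepth>; };

float4 FogColor;            // rgb = fog color
float  FogDensity;          // how quickly fog saturates with thickness
float2 ScreenSize;          // render-target size, for screen-space UVs

float4 FogPS(float2 screenPos : VPOS,
             float  depthIn   : TEXCOORD0) : COLOR  // linear depth of fog surface
{
    // Depth of whatever was already rendered behind this pixel.
    float2 uv = screenPos / ScreenSize;
    float sceneDepth = tex2D(DepthSampler, uv).r;

    // Fog thickness between the fog surface and the scene behind it.
    float thickness = max(sceneDepth - depthIn, 0);

    // Exponential falloff: opacity grows with thickness.
    float fogAmount = 1 - exp(-FogDensity * thickness);

    // Standard alpha blending then mixes the existing color with FogColor.
    return float4(FogColor.rgb, fogAmount);
}
```

With `SrcAlpha`/`InvSrcAlpha` blending enabled, this mixes the already-rendered color toward `FogColor` by an amount that depends only on how much fog the view ray passes through.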
But there are still a few things you might want to handle, like what happens when the camera is inside the fog volume, or when you’re trying to render concave objects. Luckily, there are some articles that describe these things in more detail.
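For the camera-inside case, the usual fix (my reading of those articles, not something from this thread) is to render the volume's back faces and clamp the fog entry point to the near plane, since the front faces are behind the camera and never rasterized:

```hlsl
// Sketch only — variable names are illustrative.
// When the camera is inside the volume, only back faces are visible,
// so treat the near plane as the entry point instead of the front face.
float entryDepth = cameraInsideVolume ? NearPlaneDepth : frontFaceDepth;
float thickness  = max(backFaceDepth - entryDepth, 0);
```

Concave volumes need more than this (summing thickness over multiple front/back face pairs), which is where those articles come in.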