If I am rendering particles to a custom render texture, is it possible to have them occluded correctly by geometry in the scene?
I’ve been working on the assumption that a persistent copy of the z-buffer is kept somewhere, and that the depth textures referred to in the docs are 16- or 24-bit greyscale images generated from that z-buffer. Is that right?
No, not really. Of course, you could render the scene into your render texture with a shader that only writes to the depth buffer and doesn’t touch the color channels. The particles would then be occluded correctly, but that is somewhat expensive.
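Such a depth-only shader can be very small. A rough sketch (the shader name is arbitrary):

```
Shader "Hidden/DepthOnly" {
    SubShader {
        Tags { "Queue" = "Geometry" }
        Pass {
            ColorMask 0   // sketch: write nothing into the color channels
            ZWrite On     // but fill the depth buffer from the scene geometry
        }
    }
}
```

Rendering the scene with this (e.g. as a replacement shader) leaves the render texture's color untouched while still populating its depth buffer, so particles drawn afterwards get z-tested against it.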
Especially in OpenGL, there’s no way at all to share the main depth buffer with anything else (even with FBOs; that’s a huge hole in the spec, in my opinion).
The grayscale images you’re talking about are most likely the shadowmaps; they aren’t the main depth buffer (they might be a depth buffer, but rendered from the light’s point of view).
Thanks for answering so quickly Aras, appreciated.
After aggregating all of the particle information, I currently redraw it into the scene via a full-screen quad… I suppose I could label this quad as an alpha material, and instead of using its model-space coordinates directly as the projection coordinates (i.e. the quad is modelled to fit the view frustum), I could transform the quad to the correct depth in the scene and fit it to the camera frustum that way.
I would assume that by setting it as an alpha material it would then draw after the opaque pass, and that by actually computing a useful depth for the quad and turning on ZTest, the original particles would be occluded that way?
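Roughly what I’m picturing for the quad’s material, as a sketch (shader and property names are placeholders):

```
Shader "Custom/ParticleCompositeQuad" {
    Properties {
        _MainTex ("Particle Render Texture", 2D) = "black" {}
    }
    SubShader {
        Tags { "Queue" = "Transparent" }     // draw after the opaque pass
        Pass {
            ZTest LEqual                     // let scene depth occlude the quad
            ZWrite Off                       // don't dirty the depth buffer
            Blend SrcAlpha OneMinusSrcAlpha  // composite using the particle RT's alpha
            SetTexture [_MainTex] { combine texture }
        }
    }
}
```

The Transparent queue makes the quad render after the opaque geometry, ZTest LEqual lets the scene’s depth reject the covered pixels, and ZWrite Off keeps the quad itself from writing into the depth buffer.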
Edit: In Unity, I’m working under the assumption that the post-projection z coordinate describes a non-linear mapping from 0 (near clip) to 1 (far clip); is that correct?
Yes, a quad with particle “results” would almost work. Of course the quad is planar, so if your particles form a 3D cloud, then the occlusion will not be entirely correct.
This approach would be very similar to what Yoggy does in Avert Fate for the refractive explosions. He renders sort-of-normalmapped particles into a render texture, and then creates a quad in front of the explosion (not a fullscreen quad, just one that is enough to cover the particle system) that uses normal map to distort the view. Yoggy had a presentation and example project at Unite on this and other tricks, and these should be made public soon.
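Not Yoggy’s actual shader, but a rough sketch of that kind of distortion quad, assuming the particle render texture holds encoded normals and using a GrabPass to sample whatever has already been rendered behind the quad (all names are placeholders):

```
Shader "Custom/RefractionQuadSketch" {
    Properties {
        _BumpMap ("Particle normals RT", 2D) = "bump" {}
        _Strength ("Distortion strength", Range (0, 0.1)) = 0.02
    }
    SubShader {
        Tags { "Queue" = "Transparent" }
        GrabPass { }   // grabs the screen behind the quad into _GrabTexture
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _GrabTexture;
            sampler2D _BumpMap;
            float _Strength;

            struct v2f {
                float4 pos    : SV_POSITION;
                float4 grabUV : TEXCOORD0;
                float2 uv     : TEXCOORD1;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.grabUV = ComputeGrabScreenPos(o.pos);
                o.uv = v.texcoord;
                return o;
            }

            half4 frag (v2f i) : SV_Target {
                // Offset the screen-space lookup by the decoded normal
                half2 n = UnpackNormal(tex2D(_BumpMap, i.uv)).xy;
                i.grabUV.xy += n * _Strength * i.grabUV.w;
                return tex2Dproj(_GrabTexture, i.grabUV);
            }
            ENDCG
        }
    }
}
```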
The z coordinate right after the projection matrix is still a world-space z, I think (I can never remember those things). The non-linear z-buffer depth is what you get after doing the z/w division (-1…1 in OpenGL, 0…1 in Direct3D, and something else on the Wii… just for fun :)).
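A tiny test shader to visualize that value, as a sketch (it just outputs z/w as a greyscale color; the shader name is arbitrary):

```
Shader "Hidden/ShowDepth" {
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos     : SV_POSITION;
                float4 clipPos : TEXCOORD0;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);  // clip-space position
                o.clipPos = o.pos;                        // pass it along for the fragment stage
                return o;
            }

            half4 frag (v2f i) : SV_Target {
                // The non-linear z-buffer value is z/w after projection
                // (-1..1 in OpenGL, 0..1 in Direct3D).
                float d = i.clipPos.z / i.clipPos.w;
                return half4(d, d, d, 1);
            }
            ENDCG
        }
    }
}
```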