[SOLVED] "Fake Depth" - Rendering Backfaces as Mesh Top

I’ve been trying to figure this out for almost a month now and, by this point, I’m out of ideas.

I was hoping someone could point me in the right direction; if not give me the solution entirely.

What I have:

  • What’s happening in the images is that water-looking cubes are being clipped (based on a world-space y position)
  • then in a post-processing effect the backfaces are being replaced with a texture representing the “top” of the slice
  • the texture coordinates of this “top” texture are based on the pixel’s screen position
  • a custom lighting function is applied to this “top” texture where the normal is always facing up (0,1,0)
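The screen-position lookup in the third bullet is just the perspective divide remapped to [0,1]. Here is that math as a plain C sketch (the function name is mine, for illustration only; in a Unity shader you'd normally get the same thing from ComputeScreenPos followed by a divide by w, and on some platforms the y axis also needs flipping via _ProjectionParams.x):

```c
typedef struct { float x, y; } float2;
typedef struct { float x, y, z, w; } float4;

/* Clip-space position -> [0,1] screen UV: perspective divide, then
   remap from NDC [-1,1] to [0,1]. Plain C sketch of the shader math;
   platform y-flip handling is omitted. */
float2 clipToScreenUV(float4 clipPos)
{
    float2 uv;
    uv.x = clipPos.x / clipPos.w * 0.5f + 0.5f;
    uv.y = clipPos.y / clipPos.w * 0.5f + 0.5f;
    return uv;
}
```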

(ignore the transparency difference between the left and right; they’re being rendered in a special way to solve sorting issues. Also, the sphere is just there to help display the pixel depths)

Now that you understand what’s going on, my issue:
I also need these backfaces to overwrite the depth as if what you’re seeing is the top of a cube and not the back of a sliced cube.

I know how to write to the depth buffer by returning this struct from the ‘frag’ function:

struct output
{
    fixed4 color : SV_Target;
    float depth : SV_Depth;
};

But this is where my ideas run dry. I can’t figure out how to calculate the depth values that turn the backface into the “top”.

I’ve looked into ray-marching but I don’t understand it in the slightest so I’m not sure if that’s the way to go.

P.S.
Maybe I’m doing this in an overcomplicated way but this was the cleanest way I could get what I wanted. As a side note, the reason for doing it this way is because each ‘water cube’ needs to be able to have a unique clipped height and that “top” needs to always remain flat (facing up) no matter how the cube is rotated.

P.P.S.
Oh! I should also mention I have access to the position of each vertex of the cubes… in case you need that for a solution you’re going to suggest.

This is simultaneously much easier, and much harder, than it may seem. Having a back face, or really any face, render as if it’s an arbitrary flat plane isn’t in itself that hard: you need the camera-to-surface ray (the world position of the fragment minus the camera position) and a ray-plane intersection.

// rayDir needs to be normalized
// planeNormal should point away from the camera (dot(planeNormal, rayDir) > 0),
// which is the case when shading back faces; the max() below then only guards
// against a divide by zero when the ray grazes the plane.
float rayPlaneIntersection( float3 rayDir, float3 rayOrigin, float3 planeNormal, float3 planePos)
{
    float denom = dot(planeNormal, rayDir);
    denom = max(denom, 0.000001); // avoid divide by zero
    float3 diff = planePos - rayOrigin;
    return dot(diff, planeNormal) / denom;
}

You’ll need a ray origin (the world space camera position, _WorldSpaceCameraPos.xyz) and a ray direction (normalize(i.worldPos.xyz - _WorldSpaceCameraPos.xyz)). The ray-plane intersection function above gives you the distance from rayOrigin to the plane surface along rayDir, so to get the world position of the plane at a specific pixel it’s:

float3 worldPosOnPlane = rayOrigin + rayDir * rayPlaneIntersection(rayDir, rayOrigin, planeNormal, planePos);

To convert that into a depth value to use with SV_Depth, use the existing UnityWorldToClipPos() function to get the clip-space position for that world-space position, then divide z by w.

float4 planeClipSpace = UnityWorldToClipPos(worldPosOnPlane);
depth = planeClipSpace.z / planeClipSpace.w;
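If you want to see why z / w is the right value, here’s the divide sketched in plain C against a toy D3D-style [0,1] projection (my own matrix terms, not Unity’s; UnityWorldToClipPos handles the per-platform projection, including reversed Z where used, for you):

```c
/* View-space distance z (between near n and far f) -> [0,1] buffer depth,
   using a conventional D3D-style projection: clip.z = z*f/(f-n) - f*n/(f-n),
   clip.w = z. The return value is clip.z / clip.w, the same divide as in
   the shader above. Toy example; Unity's actual matrices vary per platform. */
float bufferDepth(float z, float n, float f)
{
    float clipZ = z * f / (f - n) - f * n / (f - n);
    float clipW = z;
    return clipZ / clipW;
}
```

A point on the near plane comes out as 0, a point on the far plane as 1, and everything in between maps non-linearly (most precision near the camera).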

And you’re done!

Sort of.

What you’ll find when you do this is that your back faces will render on top of your front faces, because they’re now rendering as a plane, and that plane sits in front of your front faces.

There are kind of two ways to go about fixing this.

One option is, assuming you’re rendering these boxes individually, to do a ray-box intersection in the shader against a box the same size as your mesh box, and clip anywhere the plane falls outside the confines of the box.

The other option would be to render the front faces as they are first, render the back faces but only to the stencil, and then render the back or front faces again only where the back faces were visible. The one caveat to this method is you need to render the boxes before the sphere, or anything else that might intersect with the virtual plane.

Thank you SO MUCH for your help! As I said, I’ve been working on this, off and on, throughout the entire month (not just the depth, but all the water shader issues: stencil, transparency sorting, etc.).

Luckily I’d already devised a method for sorting the “tops” with intersecting objects as well as their own meshes.

  • The water isn’t rendered with the main camera; instead, it’s rendered with a second camera and a RenderWithShader call on that camera that marks backfaces with a transparency of 0.5 (even though the replacement shader renders objects as opaque).
  • This allows me to use the post-processing effect to only apply the “top” texture to pixels with an alpha of 0.5 in the output render texture, when blending the water onto the source.
  • A vertex color of (1,0,1,1) is used on the water cubes so the blending can identify where to overlay the water meshes in the final output, and where they’re obscured by other objects so those pixels aren’t overlaid.
  • The only downside is that it’s now 4 render calls per frame (Main Camera, Water Camera, Water Face Transparency, Water Final Depth). I tried spreading the water calls out into 3 renders over 3 frames, but that causes some weird flickering on the “tops” (they flicker really bright if there are fewer than 2 render calls per frame; I’m guessing it has something to do with Unity managing RenderTexture member variables in the background).

Here’s the final result if you’re interested:
7085578--843442--WaterFalseDepth_01_Optimized.gif

I still need to figure out how to get the top texture to scale properly with distance to camera; but it shouldn’t be too hard now that I have all the correct depth values (even if some of them are fake ;))
