Manual Occlusion Culling within a semitransparent Mesh [Solved]

So, I have a semitransparent mesh (for example a terrain).
I figured out how to do backface culling, so I don't see the polygons that aren't facing me (let's put it that way).
The second problem is: I have polygons which are far away and shouldn't be seen, because closer polygons are bigger and hide them. How can I sort out and hide those far polygons on the shader side?
I don't even know the right name for it. As I read here, it's probably called occlusion culling.
And the way Unity handles it is probably
ZWrite On, to write depth info, and the ZTest param to handle the depth comparison, but I'm not sure.
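For reference, ZWrite and ZTest are ShaderLab render-state commands set per Pass; a minimal sketch of what that usually looks like (not the poster's actual shader):

    Pass
    {
        ZWrite On       // this pass writes its depth into the depth buffer
        ZTest LEqual    // keep a fragment only if it is at or in front of what is already there (the default)
        // CGPROGRAM with the usual vert/frag functions goes here
    }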

Also, I wonder whether Unity adds some code depending on the ZTest param, or whether the video card handles it.
I have a very complicated case and would love to know how to actually add a depth value, for example in a vert shader, and hide stuff, for example in a frag shader, but by writing the code myself and not just by using ZWrite/ZTest.


@bgolus I read the post and realized that I actually have a non-transparent mesh.
But the thing is → I'm using line topology. So I sort of have polygons, then I make lines in a geometry shader and render them in a frag shader.
I just wonder how I can write a depth value in the vert shader manually, and how I can use those values in the frag shader to hide stuff? I literally need code for it because I don't want to use ZWrite/ZTest. I want to write it myself.
Not sure that it's possible, though.

The vertex shader and geometry shaders can’t write depth, they can only pass on information to the next part of the pipeline. Ultimately only the fragment shader can do anything with that information, and then only really using ZWrite and ZTest.

Understand that a vertex shader only has knowledge of the individual vertices, and the geometry shader still only knows about vertices. Those parts of the shader pipeline do not know where they are on the screen, they don't even really know about the polygon surfaces, just the vertices, and they're both just setting up data that eventually gets fed into the hardware rasterization (the actual point when the GPU calculates the pixel position of the surface). The fragment shader is the first stage that knows where on screen something is, and it is the only place that can write data.*

Now you could have your fragment shader write out to a separate, manually created depth texture and do your depth tests there, but that is expensive and complicated, and a lot more work than it sounds like, as you'll need to use a RWTexture.
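A very rough sketch of what that could look like; everything here is illustrative (the _ManualDepth name, the u1 register, the eyeDepth interpolator, the quantization), and the texture would also have to be bound from C# with Graphics.SetRandomWriteTarget and cleared to the maximum value every frame:

    // requires #pragma target 5.0; all names below are made up for illustration
    RWTexture2D<uint> _ManualDepth : register(u1);

    fixed4 frag (v2f i) : SV_Target
    {
        uint2 pixel = uint2(i.pos.xy);              // i.pos : SV_POSITION gives pixel coordinates here
        uint myDepth = (uint)(i.eyeDepth * 1024.0); // hypothetical eye-depth interpolator, smaller = closer

        uint closestSoFar;
        InterlockedMin(_ManualDepth[pixel], myDepth, closestSoFar);

        // something closer already claimed this pixel this frame
        if (myDepth > closestSoFar)
            discard;

        // note: fragments are not ordered, so this is only an approximation
        return fixed4(1, 1, 1, 1);
    }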

Another alternative is to rebuild the mesh each frame on the CPU to reorder the polygons based on distance from the camera, but that’s also very slow.

If you can describe what you’re trying to do, or better show example images, there’s probably a better way to accomplish what you want. My best guess from your loose description is you actually want to do a depth only prepass, or maybe use a geometry shader to do barycentric lines instead of using line topology so you can do it in one pass.

  • In DX11, Geometry Shaders can write out the vertex data to a new mesh, but this doesn't help you. This is how Unity does its skinned meshes in DX11.
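As a rough sketch of the barycentric-lines idea mentioned above (assuming a v2f struct with a clip-space pos; this is not the poster's shader): the geometry shader tags each corner with a barycentric coordinate and the fragment shader draws the edges, so the wireframe comes out of ordinary opaque triangles and the normal ZWrite/ZTest path hides the far lines.

    struct g2f
    {
        float4 pos  : SV_POSITION;
        float3 bary : TEXCOORD0;
    };

    [maxvertexcount(3)]
    void geom (triangle v2f input[3], inout TriangleStream<g2f> stream)
    {
        static const float3 corners[3] = { float3(1,0,0), float3(0,1,0), float3(0,0,1) };
        for (int i = 0; i < 3; i++)
        {
            g2f o;
            o.pos  = input[i].pos;
            o.bary = corners[i];
            stream.Append(o);
        }
    }

    fixed4 frag (g2f i) : SV_Target
    {
        // screen-space distance to the nearest triangle edge
        float3 d = i.bary / fwidth(i.bary);
        float edge = min(min(d.x, d.y), d.z);
        clip(1.0 - edge); // keep roughly a 1 pixel wide line, discard the interior
        return fixed4(1, 1, 1, 1);
    }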

@bgolus thank you so much for the reply!

So, here's an example. I have a mesh with a terrain.
It's a regular mesh built with polygons.
Then I make 3 lines out of each polygon in a geometry shader (each line is a set of 2 newly created triangles → so 6 new polygons out of each initial polygon in the geometry shader).
Then I render them in a frag shader.

So, I made backface culling work with just a dot function.
Now I want to make depth work and hide overlapping polygons.
How can I do it?
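(For context, that dot-product backface test in a geometry shader looks roughly like the sketch below; it assumes the vertex shader also passes a view-space position in a field called viewPos and that g2f is whatever output struct the line-building code uses. The sign of the comparison may need flipping depending on the winding order.)

    [maxvertexcount(18)]
    void geom (triangle v2f input[3], inout TriangleStream<g2f> stream)
    {
        // face normal in view space from two triangle edges
        float3 edgeA = input[1].viewPos - input[0].viewPos;
        float3 edgeB = input[2].viewPos - input[0].viewPos;
        float3 faceNormal = cross(edgeA, edgeB);

        // in view space the camera sits at the origin, so the position also works as the view direction
        if (dot(faceNormal, input[0].viewPos) > 0.0)
            return; // facing away: emit nothing for this triangle

        // ... otherwise build the 3 lines (6 triangles) and Append() them as usual ...
    }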

I tried

    float depth = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos)).r);

and tried to compare it with i.projPos.z,
where

                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.projPos = ComputeScreenPos(o.pos);

but it seems like _CameraDepthTexture doesn't really work, or has weird values.
I'm using OSX btw - I hope it should work… because DOF doesn't, for example.
Anyway, I turned the camera depth buffer on with GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;

So, I'm really confused. As far as I understood, I had to compare _CameraDepthTexture with ComputeScreenPos(o.pos), but I'm doing something wrong, or maybe the depth buffer (I mean _CameraDepthTexture) just doesn't work.

Yep, a depth pre-pass is going to be the fastest way to get this mostly working.

Not knowing what your shader looks like I have no idea if the data in the depth texture is going to be of any use. You need a shadowcaster pass to render to the depth texture, which having “Fallback Diffuse” or something similar will add. You’ll want to look at the particle shaders for an example on how to use the depth texture though. You need specific data for both reading from the depth texture and comparing against it (it’s not just the z from ComputeScreenPos).
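Roughly, the particle shaders do something like the following (a sketch using the built-in UnityCG macros, not the poster's shader). Note the eye depth used for the comparison comes from COMPUTE_EYEDEPTH, not from the raw z of ComputeScreenPos:

    struct v2f
    {
        float4 pos     : SV_POSITION;
        float4 projPos : TEXCOORD0; // xy/w: screen position, z: linear eye depth
    };

    sampler2D_float _CameraDepthTexture;

    v2f vert (appdata_base v)
    {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        o.projPos = ComputeScreenPos(o.pos);
        COMPUTE_EYEDEPTH(o.projPos.z); // linear eye-space depth of this vertex
        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        // depth already in the camera depth texture, converted to linear eye depth
        float sceneZ = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos)));
        // this fragment's own linear eye depth
        float fragZ = i.projPos.z;

        // if the depth texture holds something closer, hide this fragment
        clip(sceneZ - fragZ);
        return fixed4(1, 1, 1, 1);
    }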

For the depth prepass see this page:

// extra pass that renders to depth buffer only
    Pass {
        ZWrite On
        ColorMask 0
    }

With lines it’ll cause a little bit of z fighting and over occlusion, so you’ll also need an offset towards the camera, either on the lines or on this depth only pass. So try:

// extra pass that renders to depth buffer only
    Pass {
        Offset 0 1
        ZWrite On
        ColorMask 0
    }

edit: fixed weird thing with the code blocks getting shifted.

@bgolus thank you a lot for the reply / I'll go through it tomorrow

You wrote "So try: . . ." and then nothing.
Was there anything important? Or did you mean I have to try the stuff you wrote before?

Wow, for some reason the code blocks got shifted. The first one should be where the second one is, and the second one should be at the end. Fixed the original.


@bgolus
This is so confusing.
So, I made the shader you wrote and applied it as a material to my landscape, and it looks like this:

So, I have questions:

  1. Is this the way it should look? Because I thought there would be some sort of gradient depending on depth…
    but maybe the depth values aren't changing that fast and I'm just not seeing it.
  2. How can I use this shader as a depth texture for the camera? I mean, how do I make the camera render its depth texture using this shader? I read a lot in the documentation and still didn't find code for it.
  3. You mentioned:

You need specific data for both reading from the depth texture and comparing against it (it’s not just the z from ComputeScreenPos).
Can you please write the code showing how I'm supposed to do that? Because I also didn't find an example.

Dang, there's so much written in the documentation, but there's no simple example.

The depth only shader doesn’t write any color at all, only to the depth buffer. What you’re seeing in the above is just an artifact of writing to the depth buffer before the sky is drawn.

Can you post the full shader you have? Including the one with the line geometry shader.

@bgolus I wrote you a PM

So, I just want to mention that it was a bug in my wireframe shader and it wasn't related to the depth buffer. Now everything works.
Also, @bgolus, thank you very much for helping me!