Write to Depth from Fragment

Hi

I’ve got a shader for a background that should write to the depth buffer, so objects in front get clipped by it.

I’ve checked a few threads and answers. It seems to almost work.

Here’s what I do (relevant code):

....
			struct fragmentOutput {
				float4 c : COLOR;
				fixed4 z : DEPTH;
			};

			fragmentOutput FRAG(vertexOutput i)
			{
				fragmentOutput o;
				float depth;      // world-space depth ends up in this variable (0 to "infinity"); computation omitted

				depth = 1.0 / (_ZBufferParams.x * depth + _ZBufferParams.y);    // this is what I got from "UnityCG.cginc"
				o.z.zw = EncodeFloatRG(depth);    // depth should be encoded in zw? also from "UnityCG.cginc"
				o.c = float4(c, 1.0);             // c is the color computed earlier (omitted)
				return o;
			}

What I’m experiencing so far: it seems the z component of the depth output (o.z.z) has some effect (but only as either 0 or 1), while the w component (o.z.w) seems to have no effect.

It compiles fine, and the problem described here: http://forum.unity3d.com/threads/66153-Writing-depth-value-in-fragment-program seems to have been fixed.

Still no luck.

Shouldn’t the depth output be a float instead of fixed4? Why did you think fixed4 would work?

Anyways, your problem would be solved much more easily and efficiently by the stencil buffer.

EDIT: Oooh I see… the depth encoding functions you see there are for encoding depth into the color buffer, which is used in the deferred lighting path. Here, you’re writing directly into the depth buffer, so just output the depth. Also, there’s no such thing as a world-space depth. I’m not sure how you’re computing it, but you should pass the projected position from your vertex shader and, in the fragment shader, output pos.z / pos.w as the depth.
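Roughly like this, untested and reusing your struct names, just to show the idea:

			struct vertexOutput {
				float4 pos : SV_POSITION;
				float4 projPos : TEXCOORD0;    // copy of the clip-space position
			};

			vertexOutput VERT(appdata_base v)
			{
				vertexOutput o;
				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
				o.projPos = o.pos;    // SV_POSITION is consumed by the rasterizer, so keep a copy
				return o;
			}

			struct fragmentOutput {
				float4 c : COLOR;
				float z : DEPTH;    // a single float, not a fixed4
			};

			fragmentOutput FRAG(vertexOutput i)
			{
				fragmentOutput o;
				o.c = float4(1.0, 1.0, 1.0, 1.0);    // whatever color you were computing
				o.z = i.projPos.z / i.projPos.w;     // non-linear device depth, written straight to the depth buffer
				return o;
			}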

Hi Dolkar

Thanks for looking at my problem.

Stencils don’t really work (as far as I understand them), because I want the engine to determine whether something is in front of or behind the background. The background is on a flat plane and I want to trick the engine into thinking it’s an actual 3D room (with the help of depth values read from a texture), so it can draw or clip all the other objects. I’ve done this a couple of times in a compositing application, so it should work (only with a fixed camera, of course).

Mhm, I’m working in Forward Rendering… is it not going to work there?

Read my edited post.

So, you’re rendering an image on a flat plane and reading the depth from a texture or something?

Yeah. I pre-render the scene in a 3D application (Houdini), put the rendered image on a plane, film it from the same perspective, and add real-time 3D objects to the scene in Unity. Now I would like to use the depth information (which can also be rendered out) to clip the objects in Unity.

After some googling, it seems this won’t ever work on iOS (there’s no way to write the depth in a fragment program). I’ll use a custom depth texture and test against it in the shader. Hope this doesn’t get too slow.

The difference between the depth buffer and a depth texture is important.

If you want to use the standard depth tests (ZTest LEqual), you need to have the depth data in the depth buffer. To make code work across platforms, most effects in Unity use depth textures instead. Where supported, these can get their information from the depth buffer, but as far as I know, you can’t simply set the depth buffer from a fragment shader.

The code you see in the fragment shader of depth-writing shaders is only meant to write to the depth texture when that data cannot be retrieved from the depth buffer.

You CAN write depth from the vertex shader if ZWrite is enabled: a value between 0 and 1 with DirectX, and a value between -1 and 1 with OpenGL.
I’m just basing this on my own experiments, which were all done on a Windows 7 machine, so your results may vary.
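For example, something along these lines (a sketch based on those experiments; _ForcedDepth is a made-up property):

			float _ForcedDepth;    // made-up property: the depth to force, 0..1 in the DirectX convention

			struct vertexOutput {
				float4 pos : SV_POSITION;
			};

			vertexOutput VERT(appdata_base v)
			{
				vertexOutput o;
				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
				// With ZWrite On, the rasterizer writes pos.z / pos.w to the depth buffer,
				// so scale by w to get the desired value after the perspective divide.
				// (On OpenGL you would remap to the -1..1 range instead.)
				o.pos.z = _ForcedDepth * o.pos.w;
				return o;
			}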

If you apply the effect manually in an image effect, you should be able to compare the depth information from the Unity scene with the depth information from Houdini. Keep in mind that you might have to convert the depth values you get from Houdini, based on the near and far clipping planes in Houdini (instead of those used by your ‘currently processing’ camera), and any other conversions done by Houdini during export, to get both depth textures into the same space.
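A rough sketch of what that comparison could look like, assuming Houdini exports linear eye-space depth (_HoudiniDepthTex, _HoudiniNear and _HoudiniFar are made-up names):

			#include "UnityCG.cginc"

			sampler2D _CameraDepthTexture;    // Unity's scene depth
			sampler2D _HoudiniDepthTex;       // made up: the depth rendered out of Houdini
			float _HoudiniNear;               // made up: Houdini camera near plane
			float _HoudiniFar;                // made up: Houdini camera far plane

			fixed4 frag(v2f_img i) : COLOR
			{
				// Unity depth, converted to a linear 0..1 range between near and far:
				float sceneDepth = Linear01Depth(tex2D(_CameraDepthTexture, i.uv).r);

				// Houdini depth, remapped from its own clipping planes into the same 0..1 range:
				float houdiniEye = tex2D(_HoudiniDepthTex, i.uv).r;
				float houdiniDepth = (houdiniEye - _HoudiniNear) / (_HoudiniFar - _HoudiniNear);

				// Visualize which source is closer, just to check both are in the same space:
				return sceneDepth < houdiniDepth ? fixed4(1, 0, 0, 1) : fixed4(0, 1, 0, 1);
			}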

Thanks a lot for the clarifications, RC-1290.

Writing the depth information in a vertex shader wouldn’t help in my case, so I got around it with a RenderTexture instead. I’m rendering the depth information to the RenderTexture (so I can still pan / zoom the camera) and use it at the beginning of my fragment shader to discard any fragments that are behind that value.
Works like a charm, with almost no speed hit actually (I’m rendering ambient occlusion anyway, where I also need the depth in an extra RenderTexture, so I packed it in there). The resolution can be half, or even lower than that.
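For reference, the core of it looks roughly like this (simplified; _BackgroundDepthRT stands in for my actual RenderTexture, assumed to store linear eye depth):

			sampler2D _BackgroundDepthRT;    // the RenderTexture holding the pre-rendered depth

			struct vertexOutput {
				float4 pos : SV_POSITION;
				float4 screenPos : TEXCOORD0;    // for sampling the RenderTexture in screen space
				float eyeDepth : TEXCOORD1;      // linear eye-space depth of this vertex
			};

			vertexOutput VERT(appdata_base v)
			{
				vertexOutput o;
				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
				o.screenPos = ComputeScreenPos(o.pos);             // from "UnityCG.cginc"
				o.eyeDepth = -mul(UNITY_MATRIX_MV, v.vertex).z;    // depth along the view direction
				return o;
			}

			fixed4 FRAG(vertexOutput i) : COLOR
			{
				float2 uv = i.screenPos.xy / i.screenPos.w;
				float bgDepth = tex2D(_BackgroundDepthRT, uv).r;

				clip(bgDepth - i.eyeDepth);    // discard fragments behind the background's depth

				return fixed4(1.0, 1.0, 1.0, 1.0);    // regular shading goes here
			}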