Matrices and render-to-texture depth values

A couple more questions:

  1. Does Unity expose (through an include or what have you) the light view matrix and light projection matrix (from spotlights, for example) of a light?

If not, can I similarly access a projector’s matrices in my Cg code? I think you can do this via the fixed-function pipeline by using Matrix [_Projector] (for the viewing matrix) and Matrix [_ProjectorClip] (for the projector projection matrix). Am I understanding those two matrices correctly? Anyway, I would like to be able to access these matrices in my shader code.

  2. Can I access the depth values of a ‘render to texture’? Does it, for example, place the depth value in the alpha channel?

I’m working my way toward a shadow map shader, and help with these two questions will get me much closer.

Cheers,
Paul Mikulecky
Lost Pencil Animation Studios Inc.
http://www.lostpencil.com

  1. Not at the moment. You can do that yourself via scripts, of course. For example, to render a shadow map you’d have to use a camera (maybe on the same object as the light). Then use a script like this:
var shadowMaterial : Material;
private var texMatrix : Matrix4x4;
function Start()
{
	texMatrix = Matrix4x4.identity;
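	// scale and bias: remap the light camera's clip space range (-1..1)
	// into texture space (0..1)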
	texMatrix[0,0] = 0.5;
	texMatrix[1,1] = 0.5;
	texMatrix[2,2] = 0.5;
	texMatrix[0,3] = 0.5;
	texMatrix[1,3] = 0.5;
	texMatrix[2,3] = 0.5;
}

function LateUpdate()
{
	var c : Camera = camera;
	var shadowMatrix = texMatrix * c.projectionMatrix * c.worldToCameraMatrix;
	shadowMaterial.SetMatrix("ShadowMatrix", shadowMatrix);
}

That calculates the canonical texture scale * projection * camera matrix and sets it as a matrix property on the material. Now just use it in the shader: somewhere inside the CGPROGRAM part, declare a

float4x4 ShadowMatrix;

And there you are.

  2. No, the depth buffer can’t be accessed in any way at the moment. To do self-shadowing shadow maps right now you’d have to output depth yourself (i.e. pack the depth value into the RGBA channels, and unpack it when reading the texture).
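
A minimal sketch of that packing (assuming a UnityCG.cginc version that has the EncodeFloatRGBA / DecodeFloatRGBA helpers, which later posts in this thread use; the structure and names below are illustrative, not Aras’s code):

struct v2f {
	float4 pos : POSITION;
	float depth : TEXCOORD0; // 0..1 depth computed in the vertex program
};

// Caster pass: pack the depth value into the RGBA channels.
float4 frag (v2f i) : COLOR
{
	return EncodeFloatRGBA(i.depth);
}

// When sampling the shadow map later, unpack it again, e.g.:
//   float storedDepth = DecodeFloatRGBA(tex2D(_ShadowMap, uv));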

Hi Aras,

Thanks again for the informative reply!

Good tip for #1.

As for #2, basically encoding the floating point depth value in fixed point - got it. But how do I, in Unity, get at the depth value of the fragment from a camera in the shader? Do I use the ‘render to texture’ texture from the camera in #1 and then access the z using:

tex2Dproj (sampler2D   tex, float4 szq);

Or do I use the DEPTH semantic on the render texture from the render-to-texture camera? Or some other way? Sorry about the basic questions; I’m in learning mode. :smile:

By the way, great couple of articles in ShaderX4!

Cheers,
Paul Mikulecky
Lost Pencil Animation Studios Inc.
http://www.lostpencil.com

You compute the depth yourself (in world space, clip space, anything else you can think up).
For example, using clip space depth: pass the clip space position to the fragment program (just output the same value as you do to the POSITION semantic); then in the fragment program, z/w is the depth (in the -1…1 range, IIRC). To sample the shadow map using the above matrices, in a vertex shader you’d pass a float4 to the fragment program like this:

o.sh = mul( _Object2World, v.vertex );
o.sh = mul( ShadowMatrix, o.sh );

Then again, sample the shadow map using

float4 shadow = tex2Dproj( _ShadowMap, i.sh );

and compare with the depth:

float depth = i.sh.z / i.sh.w; // plus some bias, e.g. *0.99
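
Put together, the receiver-side fragment program could look roughly like this (just a sketch; _ShadowMap, the RGBA packing and the 0.99 bias factor are assumptions carried over from the posts above, and DecodeFloatRGBA comes from UnityCG.cginc):

sampler2D _ShadowMap;

struct v2f {
	float4 pos : POSITION;
	float4 sh : TEXCOORD0; // ShadowMatrix * _Object2World * vertex, from the vertex program
};

float4 frag (v2f i) : COLOR
{
	// Depth stored in the shadow map (here assumed packed with EncodeFloatRGBA).
	float stored = DecodeFloatRGBA(tex2Dproj(_ShadowMap, i.sh));

	// This fragment's depth as seen from the light, with a small bias
	// against self-shadowing acne. The 0.5 scale/bias rows in texMatrix
	// put z/w into the 0..1 range, so the two depths are comparable.
	float depth = (i.sh.z / i.sh.w) * 0.99;

	// Darken the fragment if it is farther from the light than the stored depth.
	float lit = depth > stored ? 0.5 : 1.0;
	return float4(lit, lit, lit, 1);
}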

…maybe I should do a small example of this stuff sometime :slight_smile:

Ahh, OK, I think I understand. I was under the impression that the depth was already embedded in some structure (a texture or something else) and that I could just sample it directly. Thanks!

Cheers,
Paul Mikulecky
Lost Pencil Animation Studios Inc.
http://www.lostpencil.com

Hey Aras,

So this is the way to communicate between JavaScript and the shader, right? In the above case, passing a matrix of our own making to the shader.

Well, I have the following code in JavaScript (this does nothing useful, just a test). Essentially I’m setting the color of the object to pure red via JavaScript:

function LateUpdate() 
{ 
	var my_color : Vector4; 
	my_color = Vector4 (1, 0, 0, 1);
	testmaterial.SetVector("MyColor", my_color);
}

And then I have this in the shader that is assigned to testmaterial:

			CGPROGRAM
				// profiles arbvp1
				// profiles arbfp1
				// fragment frag
				// vertex vert
				#include "UnityCG.cginc"
	
				float4 MyColor;

				struct vs_input {
				    float4 vertex : POSITION; 
				    float3 normal : NORMAL;
				};
				struct vs_output {
					float4 pos : POSITION;
					float4 diffuse : COLOR;
				};
					
				vs_output vert (vs_input v) 
				{
					vs_output Out;

					Out.pos = mul(glstate.matrix.mvp, v.vertex); 
					Out.diffuse = MyColor;	
					return Out;
				}
				
				float4 frag (vs_output f) : COLOR 
				{
					float4 outcolor;
					outcolor = f.diffuse;
					return outcolor;
				}
			ENDCG

I get a syntax error on this line: //fragment frag, which is pretty weird. If I change ‘float4 MyColor’ to ‘uniform float4 MyColor’, there are no errors, but it takes the color from the property swatch it creates in the Unity GUI instead of the value I passed through JavaScript.

Suggestions? Am I missing something fundamental here?

Cheers,
Paul Mikulecky
Lost Pencil Animation Studios Inc.
http://www.lostpencil.com

Oops. Never mind. It pays to assign the proper material to the object (not just the shader)!

Cheers,
Paul Mikulecky
Lost Pencil Animation Studios Inc.
http://www.lostpencil.com

Hi Aras, sorry for digging up this old thread, but I got interested in it.

Could you please explain the process in more detail?

As far as I understand, a shadow map must be drawn from the light’s view. If the light is directional, an orthographic (parallel-projection) camera should be set up on the light and the scene rendered from that point of view, storing only Z data in a texture. Am I right or wrong?

Anyway, what should be stored in this shadow texture?
Z? W? Both? What would be in the shader? Some helper like UNITY_OUTPUT_DEPTH(i.depth) in the fragment program? Or should the depth be encoded in RGBA with

distance = EncodeFloatRGBA(i.depth.x); // Z
return(distance);

or should I encode both Z and W in RG and BA?

Things must be detailed precisely (as it’s not cooking :wink:) and they are not clear to me.

That covers rendering the shadow map from the light’s point of view…

But now, from the observer’s view… I understand we must pass the previously rendered shadow map into the observer-view camera’s shader, then for each rendered pixel in the observer view calculate the UV coordinates into the shadow map.
What, exactly, is the matrix needed to achieve this? I understand the matrix is set up in a script and transferred to the shader, and it seems the code above:
o.sh = mul( _Object2World, v.vertex );
o.sh = mul( ShadowMatrix, o.sh );

does the right job.
o.sh = mul( _Object2World, v.vertex ); takes v.vertex in object coordinates and transforms it into world coordinates, then…
o.sh = mul( ShadowMatrix, o.sh ); takes those world coordinates and ‘simulates’ the light-view camera to retrieve UV texture coordinates.
Am I right or wrong?

Then, from those UV coordinates, do we retrieve Z? W? Z/W? W/Z? In other words, what do/should we retrieve from this shadow map with:

float4 shadow = tex2Dproj( _ShadowMap, i.sh );

???

Finally, we compare this depth info retrieved from the shadow map with the depth from the observer’s view… but what coordinate space are those depths compared in? Comparing a Z in object space with a Z in light space would be nonsense.

I confess the best complete explanation of this principle would be two scripts and two shaders, or better… a package that would load and run, showing all the mechanics used in this shadowing process.

Maybe that’s what I’ll offer here once I understand the whole thing and can give a clear and simple explanation. But first of all, I have to get this working in Unity.

Thanks in advance, Aras and all, for your answers!

Regards.

Bump!
Are the topics I post too hard for the community to answer? :frowning:

Sorry to dig up the old thread, but I was struggling with this recently. In addition to the above, to actually get the depth value out of the depth buffer texture, you need to do:

        float4 shadow_depth4 = tex2Dproj(_RenderPaintTexture, i.sh.xyw);
        float shadow_depth = DecodeFloatRG(shadow_depth4.zw);

Hope that saves some time for anyone looking to implement a custom depth-buffer shadowing solution.
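
For context, that decode assumes the shadow map was written with depth packed into its .zw (blue/alpha) channels; a matching caster-side fragment program might look like the sketch below, where EncodeFloatRG is the UnityCG.cginc counterpart of DecodeFloatRG and the channel layout is only an assumption to line up with the snippet above:

struct v2f {
	float4 pos : POSITION;
	float depth : TEXCOORD0; // 0..1 depth computed in the vertex program
};

// Write depth into the .zw channels so that DecodeFloatRG(sample.zw)
// above recovers it.
float4 frag (v2f i) : COLOR
{
	return float4(0, 0, EncodeFloatRG(i.depth));
}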

Sorry to resurrect this thread, but this has been the closest I’ve found.

I’m making a custom spotlight, and for obvious reasons I’m using UnityDeferredCalculateLightParams. However, I have to pass it all the variables it needs, including the “unity_WorldToLight” matrix.

However, for a spot light, what you posted is close but not quite it. For example, I know the object scale doesn’t affect the matrix, and I feel the first three values of texMatrix should be negative. Any chance you could update that for Unity 2017, or am I making some mistake somewhere?