Vertex's Position in Another Camera's Space?

How does one convert a vertex’s worldPos (given by the input structure to the surface shader) to the same position in the “screen space” of an arbitrary camera? Since it isn’t the main camera, the input structure’s screenPos won’t work. I’d like to stick with surface shaders (although the answer can be “just use fragment and vertex shaders, you know how to use those!”).

A little background: my scene has 2 cameras. In a shader for the second camera, I want to find the position of the given vertex on the first camera’s “screen”; in other words, the analogue of that vertex’s screenPos for the first camera.

I’m currently passing a matrix into the material via a script. Each piece of geometry in my little scene has this script attached.

using UnityEngine;
using System.Collections;

public class SetLightVPMatrixInMaterial : MonoBehaviour {
	
	public Camera lightCamera;
	
	// Matrix used to go from [-1, 1] to [0, 1] in each axis
	private Matrix4x4 bias;

	// Use this for initialization
	void Start () {
		bias = new Matrix4x4();

		bias.SetColumn(0, new Vector4(0.5f, 0.0f, 0.0f, 0.0f));
		bias.SetColumn(1, new Vector4(0.0f, 0.5f, 0.0f, 0.0f));
		bias.SetColumn(2, new Vector4(0.0f, 0.0f, 0.5f, 0.0f));
		bias.SetColumn(3, new Vector4(0.5f, 0.5f, 0.5f, 1.0f));
	}
	
	// Update is called once per frame
	void Update () {
		// Build the light camera's (biased) view-projection matrix
		if ( lightCamera )
		{
			// Moving from unit cube [-1,1] to [0,1]  
			Matrix4x4 vpMatrix = bias * lightCamera.projectionMatrix * lightCamera.worldToCameraMatrix;
			
			// Set the viewProjection matrix to a value in the material
			renderer.material.SetMatrix("_LightViewProjectionMatrix", vpMatrix);			
		}
	}
}

In the shader I do the following

      	struct Input {
			// Analogous to the in attributes in GLSL shaders
			float3 worldPos;
		};

		void surf (Input IN, inout SurfaceOutput o) {
			// Convert the worldPos to the light's coordinate system to compute the UV coordinates of
			// the given vertex for the light's render texture
			float4 lightScreenPosW = mul(_LightViewProjectionMatrix, float4(IN.worldPos, 1));			
			float3 lightScreenPos = lightScreenPosW.xyz / lightScreenPosW.w;

This code seemingly gives me something close. I believe lightScreenPos.x and y are what I want them to be, but I don’t think lightScreenPos.z is what I expect. I want that value to be the z position of the current vertex as viewed by the camera that I pass into the script above.

I read somewhere that camera.worldToCameraMatrix has some issues with z coordinates, but I’ve fiddled with it and haven’t been able to fix much.

So how do I go from a worldPos to the screen space of a camera that isn’t the current camera?

Bump…(I’ll try not to do this too often)

I don’t know, your approach looks fine to me. Here is my code:

#pragma strict
@script ExecuteInEditMode()

public var otherCamera : Camera; 

function Start() {
   var cameras : Camera[] = FindObjectsOfType(Camera) as Camera[];
    for (var c : Camera in cameras) 
    {
        if (c.name == "Other Camera")
		{
			otherCamera = c;
		}
    }
}


function Update () {

	renderer.sharedMaterial.SetMatrix("_OtherMatrix", 
		otherCamera.projectionMatrix * otherCamera.worldToCameraMatrix);  
}

and the shader

Shader "Custom/Other Camera" {
    Properties {
      _Color ("Main Color", Color) = (1,1,1,1)
      _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader {
      Tags { "RenderType" = "Opaque" }

      CGPROGRAM

      #pragma surface surf Lambert

      struct Input {
          float2 uv_MainTex;
          float3 worldPos;
      };

      float4x4 _OtherMatrix;
      float4 _Color;
      sampler2D _MainTex;


      void surf (Input IN, inout SurfaceOutput o) 
      {
          float4 clipCoors = mul(_OtherMatrix, float4(IN.worldPos, 1.0));
          float4 normalizedCoors = clipCoors / clipCoors.w;
          float3 screenCoors = float3(0.5 + 0.5 * normalizedCoors.x, 
             0.5 + 0.5 * normalizedCoors.y,
             0.5 + 0.5 * normalizedCoors.z);
          

          //o.Albedo = _Color.rgb * tex2D (_MainTex, screenCoors.xy).rgb;
          //o.Alpha = _Color.a; 
          o.Emission = float3(screenCoors.z);
      }
      ENDCG
    } 
    Fallback "Diffuse"
  }

Why do you think that this is the wrong depth?

Edit: Oh, now I see that you do the perspective division after applying the viewport transformation (the “bias” matrix). That’s incorrect.

Thanks for your reply. I’ll try mine again without the bias matrix and using the explicit 0.5 * and + 0.5 (though the math should be the same…maybe I have a bug?)

The way I’m debugging this is…

My first camera renders to a “render texture.” It has a very simple shader on it that checks whether IN.screenPos.z is above a certain threshold: above the threshold, green is rendered, otherwise red. So, given a simple scene consisting of just a plane, I get output like this.

[Attachment: Screen Shot 2012-04-25 at 1.59.00 PM.png]

The second camera (which, for now, is placed at roughly 90 degrees to the first camera, along the x/y plane) converts IN.worldPos to the “screen space” of the first camera. The second camera’s shader checks whether the “otherCameraScreenSpace” z is above the same threshold as in the first shader.

Below the threshold, the second shader just outputs the color from the texture. Above the threshold, blue is output.

Shader "Custom/RenderGeometryWithShadows" {
	Properties {
		_MainTex ("Depth Texture (RGBA)", 2D) = "black" {}
	}

	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200
		
		CGPROGRAM
		#pragma surface surf DiffuseOnly

		sampler2D _MainTex;
		
		// Passed in via a script
		uniform float4x4 _LightViewProjectionMatrix;
		
		half4 LightingDiffuseOnly(SurfaceOutput s, half3 lightDir, half atten)
		{
			half diff = 1;

			half4 c;
			c.rgb = s.Albedo * diff;
			c.a = s.Alpha;
			return c;
      		}
      	
		struct Input {
			float3 worldPos;
		};

		void surf (Input IN, inout SurfaceOutput o) {
			// Convert the worldPos to the light's coordinate system to compute the UV coordinates of
			// the given vertex for the light's render texture
			float4 lightScreenPosW = mul(_LightViewProjectionMatrix, float4(IN.worldPos, 1));			
			float3 lightScreenPos = lightScreenPosW.xyz / lightScreenPosW.w;

			float4 encodedDepth = tex2D(_MainTex, lightScreenPos.xy);

			// Make sure that the vertex was visible from the light to begin with.
			if (
			    lightScreenPos.x >= 0 &&
			    lightScreenPos.x <= 1 &&
			    lightScreenPos.y >= 0 &&
			    lightScreenPos.y <= 1 &&
			    lightScreenPos.z < 0.4 )
			{
				o.Albedo = encodedDepth;
			}
			else
			{
				o.Albedo = float3(0, 0, 1);
			}
			o.Alpha = 1;
		}
		ENDCG
	} 
	FallBack "Diffuse"
}

Given that setup, I would expect that, for the same threshold value, the plane would look exactly like the screenshot above, except blue where the green was. But that isn’t the case.

For the same threshold that produced the screenshot above, I end up seeing all blue, meaning the depth in my second shader (computed via the matrix multiplication) is less than the z in my first shader (taken from IN.screenPos). If I raise the threshold (to, say, 0.6) I eventually see red, but the amount of red is always less than I expect unless the threshold is so large that it covers the entire geometry.

Those facts are why I think the computed z is wrong. That may not be the case; maybe the z in the first shader is wrong instead, but something is wrong somewhere.

Did you see my edit? You do the perspective division after the viewport transformation (the multiplication with the bias matrix). This makes a difference. (But I don’t know whether that explains what you see.)

Just saw your edit. I don’t think that this makes a difference: the initial w value for the vector is 1. I’ve used similar code in straight GLSL with no problems whatsoever. Just for debugging purposes, I took out the “bias *” part of the matrix-setting code. I’ve also put your code

          float4 clipCoors = mul(_OtherMatrix, float4(IN.worldPos, 1.0));
          float4 normalizedCoors = clipCoors / clipCoors.w;
          float3 screenCoors = float3(0.5 + 0.5 * normalizedCoors.x, 
             0.5 + 0.5 * normalizedCoors.y,
             0.5 + 0.5 * normalizedCoors.z);

into my shader and get the same results. Thus I believe my bias way and your way are the same conceptually and mathematically.

Hmm, the way I see it, for the x coordinate:
0.5 + 0.5 * clipCoors.x / clipCoors.w != (0.5 + 0.5 * clipCoors.x) / clipCoors.w
Thus, I would still assume that it makes a difference.
clipCoors.w will be != 1 for perspective projections.
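For example, with clipCoors.x = 1 and clipCoors.w = 2, the first expression gives 0.5 + 0.5 * (1 / 2) = 0.75, while the second gives (0.5 + 0.5 * 1) / 2 = 0.5; the two only agree when clipCoors.w = 1.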

Anyway, could you post all code for a complete example and explain what doesn’t work as expected? (I think the code you posted before was incomplete.)

Well, right. I just meant in this example I believe they produce the same results, not in general :slight_smile: Sure, I’ll post some full code here in a bit.

Shader for first camera…

Shader "Custom/RenderDepthToTextureShader" {
	Properties {
	}

	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200
		
		CGPROGRAM
		#pragma surface surf None
		
		half4 LightingNone(SurfaceOutput s, half3 lightDir, half atten)
		{
			half4 c;
			c.rgb = s.Albedo;
			c.a = s.Alpha;

			return c;
		}
  	
		struct Input {
			float4 screenPos;
		};

		void surf (Input IN, inout SurfaceOutput o) {
			float3 pos = IN.screenPos.xyz / IN.screenPos.w;
		
			if ( pos.z < 0.4 )
			{
				o.Albedo = float3(1, 0, 0);
			}
			else
			{
				o.Albedo = float3(0, 1, 0);
			}
			o.Alpha = 1;

		}
		ENDCG
	} 
	FallBack Off
}

As mentioned before, the shader just colors geometry red or green depending on whether IN.screenPos.z is below or above a given threshold.

The first camera has the following script, which sets a replacement shader used to render the objects in the scene (currently just a plane).

using UnityEngine;
using System.Collections;

public class SetCameraShader : MonoBehaviour {

	public Shader shader;

	// Use this for initialization
	void Start () {
	}
	
	// Update is called once per frame
	void Update () {
	
	}
	

	void OnPostRender() {
		Camera camera = Camera.current;
		
		if ( camera && shader )
		{
			camera.SetReplacementShader(shader, "RenderType");
		}
	}
}

The first camera also has a render texture set for it so that it renders all output to a texture.

Now, the geometry in the scene (the plane) has this script with the first camera set as the “lightCamera” parameter.

using UnityEngine;
using System.Collections;

public class SetLightVPMatrixInMaterial : MonoBehaviour {
	
	public Camera lightCamera;
	
	// Matrix used to go from [-1, 1] to [0, 1] in each axis
	private Matrix4x4 bias;

	// Use this for initialization
	void Start () {
		bias = new Matrix4x4();

		bias.SetColumn(0, new Vector4(0.5f, 0.0f, 0.0f, 0.0f));
		bias.SetColumn(1, new Vector4(0.0f, 0.5f, 0.0f, 0.0f));
		bias.SetColumn(2, new Vector4(0.0f, 0.0f, 0.5f, 0.0f));
		bias.SetColumn(3, new Vector4(0.5f, 0.5f, 0.5f, 1.0f));
	}
	
	// Update is called once per frame
	void Update () {
		// Build the light camera's (biased) view-projection matrix
		if ( lightCamera )
		{
			Matrix4x4 vpMatrix = bias * lightCamera.projectionMatrix * lightCamera.worldToCameraMatrix;
			
			// Set the viewProjection matrix to a value in the material
			renderer.material.SetMatrix("_LightViewProjectionMatrix", vpMatrix);			
		}
	}
}

Finally, the shader for the material used on the plane:

Shader "Custom/RenderGeometryWithShadows" {
	Properties {
		_MainTex ("Depth Texture (RGBA)", 2D) = "black" {}
	}

	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200
		
		CGPROGRAM
		#pragma surface surf DiffuseOnly

		// Declare a sampler2D named _MainTex
		sampler2D _MainTex;
		
		// Passed in via a script
		uniform float4x4 _LightViewProjectionMatrix;
		
		half4 LightingDiffuseOnly(SurfaceOutput s, half3 lightDir, half atten)
		{
			half diff = 1;

			half4 c;
			c.rgb = s.Albedo * diff;
			c.a = s.Alpha;
			return c;
		}
      	
		struct Input {
			float3 worldPos;
		};

		void surf (Input IN, inout SurfaceOutput o) {
			// Convert the worldPos to the light's coordinate system to compute the UV coordinates of
			// the given vertex for the light's render texture
			float4 lightScreenPosW = mul(_LightViewProjectionMatrix, float4(IN.worldPos, 1.0));			
			float3 lightScreenPos = lightScreenPosW.xyz / lightScreenPosW.w;
			float4 encodedDepth = tex2D(_MainTex, lightScreenPos.xy);

			// Make sure that the vertex was visible from the light to begin with.
			if (
			    lightScreenPos.x >= 0 &&
			    lightScreenPos.x <= 1 &&
			    lightScreenPos.y >= 0 &&
			    lightScreenPos.y <= 1 &&
			    lightScreenPos.z < 0.4 )
			{
				o.Albedo = encodedDepth;
			}
			else
			{
				// Not visible from light
				o.Albedo = float3(0, 0, 1);
			}
			o.Alpha = 1;
		}
		ENDCG
	} 
	FallBack "Diffuse"
}

The input texture for this second shader is the render texture of the first camera. Thus, the output from camera 1 = input to camera 2. The second camera uses the same threshold in lightScreenPos.z and either outputs blue or the input from the texture.

I believe that lightScreenPos.z is incorrect in the last shader (though the x and y seem fine) because, as described in a post above, I would expect the final output of the second camera to show the exact same picture as the output from the first camera, except that the green sections of geometry would be blue instead.

Instead of getting what I expect, I see “less” red in the output than there is red in the render texture. Thus, I believe that the z coordinate I compute in the second shader is not the same as IN.screenPos.z in the first shader. The z always seems to be smaller.

Could you post a screen shot of the effect?

There will be small differences due to the limited resolution of the render texture; thus, you cannot expect to have exactly the same line in both pictures. (In my test implementation there appears to be less red in some places and more red in other places.)

The bias matrix is unneeded (and may give funny results across D3D and OpenGL) when you are calculating a custom light view-projection matrix like that.
And when calculating the screen position, you should not multiply with worldPos; instead, calculate it from the homogeneous position passed in from a custom vertex function (see the sketch below).
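Something like this is what I mean. An untested sketch, so treat the shader name and variable names as placeholders; _LightViewProjectionMatrix is the same matrix your script already sets (projection * worldToCamera, without the bias):

Shader "Custom/LightSpaceViaVertexFunction" {
	SubShader {
		Tags { "RenderType"="Opaque" }

		CGPROGRAM
		#pragma surface surf Lambert vertex:vert

		// Set from a script: lightCamera.projectionMatrix * lightCamera.worldToCameraMatrix (no bias)
		float4x4 _LightViewProjectionMatrix;

		struct Input {
			// Custom per-vertex data filled in by vert()
			float4 lightClipPos;
		};

		void vert (inout appdata_full v, out Input o) {
			// World position of the vertex via the built-in object-to-world matrix
			float4 worldPos = mul(_Object2World, v.vertex);
			// Homogeneous clip-space position as seen by the light camera
			o.lightClipPos = mul(_LightViewProjectionMatrix, worldPos);
		}

		void surf (Input IN, inout SurfaceOutput o) {
			// Perspective divide and [-1,1] -> [0,1] remap, done per fragment
			float3 lightScreenPos = IN.lightClipPos.xyz / IN.lightClipPos.w * 0.5 + 0.5;
			// Visualize the light-space depth, just for debugging
			o.Albedo = float3(lightScreenPos.z, lightScreenPos.z, lightScreenPos.z);
			o.Alpha = 1;
		}
		ENDCG
	}
	Fallback "Diffuse"
}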

Still, I don’t understand what you are trying to achieve. Try showing a picture of the final effect you want; there might be easier ways to achieve it.

Can you explain why it’s not needed? Generally speaking, in the GLSL land I came from, multiplying MVP * localVertex gave you homogeneous coordinates in “screen space”, with results in the range [-1 <= x, y, z <= 1]. I’ve literally used this exact bias matrix when doing shadow mapping with OpenGL/GLSL on my PC at home. Does Unity not work this way with worldPos and the worldToCameraMatrix?

I’m completely new to Unity, so how do I make a custom vertex function that affects the surface function? I see how to make a vertex function in general, but I’m not sure how it affects the surface function’s input structure. If my vertex function, for example, takes the local vertex and multiplies it with the model-view-projection matrix, will the input structure’s “worldPos” really then be model * projection * view * model * localVertex?

The example I’m giving here is not what I ultimately want to do. Ultimately, I want to do some simple shadow mapping in Unity. I’ve done it in GLSL in about an hour; Unity is taking longer due to the learning curve. For my shadow mapping, I render the scene from a “light camera” and encode the depth of the scene, as seen by that camera, into the RGB channels of a texture (since depth textures are not available on iOS in Unity, I believe; shadows are also not available on iOS).
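For reference, the packing idea is roughly the following. This is a simplified sketch rather than my exact shader code (I believe UnityCG.cginc also has EncodeFloatRGBA/DecodeFloatRGBA helpers that do essentially the same thing):

// Pack a depth value in [0, 1) into four 8-bit channels so it survives
// being written to an ordinary RGBA render texture.
float4 PackDepth(float depth)
{
	float4 enc = frac(float4(1.0, 255.0, 65025.0, 16581375.0) * depth);
	// Remove the part that the next, more precise channel already carries
	enc -= enc.yzww * float4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
	return enc;
}

// Recover the depth from the packed texture value.
float UnpackDepth(float4 enc)
{
	return dot(enc, float4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}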

Then, when viewing from the “main camera”, I want to calculate the current vertex’s position in “light camera screen space.” If the current vertex’s depth in “light camera screen space” is farther away than the depth encoded in the texture for that x/y, then the current vertex is “occluded” from the “light” and thus will be in shadow.
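In shader terms, the eventual comparison would be roughly this (a sketch of what goes inside surf; the names and the bias value are just placeholders):

// lightScreenPos : this fragment's position in the light camera's [0, 1] "screen space"
// _MainTex       : the light camera's render texture containing the packed depth
float4 packedDepth = tex2D(_MainTex, lightScreenPos.xy);
float storedDepth = dot(packedDepth, float4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));

// Small offset to avoid self-shadowing ("shadow acne"); the exact value is arbitrary here
float shadowBias = 0.005;

// If this fragment is farther from the light than the closest depth the light recorded,
// something else is in front of it, so it is in shadow.
if (lightScreenPos.z - shadowBias > storedDepth)
{
	o.Albedo *= 0.4;
}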

This whole process depends on being able to convert a given vertex’s coordinates to “light camera screen space.” The code I’m giving here is really just to help me debug and is useless in and of itself.

I don’t believe that it is precision related. It’s a lot “less”.

My scene now consists of a plane with a sphere above it. The “render texture” looks like this with a z threshold of 0.4:

[Attachment: Screen Shot 2012-04-26 at 11.23.08 AM.png]

When viewing from the main camera (which is about 90 degrees to one side of the first camera that made the render texture), I get this:

[Attachment: Screen Shot 2012-05-03 at 10.14.19 AM.png]

As you can see, it is all blue. The z threshold of 0.4 in “light camera screen space” falls below all of the geometry in the scene, even though I expect that value to land roughly halfway up the plane and through half of the sphere.

Moving the threshold in the second shader to 0.6 instead of 0.4, I get the screenshot below.

[Attachment: Screen Shot 2012-05-03 at 10.26.21 AM.png]

A threshold of 0.6 in the second shader gives almost, but still less than, the amount of red I expected at a threshold of 0.4. So the z value just seems “off” in my particular scene. I haven’t confirmed whether the difference between expected and actual values is linear in z, but I don’t think it’s a precision issue.

As for the little bits of green on the sphere, I’m not 100% sure where those come from; probably precision problems right around the center of the sphere in the first shader. That’s a different problem, though.

I’m using a Mac and am using “iOS” as my platform in Unity if that makes any difference.

Bump. Martin, Aubergine, did you guys see my responses?

Yes, I saw it (and I sent a message that you haven’t answered ;). My problem was that I’m not able to reproduce your problem. Could you put your example in a Unity package and make it available for download?

Oh, I didn’t see your message. Sure, I can put it in a package and make it available for download as soon as I figure out how to do that. I literally posted all the code I’m using in this thread (I removed a lot of commented-out code for ease of reading). Perhaps it is a Mac problem, a Unity Pro problem, or a problem with me :slight_smile: I’ll see if I can figure out how to get the source up for download. It’s really confusing me :frowning: