Reconstructing the view space position from depth: problems and questions

Hi everyone

I’ve been trying to create an SSAO shader from scratch for weeks now as a way to learn screen-space shaders. The problem is, I’m still stuck at reconstructing the view space position.

I have a lot of questions regarding this particular step, but my main concern right now is that I actually don’t know what the output of the reconstruction should be.

I’ve seen one or two images with white, blue, magenta, and cyan areas, and others with green, yellow, red, and black ones, so I’m confused about what result I should be getting. Could someone be kind enough to show me an image of a reconstructed view space position scene?

Also, I’m pretty sure my math problems right now have something to do with the matrix I’m using. Some people use the inverse projection matrix, some use cameraToWorldMatrix, Unity uses the MV matrix, and some don’t use any at all. I understand there is more than one method, but no matter what I do I can’t get anything that seems right.

So, besides the image, could someone explain to me:

  1. What’s the difference between the inverse projection matrix that I can get from my camera and the model view matrix multiplied by (-1, -1, 1)?
  2. Why do some reconstruct to world space instead of view space?
  3. Isn’t the projection matrix from camera.projectionMatrix the same thing as UNITY_MATRIX_P inside the shader?

Sorry if I sound too noobish, I’m extremely confused about this part of the shader.

Are you referring to the depth of the scene? If so, there’s nothing you really need to do yourself, as Unity already does that for you.

You need to tell the camera to render a depth texture with:

camera.depthTextureMode = DepthTextureMode.Depth;

Now you can access the depth texture in your shader once you declare:

uniform sampler2D _CameraDepthTexture;

When you sample that texture, you will get a value that ranges from 0 to 1:

// Sample the depth texture; Unity's macro makes sure it is consistent between different platforms.
float pixelDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
// Convert the sampled value to a linear 0-1 value (0 at the camera, 1 at the far clipping plane).
pixelDepth = Linear01Depth(pixelDepth);

If you want to construct the normals from depth in your SSAO shader, this should be enough to get you going. But Unity also provides a combined depth + normals texture, which you once again need to set “active” on your camera object the same way you enabled depth (use DepthTextureMode.DepthNormals). Sampling is a bit different though:

uniform sampler2D _CameraDepthNormalsTexture; // the sampler you need to define

float4 currentDepthNormal = tex2D(_CameraDepthNormalsTexture, i.uv);

// Unpack the encoded texture into a 0-1 depth value and a view space normal.
float currentPixelDepth;
float3 currentPixelNormal;
DecodeDepthNormal(currentDepthNormal, currentPixelDepth, currentPixelNormal);

currentPixelDepth holds the depth value (0-1) and currentPixelNormal holds the view space normal.

Thanks Rld_, but that’s not really the problem; I can access depth and normals just fine. I’m talking about one of the steps to create the SSAO effect, in which you need to reconstruct the view space position of each fragment and later sample around it to get the occlusion factor.

But thanks anyway, your explanation and code would have helped me a ton a few weeks ago :stuck_out_tongue: I’m sure it’ll help someone searching for it.

I guess I got a little confused because you are talking about the projection matrix, which you don’t really need. Perhaps you’re using some other technique?

I recently implemented SSAO myself and it basically boils down to this (rough sketch below the list):

  • Define a unit sphere of sample vectors to sample from
  • Sample a value from a random sample texture (RGB noise texture, or however you want to do it)
  • Sample the current depth and normal based on the UV coordinates
  • For each sample:
      • Create a random direction (reflect(sampleSphere[sampleIndex], randomSample))
      • Make sure it points outwards
      • Construct a sample UV (UV + randomDirection)
      • Sample the depth and normals again with this sample UV
      • Calculate the difference and use that to add to your occlusion factor
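
To make that a bit more concrete, here is a rough Cg sketch of what that loop could look like. The kernel, noise texture, radius, and falloff (_SampleSphere, _NoiseTex, _SampleRadius, the 30.0) are placeholder names and values I made up; only _CameraDepthNormalsTexture, DecodeDepthNormal, and the standard Cg intrinsics come from Unity:

#include "UnityCG.cginc"

uniform sampler2D _CameraDepthNormalsTexture;
uniform sampler2D _NoiseTex;      // small tiling RGB noise texture (placeholder name)
uniform float3 _SampleSphere[16]; // precomputed directions on the unit sphere (placeholder)
uniform float _SampleRadius;      // sample radius in UV space (placeholder)

float ComputeOcclusion(float2 uv)
{
    // Depth and normal of the current pixel.
    float centerDepth;
    float3 centerNormal;
    DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, uv), centerDepth, centerNormal);

    // Per-pixel random vector used to reflect the kernel.
    float3 randomVec = normalize(tex2D(_NoiseTex, uv * 4.0).rgb * 2.0 - 1.0);

    float occlusion = 0.0;
    for (int i = 0; i < 16; i++)
    {
        // Randomize the kernel direction, then flip it so it points outwards.
        float3 dir = reflect(_SampleSphere[i], randomVec);
        dir *= sign(dot(dir, centerNormal));

        // Offset the UV and sample depth and normal again.
        float sampleDepth;
        float3 sampleNormal;
        DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, uv + dir.xy * _SampleRadius),
                          sampleDepth, sampleNormal);

        // A positive difference means the sampled point is in front of the
        // current one and may occlude it; the bigger, the more it adds.
        float diff = centerDepth - sampleDepth;
        occlusion += step(0.0001, diff) * saturate(diff * 30.0); // 30.0: arbitrary falloff
    }
    return 1.0 - occlusion / 16.0;
}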

I studied the implementation Unity uses and also checked out this: http://www.gamerendering.com/category/lighting/ssao-lighting/ which was a good help in understanding what’s going on.

Hope it helps. :slight_smile:

Uhm… maybe I’m mixing up different techniques. I’m following this tutorial: john-chapman-graphics: SSAO Tutorial

Indeed, the SSAO from Unity doesn’t seem to need any reconstruction, but the ambient obscurance version does reconstruct it.

From what I could understand, the view space position of the fragment lets me see exactly where the sample I’m trying to get is, in order to create the occlusion factor.
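
In case it helps anyone searching later, this is roughly the reconstruction I’m attempting, following the tutorial’s inverse-projection approach. _InverseProjection is not a built-in; I upload it myself from script with material.SetMatrix(“_InverseProjection”, camera.projectionMatrix.inverse):

#include "UnityCG.cginc"

uniform sampler2D _CameraDepthTexture;
uniform float4x4 _InverseProjection; // set from script; not a Unity built-in

float3 ViewSpacePosition(float2 uv)
{
    // Raw, non-linear depth straight from the depth texture.
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);

    // Rebuild the clip space position; uv and depth are remapped from
    // 0..1 to -1..1 (OpenGL-style depth range assumed, platforms differ).
    float4 clipPos = float4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);

    // Unproject and do the perspective divide to land in view space.
    float4 viewPos = mul(_InverseProjection, clipPos);
    return viewPos.xyz / viewPos.w;
}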

Ah I see, well that’s not really that hard. The tutorial you linked pretty much sums it all up, but I guess you already managed your way through that. I don’t really have much time to construct an example and see what I come up with, but let me at least try to answer your questions.

  1. Can’t give you a definitive answer to that, but I guess it gives a proper approximation. Perhaps some people already have the projection multiplied into their view matrix in code and take it out this way.

  2. Probably more precision for a better result.

  3. Yes it is. https://docs.unity3d.com/Documentation/Components/SL-BuiltinValues.html

From what I can see from your link, and from what I have seen in the sources I used, the only real difference seems to be the reconstruction of the fragment’s position. I can only guess it gives a better result, but as I am happy with the result I got, I’m not likely to change it soon. :slight_smile:

Thanks, I’ll probably try both ways to see the differences between them. If I come to some sort of conclusion and a better understanding of these techniques, I’ll post it here.

Hey Rld_

thanks for the help, I am trying to write an SSAO myself and I managed to get all the information needed for the final calculation. I am just missing the last point of your list:

  • calculate the difference and use that to add to your occlusion factor

So from my understanding, we have the depth and the normals of the occludee (the point we want to know the occlusion of) and of the occluder (the random point we calculated). Once we have this information, what is the equation that will give us the amount of occlusion?
You are talking about a difference between those pieces of information, but depth is a single value and the normal is a vector3, so how do we use them together?

thanks for your time, much appreciated!!

M

Sorry for the late response, I haven’t been very active lately and have been on vacation. :slight_smile:

Anyway, you want to know the difference between the depth and orientation of your current and sampled points, and use that to determine how much of an impact the sample has on the shadowing.

Due to how our scenes were built up, I was able to get away with just using the depth. I took the difference between the depths of the two sampled points and used that as the occlusion factor: the bigger the difference, the more I added to the occlusion factor.

You can also factor in the orientation (a dot product of the two sampled normals) and use that in your calculation as well.
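
Per sample, that could look something like this; the names and constants are made up for illustration, it’s not literally my code:

// Contribution of a single sample to the occlusion factor (illustrative
// names; the 0.0001 bias and 30.0 falloff are arbitrary, tune to taste).
float SampleOcclusion(float centerDepth, float3 centerNormal,
                      float sampleDepth, float3 sampleNormal)
{
    // Depth difference: positive when the sampled point is closer to the
    // camera than the current one, i.e. it can occlude it.
    float diff = centerDepth - sampleDepth;
    float depthTerm = step(0.0001, diff) * saturate(diff * 30.0);

    // Orientation: normals facing the same way (dot near 1) occlude less
    // than surfaces bending towards each other, so weight by 1 - dot.
    float normalTerm = 1.0 - saturate(dot(centerNormal, sampleNormal));

    return depthTerm * normalTerm;
}

Average that over all your samples and subtract it from 1 to get the ambient term.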