Custom SSAO Shader - Need help please

Hello everyone,
I have been reading online about SSAO shaders and how the occlusion term is calculated.
To learn, I started writing one on my own, trying to understand how the implementation works.
But the results don't look like AO. Can someone please guide me towards achieving the effect?
Rendering mode: Forward
I pass 64 varied samples into float3 kernel[] from a C# script.
Noise scale = (screen width / 4, screen height / 4)
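
For context, the C# side that fills the kernel looks roughly like this (a simplified sketch of the standard hemisphere-kernel approach; class and property names are placeholders):

using UnityEngine;

public class SSAOKernel : MonoBehaviour
{
    public Material ssaoMaterial;

    void OnEnable()
    {
        // A fixed seed keeps the kernel deterministic between runs
        Random.InitState(1234);

        var kernel = new Vector4[64];
        for (int i = 0; i < 64; i++)
        {
            // Random direction in the z+ hemisphere (tangent space)
            Vector3 s = new Vector3(
                Random.Range(-1f, 1f),
                Random.Range(-1f, 1f),
                Random.Range(0f, 1f)).normalized;

            // Random length, biased towards the origin so nearby
            // geometry contributes more occlusion
            s *= Random.Range(0f, 1f);
            float t = i / 64f;
            s *= Mathf.Lerp(0.1f, 1f, t * t);

            kernel[i] = s;
        }
        // Unity uploads Vector4s, so declaring the array as
        // float4 kernel[64] on the shader side is the safest match
        ssaoMaterial.SetVectorArray("kernel", kernel);
    }
}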

shader code:

fixed4 frag (v2f i) : SV_Target
{
    float3 fragPos = tex2D(_CameraDepthTexture, i.uv).xyz;

    float3 normal = normalize(tex2D(_CameraDepthNormalsTexture, i.uv).rgb);

    float3 rvec = normalize(tex2D(_Noise, i.uv * _NoiseScale).xyz);
    float3 tangent = normalize(rvec - normal * dot(rvec, normal));
    float3 bitangent = cross(normal, tangent);
    float3x3 tbn = float3x3(tangent, bitangent, normal);

    float occlusion = 0.0;

    for (int k = 0; k < 64; k++)
    {
        float3 samp = mul(tbn, kernel[k]);
        samp = fragPos + samp * _AORadius;

        float4 offset = float4(samp, 1.0);
        offset = mul(mat, offset);
        offset.xyz /= offset.w;
        offset.xyz = offset.xyz * 0.5 + 0.5;

        float3 occPos = tex2D(_CameraDepthTexture, offset.xy).xyz;

        occlusion += (occPos.z >= samp.z + bias ? 1.0 : 0.0);
    }
    return 1.0 - occlusion / 64.0;
}

Here’s the output:

I’m confused as to what exactly is going on in the first two lines:

float3 fragPos = tex2D(_CameraDepthTexture, i.uv).xyz;
float3 normal = normalize(tex2D(_CameraDepthNormalsTexture, i.uv).rgb);

Are you manually overriding these textures? Because this isn’t how they are normally accessed at all. Another thing to note: whilst using sample offsets in tangent space is a smart way to sample only in the visible hemisphere, doing a matrix multiplication in the core loop isn’t ideal or really necessary. Instead, you can define your samples in a full sphere and check them against the surface normal in the shader using a dot product, flipping any sample that falls behind the surface.

I would also suggest you visualize every step of your code (like the depth and normals) to check that you’re actually getting something sensible and not feeding garbage into the later steps.

Thank you @Namey5 and @Olmi for your comments.
I also realized that just blindly following the steps won't be helpful. So here's what I did:
In my vertex shader I prepared all the variables that might be needed. Later I realized that the screen position alone might suffice:

v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    o.oPos = v.vertex;
    float4 vertex = float4(v.vertex.xy * 2.0 - 1.0, 0.0, 1.0);
    float4 uv_ray = 2.0 * v.vertex - 1.0;
    o.ray = mul(unity_CameraInvProjection, uv_ray).xyz;
    #if UNITY_REVERSED_Z
        o.ray.z = 1.0 - o.ray.z;
    #endif
    o.scrPos = ComputeScreenPos(vertex);
    return o;
}

In the fragment shader, here is where I am stuck at presently:

fixed4 frag (v2f i) : SV_Target
{
    float3 _n; float _d;
    DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), _d, _n);
    _d = LinearEyeDepth(_d);

    _n = tex2D(_CameraDepthNormalsTexture, i.uv) * 2.0 - 1.0;
    _n = normalize(_n);

    //reconstructing the point from the fragment position:
    ...
}

To reconstruct the point I found several methods and online resources, but couldn’t figure out which one to follow. One of them reconstructs the view-space position as

viewPos = float3(TanAspect * (screenPos * 2 - 1) * depthValue, -depthValue)

where TanAspect = Vector2(camera.TanFovy * camera.Aspect, -camera.TanFovy).
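
On the C# side I take that to mean something like this (my sketch; Unity has no TanFovy property, so I derive it from the vertical field of view, and the _TanAspect property name is a placeholder):

Camera cam = GetComponent<Camera>();
float tanFovy = Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad);
Vector2 tanAspect = new Vector2(tanFovy * cam.aspect, -tanFovy);
ssaoMaterial.SetVector("_TanAspect", tanAspect);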

I am currently trying to figure out how to get this sorted; any help will be much appreciated.
Ideally it should be as simple as applying the camera's inverse projection matrix and getting the resulting point, but it's not that simple.

Thank you once again :)

It is actually pretty close to just multiplying by the inverse projection matrix; you just need to do things in a specific order:

...

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
    float4 viewDir : TEXCOORD1;
};

v2f vert (appdata v)
{
    v2f o;
    o.pos = UnityObjectToClipPos (v.vertex);
    o.uv = v.uv;
    //Use the UVs to figure out the NDC space positions of the far frustum plane
    o.viewDir = mul (_InvProjectionMatrix, float4 (o.uv * 2.0 - 1.0, 1.0, 1.0));
    return o;
}

half4 frag (v2f i) : SV_Target
{
    //These normals are already in view space, so keep them as is
    float depth;
    float3 normal;
    DecodeDepthNormal (tex2D (_CameraDepthNormalsTexture, i.uv), depth, normal);
    depth = LinearEyeDepth (depth);

    //Perspective divide and divide by far plane distance to normalize
    float3 viewDir = (i.viewDir.xyz / i.viewDir.w) * _ProjectionParams.w;
    //Multiply by depth to get view space position
    float3 viewPos = viewDir * depth;

    //The maximum distance at which a sample can contribute
    //This should ideally be scaled by your AO radius
    const float maxDist = 1.0;

    //From there, you can do the AO in view space
    float occlusion = 0;
    float3 off;
    float4 pos;
    for (int x = 0; x < 64; x++)
    {
        //These samples should be in a full sphere
        off = kernel[x];
        //Flip if the sample goes through the surface
        off = dot (off, normal) < 0.0 ? -off : off;

        pos.xyz = viewPos + off * _AORadius;
      
        //We only need the z-coord, so store the clip-space stuff in the other components
        pos.xyw = mul (_ProjectionMatrix, float4 (pos.xyz, 1.0)).xyw;
        //Screen UVs are stored in .xy
        pos.xy = (pos.xy / pos.w) * 0.5 + 0.5;

        //View-space depth is negative, so flip it and subtract scene depth
        pos.z = -pos.z - LinearEyeDepth (tex2Dlod (_CameraDepthTexture, float4 (pos.xy, 0, 0)).r);

        //Add to occlusion if the sample is less than maxDist behind the scene
        occlusion += pos.z > 0.0 && pos.z < maxDist;
    }

    return 1.0 - (occlusion / 64.0);
}

Something like that should work.
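
For completeness, the matrices would be fed from a script along these lines (just a sketch; I'm assuming the _ProjectionMatrix / _InvProjectionMatrix property names from the snippet above and a simple image-effect setup):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class SSAOMatrices : MonoBehaviour
{
    public Material ssaoMaterial;

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        Camera cam = GetComponent<Camera>();
        //GL.GetGPUProjectionMatrix applies platform conventions (reversed-z,
        //y-flip); the second argument says whether we render into a texture,
        //so it may need to be true depending on your setup
        Matrix4x4 proj = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false);
        ssaoMaterial.SetMatrix("_ProjectionMatrix", proj);
        ssaoMaterial.SetMatrix("_InvProjectionMatrix", proj.inverse);
        Graphics.Blit(src, dst, ssaoMaterial);
    }
}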

@Namey5 Thank you for the code; that really helped. Here is what I have been able to render:

I am assuming I am not creating the samples very well; there is some mismatch in them. Every time I enable/disable the script, which re-creates the samples, I get different AO:

I still require some help, if you have time.
I have posted the code here:
https://github.com/theMaxscriptGuy/UnitySSAO

Thanks for all the help. Almost there; I just need to figure out how to improve it. Thank you for helping me learn some core concepts.

Regards,
Videep