TransformWorldToShadowCoord function in HDRP?

I am trying to port the custom lighting nodes in LWRP to HDRP, but this line throws an error:
float4 shadowCoord = TransformWorldToShadowCoord(WorldPos);

I cannot find the HLSL file in the API containing this function. If someone can point me to the location in the API, then I can include that file.

Thank you.

LWRP → Shadows.hlsl

Thank you, larsbertram1.
I am looking for the equivalent file in hdrp, so I can port the custom lighting to hdrp.

Hello, all the source code of HDRP is available on github here: https://github.com/Unity-Technologies/ScriptableRenderPipeline
And the function you’re looking for is here: https://github.com/Unity-Technologies/ScriptableRenderPipeline/blob/master/com.unity.render-pipelines.high-definition/Runtime/Lighting/Shadow/HDShadowAlgorithms.hlsl#L47

Thank you, meadjix!

Hello,
First, excuse me for reviving this thread, but I’m facing the same error. The link doesn’t exist anymore. Do you have the function’s name or another link? I couldn’t find it on the net.
Thank you
Thank you

The SRP repository has been moved to Graphics, here’s the new link:
https://github.com/Unity-Technologies/Graphics/blob/master/com.unity.render-pipelines.high-definition/Runtime/Lighting/Shadow/HDShadowAlgorithms.hlsl#L30


Hi,

I can’t find the function inside the file that you sent…

float4 EvalShadow_WorldToShadow(HDShadowData sd, float3 positionWS, bool perspProj)
{
    // Note: Due to high VGPR load we can't use the whole view projection matrix, instead we reconstruct it from
    // rotation, position and projection vectors (projection and position are stored in SGPR)
#if 0
    return mul(viewProjection, float4(positionWS, 1));
#else

    if(perspProj)
    {
        positionWS = positionWS - sd.pos;
        float3x3 view = { sd.rot0, sd.rot1, sd.rot2 };
        positionWS = mul(view, positionWS);
    }
    else
    {
        float3x4 view;
        view[0] = float4(sd.rot0, sd.pos.x);
        view[1] = float4(sd.rot1, sd.pos.y);
        view[2] = float4(sd.rot2, sd.pos.z);
        positionWS = mul(view, float4(positionWS, 1.0)).xyz;
    }

    float4x4 proj;
    proj = 0.0;
    proj._m00 = sd.proj[0];
    proj._m11 = sd.proj[1];
    proj._m22 = sd.proj[2];
    proj._m23 = sd.proj[3];
    if(perspProj)
        proj._m32 = -1.0;
    else
        proj._m33 = 1.0;

    return mul(proj, float4(positionWS, 1.0));
#endif
}

Hello! I’m getting an “undeclared punctual shadow filter algorithm” error after including the shadow HLSL file. Do you have any tips on how to get rid of this error?


Sorry for resurrecting this once more… I am a bit lost here.
The method TransformWorldToShadowCoord exists in Shadows.hlsl (URP) with exactly the same signature.

But in HDShadowAlgorithms.hlsl the method EvalShadow_WorldToShadow has a different signature (besides the different name).

So, how is one supposed to “translate” the URP version into the HDRP one?


Hello, I’m using HDRP and I’m also having this issue with TransformWorldToShadowCoord; using EvalShadow_WorldToShadow gives another error.

Shader error in ‘Master’: undeclared identifier ‘TransformWorldToShadowCoord’ at Assets/GetLighting.hlsl(12) (on d3d11)

Shader error in ‘Master’: ‘EvalShadow_WorldToShadow’: cannot implicitly convert from ‘float3’ to ‘struct HDShadowData’ at Assets/GetLighting.hlsl(12) (on d3d11)

Not really sure how to proceed with this.

I’ve got the same issue. I can see I need to use EvalShadow_WorldToShadow, but how do I set up the HDShadowData for the function to work?

It would be super helpful to have some actual examples.

Bump, stuck on the same issue

Nothing yet? I really need this for our project. :frowning:

So yeah, in HDRP you cannot get the same kind of information, sadly (at least not without doing some overcomplicated stuff that might break in a few months).

The only thing you can get for sure is the main light direction, by using the “Main Light Direction” node.
The rest (color, attenuation, etc.) needs to be hardcoded in order to port this to HDRP.


Hello. I found a way to use the EvalShadow_WorldToShadow() function. You can decompose the HDShadowData structure: the fields the function actually uses are four float3 values and one float4 (pos, rot0, rot1, rot2, and proj), and these can all be sent to the shader separately. The bool perspProj indicates whether the coordinates are perspective-projected (true for a spot light, false for a directional light). If you want to call this through a Custom Function node in Shader Graph, you can simply transfer the entire algorithm into the custom function. For a directional light it would look like this:

void CausticAttenuation_half(in float3 WorldPos, in float3 lightPos, in float3 rot0, in float3 rot1, in float3 rot2, in float4 proj, out float4 ShadowCoord)
{
    // Orthographic branch of EvalShadow_WorldToShadow (directional light)
    float3x4 view;
    view[0] = float4(rot0, lightPos.x);
    view[1] = float4(rot1, lightPos.y);
    view[2] = float4(rot2, lightPos.z);
    float3 transformedPos = mul(view, float4(WorldPos, 1.0)).xyz;

    float4x4 projMatrix = 0.0;
    projMatrix._m00 = proj.x;
    projMatrix._m11 = proj.y;
    projMatrix._m22 = proj.z;
    projMatrix._m23 = proj.w;
    projMatrix._m33 = 1.0;

    ShadowCoord = mul(projMatrix, float4(transformedPos, 1.0));
}

And you can get pos, rot0, rot1, rot2, and proj from the light object using C# like this:

…

void UpdateShadowData()
{
    if (mainLight != null && decalMaterial != null)
    {
        Matrix4x4 shadowMatrix = mainLight.shadowMatrixOverride;

        // Extract the light position
        Vector3 lightPos = mainLight.transform.position;

        Vector3 rot0 = new Vector3(shadowMatrix.m00, shadowMatrix.m01, shadowMatrix.m02);
        Vector3 rot1 = new Vector3(shadowMatrix.m10, shadowMatrix.m11, shadowMatrix.m12);
        Vector3 rot2 = new Vector3(shadowMatrix.m20, shadowMatrix.m21, shadowMatrix.m22);
        Vector4 proj = new Vector4(shadowMatrix.m30, shadowMatrix.m31, shadowMatrix.m32, shadowMatrix.m33);

        decalMaterial.SetVector("_LightPos", lightPos);
        decalMaterial.SetVector("_Rot0", rot0);
        decalMaterial.SetVector("_Rot1", rot1);
        decalMaterial.SetVector("_Rot2", rot2);
        decalMaterial.SetVector("_Proj", proj);
    }
}
…

Send the data to the material, create a Custom Function node in Shader Graph, and connect the values. But what’s next? We presumably got the UV coordinates of the shadows, but to perform attenuation in a decal shader (or similar effects) we logically need to sample the shadow map/atlas with these coordinates to get the shadow intensity values. How to do that I have no idea; if someone knows, it would be nice if we could come up with a solution to this problem together… It’s a bit illogical that URP can provide a more advanced effect than HDRP… (I personally would like to use this for underwater caustic attenuation, which is a decal.)


Hello, I tried another function, HDShadowUtils.ExtractPointLightData(), with some modifications, and used it to retrieve my RSMBuffer for the point light.

Matrix4x4 MyExtractPointLightVP(NativeArray<Matrix4x4> cubemapFaces, VisibleLight vl, uint faceIdx, float nearPlane, bool reverseZ,
    out Matrix4x4 view, out Matrix4x4 proj, out Vector4 deviceProjection, out Matrix4x4 deviceProjYFlip, out Matrix4x4 vpinverse, out Vector4 lightDir)
{
    const float k_MinShadowNearPlane = 0.01f;

    if (faceIdx > (uint)CubemapFace.NegativeZ)
        Debug.LogError($"Tried to extract cubemap face {faceIdx}.");

    // var splitData = new ShadowSplitData();
    // splitData.cullingSphere.Set(0.0f, 0.0f, 0.0f, float.NegativeInfinity);

    // get lightDir
    lightDir = vl.GetForward();
    // calculate the view matrices
    Vector3 lpos = vl.GetPosition();
    view = cubemapFaces[(int)faceIdx];
    Vector3 inverted_viewpos = cubemapFaces[(int)faceIdx].MultiplyPoint(-lpos);
    view.SetColumn(3, new Vector4(inverted_viewpos.x, inverted_viewpos.y, inverted_viewpos.z, 1.0f));

    float nearZ = Mathf.Max(nearPlane, k_MinShadowNearPlane);
    // float guardAngle = HDShadowUtils.CalcGuardAnglePerspective(90.0f, viewportSize.x, HDShadowUtils.GetPunctualFilterWidthInTexels(punctualShadowFilteringQuality), normalBiasMax, 79.0f);
    proj = HDShadowUtils.SetPerspective(90.0f, 1.0f, nearZ, vl.range); // use the clamped near plane computed above
    // and the compound (deviceProj will potentially inverse-Z)
    Matrix4x4 deviceProj = HDShadowUtils.GetGPUProjectionMatrix(proj, false, reverseZ);
    deviceProjection = new Vector4(deviceProj.m00, deviceProj.m11, deviceProj.m22, deviceProj.m23);
    deviceProjYFlip = HDShadowUtils.GetGPUProjectionMatrix(proj, true, reverseZ);
    HDShadowUtils.InvertPerspective(ref deviceProj, ref view, out vpinverse);

    Matrix4x4 viewProj = CoreMatrixUtils.MultiplyPerspectiveMatrix(proj, view);
    // HDShadowUtils.SetSplitDataCullingPlanesFromViewProjMatrix(ref splitData, viewProj, reverseZ);

    Matrix4x4 deviceViewProj = CoreMatrixUtils.MultiplyPerspectiveMatrix(deviceProj, view);
    return deviceViewProj;
}

I then tried using this view-projection matrix in the render pass to transform into light space.

void RenderRSMGIBuffer(RenderGraph renderGraph,
            HDCamera hdCamera,
            TextureHandle colorBuffer,
            in LightingBuffers lightingBuffers,
            in BuildGPULightListOutput lightLists,
            RSMGIBuffers rendergraph_rsmgiBuffers,
            ref PrepassOutput prepassOutput,
            TextureHandle vtFeedbackBuffer,
            ShadowResult shadowResult,
            CullingResults cullResults)
        {
            bool debugDisplay = m_CurrentDebugDisplaySettings.IsDebugDisplayEnabled();

            using (var builder = renderGraph.AddRenderPass<RSMGIPassData>(debugDisplay ? "RSM Debug" : "RSM",
                out var passData,
                debugDisplay ? ProfilingSampler.Get(HDProfileId.RSMDebug) : ProfilingSampler.Get(HDProfileId.RSM)))
            {
                builder.EnableAsyncCompute(false); 
                var rendererList = renderGraph.CreateRendererList(RSMBufferRendererList(cullResults, hdCamera));
                builder.UseRendererList(rendererList);

                passData.noUseDepthBuffer = builder.UseDepthBuffer(prepassOutput.depthBuffer, DepthAccess.ReadWrite);

                int index = 0;
                passData.rsmGIPassData_rsmgiBuffers.rsmNormalBuffer = builder.ReadWriteTexture(rendergraph_rsmgiBuffers.rsmNormalBuffer);
                passData.rsmGIPassData_rsmgiBuffers.rsmFluxBuffer = builder.ReadWriteTexture(rendergraph_rsmgiBuffers.rsmFluxBuffer);
                passData.rsmGIPassData_rsmgiBuffers.rsmPositionBuffer = builder.ReadWriteTexture(rendergraph_rsmgiBuffers.rsmPositionBuffer);             
                
                builder.AllowRendererListCulling(false);
                
                // start: render rsm for each light
                NativeArray<Matrix4x4> cubemapFaces = new NativeArray<Matrix4x4>(HDShadowUtils.kCubemapFaces, Allocator.TempJob);
                for (int lightID = 0; lightID < cullResults.visibleLights.Length; lightID++)
                {
                    VisibleLight visibleLight = cullResults.visibleLights.ElementAt(lightID);
                    var light = visibleLight.light;
                    
                    const int faceCount = 6;
                    // for (int faceID = 0; faceID < faceCount ; faceID++)
                    for (int faceID = 0; faceID < 1; faceID++)
                    {
      
                        
                        Matrix4x4 view;
                        Matrix4x4 deviceProjectionYFlip;
                        Matrix4x4 deviceProjectionMatrix;
                        Matrix4x4 projection;
                        Matrix4x4 invViewProjection;
                        Vector4 deviceProjection;
                        Vector4 lightDir;

                        // int cascadeShadowSplitCount = 6;

                        var usesReversedZBuffer = SystemInfo.usesReversedZBuffer;
                        
                        // lightVP
                       
                        Matrix4x4 lightVP = MyExtractPointLightVP(cubemapFaces, visibleLight, (uint)faceID, light.shadowNearPlane,
                            usesReversedZBuffer, 
                            out view, out projection, out deviceProjection, out deviceProjectionYFlip, out invViewProjection, out lightDir);
                        
                        builder.SetRenderFunc(
                            (RSMGIPassData data, RenderGraphContext context) =>
                            {
                                BindGlobalRSMPassBuffers(data, context.cmd);

                                context.cmd.SetGlobalMatrix("LightVP", lightVP);
                                context.cmd.SetGlobalInt("gCubeFaceN", faceID);
                                context.cmd.SetGlobalInt("gLightN", lightID);
                                
                                context.cmd.SetGlobalVector("gPointLightPos", new Vector4(light.transform.position.x, light.transform.position.y, light.transform.position.z, 1));

                                context.cmd.SetRandomWriteTarget(1, data.rsmGIPassData_rsmgiBuffers.rsmNormalBuffer);
                                context.cmd.SetRandomWriteTarget(2, data.rsmGIPassData_rsmgiBuffers.rsmFluxBuffer);
                                context.cmd.SetRandomWriteTarget(3, data.rsmGIPassData_rsmgiBuffers.rsmPositionBuffer);
                                
                                CoreUtils.DrawRendererList(context.renderContext, context.cmd, rendererList);
                                
                                // context.cmd.ClearRandomWriteTargets();
                            }
                        );
                        builder.EnableAsyncCompute(false);     
                        
                    }
                }
                cubemapFaces.Dispose();
                // end: render rsm for each light
            }
        }

But I only got strange output textures.



If I comment out varyingsType.vmesh.positionCS = mul(LightVP, float4(varyingsType.vmesh.positionRWS, 1.f)); and just use the GameView camera to render, it looks like this:



So it seems that the LightVP above didn’t transform into light space, but instead applied some transformation in the current camera’s screen space.