In full screen shader graph, transform from view space to world space

I want to transform a view-space normal to a world-space normal in a full screen shader graph.
I tried UNITY_MATRIX_I_V and unity_CameraToWorld, but neither gave the expected result.
In the full screen shader graph, the UNITY_MATRIX_I_V and unity_CameraToWorld matrices seem to be different from the expected view-to-world matrices.
Thank you for your help.

Yes, UNITY_MATRIX_I_V and unity_CameraToWorld are indeed different. unity_WorldToCamera and unity_CameraToWorld follow the conventions of Unity’s scene transforms, where +Z is forward, +Y is up, and +X is right. UNITY_MATRIX_V and UNITY_MATRIX_I_V follow OpenGL conventions, where +Z is back, +Y is up, and +X is right. This means the two matrices use different handedness.
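A minimal sketch of what that handedness difference means in practice; converting a direction between the two view-space conventions is just a Z flip (the function name here is mine, not a Unity built-in):

```hlsl
// Convert a direction from Unity's scene camera view space (+Z forward)
// to the OpenGL-convention view space used by UNITY_MATRIX_V (+Z back).
// The flip is its own inverse, so the same function converts both ways.
float3 FlipViewSpaceHandedness(float3 dir)
{
    return dir * float3(1.0, 1.0, -1.0);
}
```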

However, I’m most curious what exactly you’re trying to do. Apart from view-facing sprites, there’s little to do with the view matrices without also using the projection matrices, for example if you’re trying to convert the camera depth texture / scene depth to world space. If you’re doing anything like that, there are additional gotchas. UNITY_MATRIX_P converts from the OpenGL view space produced by UNITY_MATRIX_V into clip space, but there’s no UNITY_MATRIX_I_P to use if you need to convert from clip space back to view space. There are unity_CameraProjection and unity_CameraInvProjection, and you might assume that these pair up with unity_WorldToCamera and unity_CameraToWorld… but they don’t! The unity_CameraProjection matrix is always the OpenGL view space to OpenGL clip space transform, whereas UNITY_MATRIX_P is the OpenGL view space to current graphics API clip space transform, so the two only match when you’re actually running on OpenGL / GLES / WebGL.
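As a sketch of that last gotcha, here is roughly how unity_CameraInvProjection gets used to go from NDC back to view space (the function name is mine, and the exact NDC adjustments vary by graphics API):

```hlsl
// unity_CameraInvProjection assumes OpenGL clip-space conventions, so the
// input should be an OpenGL-style NDC position (x, y, z all in -1..1).
// On non-OpenGL APIs you may need to flip y and remap depth first.
float3 NDCToViewPos(float3 ndc)
{
    float4 viewPos = mul(unity_CameraInvProjection, float4(ndc, 1.0));
    return viewPos.xyz / viewPos.w; // undo the perspective divide
}
```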

Thanks for explaining. It helps me understand the matrices…

I have a custom renderer feature using two shader graphs.
The first shader graph creates a view-space normal texture, using the Normal Vector node as the final color, and sets it as the normal texture property of the second shader graph.
The second shader graph (a full screen shader graph) samples the normal texture using the UV and converts it to world-space normals.

In the second shader graph, the converted world-space normal is expected to match the world-space normal texture (which can also be created by the first shader graph).

Your explanation helps me understand the matrices, but I don’t quite see what the problem is in my case.
Anyway, I can extract the view-space normal. Can’t I just convert view to world in the full screen shader graph?

void ViewNormalToWorldNormal_float(float3 ViewSpaceNormal, out float3 WorldSpaceNormal)
{
    // Only use the r, g, b values from the Sample Texture 2D node.
    float3x3 viewToWorldMatrix = (float3x3)UNITY_MATRIX_I_V;
    // float3x3 viewToWorldMatrix = (float3x3)unity_CameraToWorld;
    WorldSpaceNormal = normalize(mul(viewToWorldMatrix, ViewSpaceNormal));
}

I tried UNITY_MATRIX_I_V and unity_CameraToWorld, but neither gives what I expected.

Can you tell me more about my case?

I also asked on Stack Overflow. I think this might help you understand more about my situation: unity game engine - Difference between world-normal calculated by sampling view-normal texture and world-normal texture - Stack Overflow

There’s another difference I forgot to mention between the UNITY_MATRIX_V and unity_WorldToCamera matrices. UNITY_MATRIX_V and its inverse are the transform matrices currently being used for rendering. When rendering a post process, that’s the view matrix used to render the full screen triangle… which is an identity matrix, not the scene camera’s transform matrix. However, unity_WorldToCamera and its inverse remain the main camera’s transform matrices.
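In other words, during a full screen pass, something like the following holds (a sketch of the situation, not code to copy verbatim):

```hlsl
// During a full screen / post process pass:
//   UNITY_MATRIX_V   == identity  (the view matrix of the full screen triangle)
//   UNITY_MATRIX_I_V == identity
// so (float3x3)UNITY_MATRIX_I_V leaves a "view space" normal unchanged.
// unity_CameraToWorld, however, still holds the scene camera's transform,
// which is why it is the matrix to reach for here:
float3x3 viewToWorld = (float3x3)unity_CameraToWorld;
```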

But the conversion is easy: multiply the view-space Z by -1 to convert from the OpenGL handedness to the scene transform handedness.

However, if you’re controlling the rendering of the normals yourself, why aren’t you just rendering out world-space normals to begin with and skipping the conversion entirely?

Because I also need the view-space normal.
From your answer, Unity uses OpenGL’s conventions for view space.

void ViewNormalToWorldNormal_float(float3 ViewSpaceNormal, out float3 WorldSpaceNormal)
{
    ViewSpaceNormal *= float3(1, 1, -1);
    float3x3 viewToWorldMatrix = (float3x3)unity_CameraToWorld;
    WorldSpaceNormal = normalize(mul(viewToWorldMatrix, ViewSpaceNormal));
}

But this code also doesn’t give what I expect…

Specifically, I want the y value of the world-space normal.
The effect is based on how much the world-space normal points upward, toward (0, 1, 0).
So to get that cosine, I need a correct y value for the world-space normal.

The world-space normal converted by the code above is different from the actual world-space normal…
The y value of the normal on top of the base cube does not come out as 1, and the result of the logic using the cosine value is different as well (for testing, the first shader graph created a world-normal texture).

The shader code you posted should work, assuming the unity_CameraToWorld is still the correct matrix when you use it (if you’re using multiple cameras, it may not be), and assuming the view normal you’re feeding into it is correct.

What format is the render texture you’re rendering to? Are you rendering to a half or float format, or to a UNORM or sRGB format? If it’s the latter, you should be using a half format (aka R16G16B16A16_SFloat).

I changed the RenderTextureDescriptor color format to ARGB Half, and it almost seems to be working fine.
It looks like the texture now holds world-space normals, and the normal of the top surface of the cube correctly has y set to 1.

But there is still a problem: the y of the cube’s side normals is not 0. The value is close to 0, but slightly greater than 0.
Since I’m working in a test scene, there is only one camera.
The same thing happens if I set the color format to ARGB Float.

There seems to be very little difference in the values. Is this level of difference unavoidable?

I would output world-space normals to the render texture, and then check them in the post process. If they’re still off, then something else is going wrong somewhere in your expectations.

A view normal stored in an ARGB Half texture and transformed back to a world normal won’t ever be “perfect”; the top of a box might not come out as WorldNormal.y == 1.0, but it should be WorldNormal.y > 0.999. If you use ARGB Float, or output world normals to begin with, it should be == 1.0.

That seems to be the case. The transformation of normal vectors works well now. Thank you so much for sharing your knowledge and helping me!