Converting projection matrix in a shader

Heyo!

I have a specific use case where I construct an oblique projection matrix inside a shader on a per-vertex basis (basically planar reflections that reflect accurately even if the “plane” is uneven through vertex displacement). For the last step I need to run it through GL.GetGPUProjectionMatrix to get an accurate matrix. Is there a way to do this in a shader?

I can’t set it from a script, since I won’t have the oblique matrix until the vertex is being rendered.

In C#, all of the matrix construction utilities generate OpenGL-style matrices, that is to say the clip space Z is between -w at the near clip and w at the far clip. GetGPUProjectionMatrix converts that into a matrix where Z goes from 0.0 at the far clip to w at the near clip, which is what all non-OpenGL/ES APIs use.
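To make that concrete, here’s a rough Python sketch (not Unity’s actual implementation) of what that Z remap amounts to. The helper names are made up for illustration; the sketch ignores the separate y flip GetGPUProjectionMatrix can also apply:

```python
import math

def gl_perspective(fov_deg, aspect, near, far):
    # OpenGL-style projection: NDC z in [-1, 1], near -> -1, far -> 1.
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def to_reversed_z(m):
    # Remap clip z from [-w, w] to [w, 0]: new NDC z' = (1 - ndc_gl) / 2,
    # which works out to: new z row = (w row - z row) / 2.
    out = [row[:] for row in m]
    out[2] = [(m[3][c] - m[2][c]) / 2.0 for c in range(4)]
    return out

def project(m, p):
    x, y, z, w = (sum(m[r][c] * p[c] for c in range(4)) for r in range(4))
    return (x / w, y / w, z / w)  # perspective divide

gl = gl_perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
rz = to_reversed_z(gl)
near_ndc_z = project(rz, [0.0, 0.0, -0.1, 1.0])[2]   # ~1.0 (near clip)
far_ndc_z = project(rz, [0.0, 0.0, -100.0, 1.0])[2]  # ~0.0 (far clip)
```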

You can avoid that by not using OpenGL style matrix construction in your shader to begin with.
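And to illustrate the “build it right to begin with” option: a hypothetical sketch of constructing the reversed-Z projection directly, so there’s nothing to convert afterwards. Again illustration math in Python, not engine code:

```python
import math

def reversed_z_perspective(fov_deg, aspect, near, far):
    # Construct the projection with the reversed-Z clip range directly:
    # near -> w, far -> 0, no GL-style matrix to fix up afterwards.
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, near / (far - near), far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def ndc_z(m, view_z):
    # Project a point on the view-space z axis and do the perspective divide.
    clip_z = m[2][2] * view_z + m[2][3]
    clip_w = m[3][2] * view_z
    return clip_z / clip_w

p = reversed_z_perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
```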

Oh duh, now that you say it it feels so obvious haha, thanks!

There’s one additional thing it can do, which is flip the projection matrix. You can handle this by multiplying the clip space y by _ProjectionParams.x, which will be either -1.0 or 1.0 depending on whether the projection should be flipped or not.
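In sketch form, the flip is just this (the function name is made up; _ProjectionParams.x is the value described above):

```python
# Sketch: _ProjectionParams.x is -1.0 when the projection is flipped
# (e.g. when rendering into a texture on D3D-like APIs), else 1.0.
def apply_projection_flip(clip_pos, projection_params_x):
    x, y, z, w = clip_pos
    return (x, y * projection_params_x, z, w)

flipped = apply_projection_flip((0.3, 0.5, 0.2, 1.0), -1.0)
```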


I’ve got it working, but as a last step I would like to make it into a custom node so I can plug it into Shader Graph. Since Shader Graph needs the position in object space, I would need to convert it from clip space, something that seems possible judging by this thread (which, unsurprisingly, has a reply from you haha).

I’m not the greatest at shaders, but would this be feasible in my case? The clip position is generated by a modified view projection matrix in my custom node, but since Shader Graph will use the regular view projection, I figure I just need to convert it from clip space using the regular inverse view projection. I tried doing what was specified in your reply in that other thread, but couldn’t get it to work. Maybe I’m misunderstanding clip space in general, and clip space is specific to each view projection matrix?

There is always the option of generating a shader from the graph and editing it manually, but that’s going to be a lot of extra labour and quite inflexible if I ever want to change the shader graph.

Homogeneous clip space is the final value a vertex shader outputs. It’s a 4-component representation of the screen space position, which exists for multiple reasons, not the least of which is that it’s what a perspective projection matrix outputs. It also interpolates linearly in screen space, but that’s a whole extra can of worms.
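A quick sketch of that can of worms, if anyone’s curious: rasterizers interpolate attr/w and 1/w linearly across the screen and divide per fragment, which is what makes attribute interpolation perspective-correct. A toy Python version (illustration only):

```python
# Perspective-correct interpolation between two vertices at screen-space
# parameter t, given each vertex's attribute value and its clip-space w.
def perspective_correct(attr0, w0, attr1, w1, t):
    num = (attr0 / w0) * (1.0 - t) + (attr1 / w1) * t  # interpolate attr/w
    den = (1.0 / w0) * (1.0 - t) + (1.0 / w1) * t      # interpolate 1/w
    return num / den                                   # per-fragment divide

# Halfway across the screen between a vertex at w=1 and one at w=3,
# the attribute is 0.25, not 0.5: the far end is compressed by perspective.
mid = perspective_correct(0.0, 1.0, 1.0, 3.0, 0.5)
```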

If you are indeed generating a custom view projection matrix to modify the vertex positions, resulting in a “final” clip space position, you would need to convert it back to object space using the inverse of the regular view projection matrix and the inverse world matrix (the world-to-object transform).
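Here’s a rough sketch of that round trip with toy, deliberately easy-to-invert stand-in matrices (all names made up for illustration, nothing here is Unity’s): object space goes through the custom VP to get the “final” clip position, that gets pulled back to a fake object-space position through the regular inverse VP and the world-to-object transform, and then the pipeline’s regular object → world → VP chain lands back on the custom clip position.

```python
def mul(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

I = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

def translation(tx, ty, tz):
    m = [row[:] for row in I]
    m[0][3], m[1][3], m[2][3] = tx, ty, tz
    return m

def scale(s):
    m = [row[:] for row in I]
    m[0][0] = m[1][1] = m[2][2] = s
    return m

object_to_world = translation(1.0, 2.0, 3.0)
world_to_object = translation(-1.0, -2.0, -3.0)  # its inverse
regular_vp = scale(2.0)
inverse_vp = scale(0.5)                          # its inverse
custom_vp = matmul(scale(2.0), translation(0.0, 0.5, 0.0))  # stand-in "oblique" VP

obj = [1.0, 1.0, 1.0, 1.0]
custom_clip = mul(custom_vp, mul(object_to_world, obj))

# Hand the pipeline a fake "object space" position...
fake_obj = mul(world_to_object, mul(inverse_vp, custom_clip))
# ...so its regular object -> world -> VP chain reproduces the custom clip pos.
reconstructed = mul(regular_vp, mul(object_to_world, fake_obj))
```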

With that you should be able to get the vertices to end up in the correct position, but it may produce visible warping on the surfaces. And that is unavoidable using Shader Graph.

How much warping are we talking about? This is my shader graph setup (this is what I’m unsure of), and this is the result (very heavily warped, sometimes not even visible).

9730423--1391308--upload_2024-3-27_9-41-51.png

The warping will be apparent in the surface’s UVs; the meshes should still appear at the correct pixels.

The thing is, you don’t need half of those nodes. Assuming the output of your custom function is already in clip space, multiply it by the Inverse View Projection matrix, and then apply the World to Object transform. Basically just these three nodes:
9732400--1391776--upload_2024-3-27_14-23-44.png
Everything else is for getting something that is not in clip space back into clip space.

Ugh, yep, you’re right, that worked just fine. Thanks again, you are a life saver!

In the end I had to go with manually modified shaders, since once it went through the regular view projection again after converting back to object space, it wouldn’t clip at the simulated near plane anymore (which makes sense).

The last issue I’m encountering is hard to understand. It might just be a natural side effect of what I’m doing, but maybe someone knows something I don’t.

The effect works just fine when viewed from the front. However, when the mesh is halfway over a difference in reflection height in other orientations (back, left, and right), some of the vertices are seen through vertices that should be in front of them.

Front (working).
9744796--1394407--upload_2024-4-2_12-51-23.png

Back and left, some triangles are seen when they shouldn’t be.

9744796--1394413--upload_2024-4-2_12-52-28.png
9744796--1394422--upload_2024-4-2_12-53-6.png

9744796--1394419--upload_2024-4-2_12-52-44.png