Hello everyone, does anyone know how to make a triplanar shader graph with parallax mapping?
I currently have a triplanar shader with a base map, a normal map, and an occlusion map. I have been trying to add a height map using the Parallax Mapping node, but so far I haven’t been able to get it to work.
I’m wondering if this is possible, and if it is, how to get it to work.
You would have to implement the triplanar mapping manually, as the parallax mapping needs to happen in the “middle” of the triplanar mapping steps.
Basically you’d need to do the parallax mapping separately for all 3 axes of the triplanar mapping, then reuse the calculated UV offsets from the parallax mapping to sample all of the textures before combining them, and do the additional adjustments to the normals to keep them correct.
It should also be noted that you’ll need to implement your own parallax mapping, as the built-in node assumes you’re using UVs with the same orientation as the base mesh’s UVs, because parallax mapping uses tangent space to convert the view direction into UV space. Triplanar mapping uses procedural UVs, so the tangent space the node uses doesn’t match the tangent space of the UVs you get from triplanar mapping.
So, to answer your question: it is possible to use triplanar mapping and parallax mapping together, but it’s not possible to use the Triplanar and Parallax Mapping nodes together.
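For reference, here’s a rough sketch of what that could look like in a Custom Function node, assuming the usual world-space position/normal/view direction inputs and a simple single-sample parallax offset rather than full occlusion mapping. All the names here are illustrative, and the exact swizzles and signs depend on how you build your triplanar UVs:

```hlsl
// Simple parallax offset for one projection plane. viewTS is the view
// direction expressed in that plane's UV space (z = along the plane normal).
float2 ParallaxOffsetUV(float height, float amplitude, float3 viewTS)
{
    float h = (height - 0.5) * amplitude;
    // The bias in the denominator tames the offset at grazing angles,
    // same idea as the classic ParallaxOffset helper.
    return h * viewTS.xy / (abs(viewTS.z) + 0.42);
}

void TriplanarParallax_float(
    UnityTexture2D BaseMap, UnityTexture2D HeightMap, UnitySamplerState SS,
    float3 PositionWS, float3 NormalWS, float3 ViewDirWS,
    float Tile, float Amplitude, float BlendSharpness,
    out float3 Color)
{
    // Triplanar blend weights from the world normal.
    float3 w = pow(abs(NormalWS), BlendSharpness);
    w /= (w.x + w.y + w.z);

    // Procedural UVs for each projection plane.
    float2 uvX = PositionWS.zy * Tile;
    float2 uvY = PositionWS.xz * Tile;
    float2 uvZ = PositionWS.xy * Tile;

    // The view direction rearranged into each plane's UV space, matching
    // the swizzles above. You may need per-axis sign flips so faces on the
    // negative side of an axis don't get an inverted offset.
    float3 v = normalize(ViewDirWS);
    float3 vX = float3(v.z, v.y, v.x);
    float3 vY = float3(v.x, v.z, v.y);
    float3 vZ = float3(v.x, v.y, v.z);

    // Parallax each plane's UVs separately...
    uvX += ParallaxOffsetUV(HeightMap.Sample(SS, uvX).r, Amplitude, vX);
    uvY += ParallaxOffsetUV(HeightMap.Sample(SS, uvY).r, Amplitude, vY);
    uvZ += ParallaxOffsetUV(HeightMap.Sample(SS, uvZ).r, Amplitude, vZ);

    // ...then reuse the offset UVs for every map (base, normal, occlusion)
    // before blending, so all the textures stay aligned.
    Color = BaseMap.Sample(SS, uvX).rgb * w.x
          + BaseMap.Sample(SS, uvY).rgb * w.y
          + BaseMap.Sample(SS, uvZ).rgb * w.z;
}
```

The normal maps still need the reorientation mentioned above on top of this.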
I will have to try to find some other solution then, because right now I am procedurally generating a mesh at runtime. This mesh has a carpet texture on it, which is why I need the parallax mapping: to give the carpet some depth.
But when I use the normal Lit shader I haven’t been able to get the carpet material to tile right, and with the triplanar shader it tiles better but there is no parallax mapping.
So I will have to come up with some other way of fixing it.
Triplanar mapping is overkill for this then. Really it just sounds like you aren’t properly setting the mesh’s UVs when generating it. If you want to mimic what you were seeing with triplanar mapping, assign the xz vertex positions to the UVs.
You are probably right, I will just keep messing with the UV mapping. I was also suspecting there might be something wrong with the way I am mapping the UVs, but then I figured I would just try a triplanar shader. Since that is a dead end, I will go back to trying to fix the UVs.
As a side project, I wanted to create a node that uses a simple version of triplanar projection with parallax occlusion mapping. To test this I created a triplanar projection that lerps between the UVs and then samples the texture. Using a normal Sample Texture 2D node with the parallax node works well, and the triplanar on its own works well too, but when I combine them the result is a mess (it looks inverted somehow).
Do you have any suggestions for this issue? Did I understand correctly what you are explaining here?
The way Shader Graph’s Parallax Occlusion Mapping node is written, it really only works if the input UVs are the mesh’s existing UV0. Any other UVs, like the world-space-derived UVs of triplanar mapping, will always be broken. There’s no fix for this apart from not using Unity’s Parallax Occlusion Mapping node and writing your own custom one using a Custom Function node. You can reuse a lot of Unity’s existing functions for this, but the fact that the built-in node calculates its own view direction vector makes it otherwise unusable for non-UV0 use cases.
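As a starting point, the core of a custom version is just a height-field march that takes the view direction as an explicit input instead of deriving it from UV0’s tangent frame. A minimal, unoptimized sketch with illustrative names:

```hlsl
// Steep-parallax style march. ViewDirTS must be the view direction in the
// UV space of whatever UVs you pass in; for triplanar, that means the
// per-plane vectors, not the mesh's UV0 tangent space.
void ParallaxOcclusionCustom_float(
    UnityTexture2D HeightMap, UnitySamplerState SS,
    float2 UV, float3 ViewDirTS, float Amplitude,
    out float2 OffsetUV)
{
    const int STEPS = 16;
    float3 v = normalize(ViewDirTS);
    float2 uvStep = (v.xy / max(v.z, 0.05)) * Amplitude / STEPS;

    float2 uv = UV;
    float layer = 1.0;
    // SampleLevel avoids derivative problems inside the loop.
    float h = HeightMap.tex.SampleLevel(SS.samplerstate, uv, 0).r;

    // Step down through the height field until the ray dips below it.
    for (int i = 0; i < STEPS && layer > h; i++)
    {
        uv -= uvStep;
        layer -= 1.0 / STEPS;
        h = HeightMap.tex.SampleLevel(SS.samplerstate, uv, 0).r;
    }
    OffsetUV = uv;
}
```

Interpolating between the last two steps gives a smoother result; Unity’s own implementation includes that refinement.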
Funny note, Unity’s devs know the correct way to handle triplanar parallax offset mapping. They have an internal function that does it for their own materials! But it’s not exposed to Shader Graph.
Do you know any good resources for starting to implement my own parallax function that will work? Also, do you know any good optimization techniques for this? I made some masks to control the step count based on the viewer angle and the distance to the position, but using this with multiple materials can degrade performance pretty fast.
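For reference, that angle/distance masking might look something like this in code (the names and falloff choices are just illustrative):

```hlsl
// Fade the march step count with camera distance, and push it up at
// grazing angles, where parallax error is most visible.
float PomStepCount(float3 positionWS, float3 cameraPosWS,
                   float3 viewDirWS, float3 normalWS,
                   float minSteps, float maxSteps, float fadeDistance)
{
    float distFade = saturate(1.0 - distance(positionWS, cameraPosWS) / fadeDistance);
    float grazing = 1.0 - saturate(dot(normalize(viewDirWS), normalize(normalWS)));
    return lerp(minSteps, maxSteps, distFade * grazing);
}
```

Beyond tuning the step count, the usual wins are sampling through a texture array instead of separate textures and skipping the march entirely for layers whose blend weight is near zero.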
I wanted to use this for layering a terrain mesh that is created at runtime. I have 6 layers, and for each of them I use a custom triplanar node (the version without parallax, but using a texture array) to get the color. Then, for each color and height limit, I interpolate between the 2 neighboring layers, followed by some branching (which may be a problem) to choose between the interpolated levels. I attached the triplanar sampling subgraph, the merging step that lerps between 2 neighboring levels and branches, and the interpolation process between 2 layers.
Do you have any suggestions on how I can improve this?
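On the branching: if the height limits are evenly spaced, the two neighboring layers and their blend factor can be derived arithmetically, with no branches at all. A rough sketch with illustrative names, assuming Shader Graph’s texture array wrapper:

```hlsl
// Branch-free pick-and-blend of the two neighboring layers. Height01 is the
// blend driver remapped to 0..1; LayerCount is the number of array slices.
void BlendTerrainLayers_float(
    UnityTexture2DArray Albedo, UnitySamplerState SS,
    float2 UV, float Height01, float LayerCount,
    out float3 Color)
{
    // Continuous layer coordinate: integer part = lower layer,
    // fractional part = blend factor toward the next layer up.
    float t = saturate(Height01) * (LayerCount - 1.0);
    float lower = floor(t);
    float upper = min(lower + 1.0, LayerCount - 1.0);

    float3 a = Albedo.tex.Sample(SS.samplerstate, float3(UV, lower)).rgb;
    float3 b = Albedo.tex.Sample(SS.samplerstate, float3(UV, upper)).rgb;
    Color = lerp(a, b, frac(t));
}
```

Unevenly spaced limits can be handled by remapping Height01 through a small curve or lookup first.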
I have now implemented a simple parallax offset instead, so that it won’t kill performance. The only problem, I think, is that I need a way to compute the view direction correctly from the triplanar information. Do you have any suggestions on that? I’m still getting the height effect wrong when I use the parallax offset subgraph as in the screenshot.
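One way to get that view direction, following the tangent space point from earlier in the thread: apply the same swizzle to the world-space view direction that you applied to the world position when building each plane’s UVs, with the component along the projection axis in z. A sketch; the exact signs depend on your UV convention:

```hlsl
float3 v = normalize(viewDirectionWS);

// uvX = positionWS.zy  ->  z is U, y is V, x points out of the plane
float3 viewTS_X = float3(v.z, v.y, v.x);
// uvY = positionWS.xz
float3 viewTS_Y = float3(v.x, v.z, v.y);
// uvZ = positionWS.xy
float3 viewTS_Z = float3(v.x, v.y, v.z);

// Faces on the negative side of an axis see a mirrored projection, so flip
// components there (e.g. multiply by sign(normalWS.x)), or the offset comes
// out inverted; that mirroring is a common cause of the effect looking wrong.
```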