Hopefully this is simple, but the following two things are empirically not equivalent, and I don’t understand why:
and
Is it incorrect to think that the Transform node will convert from a World position to an Object position? If I plug these two different graphs into the Position output, the object is drawn in very different places. Hopefully it's something simple I just don't get…
I tried this again today under Shader Graph 4.6, and it still doesn't behave the way I'd expect. If I plug the above into the Position output of my graph, the object doesn't render in the correct place. So I assume this isn't a bug, and I just don't understand what the Transform node is doing…
What am I missing? Does it not make sense to take the World position of a vertex, convert it to the object's local position, and then provide that to the Position output? Is that not equivalent to connecting the local position directly to the Position output?
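In HLSL terms, this is roughly the equivalence I assumed the node gives you (just a sketch using the SRP Core SpaceTransforms.hlsl helpers, with positionOS standing in for the vertex's Object-space position):

    // A sketch of the equivalence I expected (not the generated graph code):
    float3 positionWS = TransformObjectToWorld(positionOS);  // object -> world
    float3 roundTrip  = TransformWorldToObject(positionWS);  // world -> object
    // I expected roundTrip == positionOS, i.e. the same as wiring the
    // local position straight into the Position output.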
Did you find out what happened there? I've been trying Shader Graph, and the operation works if I manually multiply the position by the desired matrix, but if I use the Transform node it behaves strangely, as you said.
You can see his reply to a few of my posts on this same issue. It turns out that the camera position needs to be taken into account when trying to do this transformation. It's still not clear to me whether the HDRP team considers that to be a bug that will be fixed in the future, or just the way things are.
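As I understand it (sketch only, with absolutePositionWS as a placeholder for the "real" world position), HDRP's camera-relative rendering shifts world space by the camera position, which is exactly the offset the Transform node output ends up being off by:

    // Not HDRP source, just the relationship as I understand it:
    // with camera-relative rendering enabled, HDRP's "world" space has
    // the camera position subtracted from it.
    float3 cameraRelativePositionWS = absolutePositionWS - _WorldSpaceCameraPos;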
Yes, please. It should work "as expected" even if you later convert the positions to camera space. I spent quite some time wondering why it was working differently from doing the matrix operations myself.
Thanks, I fixed my problem by calling GetCameraRelativePositionWS() before TransformWorldToObject().
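For anyone else landing here, roughly what that looks like in a Custom Function node (a sketch; the function name WorldToObject_float and the node setup are just an example, the two helpers are the HDRP/SRP Core ones mentioned above):

    // Convert the absolute world-space position into HDRP's
    // camera-relative world space first, then transform to object space.
    // GetCameraRelativePositionWS() should be a no-op when
    // camera-relative rendering is disabled.
    void WorldToObject_float(float3 positionWS, out float3 positionOS)
    {
        float3 camRelWS = GetCameraRelativePositionWS(positionWS);
        positionOS = TransformWorldToObject(camRelWS);
    }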
I wonder if this applies to directions as well, e.g. normals and tangents.
Hey, anything new on this topic? I feel like it's still not "solved", nor is it clear how the Transform node works when camera-relative rendering is enabled. @dgoyette