Shader graph: World to Object transform not working as I expected

Hopefully this is simple, but the following two things are empirically not equivalent, and I don’t understand why:

[Attached image: shader graph screenshot (upload_2018-11-28_15-8-12.png)]

and

[Attached image: shader graph screenshot (upload_2018-11-28_15-8-37.png)]

Is it incorrect to think that the Transform node will convert from a World position to an Object position? If I plug these two different graphs into the Position output, they cause the object to be drawn at very different places. Hopefully it’s something simple I just don’t get…

I tried this again today under Shader Graph 4.6, and it still doesn’t behave the way I’d expect. If I plug the above into the Position output of my graph, the object doesn’t render in the correct place. So I assume this isn’t a bug, and I just don’t understand what the Transform node is doing…

What am I missing? Does it not make sense to take the World position of a vertex, convert it to the object’s local position, and then provide that to the Position output? Is that not equivalent to connecting the local position directly to the position output?
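For reference, here is the round trip the first graph is meant to express, written as HLSL using the core render pipeline helper names (a sketch of the expected identity, not Shader Graph’s actual generated code):

// Object -> world -> object should be the identity, so both graphs
// ought to place the vertex in the same spot.
float3 RoundTripPositionOS(float3 positionOS)
{
    float3 positionWS = TransformObjectToWorld(positionOS); // object to world
    return TransformWorldToObject(positionWS);              // expected == positionOS
}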

Did you ever find out what was happening here? I’ve been trying Shader Graph, and the operation works if I manually multiply the position by the desired matrix, but if I use the Transform node it behaves strangely, as you said.

wyatttt provided some detail over in the Shader Graph thread:

Feedback Wanted: Shader Graph (page 34, post #3998545)

You can see his reply to a few of my posts on this same issue. It turns out the camera position needs to be taken into account when doing this transformation. It’s still not clear to me whether the HDRP team considers that to be a bug that will be fixed in the future, or just the way things are.

Hi,

It’s still not clear to me whether the HDRP team considers that to be a bug that will be fixed in the future, or just the way things are.

It is not really a bug, but rather a missing feature that Shader Graph doesn’t take into account. We should fix that for 2019.1.
And yes, this is due to camera-relative rendering in HDRP: https://github.com/Unity-Technologies/ScriptableRenderPipeline/wiki/Camera-Relative-Rendering
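For anyone following along, this is roughly what camera-relative rendering means for these transforms. HDRP ships helpers for moving between the two spaces; the bodies below are a simplified sketch of what they do when camera-relative rendering is enabled:

// In HDRP, "world space" positions in the shader are camera-relative (RWS):
// the camera sits at the origin. Converting between RWS and absolute world
// space is just adding or removing the camera position.
float3 GetAbsolutePositionWS(float3 positionRWS)
{
    return positionRWS + _WorldSpaceCameraPos; // RWS -> absolute WS
}

float3 GetCameraRelativePositionWS(float3 positionWS)
{
    return positionWS - _WorldSpaceCameraPos; // absolute WS -> RWS
}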

Yes, please. It should work “as expected” even if the positions are later converted to camera space. I spent quite some time wondering why it worked differently from doing the matrix operations myself :roll_eyes:

I’m having the same issue here, it’s very confusing why the camera is involved in this process.


Please fix this so that stuff like pixel snapping can be done properly.

Just noting that this is still an issue in 2019.1 with HDRP 5.13. Not sure if this needs to be reported or if it’s already tracked internally.

In 2019.1, this still does not give the correct object position:

You still need to do something like this:
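In HLSL terms (e.g. in a Custom Function node), the fix amounts to shifting the position into camera-relative space before the world-to-object transform. A sketch, matching the GetCameraRelativePositionWS() approach mentioned later in this thread; the wrapper name WorldToObjectFixed_float is mine:

// HDRP's world-to-object matrix expects camera-relative input, so an
// absolute world-space position must be shifted before transforming.
void WorldToObjectFixed_float(float3 PositionWS, out float3 PositionOS)
{
    float3 positionRWS = GetCameraRelativePositionWS(PositionWS);
    PositionOS = TransformWorldToObject(positionRWS);
}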

Seems to still be broken in 2019.2 / HDRP 6.7.1 (a Position node with Space set to World actually outputs camera-relative world space, i.e. RWS).

Since nobody posted a case number here, I reported this:

(Case 1162188) [HDRP] wrong naming - “World Space” is now “Camera Relative Space” and Transform Node should get that as an option


Same problem!
So is there no way to properly get the object-space position of the camera, or is there some trick?

Update: yes, there is a way to get the object-space camera position:
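One way to do it that is consistent with the camera-relative explanation above (my sketch, possibly different from the linked solution): with camera-relative rendering the camera sits at the world-space origin, so transforming the origin into object space yields the camera’s object-space position.

// Object-space camera position under HDRP's camera-relative rendering:
// the camera is at the RWS origin, so transform the origin to object space.
float3 GetCameraPositionOS()
{
    return TransformWorldToObject(float3(0.0, 0.0, 0.0));
}
// Without camera-relative rendering, transform _WorldSpaceCameraPos instead.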


Just out of personal curiosity, is this the same wonkiness as in this one? Matcap shader with normalmap

I had trouble in both LWRP and HDRP, although the results were different.

Thanks, I fixed my problem by calling GetCameraRelativePositionWS() before TransformWorldToObject().
I wonder if this applies to directions as well, e.g. normals and tangents (see the sketch below).
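On the direction question: camera-relative rendering only translates positions, and directions ignore translation (they go through the 3x3 part of the matrix), so normals and tangents should not need the camera offset. A sketch of why, using the core helper GetWorldToObjectMatrix():

// Directions (w = 0) are unaffected by the translation column, so the
// camera-relative shift drops out. This mirrors what the built-in
// TransformWorldToObjectDir() does.
float3 WorldToObjectDirSketch(float3 dirWS)
{
    return normalize(mul((float3x3)GetWorldToObjectMatrix(), dirWS));
}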

Hey, anything new on this topic? It still doesn’t feel “solved”, nor is it clear how the Transform node works when camera-relative rendering is enabled. @dgoyette