Getting relative camera/object positions in a shader in a bounded volume

My goal is to vary an object's material based on the viewing angle as the user walks around it. I understand this data isn't available in C# scripts, but I thought it might be possible in the shader. I tried using the position from the Camera node, calculating the direction to the simulation geometry as described here, and computing the angle with respect to the Y-axis using arctan2 from the Math node, but it doesn't seem to work.

Is this a futile exercise since it’s impossible, or might my implementation just be buggy?

The direction data from the Camera node does seem to be 'correct', but without the 'simulation location' of the object, it isn't much help.

No, this should be entirely possible, but there are coordinate system differences between RealityKit and Unity that make it somewhat tricky. Notably, some nodes (like the Camera node) output a world space that is relative to the physical environment, and others (like the Position node) output a world space that is relative to the bounded volume. This is something we've brought up with Apple, but so far we've just had to accept the difference: you need to be aware of which coordinate system you're working in and adapt your shader accordingly.

For this purpose, you'll likely be best served by working in the world space relative to the physical environment. That means taking the camera position from the Camera node's Position output, and getting the object position from a Position node with Space set to Object, passed through a Transform node (Object to World) so both positions are in the same space. From there you can subtract the two to get a direction and compute the angle, as sketched below.
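To make the math concrete, here's a rough HLSL-style sketch of what the node graph needs to compute. The names are just illustrative; in a PolySpatial shader graph you'd build this out of the Camera, Position, Transform, Subtract, and Arctangent2 nodes rather than custom code:

```hlsl
// Sketch of the math the node graph should implement (names are illustrative).
// cameraWorldPos: the Camera node's Position output (world space relative to
//                 the physical environment).
// objectWorldPos: a Position node (Space: Object) passed through a Transform
//                 node (Object -> World), so it ends up in that same space.
float AngleAroundY(float3 cameraWorldPos, float3 objectWorldPos)
{
    // Direction from the object toward the camera.
    float3 toCamera = cameraWorldPos - objectWorldPos;

    // Azimuth around the Y axis in radians (-PI..PI), measured on the XZ plane.
    return atan2(toCamera.x, toCamera.z);
}
```

If you want a 0..1 value to drive a Lerp or gradient, you can remap the result by dividing by 2*PI and adding 0.5.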

If you show us your shader, we can probably help you debug it. You can either post a screenshot or submit a bug report with a repro case and let us know the incident number (IN-#####).