If I access the world position in the vertex or fragment shader, either by using world space with the Position node or by transforming object space to world, then the output of the shader differs from the editor. In some cases I’ve seen it be completely black; in the provided example it seems to be outputting the world position on the plane.
Thanks! For that example, it looks like the issue was a bug where we weren’t respecting the default value for the “color” custom interpolator. Connecting a (white) Color node to it seemed to fix the issue; otherwise, it was trying to use the vertex colors, which end up returning UV0 if there are no vertex colors on the mesh (that’s a known issue). I will get that fixed in a subsequent release.
Is it just the custom interpolator’s default, or are there other affected nodes? The original full shader did have its Color interpolator hooked up, yet it still appeared as black. I’ll try to create a repro of it.
OK! After much debugging, it turns out the issue here is that we use the fragment-specific versions of the MaterialX transform nodes when the Unity Transform node is used in subgraphs (versus when the Transform node is used in the main graph, where we can see that it’s connected to a vertex output and thus use the vertex-specific versions). So, the workaround is either to use the Transform node in the main graph rather than in a subgraph, or to use the subgraphs that contain the Transform node only in the fragment stage. Anyway, we’ll fix this in a subsequent release.
As an aside, though, you might have issues with the “world space” that you’re getting being different between Unity play mode and visionOS. The next version will include this section in the shader graph docs to explain:
Notes on Transform and Transformation Matrix Nodes in visionOS
The matrices returned by the Transformation Matrix node and used by the Transform node are obtained directly from visionOS and currently assume a world space that does not match either the simulation scene or the output of the Position, Normal Vector, Tangent Vector, or Bitangent Vector nodes. The “world space” output of those nodes is relative to the transform of the output volume; that is, it does not change when a bounded app volume is dragged around. The Transform and Transformation Matrix nodes, on the other hand, assume a world space that is shared between all app volumes. To get geometry in this world space, use the geometry (e.g., Position) node with Space: Object and transform it with the Transform node set to From: Object and To: World.
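To make the relationship between the two spaces concrete, here’s a minimal C# sketch. It isn’t tied to any real PolySpatial API; `volumeToWorld` is a hypothetical stand-in for the transform visionOS applies to a bounded app volume as the user moves it:

```csharp
using UnityEngine;

public static class VolumeSpaces
{
    // Hypothetical illustration of the two "world spaces" described above.
    // volumeToWorld stands in for the transform that visionOS applies to a
    // bounded app volume when the user drags it around; it is not a real
    // PolySpatial API.
    public static Vector3 ToSharedWorld(Matrix4x4 volumeToWorld, Vector3 volumeRelativePos)
    {
        // The Position node's "world" output (volumeRelativePos) stays fixed
        // while the volume moves; applying the volume's transform yields the
        // world space shared by all app volumes, which is the space the
        // Transform and Transformation Matrix nodes assume.
        return volumeToWorld.MultiplyPoint3x4(volumeRelativePos);
    }
}
```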
If the Position node set to world space returns a world position relative to the output volume, but the Transform node operates in the shared world space, how would I go about modifying the relative world position and transforming it back into object space so it could be output through the Position block of the vertex stage? Or is the volume transform translation-only, with no rotation or scale? Thanks!
Unfortunately, there’s no great way to do this at the moment. The best way I can think of offhand would be to pass the Unity world-to-object matrix as a shader property (which would have to be updated any time the object moves in world space) and use that to transform from Unity world space (as obtained from the Position / VolumeToWorld nodes) back to object space to output in the Vertex block.
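In case it helps, here’s a rough sketch of the property-update side, assuming a Matrix4x4 property with the reference name `_WorldToObject` exposed on the shader graph (both that name and the use of `renderer.material` are assumptions, not a confirmed recipe):

```csharp
using UnityEngine;

// Rough sketch: keeps a hypothetical "_WorldToObject" shader graph matrix
// property in sync with the object's transform, so the graph can transform
// from Unity world space back to object space in the Vertex block.
public class WorldToObjectUpdater : MonoBehaviour
{
    static readonly int WorldToObjectId = Shader.PropertyToID("_WorldToObject");

    Material material;

    void Awake()
    {
        // renderer.material instantiates a per-object copy of the material.
        material = GetComponent<Renderer>().material;
    }

    void LateUpdate()
    {
        // Re-upload whenever the object may have moved in world space.
        if (!transform.hasChanged)
            return;
        transform.hasChanged = false;
        material.SetMatrix(WorldToObjectId, transform.worldToLocalMatrix);
    }
}
```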
We’re hoping that Apple eventually gives us an option to get the transform relative to the volume.