PolySpatial Volume Rendering Custom Scripted Shaders

It appears that custom scripted shaders are not supported in PolySpatial.

Even this simple unlit shader won’t work in PolySpatial:

Shader "Custom/RedVertFragShader"
{
    Properties
    {
        _Color ("Color", Color) = (1,0.0,0.0,1.0)
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            float4 _Color;

            float4 vert(float4 v:POSITION) : SV_POSITION {
                return UnityObjectToClipPos (v);
            }

            fixed4 frag() : COLOR {
                return _Color;
            }

            ENDCG
        }
    }
}

I know that this could easily be written in Shader Graph, but how would one go about more complex shaders, like volume rendering with raymarching? I assume that custom nodes will be needed to translate these to Shader Graph; are those supported in PolySpatial?

Thank you

EDIT: For example, this volume rendering example with a custom raymarching script doesn’t work when built for Vision Pro: https://www.youtube.com/watch?v=hXYOlXVRRL8


I have the exact same problem. It would be greatly appreciated if any Unity staff member could take a look at this and get back to us with a solution.


@kapolka It’d be amazing if you could share some insights on this. Thank you very much!

This is correct (for MR mode, and assuming you’re rendering to the frame buffer rather than a RenderTexture). RealityKit on visionOS simply doesn’t support Metal shaders (which is what ShaderLab shaders compile to on Apple platforms). Instead, all shaders have to be supplied to the RealityKit ShaderGraphMaterial as MaterialX. We support this by converting Unity shader graphs to MaterialX. There’s some more information in the documentation, including a list of supported nodes.

We support Custom Function nodes in a limited sense, by parsing a subset of HLSL, though there’s nothing you can do with Custom Function nodes that you can’t do with other nodes (it’s just a matter of providing a more compact/convenient form). You can also see the nodes that Apple supports in their MaterialX implementation. We aim to at least support everything they do, as well as some of Unity’s higher-level nodes, like, say, the procedural nodes (which are implemented internally using that subset of HLSL to turn them into MaterialX nodes).
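To give a rough sense of what that subset looks like, here’s a minimal sketch of a Custom Function body that sticks to plain arithmetic (the function and parameter names are purely illustrative, and I’m not claiming this exact snippet is supported as-is):

// Illustrative Custom Function node body: only basic arithmetic and
// intrinsics that map directly onto ordinary shader graph / MaterialX nodes.
void TintByHeight_float(float3 PositionOS, float4 BaseColor, float4 TipColor,
                        out float4 Color)
{
    // Remap object-space height from [-0.5, 0.5] to [0, 1] and blend.
    float t = saturate(PositionOS.y + 0.5);
    Color = lerp(BaseColor, TipColor, t);
}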

We have considered attempting to convert ShaderLab shaders (like the one you posted above) to MaterialX directly, and if you’re interested in that, you can vote for it on our road map. However, we’ll still be limited by what we can convert to visionOS’s MaterialX implementation. That means we can’t support things like dynamic loops (only loops that can be completely unrolled) or, for example, sampling the depth buffer (because that’s not a feature that visionOS supports).
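For what it’s worth, a raymarching loop with a compile-time step count is the kind of loop that can be fully unrolled. Here’s a hedged HLSL sketch just to illustrate what “completely unrolled” means in practice; the volume texture, ray setup, and compositing are placeholders, and I’m not saying this particular code would convert today:

// Sketch: a raymarch with a fixed, compile-time step count, so the
// compiler can unroll the loop entirely (no dynamic termination).
#define STEP_COUNT 64

float4 RaymarchFixed(Texture3D volumeTex, SamplerState volumeSampler,
                     float3 rayOrigin, float3 rayDir, float stepSize)
{
    float4 accum = float4(0, 0, 0, 0);

    [unroll]
    for (int i = 0; i < STEP_COUNT; i++)
    {
        float3 samplePos = rayOrigin + rayDir * (stepSize * i);
        float density = volumeTex.SampleLevel(volumeSampler, samplePos, 0).r;

        // Simple front-to-back compositing of a white volume; a real
        // shader would apply a transfer function here.
        float a = saturate(density * stepSize);
        accum.rgb += (1.0 - accum.a) * a;
        accum.a   += (1.0 - accum.a) * a;
    }
    return accum;
}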

You can use ShaderLab shaders as usual in VR mode, since that uses Unity’s Metal-based renderer rather than RealityKit. You can also use them to render to RenderTextures (which, again, uses Unity’s renderer) and then use those in materials that you supply to PolySpatial/RealityKit.


Thank you very much for the answer.

I think raymarched volume rendering can be implemented with non-dynamic loops, so it’d be great to see support for ShaderLab.

For now, would it be possible to implement (approximate?) volume rendering in mixed reality with RenderTextures? I am not very familiar with their more advanced applications.

I want to achieve something very similar to the volume clouds demo: https://www.youtube.com/watch?v=hXYOlXVRRL8

Yes, though stereo rendering and depth compositing are the tricky parts (assuming you don’t just want to show the rendering on a 2D plane). There’s a thread here with some more information, but basically, you can render to separate RenderTextures for each eye to get a stereo effect (though you have to estimate certain parameters) and can use a displacement map for depth compositing.

Hello, and thank you very much for your response. The linked thread was really helpful in understanding this approach. I’ve been working on this the past couple of days, and I have a way to generate two accurate looking render textures of my desired volume for each eye. Even rendered onto a plane these look pretty good.

However, I am not exactly sure how to use the stereo render textures to approximate the volume rendering.

Could you please elaborate more on the “displacement map for depth compositing” part? I do not have prior experience with using displacement maps for this purpose. Which parts of the documentation should I be looking at?

This is more of a general graphics technique than a specific feature of Unity, but the idea is to render the depth map into a floating point texture and use it along with the rendered color texture(s) in a material applied to a subdivided quad mesh (for instance, with 256 x 256 equally spaced points). Then you would sample the depth map texture to offset the vertices of the quad mesh in the vertex stage of your shader graph, so that the final result is rendered roughly at the correct distance away from the camera (as opposed to a flat plane). That means if you combine visionOS objects (that is, standard GameObjects with MeshRenderers) with the contents of the RenderTexture, they will occlude each other (approximately) correctly. However, if you don’t need to combine object types like that (that is, your volume visualization doesn’t need to correctly intersect with other objects), then you probably don’t need to worry about depth compositing.
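As a rough sketch of the vertex-stage part (plain HLSL of the kind you might put in a Custom Function node; the texture parameters, the object-space view direction, and the way the depth was captured are all assumptions that depend on your setup):

// Sketch: push each vertex of the subdivided quad from the quad's plane
// out to the depth captured when the RenderTexture was rendered.
// DepthTex/DepthSampler and QuadDistance are placeholders for whatever
// you actually store and measure.
void DisplaceByDepth_float(float3 PositionOS, float2 UV,
                           Texture2D DepthTex, SamplerState DepthSampler,
                           float3 ViewDirOS, float QuadDistance,
                           out float3 DisplacedPositionOS)
{
    // Eye-space depth stored in the red channel of the depth RenderTexture.
    float eyeDepth = DepthTex.SampleLevel(DepthSampler, UV, 0).r;

    // Offset along the (object-space) view direction by the difference
    // between the captured depth and the quad's nominal distance.
    DisplacedPositionOS = PositionOS + ViewDirOS * (eyeDepth - QuadDistance);
}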

If you do, though, it’s worth noting the restrictions pointed out on that thread: you can’t render to the depth texture directly; you have to copy the depth buffer to a texture with format GraphicsFormat.R16G16B16A16_SFloat. This post is the one that best describes the approach.
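The copy itself can be done with a simple blit material whose fragment shader reads the camera depth texture and writes it out as a color value into the SFloat RenderTexture. A hedged sketch, assuming the built-in render pipeline’s _CameraDepthTexture (other pipelines would need the equivalent declaration):

Shader "Custom/CopyDepthToColor"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            // Built-in pipeline camera depth texture; requires the camera's
            // depthTextureMode to include DepthTextureMode.Depth.
            sampler2D _CameraDepthTexture;

            float4 frag(v2f_img i) : SV_Target
            {
                // Write linear eye-space depth into the red channel of the
                // R16G16B16A16_SFloat RenderTexture this pass is blitted into.
                float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
                return float4(LinearEyeDepth(rawDepth), 0, 0, 1);
            }
            ENDCG
        }
    }
}

You would then blit with this material (for example, Graphics.Blit into the SFloat RenderTexture) after rendering the color texture, and feed both textures to the material on the subdivided quad.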