How can a StructuredBuffer be used in shaders on Android?

When I use a StructuredBuffer in a shader on Android, objects are always drawn purple (the error shader), and Shader.isSupported returns false for it.
My Android device supports OpenGL ES 3.2, and SystemInfo.supportsComputeShaders returns true, though.
The shader compiles fine with #pragma target 4.5; drawing only fails when the shader contains a StructuredBuffer.
I tried turning on the "Require ES 3.1" checkbox in Player Settings, but it had no effect.
How can I make this work?

Here is the code:

Shader "TestShader" {
    Properties{
        _MainTex("Texture", 2D) = "white" {}
    }

    Category{
        Tags{ "Queue" = "Transparent" "IgnoreProjector" = "True" "RenderType" = "Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
        Cull Off
        Lighting Off
        ZWrite On
        ZTest LEqual // "On" is not a valid ZTest value; LEqual is the default depth test
      
        SubShader{
            Pass{
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma target 4.5

                #include "UnityCG.cginc"

                sampler2D _MainTex;
                StructuredBuffer<float> _Colors;

                struct appdata_t {
                    float4 vertex : POSITION;
                    float2 texcoord : TEXCOORD0;
                };

                struct v2f {
                    float4 vertex : SV_POSITION;
                    float2 texcoord : TEXCOORD0;
                    float4 color : TEXCOORD1;
                };

                v2f vert(appdata_t v) {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.texcoord = v.texcoord;
                    o.color = float4(_Colors[0], _Colors[1], _Colors[2], 1);
                    return o;
                }

                fixed4 frag(v2f i) : SV_Target {
                    float4 color = tex2D(_MainTex, i.texcoord) * i.color;
                    clip(color.a - 0.001);
                    return color;
                }

                ENDCG
            }
        }
    }
}
using UnityEngine;

public class TestComponent : MonoBehaviour {
    ComputeBuffer buffer;
    float[] array;
    float time;

    void Start() {
        buffer = new ComputeBuffer(3, sizeof(float)); // three floats (r, g, b) read by the shader

        var mr = GetComponent<MeshRenderer>();
        mr.material.SetBuffer("_Colors", buffer);

        array = new float[3];
    }

    void Update() {
        time += Time.deltaTime;
        if (time > 0.1f) {
            array[0] = Random.Range(0.5f, 1f);
            array[1] = Random.Range(0.5f, 1f);
            array[2] = Random.Range(0.5f, 1f);
            buffer.SetData(array);
            time -= 0.1f;
        }
    }

    void OnDestroy() {
        buffer.Release();
    }

    void OnGUI() {
        var mr = GetComponent<MeshRenderer>();
        GUILayout.Label("ComputeShader is supported: " + SystemInfo.supportsComputeShaders.ToString());
        GUILayout.Label("This shader is supported: " + mr.sharedMaterial.shader.isSupported.ToString());
    }
}

Hi!
Most likely your device doesn’t support StructuredBuffers in vertex shaders. OpenGL ES 3.1+ does not require it; a driver may report zero vertex-stage shader storage blocks.
Using them in fragment shaders would be fine, though.
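As a workaround for the shader in the question, the buffer read can be moved from vert() into frag(), since fragment-stage reads are fine per the above. A minimal sketch of the changed fragment program (same _Colors buffer as in the question; the v2f color field is no longer needed):

```hlsl
StructuredBuffer<float> _Colors;

// The fragment stage reads the StructuredBuffer directly; the vertex
// stage no longer touches _Colors, so devices that expose zero
// vertex-stage SSBOs can still link the program.
fixed4 frag(v2f i) : SV_Target {
    float4 tint = float4(_Colors[0], _Colors[1], _Colors[2], 1);
    float4 color = tex2D(_MainTex, i.texcoord) * tint;
    clip(color.a - 0.001);
    return color;
}
```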


Thanks for the information!

Hello, I have a similar issue and I want to be sure it’s not a player-settings problem or something else.
I am working on an Android Note 8 with OpenGL ES 3.2, and I use compute buffers in a shader with DrawMeshInstancedIndirect to draw several colored spheres. I see the following error in adb logcat:
“GLSL link error: The number of vertex shader storage blocks is greater than the maximum number allowed”
and of course nothing of what I want to show appears on the screen.

Is it my device that lacks compute buffer support, or a software problem?
What can I do to replace my current solution (which works great on PC) so it can draw many spheres on Android without DrawMeshInstancedIndirect and vertex-shader buffers?
Thanks for your help!

@DocTatur yes, this is the same thing. Basically, many devices support 0 SSBOs in vertex shaders.

Is there a reliable source of information on which platforms support StructuredBuffer reads in vertex shaders? I’m running into the same issue on Android now.

You can check at runtime (2019.3 only):
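The snippet appears to be missing from the post; presumably it refers to the per-stage limits added to SystemInfo in Unity 2019.3. A sketch:

```csharp
using UnityEngine;

public class SsboSupportProbe : MonoBehaviour
{
    void Start()
    {
        // A value of 0 for the vertex stage means StructuredBuffer
        // reads in vertex shaders will not work on this device.
        Debug.Log("SSBOs in vertex shaders:   " + SystemInfo.maxComputeBufferInputsVertex);
        Debug.Log("SSBOs in fragment shaders: " + SystemInfo.maxComputeBufferInputsFragment);
    }
}
```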

Most platforms do (if they support compute at all). There’s at least one GPU manufacturer that didn’t implement reading them from vertex shaders in its OpenGL driver; its Vulkan driver supports it.


Is it possible to use StructuredBuffer to send data from C# script to a GLSL domain shader by any chance? Maybe as a shader storage buffer?

@bhavyanshmishra why are you writing GLSL shaders? 🙂
Wouldn’t it be easier to use HLSL?

@aleksandrk Hm, I thought HLSL was mainly for Direct3D applications and that I had to use OpenGL. I don’t have much experience with HLSL, so I’d appreciate any information on that. However, I already have a GLSL shader that performs tessellation for me on Unity for Linux. I’d be open to porting the code to HLSL if that’s my only option, but is there no way to send arrays of floats or structs to a GLSL domain shader from Unity C# efficiently, at regular intervals, at a minimum rate of 30 Hz?

Here is my current GLSL shader:

Shader "Custom/Tessellation"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _factor("Tessellation scale",Range(1.0,64.0)) = 1.0
    }
    SubShader
    {
        //Cull Off
        Pass
        {
            GLSLPROGRAM
            #version 460   
            uniform float _factor;
            uniform sampler2D _MainTex;
            uniform float regions[1536];

            layout (std430, binding=2) buffer shader_data
            {
              vec4 camera_position;
              vec4 light_position;
              vec4 light_diffuse;
            };
          
            #ifdef VERTEX
                in  vec4 in_POSITION0;
                void main()
                {
                    gl_Position =  in_POSITION0;
                }
            #endif

            #ifdef HULL          //GLSL Tessellation Control Shader

                layout (vertices = 4) out;
                void main()
                {
                    if (gl_InvocationID == 0)
                    {
                        float tessLevel = 16.0;
                        gl_TessLevelInner[0] = tessLevel;   //Inside tessellation factor
                        gl_TessLevelInner[1] = tessLevel;   //Inside tessellation factor

                        gl_TessLevelOuter[0] = tessLevel;   //Edge tessellation factor
                        gl_TessLevelOuter[1] = tessLevel;   //Edge tessellation factor
                        gl_TessLevelOuter[2] = tessLevel;   //Edge tessellation factor
                        gl_TessLevelOuter[3] = tessLevel;   //Edge tessellation factor
                    }
                    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
                }
            #endif

            #ifdef DOMAIN        //GLSL Tessellation Evaluation Shader
                layout (quads) in;
                void main()
                {   
                    vec4 a = gl_in[0].gl_Position;
                    vec4 b = gl_in[1].gl_Position;
                    vec4 c = gl_in[2].gl_Position;
                    vec4 d = gl_in[3].gl_Position;
                    // Note: with layout(vertices = 4) in the control shader,
                    // only gl_in[0..3] are defined here, so reads of
                    // gl_in[4..7] (and the unused normals derived from
                    // them) are invalid.

                    float u = gl_TessCoord.x;
                    float v = gl_TessCoord.y;

                    // QuadPlane
                    vec4 p1 = mix(b, a, u);
                    vec4 p2 = mix(c, d, u);
                    vec3 n0 = cross(a.xyz-b.xyz,c.xyz-b.xyz);

                    // Plane
                    //vec4 p1 = mix(a, c, u);
                    //vec4 p2 = mix(a, b, u);      
                    //vec3 n0 = cross(a.xyz-b.xyz,b.xyz-c.xyz);          


                    vec4 normal = vec4(normalize(n0),1);

                    float scale = 0.0005;

                    float x = u*10 - 5;
                    float y = v*10 - 5;

                    vec4 plow = texture(_MainTex, vec2(0,0));
                    vec4 phigh= texture(_MainTex, vec2(62,0));

                    float height = scale * (pow(x,3) + pow(y,3));

                    vec4 pos = mix(p1, p2, v) + normal*((-0.012))*height;

                    gl_Position = gl_ModelViewProjectionMatrix * pos;
                }
            #endif

            #ifdef GEOMETRY      //geometry shader for rendering wireframe
                layout(triangles) in;
                layout(line_strip, max_vertices = 3) out;
                void main()
                {
                    for(int i = 0; i < gl_in.length(); ++i)
                    {
                        gl_Position = gl_in[i].gl_Position;
                        EmitVertex();
                    }
                    gl_Position = gl_in[0].gl_Position;
                    EmitVertex();
                    EndPrimitive();
                }  
            #endif
                 
            #ifdef FRAGMENT
                out vec4 color;
                void main()
                {
                    color = vec4(1,1,1,1);
                }
            #endif
         
            ENDGLSL
            }
    }
}

@bhavyanshmishra check the “Writing shaders overview” page in the Unity Manual; it covers the shading language used in Unity pretty well. Given shader source in this language, Unity translates it for whatever platform you’re targeting.

You can, of course, write shaders in GLSL, but that limits you to platforms with OpenGL support, and it makes other interactions (like passing StructuredBuffers) awkward, as this is very infrequently used functionality.

@aleksandrk Ahh, that made perfect sense after your comments and once I read a little more about how Unity handles shading languages. So I went ahead and ported my GLSL shader to HLSL, which almost perfectly replicates it, and I’ve tested it thoroughly. Now, using HLSL, how would I use a ComputeBuffer as a StructuredBuffer in the regular (domain, hull, and geometry) stages to send a float array to the shader at more than 30 Hz? I really appreciate your comments; I’ve learned a lot from them!

Shader "Custom/QuadTessellationHLSL"
{
    SubShader
    {
        Pass
        {
            Cull Off
            CGPROGRAM
            #pragma vertex TessellationVertexProgram
            #pragma hull HullProgram
            #pragma domain DomainProgram
            #pragma fragment FragmentProgram
            #pragma target 4.6
// ----------------------------------------------------------------

            struct appdata {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };
            struct ControlPoint
            {
                float4 vertex : INTERNALTESSPOS;
                float2 uv : TEXCOORD0;
            };
            struct hsConstOut
            {
                float Edges[4] : SV_TessFactor;
                float Inside[2] : SV_InsideTessFactor;
            };
            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            StructuredBuffer<float> params;
// ---------------------------------------------------------------


            v2f VertexProgram(appdata v) // Not the primary vertex program.
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                return o;
            }

            ControlPoint TessellationVertexProgram(appdata v){
                ControlPoint p;
                p.vertex = v.vertex;
                p.uv = v.uv;
                return p;          
            }
          
            hsConstOut hull_constant_function(InputPatch<ControlPoint, 4> patch)
            {
                hsConstOut output;
                output.Edges[0] = output.Edges[1] = output.Edges[2] = output.Edges[3] = output.Inside[0] = output.Inside[1] = 16;
                return output;
            }
                [domain("quad")]
                [partitioning("integer")]
                [outputtopology("triangle_cw")]
                [outputcontrolpoints(4)]
                [patchconstantfunc("hull_constant_function")]          
            ControlPoint HullProgram(InputPatch<ControlPoint, 4>patch, uint id: SV_OutputControlPointID)
            {
                return patch[id];
            }
                [domain("quad")]
            v2f DomainProgram(hsConstOut factors,
                                const OutputPatch<ControlPoint, 4>patch,
                                float2 UV:SV_DomainLocation)
            {
                float4 a = patch[0].vertex;
                float4 b = patch[1].vertex;
                float4 c = patch[2].vertex;
                float4 d = patch[3].vertex;

                float4 v0 = lerp(a,b,UV.x);
                float4 v1 = lerp(d,c,UV.x);
                float4 vFinal = lerp(v0,v1,UV.y);

                float2 uv0 = lerp(patch[0].uv, patch[1].uv, UV.x); // interpolate UVs, not positions
                float2 uv1 = lerp(patch[3].uv, patch[2].uv, UV.x);
                float2 uvFinal = lerp(uv0, uv1, UV.y);

                float3 n0 = cross(a.xyz-b.xyz,a.xyz-d.xyz);
                float4 normal = float4(normalize(n0),1);

                float scale = 0.000005;
                float x = UV.x*10 - 5;
                float y = UV.y*10 - 5;
                //float height = scale * (pow(x,3) + pow(y,3));

                int i = int(a.x);
                int j = int(a.y);
                float height = params[i*16+j];

                appdata data;
                   data.vertex = vFinal + height * normal;
                   data.uv = uvFinal;

                return VertexProgram(data);
            }


            fixed4 FragmentProgram(v2f i) : SV_TARGET
            {
                //float4 col = tex2D(_MainTex, i.uv);
                fixed4 col = fixed4(0.5,0.8,0.4,1.0);
                return col;
            }
            ENDCG
        }
    }
}

On the C# side I create the ComputeBuffer and assign it to the material with SetBuffer("params", cb_params). However, the params buffer in the shader still seems to read as all zeros.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

using map_sense = RosSharp.RosBridgeClient.MessageTypes.MapSense;
using sensor = RosSharp.RosBridgeClient.MessageTypes.Sensor;
using RosSharp.RosBridgeClient;

[RequireComponent(typeof(RosConnector))]
public class PlanarRegionSubscriber : MonoBehaviour
{
    private RosConnector rosConnector;
    private Texture2D dtex;
    //private GameObject pointMap;
    public MeshRenderer renderer;

    private float[] paramData;
    public ComputeBuffer cb_params;



    // Start is called before the first frame update
    void Start()
    {
        rosConnector = GetComponent<RosConnector>();
        string subscription_id = rosConnector.RosSocket.Subscribe<map_sense.PlanarRegions>("/map/regions", RegionMsgHandler);
      
        cb_params = new ComputeBuffer(192, sizeof(float));
        renderer = GameObject.FindWithTag("QuadMap").GetComponent<MeshRenderer>();
        Debug.Log("Subscribed:" + subscription_id);
        renderer.material.SetBuffer("params", cb_params);

        paramData = new float[192];
        for(int i = 0; i<192; i++){
            paramData[i] = (float)i;
        }
        cb_params.SetData(paramData);
    }

    private void RegionMsgHandler(map_sense.PlanarRegions message)
    {
        Debug.Log(message.data.Length);
        renderer.material.SetBuffer("params", cb_params);
        for(int i = 0; i<192; i++){
            paramData[i] = (float)i;
        }
        cb_params.SetData(paramData);

        // ImageConversion.LoadImage(dtex, msgData);
        // renderer.material.SetTexture("_MainTex", dtex);

    }

    void OnDestroy() {
        cb_params.Release(); // compute buffers must be released explicitly, or they leak
    }


}

@bhavyanshmishra usually compute buffers are filled from compute shaders, to avoid a copy between CPU-visible and GPU-visible memory. This speeds things up quite a bit 🙂

To debug this, I would first make sure a simpler case works, and then write a more complex shader.
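One way to follow that advice, as a sketch: bind a one-element buffer holding a known constant to a trivial unlit shader whose fragment program outputs params[0] in the red channel. If the mesh renders red, the C#-side binding works and the problem is in the tessellation stages; if it renders black, the binding itself is failing. (The component and shader sketched in the comments are hypothetical.)

```csharp
using UnityEngine;

// Minimal binding test: write a known constant into a one-element
// ComputeBuffer and bind it under the same "params" name.
public class ParamsBindingTest : MonoBehaviour
{
    ComputeBuffer cb;

    void Start()
    {
        cb = new ComputeBuffer(1, sizeof(float));
        cb.SetData(new float[] { 1f });
        // Fragment shader side (hypothetical):
        //   StructuredBuffer<float> params;
        //   fixed4 frag(v2f i) : SV_Target { return fixed4(params[0], 0, 0, 1); }
        GetComponent<MeshRenderer>().material.SetBuffer("params", cb);
    }

    void OnDestroy()
    {
        cb.Release();
    }
}
```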

It works on those APIs as well. Please check if you’re using it correctly.

What can be the fallback for passing arbitrary data to a vertex shader if SSBOs are not supported?

It depends on the amount of data you need to pass.
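For small amounts (up to a few hundred float4s), one common fallback is a plain uniform array set via Material.SetVectorArray, which needs no SSBO support at all; for larger data, a texture sampled in the vertex shader is the usual route. A sketch of the first option (the _Params name and array size are assumptions):

```csharp
using UnityEngine;

public class UniformArrayFallback : MonoBehaviour
{
    // Shader side (hypothetical): float4 _Params[64];
    // Note: Unity bakes the array size into the material property on
    // first use, so always set the full-sized array.
    readonly Vector4[] data = new Vector4[64];

    void Update()
    {
        for (int i = 0; i < data.Length; i++)
            data[i] = new Vector4(Random.value, Random.value, Random.value, 1f);
        GetComponent<MeshRenderer>().material.SetVectorArray("_Params", data);
    }
}
```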