[SOLVED] How can I initialize a structured buffer without external scripts?

So, I’m sure many of you are familiar with VRChat by now. I’ve been working on a fur shader intended to work on their platform, but they have this awful restriction that prevents anyone from using even the most basic custom scripts. Originally I had a neat shader that used a script for physics, as you would expect. Since finding out that scripts aren’t allowed, I’ve been working very hard to get my shader to work without one. I’m convinced there has to be some way to make it work. The main problem is that I need somewhere to store per-vertex data such as past position and velocity. The solution was to use an RWStructuredBuffer, until I found out that it also needs a script to initialize it! I’m utterly stumped now, and mostly want to cry from how many hours I’ve sunk into this thing.

Regardless, I will post some of my code, and if anyone has ANY ideas for how I could proceed I would be oh so grateful.

From variables file:

    float3 _Gravity;
    float _ForceMult;
    float _ForceDamp;
    float3 _WindForce;
    float _WindMin;
    float _WindMax;
    float _WindDamp;
    float _PrevTime;
    float _OurTimeDelta;
    float _UpdateRate;
    float _MaxForceFinal;
    struct vertdata {
        float3 position;
        float3 force;
        float computeTime;
    };
    #ifdef SHADER_API_D3D11
    uniform RWStructuredBuffer<vertdata> _PosBuffer : register(u0);
    #endif

From the vertex file:

        float wDelta = _Time.y - _PrevTime;
        if(wDelta > 1 / _UpdateRate){
            float windMagnitude = (_WindMax - _WindMin) * 2.0;
            float3 wind = float3(_WindMax,_WindMax,_WindMax) - hash33(_Time.yzw)*windMagnitude;
            _WindForce = lerp(_WindForce,wind,wDelta * _WindDamp);
            _PrevTime = _Time.y;
        }

        float4 c = tex2Dlod (_ControlTex, float4(v.texcoord.xy,0,0));
        float3 startdisp = float3(0,0,0);
        float hairLength = _MaxHairLength * CURRENTLAYER;
        hairLength *= _UseHeightMap ? c.r : 1;
        float movementFactor = pow(CURRENTLAYER, 2);
        #ifdef SHADER_API_D3D11
        vertdata inp = _PosBuffer[v.vid];
        float3 force = inp.force;
        float tDelta = _Time.y - inp.computeTime;
        if(tDelta > 1 / _UpdateRate){
            vertdata outp;
            float4 pos = mul(unity_ObjectToWorld, v.vertex);
            outp.position = pos.xyz / pos.w;

            float3 mforce = outp.position - inp.position;
            //float3 mforce = 0;
            mforce /= tDelta;
            mforce *= _ForceMult;
            //mforce += _WindForce;
            float mmult = min(1,MaxForce*MaxForce / (mforce.x*mforce.x + mforce.y*mforce.y + mforce.z*mforce.z));
            mforce *= mmult;
                                                
            float3 totaldisp = mforce;
            float3 noisecoords = _Time.www / 4 + v.vertex.xyz;
            float3 noisecoords2 = _Time.www / 4 + v.vertex.xyz * 2;
            float3 randNums = hash33(_Time.www / 4 + v.vertex.xyz * 16) * 2 + float3(-1,-1,-1);
            randNums *= randFac;
            float3 finaldisp;
            finaldisp.x = totaldisp.x*(1 + NoiseFac*(snoise(noisecoords) + snoise(noisecoords2) * FractFac + randNums.x));
            finaldisp.y = totaldisp.y*(1 + NoiseFac*(snoise(noisecoords + float3(712,712,712)) + snoise(noisecoords2 + float3(712,712,712)) * FractFac + randNums.y));
            finaldisp.z = totaldisp.z*(1 + NoiseFac*(snoise(noisecoords + float3(1916,1916,1916)) + snoise(noisecoords2 + float3(1916,1916,1916)) * FractFac + randNums.z));

            finaldisp += _Gravity;
            force = lerp(force,finaldisp,tDelta * _ForceDamp);
            force *= min(1,_MaxForceFinal*_MaxForceFinal / (force.x*force.x + force.y*force.y + force.z*force.z));

            outp.force = force;
            outp.computeTime = _Time.y;
            _PosBuffer[v.vid] = outp;
        }
        #else
        float3 force = float3(0,-3,0);
        #endif

        if (_UseBiasMap) {
            fixed skinLength = _MaxHairLength - (_MaxHairLength * c.b);
            v.vertex.xyz -= v.normal * skinLength;
        }

        v.vertex.xyz += (v.normal * hairLength);
        float4x4 transmatrix = unity_WorldToObject;
        transmatrix[0].w=0;
        transmatrix[1].w=0;
        transmatrix[2].w=0;
        float4 forcedisp = mul(transmatrix, float4(force,1.0f));
        v.vertex.xyz += (forcedisp.xyz * v.vertex.w / forcedisp.w) * movementFactor * hairLength;

There’s no purely shader-side way to set up data that persists between frames/passes; it all requires at least some initial driver-level binding.

There is a hacky trick you can use for VRChat though, which is to have a secondary camera that renders to a RenderTexture, but instead of outputting the actual render, you output your custom data to the RT. Then you sample (.Load()) a given pixel of this RT in your main shader. The data persists between frames, so you can read from the previous results.
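Very roughly, the two halves look something like this (a minimal sketch; _DataRT, fragWrite and readData are names I’m making up here, not from any particular project):

    // Writer: a material on a quad that fills the data camera's view. Each fragment
    // corresponds to one texel of the RenderTexture, and whatever the frag returns
    // is what gets stored there.
    float4 fragWrite (float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_Target
    {
        float3 someForce = float3(0, -3, 0);    // placeholder for your per-texel data
        return float4(someForce, _Time.y);      // e.g. xyz = force, w = time written
    }

    // Reader: next frame, the main shader fetches that texel back. Load() returns
    // the raw texel with no filtering, which is what you want for data.
    Texture2D<float4> _DataRT;

    float4 readData (uint2 texel)
    {
        return _DataRT.Load(int3(texel, 0));
    }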

A quick search brought me to this example of an interactable particle cloud using the technique, which you can dig through to learn the approach:

I’m sure there are other posts in the VRC community about using this approach too that you can find with some digging.

I’m going to look into this more in the morning, but from my initial gander I’m kind of confused as to how it even works, and a bit doubtful whether it could be implemented for my use case. Thank you so much for this, though! I doubt I would have found it on my own, and I have some hope I might be able to make it work.

The particles in their system are really no different from the vertices in your system. They use each pixel in the render texture as a float4 to store data in, so you have hundreds of thousands, if not millions, of indices to store vertex data into. You can use the SV_VertexID from your vertex pass as an index to a given pixel in the RT, either to read that vertex’s value from or to write to.
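For example, something like this (just a sketch; the 256 width is an arbitrary assumption, pick whatever size fits your vertex count):

    // Map a vertex ID to the texel that stores that vertex's data.
    #define DATA_RT_WIDTH 256

    uint2 VertexIdToTexel (uint vid)
    {
        return uint2(vid % DATA_RT_WIDTH, vid / DATA_RT_WIDTH);
    }

    // In the fur shader's vertex pass, with v.vid being the SV_VertexID you already have:
    //   float4 stored    = _DataRT.Load(int3(VertexIdToTexel(v.vid), 0));
    //   float3 prevForce = stored.xyz;
    //   float  prevTime  = stored.w;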

The Git repo is a bit confusing at a glance because they don’t explain the entire setup there, but they have a package in the release that shows more of it.

You could also pack all your struct data into a single float4 to make working with this buffer texture easier.

The float3 force doesn’t really need a full 32 bits of precision. You could easily get away with 10 bits, which would be 1024 levels of force (or even 9 bits for 512), and 22 bits for the local vertex position, which is still 4,194,304 values. So pack those together into the .xyz of a float4, and finally the computeTime float can just go into the .w of that float4.
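If you go that route, something along these lines could work (an untested sketch: it assumes a float-format RT sampled with point filtering and no blending so the exact 32-bit patterns survive, and some packed patterns decode as NaN/denormals, so no guarantees on every GPU):

    // Pack one channel: high 10 bits = force, low 22 bits = local position,
    // each remapped from [-max, max] into its integer range.
    float PackChannel (float force, float pos, float maxForce, float maxPos)
    {
        uint f = (uint)(saturate(force / maxForce * 0.5 + 0.5) * 1023.0);
        uint p = (uint)(saturate(pos   / maxPos   * 0.5 + 0.5) * 4194303.0);
        return asfloat((f << 22) | p);   // reinterpret the packed bits as a float
    }

    void UnpackChannel (float packed, float maxForce, float maxPos,
                        out float force, out float pos)
    {
        uint bits = asuint(packed);
        force = ((bits >> 22)      / 1023.0    * 2.0 - 1.0) * maxForce;
        pos   = ((bits & 0x3FFFFF) / 4194303.0 * 2.0 - 1.0) * maxPos;
    }

    // data.x/y/z = PackChannel(force.x, localPos.x, ...), etc., and data.w = computeTime.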


So, this idea has gotten me much closer to what I want, but not all the way there. I pretty much knew from the beginning that this idea on its own wasn’t going to work, because the shader you used as an example does its calculations in a separate shader rendered on a separate object; the object that needs the data never has to pass anything to the computation shader. I didn’t have that luxury from the beginning: I need to pass in information about my vertices, so I decided to try making a geometry shader that renders pixels into a RenderTexture to pass the data back. This has worked somewhat, but it has its own issue of the data sometimes mysteriously vanishing and resetting to 0, which I’ve posted a new thread about. I’m considering this request for help solved now, so thank you for your insights.
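Roughly, the shape of the idea is this (a simplified illustration rather than my actual shader; the RT size and semantics here are arbitrary):

    // One point per vertex, positioned in clip space so it lands exactly on that
    // vertex's texel in the data RenderTexture.
    #define DATA_RT_WIDTH  256
    #define DATA_RT_HEIGHT 256

    struct v2g { uint vid : TEXCOORD1; float4 data : TEXCOORD0; };
    struct g2f { float4 pos : SV_POSITION; float4 data : TEXCOORD0; };

    [maxvertexcount(1)]
    void geom (point v2g vIn[1], inout PointStream<g2f> stream)
    {
        uint vid = vIn[0].vid;
        // centre of this vertex's texel, remapped to clip space (-1..1)
        float2 texel = float2(vid % DATA_RT_WIDTH, vid / DATA_RT_WIDTH) + 0.5;
        float2 clip  = texel / float2(DATA_RT_WIDTH, DATA_RT_HEIGHT) * 2.0 - 1.0;

        g2f o;
        o.pos  = float4(clip, 0, 1);   // may need a Y flip depending on the platform
        o.data = vIn[0].data;          // whatever was computed for this vertex
        stream.Append(o);
    }

    float4 frag (g2f i) : SV_Target
    {
        return i.data;                 // written into that vertex's texel
    }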

Not sure what you mean. The first object outputs its values to the RT; those values can be whatever you want, including info about your vertices. The second object then renders and reads the data from that RT.

If you check the code of the shader you provided me as an example, the Mesh shader (the one that actually renders the particles) doesn’t provide any data to the computation shader.

The setup uses a second camera that has the Render Texture set as its Render Target. So even though it looks like the first shader (sh_PCloud_Main.shader) isn’t passing its data to anything, it actually is: the fragment output goes to that RT instead of the screen. You can then use that output data in your meshing shader; in their example that’s sh_PCloud_Mesh.shader, which reads the values from the RT (assigned to the _Buffer variable) to mesh each point.

I know sh_PCloud_Main.shader is passing data to the RT; it’s attached to one of the planes in front of a camera. My point is that I can’t necessarily use the same technique, because that technique doesn’t account for the need for input data, which is why I used a geometry shader to achieve a similar effect. Either way, I’m considering this particular issue solved anyway.