I'm running into something strange with my shader .hlsl file, which I use inside a Custom Function node of ShaderGraph.
I'm still learning, and this is my first time using an .hlsl file inside a Custom Function node of ShaderGraph.
I get this error, even when I use c0 or c1 or ._m03:
invalid subscript '_m03'
This is my code:
static const float3x3 r[6] = {
float3x3(1.0f, 0.0f, 0.0f, 0.0f, -1.192093E-07f, -1.0f, 0.0f, 1.0f, -1.192093E-07f), //Up
float3x3(-1.0f, 0.0f, 0.0f, 0.0f, -1.192093E-07f, 1.0f, 0.0f, 1.0f, -1.192093E-07f), //Down
float3x3(1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f), //Forward
float3x3(-1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, -1.0f), //Back
float3x3(-1.192093E-07f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, -1.0f, 0.0f, -1.192093E-07f), //Left
float3x3(-1.192093E-07f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, -1.192093E-07f) //Right
};
static const float3 DirectionVector[6] = {
float3(0, 0.5, 0), //Up
float3(0, -0.5, 0), //Down
float3(0, 0, -0.5), //Forward
float3(0, 0, 0.5), //Back
float3(-0.5, 0, 0), //Left
float3(0.5, 0, 0) //Right
};
void ConfigureProcedural () {
#if defined(UNITY_PROCEDURAL_INSTANCING_ENABLED)
    // Decode the packed instance index into grid coordinates and a face direction.
    int i = _Indexes[unity_InstanceID];
    int y = i / (128 * 128);
    int x = (i - y * 128 * 128) / 128;
    int z = i - y * 128 * 128 - x * 128;
    int d = (i % 6);
    float3 v = float3(x, y, z) + DirectionVector[d];
    float3x3 rot = r[d];
    // The next line is where the compiler reports: invalid subscript '_m03'
    float3x4 m = float3x4(rot._m00, rot._m01, rot._m02, rot._m03, rot._m10, rot._m11, rot._m12, rot._m13, rot._m20, v.x, v.y, v.z);
    float4x4 mx = float4x4(m._m00, m._m01, m._m02, m._m03, m._m10, m._m11, m._m12, m._m13, m._m20, m._m21, m._m22, m._m23, 0.0, 0.0, 0.0, 1.0);
    unity_ObjectToWorld = mx;
#endif
}
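From the error it looks like a float3x3 only has the subscripts _m00 through _m22 (three rows, three columns), so rot._m03 and rot._m13 simply don't exist. A rough sketch of what I think I'm trying to build (translation in the fourth column, rows written out explicitly) would be something like this, but I'm not sure it's the right approach:
float3x3 rot = r[d];
float4x4 mx = float4x4(
    float4(rot._m00, rot._m01, rot._m02, v.x),
    float4(rot._m10, rot._m11, rot._m12, v.y),
    float4(rot._m20, rot._m21, rot._m22, v.z),
    float4(0.0, 0.0, 0.0, 1.0)); // bottom row of an affine transform
unity_ObjectToWorld = mx;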
If I change it to this:
float3x4 m = _Matrices[unity_InstanceID];
unity_ObjectToWorld._m00_m01_m02_m03 = m._m00_m01_m02_m03;
unity_ObjectToWorld._m10_m11_m12_m13 = m._m10_m11_m12_m13;
unity_ObjectToWorld._m20_m21_m22_m23 = m._m20_m21_m22_m23;
unity_ObjectToWorld._m30_m31_m32_m33 = float4(0.0, 0.0, 0.0, 1.0);
it works fine.
_Matrices is a StructuredBuffer of float3x4.
_Indexes is a Buffer of ints.
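For clarity, the declarations in the .hlsl file look roughly like this:
StructuredBuffer<float3x4> _Matrices; // one 3x4 object-to-world matrix per instance
Buffer<int> _Indexes;                 // one packed index per instance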
If I get this working, I'll be sending much less data to the GPU through the buffers: a single int index is 4 bytes per instance instead of the 48 bytes of a full float3x4 matrix.
As for why I need this performance: I want to build my base structure as efficiently as possible, because I'll keep adding features that cost performance overall, for example more realistic shaders with moving grass, water and so on.
One more thing that isn't important for the moment:
In the future I want to change from
Buffer<int> _Indexes;
to
Buffer<short> _Indexes; // stride of 2
or
Buffer<byte> _Indexes; // stride of 1
This would make it possible to send even less data, because I only send values from 0 to 255, nothing more. I read that a Buffer is faster than a StructuredBuffer for this case, is that right? And what is the difference to a RWBuffer or RWStructuredBuffer?
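To make the question concrete, these are the declaration forms I mean (the names here are just placeholders). As far as I understand, the non-RW versions are read-only in the shader, while the RW versions can also be written from the shader:
Buffer<int> _TypedRead;                        // typed buffer, read-only
StructuredBuffer<float3x4> _StructRead;        // structured buffer, read-only
RWBuffer<int> _TypedReadWrite;                 // typed buffer, shader can also write
RWStructuredBuffer<float3x4> _StructReadWrite; // structured buffer, shader can also write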