Hi,
I’d like to pass a lot of matrices to a material. Material.SetMatrixArray seems the obvious choice, but I’m curious about size limitations. I couldn’t find any definitive info, just that the Material.Set*Array methods might be limited to 1023 elements according to this 2018 thread.
Can somebody confirm whether this is still the case, and whether it applies to matrix arrays as well?
I believe I have two options:
Switch to using ComputeBuffers, which aren’t supported on all platforms
For more than 1023 matrices, use multiple matrix arrays => I’d like to know how many matrices I can pass in total that way. Is there a total memory “budget” for all uniforms in a shader? (See the sketch below for what I mean by multiple arrays.)
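For option 2, this is roughly what I have in mind (a minimal, untested sketch; the _Matrices0/_Matrices1 property names are just placeholders I made up, with the shader declaring one fixed-size float4x4 array per chunk):

```csharp
using UnityEngine;

public class MatrixChunkUploader : MonoBehaviour
{
    // Unity fixes an array property's size the first time it is set,
    // so every upload passes a full-sized chunk.
    const int ChunkSize = 1023;

    public Material material;

    public void Upload(Matrix4x4[] matrices)
    {
        var chunk = new Matrix4x4[ChunkSize];
        for (int start = 0, index = 0; start < matrices.Length; start += ChunkSize, index++)
        {
            int count = Mathf.Min(ChunkSize, matrices.Length - start);
            System.Array.Copy(matrices, start, chunk, 0, count);
            System.Array.Clear(chunk, count, ChunkSize - count); // zero the unused tail
            material.SetMatrixArray("_Matrices" + index, chunk); // placeholder names
        }
    }
}
```

Whether all those chunks together still fit into the uniform budget is exactly what I’m unsure about.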
Hi!
A single constant buffer is normally limited to 65536 bytes. Each matrix (I assume you need 4x4, float) takes 4x4x4 bytes = 64 bytes, so a single constant buffer can hold up to 1024 matrices like that.
Some devices have lower limits (16384 bytes, for example). There’s also a hardware-dependent limit on the number of constant buffers used.
What exactly would you like to implement?
Thank you!
I got a warning about exceeding that limit when I tried an array size of 1023 (the other uniforms in the shader pushed it over), but I’ve since moved away from using matrices directly.
Since the matrices all describe affine transformations between 2d points (translation, rotation, scaling and shearing), I only need 6 of the 16 values a float4x4 would store. At 6x4 = 24 bytes per “matrix” that doesn’t sound too bad, but I’m thinking about writing the values into a texture instead: distributing the six values across two consecutive pixels of an RGBA Half texture (RGBA RG / BA RGBA, alternating), assuming the 16 bits per channel of RGBA Half offer enough precision compared to the 32-bit floats in a float4x4.
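Roughly what I have in mind for the packing (an untested sketch; all names are my own, and SetPixelData needs a reasonably recent Unity version):

```csharp
using UnityEngine;

public static class TransformPacker
{
    // Pack 6 floats per 2D affine transform tightly into an RGBAHalf texture:
    // transform 0 -> pixel0.RGBA + pixel1.RG, transform 1 -> pixel1.BA + pixel2.RGBA,
    // i.e. two transforms per three pixels. Single-row texture for simplicity.
    public static Texture2D Pack(float[,] transforms) // shape: [count, 6]
    {
        int count = transforms.GetLength(0);
        int pixels = (count * 6 + 3) / 4;        // round up to whole RGBA texels
        var halves = new ushort[pixels * 4];

        int c = 0;
        for (int t = 0; t < count; t++)
            for (int i = 0; i < 6; i++)
                halves[c++] = Mathf.FloatToHalf(transforms[t, i]); // float -> 16-bit half

        // linear: true, so no sRGB conversion mangles the raw values.
        var tex = new Texture2D(pixels, 1, TextureFormat.RGBAHalf, false, true)
        {
            filterMode = FilterMode.Point,       // never interpolate packed data
            wrapMode = TextureWrapMode.Clamp
        };
        tex.SetPixelData(halves, 0);
        tex.Apply(false, false);
        return tex;
    }
}
```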
I’m figuring things out as I go, but I think even if I end up needing 14 floats instead of 6 to store some extra data, sampling 4 RGBA Half texels per fragment should be acceptable, given how common it is for PBR shaders to sample multiple textures for albedo, normals, metallic/smoothness/AO, emission, details etc.
Am I right that this approach runs into less restrictive limits (e.g. max texture resolution) even on weaker hardware, aside from texture format support (which is why I’m eyeing RGBA Half)?
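For what it’s worth, these limits can be queried at runtime; a quick check I’d run on target devices:

```csharp
using UnityEngine;

public class CapabilityCheck : MonoBehaviour
{
    void Start()
    {
        // Per-axis texture resolution limit on this device.
        Debug.Log($"maxTextureSize: {SystemInfo.maxTextureSize}");
        // Format support for the packed-data texture.
        Debug.Log($"RGBAHalf: {SystemInfo.SupportsTextureFormat(TextureFormat.RGBAHalf)}");
        // Relevant to the ComputeBuffer alternative from my first post.
        Debug.Log($"compute shaders: {SystemInfo.supportsComputeShaders}");
    }
}
```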
Now, if anybody has a good resource on how to encode floats from C# into separate channels of a texture and correctly reconstruct them in the fragment shader, please share! I remember quite a lot of trial and error from similar endeavors in the past, given all the things that can introduce errors (rounding, color space conversion, filtering instead of point sampling, etc.).
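One thing I’ll probably do first is check the float -> half -> float round trip on the CPU, so any shader debugging only has to deal with sampling issues (point filtering, linear texture, no mips), not precision ones. A minimal sketch:

```csharp
using UnityEngine;

public static class HalfRoundTrip
{
    // Worst-case absolute error introduced by storing floats as 16-bit halves,
    // measured entirely on the CPU before any texture is involved.
    public static float MaxError(float[] values)
    {
        float worst = 0f;
        foreach (float v in values)
        {
            float back = Mathf.HalfToFloat(Mathf.FloatToHalf(v));
            worst = Mathf.Max(worst, Mathf.Abs(back - v));
        }
        return worst;
    }
}
```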