In my script I want to use a gradient descent method, which requires M×N matrix multiplication as well as matrix inverse/transpose. However, it seems that Unity only provides matrices up to 4x4. Right now I can only store my data in a plain array and perform these operations element by element, and the computational cost is extremely high, not to mention that I will iterate many times. Is there any way to calculate this efficiently?
Thx!
The Unity.Mathematics package has more matrix dimensions: 2x2, 3x3, 3x4, 4x3, 4x2, etc., and the math API (also part of the Mathematics package) supports them with various operations.
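For the small fixed-size types, the usage looks roughly like this (a minimal sketch; the component script and the example values are just for illustration, but `float3x3`, `math.mul`, `math.transpose` and `math.inverse` are real Unity.Mathematics API):

```csharp
using Unity.Mathematics;
using UnityEngine;

public class SmallMatrixDemo : MonoBehaviour
{
    void Start()
    {
        // 3x3 example; similar types exist for 2x2, 4x4 and
        // non-square shapes such as float3x4 / float4x3.
        var a = new float3x3(1, 2, 3,
                             0, 1, 4,
                             5, 6, 0);
        float3x3 at  = math.transpose(a);
        float3x3 inv = math.inverse(a);
        float3x3 id  = math.mul(a, inv); // ~identity, up to float precision
        Debug.Log(id);
    }
}
```

These types are SIMD-friendly, but as you note below they top out at dimension 4 per axis.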
My requirement is that the dimension N > 4 and/or M > 4.
This is a generic programming issue, and Wikipedia even has a dedicated page for it. Matrix multiplication is an O(n³) problem (for square n×n matrices). As the article shows, there are various algorithms which can reduce the complexity “slightly”, but when dealing with really large matrices there is in the end not that much you can do. Modern hardware has many different forms of low-level vector math support (SIMD), which specialized libraries often use for exactly this kind of work. Apart from that, since each entry of the product is essentially just a dot product between a row of the first matrix and a column of the second, for really large matrices the work can be distributed across multiple threads.
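The row-by-column structure makes the outer loop trivially parallel. A minimal sketch using `Parallel.For` from the standard library (the class and method names here are just illustrative, not any Unity API):

```csharp
using System;
using System.Threading.Tasks;

public class MatMulDemo
{
    // Naive O(n^3) multiply of an (m x k) by a (k x n) matrix,
    // with the outer loop over rows distributed across threads.
    public static float[,] Multiply(float[,] a, float[,] b)
    {
        int m = a.GetLength(0), k = a.GetLength(1), n = b.GetLength(1);
        if (k != b.GetLength(0))
            throw new ArgumentException("Inner dimensions must match.");
        var c = new float[m, n];
        Parallel.For(0, m, i =>
        {
            for (int j = 0; j < n; j++)
            {
                float sum = 0f;
                for (int p = 0; p < k; p++)
                    sum += a[i, p] * b[p, j]; // dot product of row i and column j
                c[i, j] = sum;
            }
        });
        return c;
    }

    static void Main()
    {
        var a = new float[,] { { 1, 2 }, { 3, 4 } };
        var b = new float[,] { { 5, 6 }, { 7, 8 } };
        var c = Multiply(a, b);
        Console.WriteLine($"{c[0, 0]} {c[0, 1]} {c[1, 0]} {c[1, 1]}"); // prints "19 22 43 50"
    }
}
```

Each row of the result is independent, so there is no locking needed; for matrices much smaller than ~100×100 the threading overhead may outweigh the gain, so measure first.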
How large are your matrices? n<10? n<30? n<100? n>100?
GPUs are very good at processing floating point numbers, as that’s their main purpose, so using compute shaders may also be an option.
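A compute-shader version would assign one thread per output element. This is only a rough sketch, assuming row-major buffers; the kernel name, buffer names and thread-group size are all illustrative:

```
// MatMul.compute -- illustrative kernel, not a tuned implementation.
#pragma kernel MatMul

StructuredBuffer<float> A;      // M x K, row-major
StructuredBuffer<float> B;      // K x N, row-major
RWStructuredBuffer<float> C;    // M x N, row-major
int M, K, N;

[numthreads(8, 8, 1)]
void MatMul(uint3 id : SV_DispatchThreadID)
{
    if (id.x >= (uint)N || id.y >= (uint)M) return;
    float sum = 0;
    for (int p = 0; p < K; p++)
        sum += A[id.y * K + p] * B[p * N + id.x];
    C[id.y * N + id.x] = sum;   // one thread computes one output element
}
```

On the C# side you would bind the buffers with `ComputeShader.SetBuffer`, set the dimensions with `SetInt`, and launch with `Dispatch(kernel, ceil(N/8), ceil(M/8), 1)`. Note that reading the result back to the CPU has latency, so the GPU route pays off mainly when the matrices are large or the data can stay on the GPU between iterations.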
Thank you for the detailed answer. After some investigation, I have decided to try both the multithreading approach and the compute shader approach to see which one is more suitable for my problem. Thx again!