Does Unity do vector calculations on the cpu or the gpu?
CPU.
–Eric
I guess that leads to my next question. Can the GPU be set to do the vector calc? Also are there any benchmark docs on max vector calculations in Unity?
I’m pretty sure you can’t fetch the results of calculations done on the GPU like that. That’s one of the reasons debugging shaders always comes down to outputting values as colors…
It would be dead slow to do it on the GPU. The overhead of fetching data back like that is so large that even a 1 GHz Pentium III could perform more distinct vector ops (the kind you do through the Matrix class) than a GPU approach could.
Also, today’s CPUs perform them so fast that you need massive matrices before the GPU pays off. If I recall right, it took at least 64x64 matrices or so before you beat well-written MMX/SSE-optimized math libraries at this stuff.
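To put rough numbers on that: even interpreted Python on a single core can push hundreds of thousands of simple 3-component vector ops per second, so a compiled engine with SSE-optimized math has enormous headroom before a GPU round trip is worth it. A quick timing sketch (the vector functions here are my own toy implementations, and the throughput will vary by machine):

```python
import time

def vec_add(a, b):
    # component-wise addition of two 3-component vectors
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def vec_dot(a, b):
    # dot product of two 3-component vectors
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

N = 100_000
a = (1.0, 2.0, 3.0)
b = (4.0, 5.0, 6.0)

start = time.perf_counter()
for _ in range(N):
    c = vec_add(a, b)
    d = vec_dot(c, b)
elapsed = time.perf_counter() - start

print(f"{N} add+dot pairs in {elapsed:.3f}s "
      f"(~{N / elapsed:,.0f} pairs per second)")
```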
For some reason I thought a GPU would be faster at vector calculations. As for benchmarks, has anyone had Unity running 30,000 vector calculations per second or more? What kind of processing power is needed?
If I’m not mistaken, the problem here is that the CPU and GPU don’t run in sync. So in order to retrieve data from the GPU, your software would have to wait for the GPU (or the other way around), transfer the data, and then both go back to what they were doing. I can imagine that being quite a bottleneck.
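That bottleneck can be sketched with a toy cost model. The constants below are made-up, order-of-magnitude assumptions (not measurements): the GPU path pays a fixed upload/stall/readback cost every time, so it only wins once the batch of work is large enough to amortize it.

```python
# Toy cost model for CPU-vs-GPU vector math. All constants are
# illustrative assumptions, not benchmark results.

CPU_NS_PER_OP = 10        # assumed cost of one vector op on the CPU
GPU_NS_PER_OP = 0.5       # assumed per-op cost once data is on the GPU
ROUND_TRIP_NS = 200_000   # assumed upload + sync stall + readback overhead

def cpu_time(n_ops):
    # CPU just does the work, no transfer penalty
    return n_ops * CPU_NS_PER_OP

def gpu_time(n_ops):
    # GPU pays the fixed round-trip cost before its speed helps
    return ROUND_TRIP_NS + n_ops * GPU_NS_PER_OP

for n in (1_000, 10_000, 100_000):
    winner = "CPU" if cpu_time(n) < gpu_time(n) else "GPU"
    print(f"{n:>7} ops: CPU {cpu_time(n):>9.0f} ns, "
          f"GPU {gpu_time(n):>9.0f} ns -> {winner}")
```

With these particular numbers the break-even point lands around 21,000 ops per batch; below that the fixed stall dominates and the CPU wins every time.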
It depends on whether the volume of data is both large enough and highly conducive to parallelization.
One of the big things these days for the geek crowd, beyond CUDA, is using the GPU to accelerate generation of 3D Mandelbulbs. They use shaders to do it, running the same iterative vector calculation across the whole dataset.
In those cases it turns what would be an overnight process on a cpu into near instant gratification.
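The Mandelbulb case is the ideal shape of problem for a GPU: every point runs the same iteration completely independently of its neighbors. A minimal CPU version of that escape-time loop (the 2D Mandelbrot for brevity) shows the structure; in a shader, each grid point would simply be its own thread:

```python
def escape_time(c, max_iter=50):
    # Same iteration z = z*z + c at every point, with no dependence
    # between points -- which is exactly why shaders chew through it.
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i          # diverged after i iterations
    return max_iter           # treated as inside the set

# Tiny ASCII render of the set; on a GPU this double loop disappears
# into one thread per point.
for im in range(-10, 11):
    row = ""
    for re in range(-20, 11):
        c = complex(re / 10.0, im / 10.0)
        row += "#" if escape_time(c) == 50 else "."
    print(row)
```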
Whatever happened to AltiVec and MMX anyhow?