Hello, I was wondering something. What are some of the theoretical advantages/disadvantages of rotating an object inside a shader versus rotating it through its transform component?
We are multiplying our vertices by the model matrix anyway. Is there some big overhead to modifying the transform of an object that would make it worthwhile to do the rotation inside the shader itself?
If you have 100,000 rotating objects (like asteroids in a space game or something), you may wish to instance them, do the update via ECS/DOTS, or pass the matrices in a compute buffer and generate the geometry on the GPU. The reasons to do that are draw calls, memory layout, and so on; the actual math is quite trivial wherever it happens.
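For reference, the compute-buffer route looks roughly like this in Unity C#. This is an untested sketch: the buffer name `_ObjectToWorld` is made up, the material is assumed to use a shader set up for procedural instancing, and the HLSL side that reads the buffer by instance ID isn't shown.

```csharp
using UnityEngine;

public class AsteroidField : MonoBehaviour
{
    public Mesh mesh;
    public Material instancedMaterial; // assumed to support procedural instancing
    public int count = 100000;

    ComputeBuffer matrixBuffer;

    void Start()
    {
        var matrices = new Matrix4x4[count];
        for (int i = 0; i < count; i++)
        {
            matrices[i] = Matrix4x4.TRS(
                Random.insideUnitSphere * 500f, // position
                Random.rotation,                // rotation
                Vector3.one);                   // scale
        }

        // One upload of all the matrices; no GameObjects, no per-object draw calls.
        matrixBuffer = new ComputeBuffer(count, 16 * sizeof(float));
        matrixBuffer.SetData(matrices);

        // "_ObjectToWorld" is a hypothetical StructuredBuffer<float4x4> property
        // that the instanced shader would index with its instance ID.
        instancedMaterial.SetBuffer("_ObjectToWorld", matrixBuffer);
    }

    void Update()
    {
        // A single instanced draw for the whole field.
        Graphics.DrawMeshInstancedProcedural(
            mesh, 0, instancedMaterial,
            new Bounds(Vector3.zero, Vector3.one * 1000f), count);
    }

    void OnDestroy() => matrixBuffer?.Release();
}
```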
You are correct that, technically, you're always rotating the vertices in the shader either way.
The difference is whether you calculate the matrix once on the CPU, or on the GPU for every single vertex, for every shader pass, for every frame. Depending on the shader, the transform matrix may also be needed in the fragment shader, which means the fragment shader also has to recompute the matrix for every pixel, every frame.
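To make the contrast concrete, here's a rough C# sketch of the "once on the CPU" side. The property names `_RotationMatrix` and `_RotationAngle` are made up for this example, and the matching shader code isn't shown.

```csharp
using UnityEngine;

public class RotationUploader : MonoBehaviour
{
    public Material material;
    public float degreesPerSecond = 45f;

    static readonly int MatrixId = Shader.PropertyToID("_RotationMatrix");
    static readonly int AngleId  = Shader.PropertyToID("_RotationAngle");

    float angle;

    void Update()
    {
        angle += degreesPerSecond * Time.deltaTime;

        // Option A: build the rotation matrix once per frame on the CPU and
        // upload it; every vertex (and fragment, if needed) just multiplies by it.
        Matrix4x4 rotation = Matrix4x4.Rotate(Quaternion.Euler(0f, angle, 0f));
        material.SetMatrix(MatrixId, rotation);

        // Option B: upload only the angle and let the shader rebuild the
        // sin/cos rotation for every vertex, every pass, every frame.
        material.SetFloat(AngleId, angle * Mathf.Deg2Rad);
    }
}
```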
If you're modifying the rotation of a game object's transform, you're paying the cost of the game object itself existing, the cost of updating the transform on the CPU, and the cost of transferring the updated transform matrix to the GPU when it changes.
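In code, that path is just the usual thing (sketch only); the engine recomputes the object's local-to-world matrix and uploads it as the model matrix for you when the object is drawn:

```csharp
using UnityEngine;

public class SpinObject : MonoBehaviour
{
    public float degreesPerSecond = 45f;

    void Update()
    {
        // CPU cost: one transform update per object per frame; the resulting
        // matrix is then sent to the GPU as part of the normal draw.
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f, Space.Self);
    }
}
```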
As for which is better… GPUs are ridiculously fast at doing math, but not that good at keeping track of state. CPUs aren't as fast, but they're probably still faster than you think, and a transform matrix isn't really that much data.
For a single object, it basically doesn’t matter.
Like @burningmime said, for tens or hundreds of thousands of objects it starts to matter a bit, especially when it comes to game objects. But as for whether it's better to do it on the CPU or on the GPU, that will depend on your needs and your specific CPU/GPU combination.