EDIT:
I found the reason, but I couldn’t find any reference to this in the documentation on GitHub. If any Unity dev sees this, could you please shed some light or add more info in the docs?
So I’m changing the title tag to “feedback”.
ORIGINAL:
I’m training with a server build (on my PC) with the --no-graphics flag, but my GPU is still being utilized at over 90%.
I don’t have visual observations.
I didn’t set ML Agents to train on the GPU, and my agents are set up to use Burst.
Is this normal?
Is ML Agents training on the GPU automatically?
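A quick sanity check, using nothing but stock PyTorch in the same Python environment as mlagents-learn, can at least confirm whether torch sees the GPU at all (this is plain torch, nothing ML-Agents-specific):

```python
import torch

# Plain-torch diagnostic: does the Python side even see a CUDA GPU?
if torch.cuda.is_available():
    print("CUDA available:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device visible; torch would fall back to CPU")
```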
Ah, torch is the library that’s used for running the neural network. You probably want that to be using the GPU if you have one: it should run faster.
I’m not sure what information you are looking for here, but e.g. the PyTorch doc on devices is here: Tensor Attributes — PyTorch 2.4 documentation. I think that setting it to null to make it use the GPU is probably a Unity thing. (Normally torch defaults to the CPU, I think, but defaulting to the GPU definitely makes sense for best performance.)
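For what it’s worth, here is a minimal sketch of the stock-torch behaviour that doc describes (tensors are created on the CPU and only move to the GPU when told to; the ML-Agents wrapper may choose a different default):

```python
import torch

# In stock PyTorch, tensors live on the CPU unless a device is given.
x = torch.randn(3, 4)
print(x.device)            # device(type='cpu')

if torch.cuda.is_available():
    y = x.to("cuda")       # explicit move to the GPU
    print(y.device)        # device(type='cuda', index=0)
```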
It depends on the size of your network. But yeah, with a few rays as input and a small stack of Linear layers for the network, the GPU is not going to change much.
If you start feeding images into your network and start using convolutional layers, then the GPU becomes more useful (a rough sketch follows below).
Not that you will get better results using images - in fact, everything will just learn much more slowly - but it depends on what you are trying to do.
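To make the distinction concrete, here is an illustrative sketch (the layer sizes are made up, not what ML-Agents actually builds): a ray-based agent ends up with a tiny stack of Linear layers, while image observations need a convolutional encoder, and the latter is where the GPU starts to pay off.

```python
import torch.nn as nn

# Hypothetical ray-based policy: a few small matrix multiplies,
# cheap enough that a CPU handles them easily.
ray_policy = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 4),
)

# Hypothetical image encoder: convolutions over e.g. 84x84 pixel
# observations are far heavier per step, which is where a GPU helps.
image_encoder = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
)
```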
I’m updating this again because I ran some tests to see whether training is faster with the CPU or the GPU setting.
I noticed a significant increase in performance with the GPU; the CPU setting was much slower to update the policy (I didn’t save statistics).
Just for reference, the agent network is 400 units with 2 hidden layers.
It has a total of 649 observation inputs,
and 1 bool & 2 Vector3 outputs.
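For anyone curious, a rough way to reproduce that comparison outside of ML-Agents: the sketch below builds a network of roughly the size described (649 inputs, 2 hidden layers of 400 units; the 7-unit output head is a guess at 1 bool + 2 Vector3) and times training updates on CPU vs GPU. It is plain PyTorch, not the actual ML-Agents model.

```python
import time
import torch
import torch.nn as nn

def make_net():
    # Roughly the shape described above; ML-Agents builds its own model,
    # this is only to get a feel for CPU vs GPU at this size.
    return nn.Sequential(
        nn.Linear(649, 400), nn.ReLU(),
        nn.Linear(400, 400), nn.ReLU(),
        nn.Linear(400, 7),   # guessed head: 1 bool + 2 Vector3 values
    )

def time_updates(device, steps=200, batch=1024):
    net = make_net().to(device)
    opt = torch.optim.Adam(net.parameters())
    x = torch.randn(batch, 649, device=device)
    target = torch.randn(batch, 7, device=device)
    start = time.perf_counter()
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(x) - target) ** 2).mean()
        loss.backward()
        opt.step()
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work before timing
    return time.perf_counter() - start

print("cpu :", time_updates("cpu"))
if torch.cuda.is_available():
    print("cuda:", time_updates("cuda"))
```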