I’m trying to work out why removing the cameras would have a negative effect on learning. Observations are collected from raycasts. The agent is not using the camera for learning (there is no camera sensor component). No agent scripts reference the camera, and there are no console errors.
It’s hard to tell since we don’t have access to your environment, but based on your description, the camera must be involved in training in some way if removing it affects training performance. Can you list all the sensors and observations you’re using? Or can you reproduce the same issue in one of our example environments?
Another possible reason is that the extra camera affects the framerate, and that affects the training. If that’s the case, you’ll need to figure out what in your scene is depending on the framerate and fix that.
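To illustrate what framerate-dependent logic looks like (a hypothetical sketch, not code from the poster's project — `rb`, `moveDir`, and `speed` are placeholder names for a Rigidbody-based character):

```csharp
// Framerate-dependent: this force is applied once per rendered frame,
// so behaviour changes whenever the framerate changes.
void Update()
{
    rb.AddForce(moveDir * speed);
}

// Framerate-independent: physics work moved into FixedUpdate runs at
// the fixed timestep regardless of the render rate.
void FixedUpdate()
{
    rb.AddForce(moveDir * speed);
}
```

Anything driven per-frame in `Update` (forces, counters, timers without `Time.deltaTime`) is a candidate for this kind of issue.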
Thanks for your feedback, I’m really scratching my head with this one! Observations include standard sensors:
and a single float observation in CollectObservations(VectorSensor sensor). The behavior details:
I wondered if there could be a camera sensor on my character somewhere, but there are no child objects beneath the behavior parameters object. I’ve got a fairly complex character with animation rigging, etc., so somewhere in the model the camera must be affecting something. Yet I can remove the camera during inference without any issue. Strange!
I’ll try adding the camera to the examples as you suggested and see if I can recreate the issue outside of my project.
Thanks, yes, framerate was something I was considering. I’d expect the additional cameras to slow the framerate, though perhaps that somehow helps training. During training I’m using a time-scale of 4; I could re-run training with the time-scale reduced even further and see whether the learning rates differ.
Given that you mentioned you have some complex animation stuff, it’s likely something framerate-related. One thing to note: during training, mlagents-learn sets captureFramerate to 60 by default, and that might affect frame updates and thus affect training.
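For context, the setting in question is Unity’s `Time.captureFramerate` (a sketch of its effect, not the actual mlagents-learn code):

```csharp
// When captureFramerate is nonzero, each frame advances game time by
// exactly 1/captureFramerate seconds, regardless of wall-clock time.
// mlagents-learn sets this during training (60 by default), which
// decouples simulated time from the real rendering rate.
Time.captureFramerate = 60;
```

So an in-game FPS counter can report a different number than the rate at which simulated time is actually advancing.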
My FPS counter reported a steady 120 frames per second while training. Since captureFramerate is 60, I disabled v-sync and set Application.targetFrameRate to 60. Re-running training still shows similar differences, despite a steady framerate of 60 during training:
So, it’s interesting. The work-around is easy: I can enable the cameras while training and turn them off for inference. But I’d be curious to know what’s causing the difference. My gut feeling is that it’s framerate-related.
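That work-around can be automated with a small sketch like the one below (assuming the cameras are assigned in the inspector; `CameraToggle` and `trainingCameras` are placeholder names). It relies on `Academy.Instance.IsCommunicatorOn`, which is true when an external trainer is connected, i.e. during mlagents-learn training, and false during inference:

```csharp
using Unity.MLAgents;
using UnityEngine;

public class CameraToggle : MonoBehaviour
{
    [SerializeField] Camera[] trainingCameras;  // assign in the inspector

    void Start()
    {
        // Enable the cameras only while a trainer is connected,
        // so inference runs with them switched off.
        bool training = Academy.Instance.IsCommunicatorOn;
        foreach (var cam in trainingCameras)
            cam.enabled = training;
    }
}
```

This keeps the scene identical between training and inference except for the cameras themselves, which also makes it easier to confirm whether the cameras are really the variable that matters.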