max_step is not working for --inference mode

Hi,
I have trained a model using the following command:

mlagents-learn config/ppo/unit.yaml --run-id=myid --env=cs_window/Build --force

In the config file (unit.yaml), max_step is set to 10000, so the training process stops at step 10000 and saves the .onnx file.
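For reference, here is roughly how that part of a trainer config looks. The behavior name (MyAgent) and the other hyperparameter values are placeholders, not taken from my setup, and note that the key is spelled max_steps in the ML-Agents trainer config schema; the exact layout may differ slightly between versions:

    behaviors:
      MyAgent:
        trainer_type: ppo
        hyperparameters:
          batch_size: 1024
          learning_rate: 3.0e-4
        max_steps: 10000     # training stops and the model is exported once this many steps are collected
        time_horizon: 64
        summary_freq: 1000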

So far so good!

But now I want to run inference with the trained model, so I used the following command:

mlagents-learn config/ppo/unit.yaml --run-id=myid --env=cs_window/Build --resume --inference

Again, max_step is set to 10000 in the unit.yaml config file, but the inference process does not stop at step 10000.

What is the problem? Why is max_step not working in inference mode?

Thanks

When does the inference process stop?
Note: A clever workaround would be to simply continue training with a learning rate of 0, since you don't (usually) need the policy to keep learning when running inference in RL.
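A rough sketch of that workaround, again with a placeholder behavior name: resume training with the learning rate set to 0, so the policy keeps acting without being updated and the max_steps limit still applies (I have not verified this on every ML-Agents version):

    behaviors:
      MyAgent:
        trainer_type: ppo
        hyperparameters:
          learning_rate: 0.0   # zero learning rate: no parameter updates, effectively inference
        max_steps: 10000       # the run still stops at this step count

    mlagents-learn config/ppo/unit.yaml --run-id=myid --env=cs_window/Build --resume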