Training traceback with ML-Agents

I changed the GridWorld parameters in trainer_config.yaml.
When I start training from the Anaconda prompt, everything looks fine at first, but then I get a BrokenPipeError and the training stops. I searched and couldn't find any solution.

Hi, since this post is labeled as “resolved” — have you already solved the problem?

Sorry, the problem hasn't been solved.
I can't find a way to deal with it, so I'm still waiting.

The error message indicates that the connection between Python and Unity was interrupted.
Does it happen consistently? Are you able to reproduce the error with any of our example environments?
Could you also share the exact mlagents-learn command you used to run this training?

Also, can you check the Player log (Player-X.log) in the results folder and see if there are any errors?

The “GetOverlappedResult” error suggests the connection was dropped, probably because one or more of the instances crashed. Did you try rebooting your machine?

I use “mlagents-learn config/trainer_config.yaml --run-id=GridWorld --train”.
I reset the file trainer_config.yaml, and now it works.
I think the changes to the GridWorld parameters in trainer_config.yaml led to this error.

Could you share what exactly you changed in the config file?

Of course:
GridWorld:
    batch_size: 32
    normalize: false
    num_layers: 3
    hidden_units: 512
    beta: 5.0e-3
    buffer_size: 256
    max_steps: 5000000
    summary_freq: 20000
    time_horizon: 5
    reward_signals:
        extrinsic:
            strength: 1.0
            gamma: 0.9

But I trained for two million steps, and I set the rewards so that the agent gets +3 points for eating a goal, -2 points for falling into a pit, and -0.001 per step. The results are as follows:
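The reward scheme described above can be sketched as follows. This is an illustrative Python stand-in, not the actual GridWorld code — in the real environment these values would be assigned in C# via the agent's AddReward/SetReward calls, and the constant and function names here are my own:

```python
# Illustrative sketch of the reward scheme described above.
# In the real GridWorld environment these rewards are assigned in C#;
# all names below are hypothetical.

STEP_PENALTY = -0.001   # small cost per step to encourage short paths
GOAL_REWARD = 3.0       # reward for eating a goal
PIT_PENALTY = -2.0      # penalty for falling into a pit

def step_reward(hit_goal: bool, hit_pit: bool) -> float:
    """Reward for a single environment step."""
    reward = STEP_PENALTY
    if hit_goal:
        reward += GOAL_REWARD
    if hit_pit:
        reward += PIT_PENALTY
    return reward

# Example: an episode that wanders for 10 steps and then reaches the goal.
total = sum(step_reward(False, False) for _ in range(10)) + step_reward(True, False)
print(round(total, 3))  # 2.989
```

The per-step penalty is small relative to the terminal rewards, so it shapes the path length without swamping the goal/pit signal.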

And if I use:

GridWorld:
    batch_size: 256
    normalize: false
    num_layers: 5
    hidden_units: 512
    beta: 5.0e-3
    buffer_size: 256000
    max_steps: 5000000
    summary_freq: 10000
    time_horizon: 5
    reward_signals:
        extrinsic:
            strength: 1.0
            gamma: 0.9
it causes the error.
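One noticeable difference between the two configs above is buffer_size: 256 in the working one versus 256000 in the failing one. The ML-Agents docs recommend that buffer_size be a multiple of batch_size, and a buffer holding many batches drives up memory use and time between policy updates, which may be related to the crash. A rough sanity check is sketched below — illustrative Python only, not part of ML-Agents; the `check_config` helper and the 100-batch threshold are my own assumptions:

```python
# Hedged sketch: sanity-check ML-Agents PPO hyperparameters.
# The "buffer_size should be a multiple of batch_size" rule comes from the
# ML-Agents training docs; the 100-batch threshold is an arbitrary heuristic.

def check_config(cfg):
    """Return a list of warning strings for a trainer-config dict."""
    warnings = []
    batch = cfg["batch_size"]
    buf = cfg["buffer_size"]
    if buf % batch != 0:
        warnings.append("buffer_size should be a multiple of batch_size")
    if buf // batch > 100:
        warnings.append(
            f"buffer holds {buf // batch} batches; very large buffers "
            "increase memory use and time between updates"
        )
    return warnings

working = {"batch_size": 32, "buffer_size": 256}
failing = {"batch_size": 256, "buffer_size": 256000}

print(check_config(working))  # []
print(check_config(failing))  # flags the 1000-batch buffer
```

Both configs satisfy the multiple-of-batch_size rule, but the failing one collects 1000 batches (256000 / 256) before each update, versus 8 in the working one.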