Why doesn't the 'Walker' example in ml-agents use the articulation body component?

I am a graduate student studying motion generation with reinforcement learning, using ml-agents as a simulation environment.

While struggling for several months to get comfortable with ml-agents, I realized that the ConfigurableJoint component used in the Walker example seems less convenient for simulation than I expected.

In particular, for humanoid models, a Rigidbody and a ConfigurableJoint component must be set up for every joint that has a collider, the anchor and axes must be adjusted, and the mass and angle range must also be considered. This inconvenience is especially noticeable with the SMPL-X model, a representative humanoid model with many joints.
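To show what I mean, here is a rough sketch of the per-joint setup this approach requires when done from code instead of the Inspector. The mass, anchor, axes, and angle limits are placeholder values, not taken from the Walker example, and they have to be repeated and tuned for every body part:

```csharp
using UnityEngine;

// Illustrative sketch of the manual per-joint setup with ConfigurableJoint.
// All numbers below are placeholders that would need per-joint tuning.
public class ManualJointSetup : MonoBehaviour
{
    public Rigidbody parentBody; // each segment needs its own Rigidbody, wired to a parent

    void Awake()
    {
        var rb = gameObject.AddComponent<Rigidbody>();
        rb.mass = 2f; // mass must be chosen per body part

        var joint = gameObject.AddComponent<ConfigurableJoint>();
        joint.connectedBody = parentBody;   // kinematic chain is connected by hand
        joint.anchor = Vector3.zero;        // anchor and axes set manually per joint
        joint.axis = Vector3.right;
        joint.secondaryAxis = Vector3.up;

        // Lock translation, limit rotation to a per-joint angle range.
        joint.xMotion = ConfigurableJointMotion.Locked;
        joint.yMotion = ConfigurableJointMotion.Locked;
        joint.zMotion = ConfigurableJointMotion.Locked;
        joint.angularXMotion = ConfigurableJointMotion.Limited;
        joint.lowAngularXLimit = new SoftJointLimit { limit = -45f };
        joint.highAngularXLimit = new SoftJointLimit { limit = 45f };
    }
}
```

With a model like SMPL-X, this block effectively has to exist (with different values) for every one of the many joints.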

There is a useful script called JointDriveController, but it is hard to tell what role its spring and damper variables actually play on the joint, which I need to understand in order to implement recent papers that combine imitation learning through an Animator with complex reward functions.
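From what I can tell, values like these ultimately end up as the gains of the joint's slerpDrive, which PhysX applies as a PD controller pulling the joint toward its target rotation. The sketch below is only my reading of that mapping; the variable names are mine, not those of JointDriveController:

```csharp
using UnityEngine;

// My understanding of how spring/damper values act on a ConfigurableJoint:
// they become the gains of the slerp drive, i.e. a PD controller toward
// joint.targetRotation. Names are illustrative, not from JointDriveController.
public static class DriveSketch
{
    public static void ApplyDrive(ConfigurableJoint joint,
                                  float spring,   // proportional gain (stiffness toward target rotation)
                                  float damper,   // derivative gain (damping on angular velocity)
                                  float maxForce) // clamp on the torque the drive may apply
    {
        joint.rotationDriveMode = RotationDriveMode.Slerp;
        joint.slerpDrive = new JointDrive
        {
            positionSpring = spring,
            positionDamper = damper,
            maximumForce = maxForce
        };
    }
}
```

If that reading is wrong, a pointer to how the Walker example intends these values to be interpreted would already help a lot.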

The ArticulationBody component seems very convenient for configuring agents because it combines the Rigidbody and joint components, so there is no need to manually wire up a separate parent Rigidbody, but I don't know why there are still no examples in the example environments that use it.
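For comparison, this is roughly what the same joint looks like with ArticulationBody: no separate Rigidbody, the parent link is taken from the hierarchy, and the limits and drive gains live in one place. Again the numbers are placeholders of mine:

```csharp
using UnityEngine;

// Illustrative sketch of the equivalent joint as an ArticulationBody.
// Values are placeholders; the parent link comes from the transform hierarchy.
public class ArticulationSetup : MonoBehaviour
{
    void Awake()
    {
        var body = gameObject.AddComponent<ArticulationBody>();
        body.mass = 2f;
        body.jointType = ArticulationJointType.RevoluteJoint;

        var drive = body.xDrive;
        drive.lowerLimit = -45f;
        drive.upperLimit = 45f;
        drive.stiffness = 1000f;  // analogous to the drive's spring
        drive.damping = 100f;     // analogous to the drive's damper
        drive.forceLimit = 250f;
        body.xDrive = drive;

        body.twistLock = ArticulationDofLock.LimitedMotion;
    }
}
```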

I am also looking to evolve my procedural-animation-based cat to ArticulationBody components and apply ML-Agents to it for walk, trot, and gallop gaits.
No answer to this issue?