Motor synergy development in high-performing deep reinforcement learning algorithms

Jiazheng Chai, Mitsuhiro Hayashibe

Research output: Contribution to journal › Article › peer-review

18 Citations (Scopus)


As human motor learning is hypothesized to rely on the concept of motor synergy, we investigate whether this concept can also be observed in deep reinforcement learning for robotics. To this end, we carried out a joint-space synergy analysis on multi-joint running agents in simulated environments trained using two state-of-the-art deep reinforcement learning algorithms. Although no synergy constraint was ever encoded into the reward function, the emergence of synergy could be observed statistically in the learning agents. To our knowledge, this is the first attempt to quantify synergy development in detail and to evaluate its emergence during deep-learning-based motor control tasks. We then demonstrate that our synergy-related metrics correlate with the performance and energy efficiency of a trained agent. Interestingly, the proposed synergy-related metrics reflected a better learning capability of SAC over TD3. This suggests that these metrics could serve as additional indices for evaluating deep reinforcement learning algorithms for motor learning. It also indicates that synergy is required for multi-joint robots to move energy-efficiently.
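The joint-space synergy analysis described above is commonly carried out with principal component analysis (PCA) on joint-angle trajectories: the fewer components needed to explain the motion, the stronger the synergy. The sketch below is a minimal, hedged illustration of that general idea; the paper's exact metrics are not reproduced here, and the function name `synergy_index` and the toy data are assumptions for illustration only.

```python
# Minimal sketch (assumption, not the paper's exact metric): quantify
# joint-space synergy as the fraction of joint-angle variance explained
# by the first few principal components of the trajectory data.
import numpy as np

def synergy_index(joint_angles: np.ndarray, n_components: int = 2) -> float:
    """Fraction of joint-space variance explained by the first
    `n_components` principal components.

    joint_angles: array of shape (timesteps, n_joints).
    """
    centered = joint_angles - joint_angles.mean(axis=0)
    # Singular values of the centered data give the PCA variance spectrum.
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2
    return float(var[:n_components].sum() / var.sum())

# Toy comparison: perfectly coordinated joints vs. independent noise.
t = np.linspace(0, 2 * np.pi, 200)
coordinated = np.stack([np.sin(t), 0.5 * np.sin(t), -0.8 * np.sin(t)], axis=1)
rng = np.random.default_rng(0)
independent = rng.standard_normal((200, 3))

print(synergy_index(coordinated))   # near 1.0: one component suffices
print(synergy_index(independent))   # lower: variance spread across joints
```

A high index means most joint motion lives in a low-dimensional subspace, which is one way the emergence of synergy during training can be tracked quantitatively.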

Original language: English
Article number: 8966298
Pages (from-to): 1271-1278
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Issue number: 2
Publication status: Published - 2020 Apr


  • Deep learning in robotics and automation
  • motor synergy
  • performance evaluation and benchmarking

