TY - JOUR
T1 - Motor synergy development in high-performing deep reinforcement learning algorithms
AU - Chai, Jiazheng
AU - Hayashibe, Mitsuhiro
N1 - Funding Information:
Manuscript received September 9, 2019; accepted January 3, 2020. Date of publication January 22, 2020; date of current version January 31, 2020. This letter was recommended for publication by Associate Editor L. Jamone and Editor T. Asfour upon evaluation of the reviewers’ comments. This work was supported by the JSPS Grant-in-Aid for Scientific Research (B), no. 18H01399. (Corresponding author: Jiazheng Chai.) The authors are with the Neuro-Robotics Lab, Department of Robotics, Graduate School of Engineering, Tohoku University, Sendai 980-8579, Japan (e-mail: chai.jiazheng.q1@dc.tohoku.ac.jp; hayashibe@tohoku.ac.jp).
Publisher Copyright:
© 2016 IEEE.
PY - 2020/4
Y1 - 2020/4
N2 - As human motor learning is hypothesized to exploit the motor synergy concept, we investigate whether this concept can also be observed in deep reinforcement learning for robotics. From this point of view, we carried out a joint-space synergy analysis on multi-joint running agents in simulated environments trained using two state-of-the-art deep reinforcement learning algorithms. Although a synergy constraint was never encoded into the reward function, the synergy emergence phenomenon could be observed statistically in the learning agents. To our knowledge, this is the first attempt to quantify synergy development in detail and to evaluate its emergence process during deep-learning-based motor control tasks. We then demonstrate that there is a correlation between our synergy-related metrics and the performance and energy efficiency of a trained agent. Interestingly, the proposed synergy-related metrics reflected a better learning capability of SAC over TD3. This suggests that these metrics could serve as additional indices for evaluating deep reinforcement learning algorithms for motor learning. It also indicates that synergy is required for multi-joint robots to move energy-efficiently.
AB - As human motor learning is hypothesized to exploit the motor synergy concept, we investigate whether this concept can also be observed in deep reinforcement learning for robotics. From this point of view, we carried out a joint-space synergy analysis on multi-joint running agents in simulated environments trained using two state-of-the-art deep reinforcement learning algorithms. Although a synergy constraint was never encoded into the reward function, the synergy emergence phenomenon could be observed statistically in the learning agents. To our knowledge, this is the first attempt to quantify synergy development in detail and to evaluate its emergence process during deep-learning-based motor control tasks. We then demonstrate that there is a correlation between our synergy-related metrics and the performance and energy efficiency of a trained agent. Interestingly, the proposed synergy-related metrics reflected a better learning capability of SAC over TD3. This suggests that these metrics could serve as additional indices for evaluating deep reinforcement learning algorithms for motor learning. It also indicates that synergy is required for multi-joint robots to move energy-efficiently.
KW - Deep learning in robotics and automation
KW - motor synergy
KW - performance evaluation and benchmarking
UR - http://www.scopus.com/inward/record.url?scp=85079273789&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85079273789&partnerID=8YFLogxK
U2 - 10.1109/LRA.2020.2968067
DO - 10.1109/LRA.2020.2968067
M3 - Article
AN - SCOPUS:85079273789
SN - 2377-3766
VL - 5
SP - 1271
EP - 1278
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
M1 - 8966298
ER -