A Survey of Sim-to-Real Transfer Techniques Applied to Reinforcement Learning for Bioinspired Robots

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)


State-of-the-art reinforcement learning (RL) techniques have enabled numerous advances in robot control, especially in combination with deep neural networks (DNNs), a pairing known as deep reinforcement learning (DRL). In this article, rather than reviewing theoretical studies on RL, whose foundations were largely established several decades ago, we summarize state-of-the-art techniques added to commonly used RL frameworks for robot control. We focus on bioinspired robots (BIRs) because they can learn to locomote or to produce natural behaviors similar to those of animals and humans. With the ultimate goal of practical applications in the real world, we further narrow our scope to techniques that aid sim-to-real transfer. We categorize these techniques into four groups: 1) use of accurate simulators; 2) use of kinematic and dynamic models; 3) use of hierarchical and distributed controllers; and 4) use of demonstrations. These four groups, respectively, supply general and accurate environments for RL training, improve sampling efficiency, divide and conquer complex motion tasks and redundant robot structures, and impart natural skills. We find that, by using these techniques in combination, it is possible to deploy RL on physical BIRs in practice.
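As a concrete illustration of the first group (use of accurate simulators), a common sim-to-real aid is domain randomization: physics parameters are resampled each training episode so the learned policy does not overfit to one imperfect simulator model. The sketch below is not from the survey itself; the parameter names and ranges are hypothetical.

```python
import random

def sample_physics_params(rng):
    """Draw randomized dynamics parameters for one training episode.

    Illustrative only: the parameter names and ranges below are
    hypothetical, not taken from the surveyed work.
    """
    return {
        "ground_friction": rng.uniform(0.5, 1.2),    # sliding friction coefficient
        "link_mass_scale": rng.uniform(0.8, 1.2),    # +/-20% mass uncertainty
        "motor_torque_scale": rng.uniform(0.9, 1.1), # actuator strength variation
        "sensor_noise_std": rng.uniform(0.0, 0.02),  # added observation noise
    }

rng = random.Random(0)
for episode in range(3):
    params = sample_physics_params(rng)
    # A simulator would apply these before each rollout, e.g.:
    # simulator.reset(**params)
    print(f"episode {episode}: {params}")
```

A policy trained across many such randomized episodes tends to be robust to the residual mismatch between the simulator and the physical robot, which is the essence of this transfer aid.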

Original language: English
Pages (from-to): 3444-3459
Number of pages: 16
Journal: IEEE Transactions on Neural Networks and Learning Systems
Issue number: 7
Publication status: Published - 2023 Jul 1


Keywords:

  • Bioinspired robots
  • reinforcement learning (RL)
  • sim-to-real
  • transfer techniques


