Robotics: Science and Systems XIX

Hindsight States: Blending Sim & Real Task Elements for Efficient Reinforcement Learning

Simon Guist, Jan Schneider, Vincent Berenz, Alexander Dittrich, Bernhard Schölkopf, Dieter Büchler

Abstract:

Reinforcement learning has shown great potential in solving complex tasks when large amounts of data can be generated with little effort. In robotics, one approach to generating training data builds on simulations or models. However, for many tasks, such as those involving complex soft robots, devising such models is substantially more challenging. Recent successes in soft robotics indicate that employing complex robots can lead to performance boosts. Here, we leverage the imbalance in complexity of the dynamics to learn more sample-efficiently. We (i) abstract the task into distinct components, (ii) off-load the simple dynamics parts into the simulation, and (iii) multiply these virtual parts to generate more data in hindsight. Our new method, Hindsight States (HiS), uses this data and selects the most useful transitions for training. It can be used with an arbitrary off-policy algorithm. We validate our method on several challenging simulated tasks and demonstrate that it improves learning both on its own and when combined with an existing hindsight algorithm, Hindsight Experience Replay (HER). Finally, we evaluate HiS on a physical system and show that it boosts performance on a complex table tennis task with a muscular robot.
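
To make the idea of steps (ii) and (iii) concrete, the sketch below shows one possible way a replay buffer could augment each real robot transition with additional virtual states that are rolled forward through a cheap simulator of the simple dynamics (e.g., the ball in table tennis), and then keep the transitions judged most useful. This is a minimal illustrative sketch, not the authors' implementation: the class name, the simulate_virtual_step and reward_fn callables, the Gaussian perturbation of virtual states, and the reward-based selection heuristic are all assumptions made here for clarity.

import random
import numpy as np

class HindsightStatesBuffer:
    """Illustrative replay buffer that augments real transitions with
    re-simulated virtual task elements (hypothetical interface)."""

    def __init__(self, simulate_virtual_step, reward_fn,
                 capacity=100_000, num_virtual=8):
        self.simulate_virtual_step = simulate_virtual_step  # cheap sim of the simple dynamics
        self.reward_fn = reward_fn        # reward from robot state + virtual state
        self.capacity = capacity
        self.num_virtual = num_virtual    # extra virtual states per real step
        self.storage = []

    def add_real_transition(self, robot_obs, action, next_robot_obs,
                            virtual_state, done):
        """Store the real transition plus hindsight variants whose virtual
        part is replaced by alternative, re-simulated states."""
        candidates = []
        # Transition with the virtual state actually observed on the real system.
        next_virtual = self.simulate_virtual_step(virtual_state, robot_obs, action)
        candidates.append((virtual_state, next_virtual))
        # Additional virtual states: here simply perturbed copies; other
        # sources (e.g., previously recorded ball states) would also work.
        for _ in range(self.num_virtual):
            alt = virtual_state + np.random.normal(scale=0.05, size=virtual_state.shape)
            candidates.append((alt, self.simulate_virtual_step(alt, robot_obs, action)))
        # Crude stand-in for "selecting the most useful transitions":
        # keep the candidates with the highest reward.
        scored = sorted(candidates,
                        key=lambda c: self.reward_fn(next_robot_obs, c[1]),
                        reverse=True)
        for virt, next_virt in scored[: self.num_virtual // 2 + 1]:
            obs = np.concatenate([robot_obs, virt])
            next_obs = np.concatenate([next_robot_obs, next_virt])
            reward = self.reward_fn(next_robot_obs, next_virt)
            self.storage.append((obs, action, reward, next_obs, done))
        if len(self.storage) > self.capacity:
            self.storage = self.storage[-self.capacity:]

    def sample(self, batch_size):
        """Minibatch for any off-policy learner (e.g., SAC or TD3)."""
        return random.sample(self.storage, min(batch_size, len(self.storage)))

Because the augmented transitions live in ordinary (state, action, reward, next state) form, the buffer can feed any off-policy algorithm unchanged; the selection criterion above is only a placeholder for the transition-selection mechanism described in the paper, and the same augmentation could also be combined with HER-style goal relabeling.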

Bibtex:

@INPROCEEDINGS{Guist-RSS-23, 
    AUTHOR    = {Simon Guist AND Jan Schneider AND Vincent Berenz AND Alexander Dittrich AND Bernhard Schölkopf AND Dieter Büchler}, 
    TITLE     = {{Hindsight States: Blending Sim \& Real Task Elements for Efficient Reinforcement Learning}}, 
    BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
    YEAR      = {2023}, 
    ADDRESS   = {Daegu, Republic of Korea}, 
    MONTH     = {July}, 
    DOI       = {10.15607/RSS.2023.XIX.038} 
}