Robotics: Science and Systems XVI

Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning

Archit Sharma, Michael Ahn, Sergey Levine, Vikash Kumar, Karol Hausman, Shixiang Gu

Abstract:

Reinforcement learning provides a general framework for learning robotic skills while minimizing engineering effort. However, most reinforcement learning algorithms assume that a well-designed reward function is provided, and learn a single behavior for that single reward function. Such reward functions can be difficult to design in practice. Can we instead develop efficient reinforcement learning methods that acquire diverse skills without any reward function, and then repurpose these skills for downstream tasks? In this paper, we demonstrate that a recently proposed unsupervised skill discovery algorithm can be extended into an efficient off-policy method, making it suitable for performing unsupervised reinforcement learning in the real world. First, we show that our proposed algorithm substantially improves learning efficiency, making reward-free real-world training feasible. Second, we move beyond simulation and evaluate the algorithm on real physical hardware. On quadrupeds, we observe that locomotion skills with diverse gaits and different orientations emerge without any rewards or demonstrations. We also demonstrate that the learned skills can be composed using model predictive control for goal-oriented navigation, without any additional training.
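
The two components summarized above can be made concrete with a short sketch. The underlying skill-discovery objective (DADS, which this paper extends into an off-policy variant) gives each skill z an intrinsic reward for producing transitions that are predictable under a learned skill-dynamics model q_phi(s' | s, z) yet hard to predict under skills drawn from the prior; the same learned model then serves as the simulator for model predictive control over skill sequences. The Python sketch below is illustrative only: the callables skill_dynamics_logprob and predict_next_state, and all hyperparameters, are assumptions standing in for the paper's actual implementation.

import numpy as np

def dads_intrinsic_reward(skill_dynamics_logprob, s, s_next, z, prior_samples):
    """Approximate the DADS intrinsic reward
        r(s, z, s') = log q(s' | s, z) - log (1/L) sum_i q(s' | s, z_i),
    where z_1..z_L are samples from the skill prior p(z).
    `skill_dynamics_logprob(s, z, s_next)` is assumed to return
    log q_phi(s' | s, z) under the learned skill-dynamics model."""
    log_q = skill_dynamics_logprob(s, z, s_next)
    log_q_prior = np.array([skill_dynamics_logprob(s, zi, s_next)
                            for zi in prior_samples])
    # log-mean-exp over prior samples approximates log p(s' | s)
    log_marginal = np.logaddexp.reduce(log_q_prior) - np.log(len(prior_samples))
    return log_q - log_marginal

def plan_skill_mpc(predict_next_state, s0, goal, skill_dim,
                   horizon=3, hold_steps=10, n_candidates=64, rng=None):
    """Random-shooting MPC over skill sequences, using the learned
    skill-dynamics model as the simulator (no environment interaction).
    `predict_next_state(s, z)` is assumed to return the mean of
    q_phi(s' | s, z); names and the cost function are illustrative."""
    rng = rng or np.random.default_rng()
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, skill_dim))
    best_cost, best_seq = np.inf, None
    for seq in candidates:
        s = s0
        for z in seq:          # hold each skill for several steps
            for _ in range(hold_steps):
                s = predict_next_state(s, z)
        cost = np.linalg.norm(s[:2] - goal)  # e.g. x-y distance to goal
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]  # execute the first skill, then replan

This mirrors how planning in skill space can compose discovered locomotion behaviors for goal-oriented navigation without further training; the sketch uses simple random shooting for brevity, which may differ from the planner used in the paper.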

Bibtex:

@INPROCEEDINGS{Sharma-RSS-20, 
    AUTHOR    = {Archit Sharma AND Michael Ahn AND Sergey Levine AND Vikash Kumar AND Karol Hausman AND Shixiang Gu}, 
    TITLE     = {{Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning}}, 
    BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
    YEAR      = {2020}, 
    ADDRESS   = {Corvallis, Oregon, USA}, 
    MONTH     = {July}, 
    DOI       = {10.15607/RSS.2020.XVI.053} 
}