Robotics: Science and Systems XVIII

The Surprising Effectiveness of Representation Learning for Visual Imitation

Jyothish Pari*, Nur Muhammad (Mahi) Shafiullah*, Sridhar Pandian Arunachalam, Lerrel Pinto
* These authors contributed equally

Abstract:

While visual imitation learning offers one of the most effective ways of learning from visual demonstrations, generalizing from them requires either hundreds of diverse demonstrations, task-specific priors, or large, hard-to-train parametric models. One reason such complexities arise is that standard visual imitation frameworks try to solve two coupled problems at once: learning a succinct but good representation from the diverse visual data, while simultaneously learning to associate the demonstrated actions with such representations. Such joint learning creates an interdependence between the two problems, which often means that large numbers of demonstrations are needed for learning. To address this challenge, we instead propose to decouple representation learning from behavior learning for visual imitation. First, we learn a visual representation encoder from offline data using standard supervised and self-supervised learning methods. Once the representations are trained, we use non-parametric Locally Weighted Regression to predict the actions. We experimentally show that this simple decoupling improves the performance of visual imitation models on both offline demonstration datasets and real-robot door opening compared to prior work in visual imitation.
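To make the second stage concrete, below is a minimal sketch of non-parametric, locally weighted action prediction over encoder embeddings, assuming a frozen pretrained encoder and a stored set of demonstration (embedding, action) pairs. All names (query_embedding, demo_embeddings, demo_actions, k) and the softmax-over-negative-distance weighting are illustrative assumptions, not taken from the paper's released code.

```python
import numpy as np

def locally_weighted_action(query_embedding, demo_embeddings, demo_actions, k=5):
    """Predict an action as a distance-weighted average of the actions
    of the k nearest demonstration frames in embedding space.

    query_embedding: (D,) embedding of the current observation.
    demo_embeddings: (N, D) embeddings of demonstration frames.
    demo_actions:    (N, A) actions recorded for those frames.
    """
    # Euclidean distance from the query to every stored demo embedding.
    dists = np.linalg.norm(demo_embeddings - query_embedding, axis=1)
    # Indices of the k closest demonstration frames.
    nn_idx = np.argsort(dists)[:k]
    # Weight neighbors by softmax of negative distance (closer => larger weight).
    weights = np.exp(-dists[nn_idx])
    weights /= weights.sum()
    # Weighted average of the neighbors' demonstrated actions.
    return weights @ demo_actions[nn_idx]
```

At test time, the current observation would first be passed through the frozen encoder to obtain query_embedding before calling this function; the weighting scheme here is one plausible choice, not necessarily the exact one used in the paper.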


Bibtex:

@INPROCEEDINGS{Pari-RSS-22, 
    AUTHOR    = {Jyothish Pari AND {Nur Muhammad (Mahi)} Shafiullah AND {Sridhar Pandian} Arunachalam AND Lerrel Pinto}, 
    TITLE     = {{The Surprising Effectiveness of Representation Learning for Visual Imitation}}, 
    BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
    YEAR      = {2022}, 
    ADDRESS   = {New York City, NY, USA}, 
    MONTH     = {June}, 
    DOI       = {10.15607/RSS.2022.XVIII.010} 
}