Robotics: Science and Systems XI

Direct Loss Minimization Inverse Optimal Control

Andreas Doerr, Nathan Ratliff, Jeannette Bohg, Marc Toussaint, Stefan Schaal

Abstract:

Inverse Optimal Control (IOC) has strongly impacted the systems engineering process, enabling automated planner tuning through straightforward and intuitive demonstration. The most successful and established applications, though, have been in lower-dimensional problems such as navigation planning, where exact optimal planning or control is feasible. In higher-dimensional systems, such as humanoid robots, research has made substantial progress toward generalizing these ideas to model-free or locally optimal settings, but such systems are complicated to the point where demonstration itself can be difficult. Real-world applications are typically restricted to demonstrations that are at best noisy, and often partial or incomplete, which proves cumbersome in existing frameworks. This work derives a very flexible method of IOC based on a form of Structured Prediction known as Direct Loss Minimization. The resulting algorithm is essentially Policy Search on a reward function that rewards similarity to demonstrated behavior (using Covariance Matrix Adaptation (CMA) in our experiments). Our framework blurs the distinction between IOC, other forms of Imitation Learning, and Reinforcement Learning, enabling us to derive simple, versatile, and practical algorithms that blend imitation and reinforcement signals into a unified framework. Our experiments analyze various aspects of its performance and demonstrate its efficacy in conveying preferences for motion shaping and for combined reach-and-grasp quality optimization.
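
To make the core idea concrete, the sketch below (not from the paper) frames IOC as policy search over cost-function parameters with CMA-ES: each candidate parameter vector is handed to a planner, the resulting trajectory is scored by its similarity to a demonstration, and that loss drives the optimizer. It assumes the open-source Python cma package; the toy planner, the demonstration, and the squared-error loss are illustrative placeholders, not the authors' implementation.

import numpy as np
import cma  # open-source CMA-ES implementation (pip install cma)

# Hypothetical demonstration: a straight-line 2D reach discretized into T waypoints.
T = 20
demo = np.linspace([0.0, 0.0], [1.0, 1.0], T)

def toy_planner(w):
    # Stand-in for the robot's planner: returns the trajectory induced by the
    # cost parameterized by w. Here w places a via-point (w[0], w[1]) and sets
    # a pull strength w[2], purely for illustration.
    via, pull = w[:2], np.clip(w[2], 0.0, 1.0)
    straight = np.linspace([0.0, 0.0], [1.0, 1.0], T)
    bump = np.exp(-0.5 * ((np.arange(T) - T / 2.0) / (T / 6.0)) ** 2)[:, None]
    return straight + pull * bump * (via - straight)

def imitation_loss(w):
    # "Reward similarity to demonstrated behavior": mean squared waypoint error.
    traj = toy_planner(np.asarray(w))
    return float(np.mean(np.sum((traj - demo) ** 2, axis=1)))

# Policy search over the cost parameters with CMA-ES (ask/tell interface).
es = cma.CMAEvolutionStrategy(3 * [0.0], 0.5)
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [imitation_loss(w) for w in candidates])

w_star = es.result.xbest
print("learned parameters:", w_star, " imitation loss:", imitation_loss(w_star))

In this framing, swapping the placeholder planner for a real trajectory optimizer and the squared-error loss for a task-relevant similarity measure recovers the imitation-versus-reinforcement blend described in the abstract.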

Bibtex:

@INPROCEEDINGS{Doerr-RSS-15, 
    AUTHOR    = {Andreas Doerr AND Nathan Ratliff AND Jeannette Bohg AND Marc Toussaint AND Stefan Schaal}, 
    TITLE     = {Direct Loss Minimization Inverse Optimal Control}, 
    BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
    YEAR      = {2015}, 
    ADDRESS   = {Rome, Italy}, 
    MONTH     = {July},
    DOI       = {10.15607/RSS.2015.XI.013} 
}