Robotics: Science and Systems XVII

Robust Value Iteration for Continuous Control Tasks

Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg

Abstract:

When transferring a control policy from simulation to a physical system, the policy needs to be robust to variations in the dynamics to perform well. Commonly, the optimal policy overfits to the approximate model and the corresponding state distribution, and therefore fails when transferred to the physical system. In this paper, we present robust value iteration. This approach uses dynamic programming to compute the optimal value function on a compact state domain and incorporates adversarial perturbations of the system dynamics. The adversarial perturbations cause the resulting optimal policy to be robust to changes in the dynamics. Utilizing the continuous-time perspective of reinforcement learning, we derive the optimal perturbations for the states, actions, observations, and model parameters in closed form. The resulting algorithm does not require the discretization of states or actions, so the optimal adversarial perturbations can be incorporated efficiently into the min-max value function update. We apply the resulting algorithm to the physical Furuta pendulum and cartpole. By changing the masses of the systems, we evaluate the quantitative and qualitative performance across different model parameters. We show that robust value iteration is more robust than deep reinforcement learning algorithms and the non-robust version of the algorithm.
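To make the min-max value function update concrete, the following is a minimal sketch, not taken from the paper: it assumes control-affine dynamics dx = (a(x) + B(x)u) dt, a quadratic action penalty 0.5 u^T R u, and an l2-norm-bounded state adversary. All names (robust_value_update, dynamics, grad_V, eps_state) are hypothetical, and the closed-form expressions below are one standard instantiation of such a min-max update, not necessarily the paper's exact formulation.

import numpy as np

def robust_value_update(x, V, grad_V, dynamics, reward,
                        dt=0.01, gamma=0.99, eps_state=0.01):
    # Hedged sketch of one robust (min-max) value-target computation.
    # dynamics(x) is assumed to return the drift a(x), the input matrix
    # B(x), and the quadratic action-cost weight R (all hypothetical).
    a, B, R = dynamics(x)
    dV = grad_V(x)  # value-function gradient at the current state

    # Closed-form optimal action for a quadratic action penalty:
    # u* = argmax_u [ dV^T B u - 0.5 u^T R u ] = R^{-1} B^T dV.
    u_star = np.linalg.solve(R, B.T @ dV)

    # Closed-form worst-case state perturbation: a norm-bounded adversary
    # pushes the state along the negative value gradient.
    xi_star = -eps_state * dV / (np.linalg.norm(dV) + 1e-8)

    # Explicit Euler step of the perturbed continuous-time dynamics,
    # followed by the discounted value target (gamma**dt for step dt).
    x_next = x + (a + B @ u_star) * dt + xi_star
    return reward(x, u_star) * dt + gamma ** dt * V(x_next)

Because both the maximizing action and the minimizing perturbation are available in closed form, the min and the max are evaluated analytically inside the update rather than by discretizing the state or action space, which is the property the abstract highlights.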

Bibtex:

@INPROCEEDINGS{Lutter-RSS-21, 
    AUTHOR    = {Michael Lutter AND Shie Mannor AND Jan Peters AND Dieter Fox AND Animesh Garg}, 
    TITLE     = {{Robust Value Iteration for Continuous Control Tasks}}, 
    BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
    YEAR      = {2021}, 
    ADDRESS   = {Virtual}, 
    MONTH     = {July}, 
    DOI       = {10.15607/RSS.2021.XVII.007} 
}