Robotics: Science and Systems XIX
Solving Stabilize-Avoid via Epigraph Form Optimal Control using Deep Reinforcement Learning
Oswin So, Chuchu Fan

Abstract:
Tasks for autonomous robotic systems commonly require stabilization to a desired region while maintaining safety specifications. However, solving this multi-objective problem is challenging when the dynamics are nonlinear and high-dimensional, as traditional methods do not scale well and are often limited to specific problem structures. To address this issue, we propose a novel approach that solves the stabilize-avoid problem via the solution of an infinite-horizon constrained optimal control problem (OCP). We transform the constrained OCP into epigraph form and obtain a two-stage optimization problem that optimizes over the policy in the inner problem and over an auxiliary variable in the outer problem. We then propose a new method for this formulation that combines an on-policy deep reinforcement learning algorithm with neural network regression. Compared to more traditional methods, our method is more stable during training, avoids the instabilities of saddle-point finding, and places no restrictions on the problem structure. We validate our approach on benchmark tasks ranging from low-dimensional toy examples to an F16 fighter jet with a 17-dimensional state space. Simulation results show that our approach consistently yields controllers that match or exceed the safety of existing methods while providing ten-fold increases in stability performance through larger regions of attraction.
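For readers unfamiliar with the epigraph trick, the following sketch shows the standard reformulation in generic notation; the paper instantiates it with the policy as the inner decision variable and the avoid constraint folded into the inner problem, so the exact symbols below ($f$, $g$, $x$, $z$) are illustrative assumptions, not the paper's notation.

```latex
% Constrained problem:  min_x f(x)  s.t.  g(x) <= 0.
% Epigraph form introduces an auxiliary variable z bounding the cost:
\begin{align*}
  \min_{x,\,z} \quad & z \\
  \text{s.t.} \quad  & f(x) \le z, \qquad g(x) \le 0.
\end{align*}
% Folding both constraints into a single max yields the two-stage
% structure: an outer problem over z and an inner problem over x.
\begin{align*}
  \min_{z} \quad z
  \quad \text{s.t.} \quad
  \min_{x}\, \max\bigl(f(x) - z,\; g(x)\bigr) \le 0.
\end{align*}
```

In the paper's setting, the inner minimization is solved by an on-policy deep RL algorithm and the outer variable is handled via neural network regression, avoiding the saddle-point (min-max) training dynamics of Lagrangian-based approaches.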
Bibtex:
@INPROCEEDINGS{So-RSS-23,
  AUTHOR    = {Oswin So and Chuchu Fan},
  TITLE     = {{Solving Stabilize-Avoid via Epigraph Form Optimal Control using Deep Reinforcement Learning}},
  BOOKTITLE = {Proceedings of Robotics: Science and Systems},
  YEAR      = {2023},
  ADDRESS   = {Daegu, Republic of Korea},
  MONTH     = {July},
  DOI       = {10.15607/RSS.2023.XIX.085}
}