Safe Reinforcement Learning for Continuous Spaces through Lyapunov-Constrained Behavior

Abstract

This paper presents a safe learning strategy for continuous state and action spaces that exploits Lyapunov stability properties of the studied systems. The reinforcement learning algorithm Continuous Actor-Critic Learning Automaton (CACLA) is combined with the notion of control Lyapunov functions (CLFs) to restrict learning and exploration to the stability region of the system, ensuring safe operation at all times. The paper extends previous results for discrete action sets to the more general case of continuous action sets, and shows that the continuous method finds a solution comparable to the best discrete action choices while avoiding the need for good heuristic choices in the design process.
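The core idea of CLF-constrained exploration can be illustrated with a minimal sketch. Everything below is a hypothetical example, not the paper's implementation: it uses a toy one-dimensional system x' = x + u with candidate Lyapunov function V(x) = x², and filters an exploratory action by the CLF decrease condition, falling back to an admissible alternative when the proposed action would leave the stability region.

```python
# Illustrative sketch only: a toy 1-D system x' = x + u with
# candidate control Lyapunov function V(x) = x^2. The names and
# dynamics are assumptions for demonstration, not from the paper.

def V(x):
    """Candidate Lyapunov function: positive definite, zero at origin."""
    return x ** 2

def step(x, u):
    """Toy discrete-time dynamics: next state after applying action u."""
    return x + u

def safe_action(x, u_proposed, candidates):
    """Filter an exploratory action by the CLF decrease condition:
    accept it only if V strictly decreases along the resulting step;
    otherwise fall back to the admissible candidate that decreases V most."""
    if V(step(x, u_proposed)) < V(x):
        return u_proposed
    admissible = [u for u in candidates if V(step(x, u)) < V(x)]
    if not admissible:
        raise RuntimeError("no admissible action found")
    return min(admissible, key=lambda u: V(step(x, u)))

# Usage: at x = 1.0, the exploratory action u = 0.5 would increase V
# (next state 1.5), so it is rejected and replaced by u = -1.0, which
# drives the state to the origin; u = -0.5 decreases V and is accepted.
cands = [-1.5, -1.25, -1.0, -0.75, -0.5, -0.25, 0.0, 0.25, 0.5]
print(safe_action(1.0, 0.5, cands))
print(safe_action(1.0, -0.5, cands))
```

In the paper's continuous-action setting the fallback search over a fixed candidate list would instead operate over the continuous admissible set, which is the step that removes the heuristic design choices needed for discrete action sets.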

Category

Academic article

Language

English

Author(s)

  • Sigurd Aksnes Fjerdingen
  • Erik Kyrkjebø

Affiliation

  • SINTEF Digital / Mathematics and Cybernetics

Year

2011

Published in

Frontiers in Artificial Intelligence and Applications

ISSN

0922-6389

Publisher

IOS Press

Page(s)

70–79
