Description
Reinforcement learning, a subfield of machine learning, has recently demonstrated great success in solving games. This has triggered increased interest across many fields in applying reinforcement learning to the control of real-world systems, including self-driving cars and agile robotic systems. Particle accelerators are no exception, and numerous studies have been published showcasing the application of reinforcement learning to machine tuning and to the control of individual accelerator components. However, reinforcement learning-based methods are not yet sufficiently robust to guarantee safe and reliable control of complex physical systems with continuous state and action spaces.
In this contribution, we discuss a learning-to-safely-control paradigm that could be the key to solving the safety issue of reinforcement learning. This paradigm combines stability notions from the theory of automatic control with the flexibility and expressiveness of machine learning. As a case study, we consider cavity resonance control, an important aspect of accelerator control that remains an open issue and that will be studied in the framework of the iSAS EU Horizon program.
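As a rough illustration of the general idea only (the abstract does not specify the actual method), the minimal sketch below wraps a stand-in learned policy in a control-theoretic safety filter: the filter admits only actions that keep a toy, assumed linear cavity-detuning model inside a quadratic Lyapunov level set. All dynamics matrices, limits, and function names here are hypothetical.

```python
# Illustrative sketch, not the method of the talk: a learned policy proposes
# an action, and a control-theoretic safety layer projects it onto the set of
# actions that keep an assumed linear cavity-detuning model inside a quadratic
# Lyapunov level set. All numbers and models are invented for illustration.
import numpy as np

# Hypothetical discretized cavity-detuning dynamics: x_{k+1} = A x_k + B u_k
A = np.array([[0.99, 0.05],
              [-0.05, 0.99]])
B = np.array([0.0, 0.1])
P = np.eye(2)          # Lyapunov matrix for V(x) = x^T P x (assumed)
V_MAX = 1.0            # safe level set {x : V(x) <= V_MAX}
U_MAX = 2.0            # actuator limit (assumed)

def learned_policy(x: np.ndarray) -> float:
    """Stand-in for a trained RL policy (here: a simple linear feedback)."""
    return -(0.5 * x[0] + 1.0 * x[1])

def safety_filter(x: np.ndarray, u_rl: float) -> float:
    """Return the admissible action closest to the RL proposal that keeps
    the next state inside the Lyapunov level set (brute-force search)."""
    candidates = np.linspace(-U_MAX, U_MAX, 201)

    def next_V(u: float) -> float:
        x_next = A @ x + B * u
        return float(x_next @ P @ x_next)

    safe = [u for u in candidates if next_V(u) <= V_MAX]
    if not safe:                      # fall back to the least-unsafe action
        return min(candidates, key=next_V)
    return min(safe, key=lambda u: abs(u - u_rl))

# One simulated control step
x = np.array([0.5, -0.3])
u = safety_filter(x, learned_policy(x))
print(f"filtered action: {u:+.3f}")
```

In a real accelerator application, the stability certificate and the admissible-action set would of course be derived from a validated model of the cavity and its tuner rather than from the toy values used here.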
Speed talk: Normal speed talk selection