5–7 Jul 2023
Dresden
Europe/Berlin timezone

Learning to Do or Learning While Doing: Reinforcement Learning and Bayesian Optimisation for Online Continuous Tuning

6 Jul 2023, 10:36
3m

Helmholtz-Zentrum Dresden-Rossendorf (HZDR)
Speed talk & Poster | ST - Beam control | Speedtalks: Controls/Seeding/DAQ

Speaker

Jan Kaiser (DESY)

Description

Online tuning of real-world plants is a complex optimisation problem that continues to require manual intervention by experienced human operators. Autonomous tuning is a rapidly expanding field of research, where learning-based methods, such as Reinforcement Learning-trained Optimisation (RLO) and Bayesian optimisation (BO), hold great promise for achieving outstanding plant performance and reducing tuning times. Which algorithm to choose in different scenarios, however, remains an open question. Here we present a comparative study using a routine task on a real particle accelerator as an example, showing that RLO generally outperforms BO, but is not always the best choice. Based on the study's results, we provide a clear set of criteria to guide the choice of algorithm for a given tuning task. These can ease the adoption of learning-based autonomous tuning solutions in the operation of complex real-world plants, ultimately improving the availability of these facilities and pushing the limits of their operability, thereby enabling scientific and engineering advancements.
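To illustrate the kind of tuning loop the abstract refers to, the sketch below shows a minimal Bayesian-optimisation routine in Python. It is not the authors' implementation or setup: the objective function, actuator names, and bounds are hypothetical stand-ins for real plant measurements, and standard scikit-learn/SciPy components are used for the Gaussian-process surrogate and acquisition.

```python
# Minimal, illustrative Bayesian-optimisation loop for an online tuning task.
# The objective below is a synthetic stand-in for a real plant measurement
# (e.g. a beam-size reading); it is NOT the study's accelerator interface.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
BOUNDS = np.array([[-1.0, 1.0]] * 3)  # three hypothetical actuator settings, normalised


def measure(x: np.ndarray) -> float:
    """Synthetic objective: smaller is better, with measurement noise."""
    target = np.array([0.3, -0.2, 0.5])  # hypothetical optimum
    return float(np.sum((x - target) ** 2) + 0.01 * rng.normal())


def expected_improvement(mu, sigma, best):
    """Expected improvement for minimisation; larger values are more promising."""
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)


# Initial random observations of the plant
X = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(5, len(BOUNDS)))
y = np.array([measure(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4, normalize_y=True)

for step in range(25):
    gp.fit(X, y)
    # Optimise the acquisition function by random search over candidate settings
    cand = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(2000, len(BOUNDS)))
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    y_next = measure(x_next)
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

print(f"best objective after {len(y)} measurements: {y.min():.4f}")
```

In the study's terms, an RLO approach would replace this model-building loop with a neural-network policy trained beforehand (e.g. in simulation) that maps observations directly to actuator changes, trading online sample efficiency against upfront training effort.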

Primary author

Jan Kaiser (DESY)

Co-authors

Andrea Santamaria Garcia (KIT), Annika Eichler (MSK (Strahlkontrollen)), Chenran Xu (KIT), Dr Erik Bruendermann (KIT), Florian Burkart (MPY1 (MPY Fachgruppe 1)), Dr Frank Mayet (MPY1 (MPY Fachgruppe 1)), Hannes Dinter (MPY1 (MPY Fachgruppe 1)), Holger Schlarb (MSK (Strahlkontrollen)), Oliver Stein (D3 (Strahlenschutz)), Thomas Vinatier (MPY1 (MPY Fachgruppe 1)), Willi Kuropka (MPY1 (MPY Fachgruppe 1))

Presentation materials