Speaker
Description
Reinforcement Learning (RL) is emerging as a valuable method for controlling and optimizing particle accelerators, as it learns through direct experience without a pre-existing model. However, its low sample efficiency limits its application in real-world scenarios. This paper introduces a model-based RL approach that uses Gaussian processes to address this efficiency challenge. The proposed RL agent successfully controlled the trajectory in CERN's AWAKE facility with a limited number of interactions, outperforming traditional numerical optimizers. Unlike these optimizers, which must explore anew on each use, the RL agent learns quickly and can then be applied for single- or few-shot control, including online stabilization of accelerators. The method can also respect state constraints and handle non-stationary environments, as demonstrated in simulations.
This method represents a significant step toward the practical use of RL for accelerator control.
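To illustrate the general idea described above (a Gaussian-process dynamics model learned from a handful of interactions and then used for planning), here is a minimal sketch. It is not the authors' implementation: the toy linear "accelerator" response, the corrector/BPM dimensions, the random-candidate planner, and all names (`env_step`, `predicted_rms`, `R`) are assumptions for illustration, built on scikit-learn's `GaussianProcessRegressor`.

```python
# Minimal sketch (not the paper's code): GP-based model-based RL for
# sample-efficient trajectory correction. The environment is a toy
# stand-in for a real accelerator section.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_correctors, n_bpms = 4, 4
R = rng.normal(size=(n_bpms, n_correctors))  # hypothetical response matrix

def env_step(action):
    """Toy environment: BPM readings as a noisy linear response to kicks."""
    return R @ action + 0.01 * rng.normal(size=n_bpms)

# Collect a small batch of random interactions (the sample-efficient regime).
X = rng.uniform(-1, 1, size=(20, n_correctors))
Y = np.array([env_step(a) for a in X])

# Fit one GP per BPM to model the unknown action -> trajectory mapping.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
models = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y[:, i])
          for i in range(n_bpms)]

def predicted_rms(action):
    """Plan inside the learned model: predicted RMS trajectory offset."""
    mu = np.array([m.predict(action.reshape(1, -1))[0] for m in models])
    return float(np.sqrt(np.mean(mu**2)))

# Choose the best corrector setting among random candidates (a stand-in for
# a proper planner or policy) and apply it once: few-shot control.
candidates = rng.uniform(-1, 1, size=(500, n_correctors))
best = min(candidates, key=predicted_rms)
print("residual RMS after one model-based step:",
      np.sqrt(np.mean(env_step(best)**2)))
```

In this sketch the model is learned once from 20 interactions and then queried cheaply offline, which is the mechanism behind the sample-efficiency claim: a classical numerical optimizer would instead have to probe the real machine at every iteration.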
| Possible contributed talk | No |
|---|---|
| Are you a student? | No |