The ESS-Bilbao injector is a multipurpose machine that will accelerate protons up to 3 MeV. It will be used to produce neutrons by means of a beryllium target. The first part of the injector has been running smoothly for more than a decade. It consists of a proton source of the Electron Cyclotron Resonance (ECR) type that possesses unique characteristics. The subsequent Low Energy Transport...
Machine learning has emerged as a powerful tool for addressing modern challenges in accelerator physics. However, the limited availability of beam time and the high computational cost of simulation codes pose significant hurdles in generating the necessary data for training state-of-the-art machine learning models. Furthermore, optimisation methods can be used to tune accelerators and perform...
Machine unlearning is an emerging field in machine learning that focuses on efficiently removing the influence of specific data from a trained model. This capability is critical in scenarios requiring compliance with data privacy regulations or when erroneous data needs to be removed without retraining from scratch. In this study, I explore the importance of machine unlearning as a way to...
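The idea above can be illustrated with a minimal sketch. Everything here is a hypothetical toy: a logistic-regression model trained on synthetic data in which the first ten labels are deliberately erroneous, then an approximate-unlearning pass that ascends the loss gradient on the forget set while taking descent steps on the retained data to preserve performance. This is one simple recipe from the unlearning literature, not the specific method of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy classification data (hypothetical stand-in for real training data).
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
y[:10] = 1 - y[:10]  # the first 10 labels are "erroneous" and must be unlearned

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, Xb, yb):
    # Mean logistic loss.
    p = sigmoid(Xb @ w)
    return -np.mean(yb * np.log(p + 1e-12) + (1 - yb) * np.log(1 - p + 1e-12))

def grad(w, Xb, yb):
    # Gradient of the mean logistic loss.
    p = sigmoid(Xb @ w)
    return Xb.T @ (p - yb) / len(yb)

# Train on all data, including the bad labels.
w = np.zeros(2)
for _ in range(500):
    w -= 0.1 * grad(w, X, y)

forget, y_f = X[:10], y[:10]
retain, y_r = X[10:], y[10:]
before = loss(w, forget, y_f)

# Approximate unlearning: gradient ascent on the forget set,
# interleaved with descent on the retain set.
for _ in range(50):
    w += 0.05 * grad(w, forget, y_f)
    w -= 0.05 * grad(w, retain, y_r)

after = loss(w, forget, y_f)
retain_acc = np.mean((sigmoid(retain @ w) > 0.5) == (y_r > 0.5))
```

After the unlearning pass, the model's loss on the forgotten (mislabeled) points rises while accuracy on the retained data stays high, which is the behaviour one checks for without retraining from scratch.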
This study explores advanced strategies for optimal control in systems with delayed consequences, using beam steering in the AWAKE electron line at CERN as a benchmark. We formulate the task as a constrained optimization problem within a continuous, primarily linear Markov Decision Process (MDP), incorporating measured system parameters and realistic termination criteria. A wide range of...
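A toy version of such a steering MDP can be sketched in a few lines. The response matrix, dimensions, and thresholds below are assumptions for illustration, not measured AWAKE parameters: the state is a vector of beam offsets at monitors, the action is a vector of corrector kicks, the dynamics are linear, the reward is the negative RMS offset, and an episode terminates once the offsets fall below a tolerance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for an electron-line steering task:
# state s = beam offsets at n monitors, action a = corrector kicks,
# with an assumed (invertible) linear response matrix R.
n = 4
R = np.eye(n) + 0.1 * rng.normal(size=(n, n))

def step(s, a):
    s_next = s + R @ a
    reward = -np.linalg.norm(s_next)      # negative RMS offset
    done = np.linalg.norm(s_next) < 1e-2  # realistic termination criterion
    return s_next, reward, done

def policy(s, gain=0.5):
    # Model-based baseline: invert the response matrix, act fractionally.
    return -gain * np.linalg.solve(R, s)

s = rng.normal(size=n)
done = False
for t in range(20):
    s, r, done = step(s, policy(s))
    if done:
        break
```

With the fractional gain, each step halves the offset vector, so the episode terminates well within the step budget; an RL agent would have to learn a comparable mapping from offsets to kicks from interaction alone.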
Reinforcement Learning methods typically require a large number of interactions with the environment before they learn anything useful. This makes training against sophisticated accelerator simulations difficult because of the total wall-clock time required. On the other hand, learning with environments based on these accelerator codes is potentially very useful because they contain a lot of knowledge...
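One common way around this trade-off, named here as an aside rather than as the abstract's own method, is to fit a cheap surrogate to a small budget of expensive simulator calls and let the learner take its many interactions against the surrogate. A minimal sketch, where the quadratic "simulation" is a hypothetical stand-in for a real accelerator code:

```python
import numpy as np

# Hypothetical expensive "simulation": response of some beam quality
# metric to a single tuning knob (a stand-in, not a real accelerator code).
def expensive_sim(x):
    return (x - 1.3) ** 2 + 0.05 * np.sin(5 * x)

# Spend a small, fixed simulation budget...
xs = np.linspace(-2, 4, 15)
ys = expensive_sim(xs)

# ...fit a cheap surrogate (here a degree-2 polynomial)...
coeffs = np.polyfit(xs, ys, 2)
surrogate = np.poly1d(coeffs)

# ...and run the many cheap evaluations a learning loop needs
# against the surrogate instead of the simulator.
grid = np.linspace(-2, 4, 2001)
x_best = grid[np.argmin(surrogate(grid))]
```

Fifteen simulator calls are enough here for the surrogate's optimum to land near the true one at x = 1.3; the open question in practice is how much of the accelerator code's knowledge such a surrogate retains.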
This paper investigates the automation of particle accelerator control using few-shot reinforcement learning (RL), a promising approach to rapidly adapt control strategies with minimal training data. With the advent of advanced diagnostic tools and increasingly complex accelerator schedules, ensuring reliable performance has become critical. We focus on the physics simulation of the AWAKE...
Reinforcement learning (RL) has been successfully applied to various online tuning tasks, often outperforming traditional optimization methods. However, model-free RL algorithms typically require a high number of samples, with training processes often involving millions of interactions. As this time-consuming process needs to be repeated to train RL-based controllers for each new task, it...
The use of autonomous mobile robots in dynamic and uncertain environments requires adaptive and robust decision-making. Synchronized digital twins — real-time virtual counterparts of physical systems — offer a promising approach to improving planning, increasing robustness, and enhancing adaptability. However, developing such systems presents significant challenges, including balancing...