For more than half a decade, RadiaSoft has developed machine learning (ML) solutions to problems of immediate, practical interest in particle accelerator operations. These solutions include machine vision through convolutional neural networks for automating neutron scattering experiments and several classes of autoencoder networks for de-noising signals from beam position monitors and...
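As a rough illustration of the de-noising idea mentioned above, the sketch below trains a tiny denoising autoencoder in pure NumPy on synthetic sinusoidal "BPM-like" signals. All sizes, hyperparameters, and the data model are invented for the example and are not RadiaSoft's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "BPM-like" signals: sinusoids with random phase and amplitude.
n_samples, n_points = 512, 32
t = np.linspace(0, 2 * np.pi, n_points)
phase = rng.uniform(0, 2 * np.pi, (n_samples, 1))
amp = rng.uniform(0.5, 1.0, (n_samples, 1))
clean = amp * np.sin(t[None, :] + phase)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# Tiny denoising autoencoder: 32 -> 8 -> 32, tanh bottleneck, linear
# decoder, trained to map the noisy signal back to the clean one.
hid, lr = 8, 0.02
W1 = 0.1 * rng.standard_normal((n_points, hid)); b1 = np.zeros(hid)
W2 = 0.1 * rng.standard_normal((hid, n_points)); b2 = np.zeros(n_points)

for epoch in range(4000):
    h = np.tanh(noisy @ W1 + b1)      # encoder
    out = h @ W2 + b2                 # linear decoder
    err = out - clean
    g_out = 2 * err / n_samples       # gradient of the mean-squared error
    gW2, gb2 = h.T @ g_out, g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)
    gW1, gb1 = noisy.T @ g_h, g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

denoised = np.tanh(noisy @ W1 + b1) @ W2 + b2
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
```

After training, the reconstruction error of the denoised signals is well below that of the raw noisy signals, which is the whole point of the technique.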
We present design considerations and challenges for the fast machine learning component of a third-order resonant beam extraction regulation system being commissioned to deliver steady beam rates to the Mu2e experiment at Fermilab. Dedicated quadrupoles drive the tune toward the 29/3 resonance each spill, extracting beam at kV multiwire septa. The overall Spill Regulation System consists of...
The slow extracted beams at the CERN Super Proton Synchrotron (SPS) are transported over transfer lines several hundred metres long to three targets in the North Area Experimental Hall. The experiments require the elimination of intensity fluctuations over the roughly 5 s particle spill, and hence the extracted beams must be debunched. In this environment, secondary emission monitors (SEMs) have to replace the...
In BNL’s Booster, the beam bunches can be split into two or three smaller bunches to reduce their space-charge forces. They are then merged back after acceleration in the Alternating Gradient Synchrotron (AGS). This acceleration with decreased space-charge forces can reduce the final emittance, increasing the luminosity in RHIC and improving proton polarization. Parts of this procedure have...
Aging of the stripper foil and unexpected machine shutdowns are the primary causes of reduced injected intensity from CERN’s Linac3 into the Low Energy Ion Ring (LEIR). As a result, the set of optimal control parameters that maximizes beam intensity in the ring tends to drift, requiring daily adjustments to the machine control settings. This paper explores the design of a...
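The drift-and-retune problem described here can be sketched with a toy intensity model and simple stochastic hill climbing standing in for whatever optimizer the paper actually designs; `intensity`, `retune`, and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def intensity(settings, optimum):
    """Toy stand-in for injected LEIR intensity as a function of two
    control settings; it peaks at the (drifting) optimum."""
    return float(np.exp(-np.sum((settings - optimum) ** 2)))

def retune(settings, optimum, iters=200, step=0.1):
    """Simple stochastic hill climbing: keep a random perturbation of
    the settings whenever it improves the measured intensity."""
    best = intensity(settings, optimum)
    for _ in range(iters):
        trial = settings + step * rng.standard_normal(2)
        val = intensity(trial, optimum)
        if val > best:
            settings, best = trial, val
    return settings, best

# Each "day" the optimum drifts (foil aging); the tuner re-tracks it
# starting from yesterday's settings.
optimum = np.zeros(2)
settings = np.zeros(2)
for day in range(5):
    optimum = optimum + 0.2 * rng.standard_normal(2)
    settings, best = retune(settings, optimum)
```

Because each day's search starts from the previous day's settings, a short re-optimization suffices to recover near-peak intensity, mirroring the daily-adjustment workflow the abstract describes.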
The complexity of the GSI/FAIR accelerator facility demands a high level of automation in order to maximize time for physics experiments. Accelerator laboratories worldwide are exploring a variety of techniques to achieve this, from classical optimization to reinforcement learning.
Geoff, the Generic Optimization Framework & Frontend, is an open-source framework that harmonizes access to...
Manual alignment of optical systems can be time-consuming, and the achieved performance varies with the operator doing the alignment. A reinforcement learning approach using the PPO algorithm was used to train agents to align simple two-mirror optical setups, as well as a full regenerative laser amplifier. The goal is to produce agents that can reproducibly align the setup...
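A two-mirror alignment task like the one described can be framed as a small RL environment. The sketch below is an invented toy version (linear response, made-up numbers, not the real bench), with a hand-coded proportional policy standing in for the trained PPO agent just to exercise the interface:

```python
import numpy as np

class TwoMirrorEnv:
    """Toy two-mirror alignment task: the observation is the beam error
    at a downstream target induced by two mirror tilts, the action
    nudges the tilts, and the reward is the negative pointing error."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        # Assumed linear response of the target error to the two tilts.
        self.response = np.array([[1.0, 0.5],
                                  [0.0, 1.0]])

    def reset(self):
        self.tilts = self.rng.uniform(-1.0, 1.0, 2)   # misaligned start
        return self.response @ self.tilts

    def step(self, action):
        self.tilts = self.tilts + np.clip(action, -0.1, 0.1)
        obs = self.response @ self.tilts
        reward = -float(np.linalg.norm(obs))
        done = np.linalg.norm(obs) < 1e-2             # aligned
        return obs, reward, done

# Hand-coded proportional policy in place of a trained PPO agent.
env = TwoMirrorEnv()
obs = env.reset()
done = False
for _ in range(500):
    action = -0.5 * np.linalg.solve(env.response, obs)
    obs, reward, done = env.step(action)
    if done:
        break
```

A PPO agent trained against this interface would have to discover the inverse response implicitly; the explicit `solve` here is only a convenient stand-in to show the environment converging.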
Reinforcement learning (RL) is of growing importance within machine learning (ML). One subfield of RL is Multi-Agent RL (MARL), in which several agents, rather than a single one, learn to solve a problem simultaneously. This makes the approach well suited to many real-world problems.
Since learning in a multi-agent scenario is highly complex, further conflicts can...
Noisy intermediate-scale quantum (NISQ) computers promise a new paradigm for what is possible in information processing, with the ability to tackle complex and otherwise intractable computational challenges, by harnessing the massive intrinsic parallelism of qubits. Central to realising the potential of quantum computing are perfect entangling (PE) two-qubit gates, which serve as a critical...
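To make the role of perfect entangling (PE) gates concrete, here is a short NumPy check that a CNOT, a canonical PE gate, turns an unentangled product state into a maximally entangled Bell state, verified via the entanglement entropy of one qubit:

```python
import numpy as np

# Single-qubit states and the CNOT gate (a canonical perfect entangler).
zero = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# CNOT on the product state |+>|0> yields the Bell state (|00>+|11>)/sqrt(2).
state = cnot @ np.kron(plus, zero)

# Entanglement entropy of qubit A via its reduced density matrix:
# exactly 1 bit means maximal two-qubit entanglement.
psi = state.reshape(2, 2)              # indices: (qubit A, qubit B)
rho_a = psi @ psi.conj().T
eigs = np.linalg.eigvalsh(rho_a)
entropy = float(-np.sum(eigs * np.log2(np.clip(eigs, 1e-12, None))))
```

An imperfect entangler would leave the entropy strictly below 1 bit for every product input, which is why gate quality is so central to NISQ performance.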
The ESS-Bilbao injector is a multipurpose machine that will accelerate protons up to 3 MeV. It will be used to produce neutrons by means of a beryllium target. The first part of the injector has been running smoothly for more than a decade; it consists of a proton source of the Electron Cyclotron Resonance (ECR) type that possesses unique characteristics. The subsequent Low Energy Transport...
Machine learning has emerged as a powerful solution to the modern challenges in accelerator physics. However, the limited availability of beam time and the high computational cost of simulation codes pose significant hurdles in generating the necessary data for training state-of-the-art machine learning models. Furthermore, optimisation methods can be used to tune accelerators and perform...
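One common response to scarce beam time and expensive simulations is a cheap surrogate model fitted to a handful of expensive evaluations. The sketch below uses kernel ridge regression with an RBF kernel purely as an illustration; `expensive_sim`, the 1-D setting, and all constants are invented:

```python
import numpy as np

def expensive_sim(x):
    """Stand-in for a costly beam-dynamics simulation (1-D for clarity)."""
    return np.sin(3 * x) + 0.5 * x

# Only a handful of "simulation" evaluations are affordable.
X = np.linspace(-2, 2, 15)
y = expensive_sim(X)

# RBF kernel ridge regression as a cheap surrogate model.
def rbf(a, b, ls=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

alpha = np.linalg.solve(rbf(X, X) + 1e-6 * np.eye(len(X)), y)

def surrogate(x):
    return rbf(np.atleast_1d(x), X) @ alpha

# The surrogate can now be queried densely at negligible cost, e.g. to
# locate a promising operating point for a follow-up simulation.
grid = np.linspace(-2, 2, 401)
best = float(grid[np.argmin(surrogate(grid))])
```

Fifteen expensive evaluations buy a model that can be queried hundreds of times per optimization step, which is the trade the abstract's data-scarcity argument is about.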
Machine unlearning is an emerging field in machine learning that focuses on efficiently removing the influence of specific data from a trained model. This capability is critical in scenarios requiring compliance with data privacy regulations or when erroneous data needs to be removed without retraining from scratch. In this study, I explore the importance of machine unlearning as a way to...
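For linear models, unlearning can even be exact and cheap, which makes a useful baseline for the idea. The following sketch (not the study's method, just a standard construction) deletes one training sample from a ridge regression via a Sherman-Morrison rank-one downdate instead of retraining:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ridge regression "training": w = (X^T X + lam I)^{-1} X^T y.
n, d, lam = 200, 5, 1e-3
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

A_inv = np.linalg.inv(X.T @ X + lam * np.eye(d))
b = X.T @ y
w = A_inv @ b                         # original trained weights

# Exact unlearning of sample i: a Sherman-Morrison rank-one *downdate*
# of the inverse, plus removing x_i * y_i from X^T y.
i = 17
x_i, y_i = X[i], y[i]
u = A_inv @ x_i
A_inv_new = A_inv + np.outer(u, u) / (1.0 - x_i @ u)
w_unlearned = A_inv_new @ (b - y_i * x_i)

# Retraining from scratch on the remaining data gives identical weights.
mask = np.ones(n, dtype=bool); mask[i] = False
A_ref = np.linalg.inv(X[mask].T @ X[mask] + lam * np.eye(d))
w_retrained = A_ref @ (X[mask].T @ y[mask])
```

The downdate costs O(d^2) versus O(n d^2) for retraining; for deep networks no such closed form exists, which is exactly why approximate unlearning is an active research area.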
This study explores advanced strategies for optimal control in systems with delayed consequences, using beam steering in the AWAKE electron line at CERN as a benchmark. We formulate the task as a constrained optimization problem within a continuous, primarily linear Markov Decision Process (MDP), incorporating measured system parameters and realistic termination criteria. A wide range of...
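The linear-MDP formulation with realistic termination criteria can be sketched as a toy steering loop. The response matrix, noise level, kick limit, and thresholds below are all invented for illustration, not the real AWAKE electron-line parameters, and a pseudo-inverse policy stands in for the controllers the study compares:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5  # BPMs = correctors, for simplicity

# Hypothetical linear response of BPM readings to corrector kicks; the
# real AWAKE response would be measured or derived from the line optics.
R = np.eye(n) + 0.1 * rng.standard_normal((n, n))

state = rng.uniform(-2.0, 2.0, n)   # initial trajectory error
threshold, max_kick = 0.1, 0.5      # success criterion, action constraint
terminated = False

for _ in range(100):
    # Placeholder policy: pseudo-inverse steering toward zero, with the
    # per-step kick limit that makes this a *constrained* problem.
    action = np.clip(-np.linalg.pinv(R) @ state, -max_kick, max_kick)
    state = state + R @ action + 0.01 * rng.standard_normal(n)
    if np.max(np.abs(state)) < threshold:   # success termination
        terminated = True
        break
    if np.max(np.abs(state)) > 10.0:        # beam-loss termination
        terminated = True
        break
```

The kick limit forces multi-step episodes even under a near-perfect model, which is what makes the task a sequential decision problem rather than a one-shot correction.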
Reinforcement Learning methods typically require a large number of interactions with the environment to learn anything useful. This makes learning with sophisticated accelerator simulations difficult because of the total time required to train. On the other hand, learning with environments based on these accelerator codes is potentially very useful because they contain a lot of knowledge...
The increasing automation and the surging number and resolution of sensors in scientific experiments result in large, heterogeneous, and complex data collections. Data Science is, therefore, a key technology in modern natural sciences and materials science. Data-intensive research at the Science City Hamburg Bahrenfeld that centers around several large-scale user facilities for research in...
This paper investigates the automation of particle accelerator control using few-shot reinforcement learning (RL), a promising approach to rapidly adapt control strategies with minimal training data. With the advent of advanced diagnostic tools and increasingly complex accelerator schedules, ensuring reliable performance has become critical. We focus on the physics simulation of the AWAKE...
Reinforcement learning (RL) has been successfully applied to various online tuning tasks, often outperforming traditional optimization methods. However, model-free RL algorithms typically require a high number of samples, with training processes often involving millions of interactions. As this time-consuming process needs to be repeated to train RL-based controllers for each new task, it...
The use of autonomous mobile robots in dynamic and uncertain environments requires adaptive and robust decision-making. Synchronized digital twins — real-time virtual counterparts of physical systems — offer a promising approach to improving planning, increasing robustness, and enhancing adaptability. However, developing such systems presents significant challenges, including balancing...