Recent advances in fine-tuning large language models (LLMs) with reinforcement learning (RL) have demonstrated that such models generalize better than those trained with the commonly used supervised fine-tuning (SFT).
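To make the distinction concrete, the toy sketch below contrasts the two training signals: SFT maximizes the likelihood of a fixed reference answer, while RL updates the policy toward whatever earns reward. A single-logit Bernoulli "policy" stands in for an LLM; every name and value here is illustrative and not taken from the work described in this abstract.

```python
# Toy REINFORCE sketch: the policy is nudged toward whichever action earns
# reward, rather than imitating a fixed label as in SFT. A one-parameter
# Bernoulli "policy" stands in for an LLM; all values are illustrative.
import math
import random

theta = 0.0  # single logit: P(action = 1) = sigmoid(theta)
lr = 0.1

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def reward(action: int) -> float:
    return 1.0 if action == 1 else 0.0  # action 1 is the "correct" behavior

for _ in range(200):
    p = sigmoid(theta)
    action = 1 if random.random() < p else 0
    grad_logp = action - p  # d/d(theta) of log pi(action) for a Bernoulli policy
    theta += lr * reward(action) * grad_logp  # REINFORCE update

print(f"P(correct action) after training: {sigmoid(theta):.2f}")
```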
Many aspects of particle accelerator operation, such as tuning beam parameters, have well-defined objectives, making them ideal candidates for RL-driven optimization.
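As one example of such a well-defined objective, the sketch below encodes hypothetical beam parameters into a scalar reward. The signal names, units, targets, and weights are placeholders chosen for illustration, not actual ALS control-system quantities.

```python
# Illustrative reward over hypothetical beam parameters; names, units, and
# weights are placeholders, not real ALS control-system signals.
from dataclasses import dataclass

@dataclass
class BeamState:
    emittance_nm: float  # horizontal emittance, nm*rad
    current_ma: float    # stored beam current, mA
    lifetime_h: float    # beam lifetime, hours

def reward(state: BeamState,
           target_emittance_nm: float = 2.0,
           nominal_current_ma: float = 400.0) -> float:
    """A well-defined objective: penalize emittance above target,
    reward stored current and beam lifetime."""
    emittance_penalty = max(0.0, state.emittance_nm - target_emittance_nm)
    current_term = min(state.current_ma / nominal_current_ma, 1.0)
    return current_term + 0.1 * state.lifetime_h - emittance_penalty

# A state near its targets yields a positive reward.
print(reward(BeamState(emittance_nm=2.1, current_ma=395.0, lifetime_h=8.0)))
```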
In this work, we explore the capabilities of current open-source LLMs fine-tuned to understand the peculiarities of the Advanced Light Source (ALS) architecture.
We identify several optimization objectives where such a model can be beneficial and construct a small dataset to test this hypothesis.
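A benchmark entry in such a dataset might look like the record below; the field names, wording, and file name are invented for illustration and are not the dataset described here.

```python
# Hypothetical benchmark record; every field name and value is invented
# for illustration, not taken from the dataset described in the abstract.
import json

record = {
    "prompt": (
        "The stored current dropped from 500 mA to 480 mA while the "
        "horizontal emittance rose by 5%. What is a likely cause?"
    ),
    "objective": "state_interpretation",  # which optimization objective it probes
    "reference": "A change in an insertion-device gap perturbing the optics.",
}

with open("als_benchmark.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```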
Our goal is to demonstrate how RL can enhance an LLM’s ability to interpret accelerator states, optimize performance, and provide intelligent insights into beam dynamics.