Computing resources play a crucial role in modern particle physics research and cover a wide range of use cases: from the enormous data processing demands of large experimental collaborations such as ATLAS, CMS or Belle II, through theory calculations and Monte Carlo simulations that produce huge amounts of data, to the analysis code of PhD and Master's students. These workflows have already been established on the NEMO cluster using virtual worker nodes. This concept has undergone significant development since then, is now based on container technology, and will be deployed on NEMO 2.
At the Institute of Experimental Particle Physics (ETP) in Karlsruhe, an Overlay Batch System (OBS) provides a unified access point to various computing resources for all institute members. A key component of this system is COBalD/TARDIS, a meta-scheduling tool that dynamically integrates additional computing resources from external sites. Over the last years, the bwForCluster NEMO has been the major computing resource for large-scale work at ETP.
This talk introduces the problems particle physicists face in their day-to-day work and explains how the OBS at our institute works, including how resources on NEMO are provisioned and how jobs are run on them. It concludes by emphasising the importance of flexible, scalable computing solutions in advancing particle physics research, with the bwForCluster NEMO playing a pivotal role in ETP's computing infrastructure.