A Benchmark for Deep Reinforcement Learning-Based Control of Liquid-Propellant Rocket Engines

Apr 4, 2025, 10:30 AM
20m
DESY

Poster + Talk

Speaker

Kai Dresia

Description

Deep reinforcement learning (DRL) has demonstrated great potential for controlling and regulating complex real-world systems such as tokamak fusion reactors and particle accelerators. Another promising application is the DRL-based control of liquid-propellant rocket engines (LPREs), which has been a focus of research at the German Aerospace Center (DLR) for the past six years. LPREs are safety-critical systems in which reliability and robustness are of utmost importance. A key difficulty is the discrepancy between simulation models and real-world behavior, combined with the limited availability of real data. An ideal DRL-based controller should be capable of adapting to such discrepancies to ensure robustness and reliability.

To address these challenges, we present a benchmark for LPRE control designed to evaluate DRL-based control strategies. This benchmark includes simulation software calibrated with experimental data, a dataset for fine-tuning, and the ability to simulate representative errors in both sensors and the system itself. The findings from this benchmark are expected to be transferable to particle accelerators and similar systems.
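
The abstract does not specify the benchmark's programming interface. Purely as an illustration of how configurable sensor and system errors might be exposed to an RL agent, the sketch below shows a minimal, hypothetical gym-style environment in Python. All names and parameters (ToyLPREEnv, sensor_noise_std, gain_error, the toy pressure dynamics) are invented for this example and are not part of the actual benchmark.

import numpy as np

class ToyLPREEnv:
    # Hypothetical stand-in for an LPRE control environment: a single
    # chamber-pressure state tracks a commanded setpoint through first-order
    # valve dynamics. Additive sensor noise and a multiplicative plant-gain
    # error emulate the kind of sensor/system faults the benchmark describes.
    def __init__(self, dt=0.01, sensor_noise_std=0.0, gain_error=0.0, seed=None):
        self.dt = dt
        self.sensor_noise_std = sensor_noise_std   # std. dev. of sensor noise
        self.gain = 1.0 + gain_error               # plant/model mismatch
        self.rng = np.random.default_rng(seed)
        self.setpoint = 0.7                        # normalized target pressure
        self.pressure = 0.0

    def reset(self):
        self.pressure = 0.0
        return self._observe()

    def step(self, action):
        # First-order pressure response to the (clipped) valve command.
        action = float(np.clip(action, 0.0, 1.0))
        self.pressure += self.dt * self.gain * (action - self.pressure)
        reward = -abs(self.setpoint - self.pressure)   # tracking-error penalty
        return self._observe(), reward, False, {}

    def _observe(self):
        # Noisy pressure reading, mimicking an imperfect sensor.
        return self.pressure + self.rng.normal(0.0, self.sensor_noise_std)

# Example: evaluate one fixed control law under nominal and perturbed plants.
for gain_error in (0.0, 0.2):
    env = ToyLPREEnv(sensor_noise_std=0.01, gain_error=gain_error, seed=0)
    obs, total = env.reset(), 0.0
    for _ in range(500):
        obs, r, _, _ = env.step(env.setpoint + 0.8 * (env.setpoint - obs))
        total += r
    print(f"gain_error={gain_error:+.1f}  return={total:.2f}")

In this spirit, a benchmark user would sweep the error parameters to probe how robust a trained controller remains when the simulated plant or its sensors deviate from the nominal model.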

The benchmark will be made freely accessible to the RL community, fostering further research in robust control applications. Our poster outlines the essential steps for deploying DRL controllers on real rocket engines and provides an overview of the benchmark’s components and capabilities.
