The 10th bwHPC symposium will be held on September 25/26 in Freiburg. Researchers are invited to present scientific results and future projects in the framework of bwHPC and bwCloud, two initiatives to provide researchers in Baden-Württemberg with large-scale digital infrastructure resources. A particular focus will be on sustainability and energy efficiency in the fields of high-performance computing, cloud computing, and research data management.
The event offers workshops, tutorials, and scientific presentations along with opportunities to enter into a dialogue with senior researchers already making use of these significant computational resources. The symposium is free of charge and attendance is open to researchers from all scientific fields. Participation from outside Baden-Württemberg is explicitly encouraged.
Registration open for Workshops/Tutorials:
Winners of best Talk/best Poster:
In this three-hour, hands-on workshop we will introduce the fundamentals of using Jupyter Notebooks for data analysis, data visualization, and machine learning with Python.
We will utilize popular packages for scientific computing and data analysis, including NumPy, Pandas, and Matplotlib.
Furthermore, we will introduce opportunities for Python-based research on High-Performance Computing (HPC) resources within the bwHPC project.
The workshop includes a basic introduction to the Python programming language, so no prior programming experience in Python is required.
Registration:
Workshop: Scientific Programming with Python using Jupyter Notebooks
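As a taste of the material covered in the workshop, here is a minimal notebook-style sketch combining NumPy, Pandas, and Matplotlib; the synthetic data and column names are invented for illustration:

```python
# Minimal notebook-style example: generate data with NumPy, organize it
# with Pandas, and plot it with Matplotlib.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic measurement data (stand-in for a real CSV file).
rng = np.random.default_rng(seed=42)
df = pd.DataFrame({
    "temperature": rng.normal(loc=20.0, scale=2.0, size=200),
    "pressure": rng.normal(loc=1013.0, scale=5.0, size=200),
})

print(df.describe())  # quick summary statistics

# Scatter plot of the two columns.
df.plot.scatter(x="temperature", y="pressure")
plt.title("Synthetic sensor data")
plt.show()
```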
Biocatalysis has gained a reputation as a sustainable alternative to conventional catalysis over the past years. Still, several limitations need to be addressed in order to make this method a competitive candidate for industrial applications. The catalytic activity of unspecific peroxygenases has already been investigated in several experimental studies, with promising results in terms of turnover rates and stability. Our research therefore focuses on the investigation of the catalytic cycle of this enzyme class. As model proteins we concentrate on $Aae$UPO and $Cvi$UPO; these heme-thiolate enzymes oxidize their substrates using $\mathrm{H_2O_2}$ as a co-substrate. Using DFT calculations and QM/MM simulations of the whole enzyme system, we investigate how the molecular structure of the protein influences the energy barriers along the catalytic cycle. Furthermore, we want to gain a deeper understanding of possible reaction channels for heme poisoning by additional hydrogen peroxide.
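To give a flavor of the DFT single-point calculations that underlie such barrier scans, here is a minimal sketch using Psi4; the H2O2 geometry, functional, and basis set are illustrative assumptions, not the setup used in this work:

```python
# Hedged sketch: a DFT single-point energy for H2O2 with Psi4.
# Geometry, functional, and basis set are illustrative only.
import psi4

psi4.set_memory("2 GB")

h2o2 = psi4.geometry("""
0 1
O
O 1 1.45
H 1 0.97 2 100.0
H 2 0.97 1 100.0 3 115.0
""")

# B3LYP/6-31G* single-point energy (in Hartree).
energy = psi4.energy("b3lyp/6-31g*")
print(f"E(B3LYP/6-31G*) = {energy:.6f} Eh")
```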
Understanding and modeling the nonequilibrium dynamics of many-body
quantum systems is a crucial goal for many fields in physics and
chemistry. Examples include scattering of molecules off metal surfaces,
charge transport through molecular nanojunctions, spintronics, and
molecular photophysics. This motivates the development of sophisticated
theories capable of treating not only the large-dimensional nature of
molecular systems but also inelastic interactions and nonequilibrium
conditions. Even with sophisticated theory, nonequilibrium calculations
of realistic systems require efficient numerical implementations and,
even then, significant computational resources. In this contribution,
we will introduce our work and how it is implemented on bwHPC clusters, as
well as some particular examples highlighting the unique physics and
challenges we face when modeling nonequilibrium dynamics of many-body
quantum systems.
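As a toy illustration of open-system, nonequilibrium dynamics (a generic Lindblad example, not the many-body method of this contribution), a driven, dissipative two-level system can be propagated with QuTiP:

```python
# Toy illustration: Lindblad dynamics of a driven, dissipative two-level
# system with QuTiP. A generic open-quantum-system example, not the
# many-body method described in the contribution.
import numpy as np
from qutip import basis, sigmax, sigmaz, sigmam, mesolve

# System Hamiltonian: level splitting plus a transverse drive.
H = 0.5 * sigmaz() + 0.2 * sigmax()

# Collapse operator modelling decay into the environment.
gamma = 0.05
c_ops = [np.sqrt(gamma) * sigmam()]

# Start in the excited state and track <sigma_z>(t).
psi0 = basis(2, 0)
tlist = np.linspace(0.0, 50.0, 500)
result = mesolve(H, psi0, tlist, c_ops, e_ops=[sigmaz()])

print("final <sigma_z> =", result.expect[0][-1])
```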
In recent years, phenomic prediction has emerged as a new method in the plant breeding community. The method is comparable to genomic prediction, except that near-infrared (NIR) spectra instead of marker data are used to predict various traits. Phenomic prediction has been shown to have great potential; however, many open questions regarding its practical application remain. For example, in the field of spectroscopy it is standard practice to optimize the preprocessing of spectra, which so far has only been done to a limited extent for phenomic prediction. We therefore used three datasets from breeding programs for soybean, triticale, and maize to identify the best combinations of Savitzky-Golay filter parameters for preprocessing near-infrared spectra for phenomic prediction. We tested 677 combinations of polynomial order, derivative, and window size and evaluated them with fivefold cross-validation.
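As a sketch of such a preprocessing grid search (with illustrative parameter ranges and synthetic data, not the exact 677-combination grid of the study), SciPy's savgol_filter exposes exactly these three parameters:

```python
# Sketch: grid search over Savitzky-Golay preprocessing parameters for
# NIR spectra. Parameter ranges and data are illustrative.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
spectra = rng.normal(size=(100, 700))  # 100 samples x 700 wavelengths

for window in (5, 11, 21):
    for polyorder in (2, 3):
        for deriv in (0, 1, 2):
            if polyorder >= window or deriv > polyorder:
                continue  # skip invalid combinations
            smoothed = savgol_filter(
                spectra, window_length=window,
                polyorder=polyorder, deriv=deriv, axis=1,
            )
            # ...feed `smoothed` into the phenomic prediction model and
            # score the combination with fivefold cross-validation here.
```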
Artificial Intelligence (AI) has become indispensable for analyzing large-scale datasets, particularly in the realm of 3D image volumes.
However, effectively harnessing AI for such tasks often requires advanced algorithms and high-performance computing (HPC) resources, presenting significant challenges for non-technical users.
To overcome these barriers, we present KI-Morph, a novel software platform for large-scale image analysis seamlessly integrated with the bwHPC infrastructure.
It offers a user-friendly interface, enabling sophisticated AI-driven analysis without requiring technical expertise in either AI or HPC.
KI-Morph prioritizes data privacy and sovereignty, ensuring that users retain full control over their data.
Additionally, the components developed for the platform also support researchers with science outreach by enabling the creation of interactive online visualizations, for example via the 2D, 3D, and augmented-reality viewers.
Traditional analysis techniques in data-intensive disciplines like high-energy physics and cosmology have been restricted to hand-crafted, low-dimensional summary statistics. Modern machine learning allows for new methods that attempt to make optimal use of the full high-dimensional data. However, the significant computational cost of these methods requires the use of dedicated GPU clusters, especially when taking into account the expected increase in data collection in future experiments like the HL-LHC and SKA. In this talk I will introduce some new ML-based analysis techniques from fundamental physics. I will highlight their advantages over traditional methods and discuss the computational bottlenecks.
Legal requirements increasingly demand more sustainability in IT. At universities, however, Green IT measures rarely pay off, since universities do not pay for their own electricity. In addition, staff shortages, acceptance problems, lack of support, and organically grown structures are the most common obstacles to Green IT at universities.
Transparency promotes Green IT from individual staff members up to university leadership. This contribution presents the key findings of the GAIT project, in which various measures were analyzed and a measurement methodology as well as key figures for assessing sustainability were developed. Outlook: in the bwCloud3 project, comparable measurement data and key figures are to be collected for the bwCloud. In the future, this will allow users assessing service alternatives and operators assessing Green IT measures to take ecological aspects, e.g. CO2 emissions, into account alongside monetary aspects.
The number of cooperative services in the higher-education environment is growing continuously. While the focus so far has mostly been on how costs, staff effort, and risks can be distributed fairly among the participating institutions, further aspects such as the SDGs or Green IT must now be considered when designing such cooperations. This contribution discusses which existing concepts from the monetary perspective can be transferred and where, particularly for Green IT, a different approach is necessary; keywords here include, for example, climate-neutral universities. The relevant aspects and key figures can be split into individual components, so that a transparent and fair toolbox for cost-allocation models individually tailored to specific services can be provided. This methodological approach is evaluated using the state service bwCloud as an example, and an outlook on further state services is discussed.
We evaluate the role of independent orbital parameters, namely eccentricity, obliquity, and longitude of perihelion, in the simulated sea surface temperature of an Earth-like aquaplanet.
We choose the parameters to lie in the range that Earth underwent during the last 150,000 years and will undergo within the next 150,000 years.
Our results reveal that the sea surface temperature variability within this time span does not exceed 0.3 K, and that obliquity is the main parameter defining the amount of incoming solar radiation that reaches the aquaplanet surface and causes its warming or cooling.
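For context, the dependence of insolation on these orbital parameters can be sketched with the standard textbook formula for daily-mean top-of-atmosphere insolation; the values below are illustrative and not taken from the study:

```python
# Hedged sketch: daily-mean top-of-atmosphere insolation as a function
# of latitude, solar declination (bounded by the obliquity), and the
# Sun-Earth distance (set by eccentricity and longitude of perihelion).
# Standard textbook formula; the values are illustrative only.
import numpy as np

S0 = 1361.0  # solar constant, W m^-2

def daily_mean_insolation(lat_deg, decl_deg, dist_ratio=1.0):
    """Daily-mean TOA insolation in W m^-2.

    lat_deg    : latitude in degrees
    decl_deg   : solar declination in degrees
    dist_ratio : mean distance / actual Sun-Earth distance
    """
    phi = np.radians(lat_deg)
    delta = np.radians(decl_deg)
    # Hour angle of sunset, clipped to handle polar day/night.
    cos_h0 = np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0)
    h0 = np.arccos(cos_h0)
    return (S0 / np.pi) * dist_ratio**2 * (
        h0 * np.sin(phi) * np.sin(delta)
        + np.cos(phi) * np.cos(delta) * np.sin(h0)
    )

# Summer solstice at 65 N (declination = obliquity) for two obliquities.
print(daily_mean_insolation(65.0, 22.0))   # lower obliquity
print(daily_mean_insolation(65.0, 24.5))   # higher obliquity
```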
This study presents a computational investigation of the separation of xylene isomers, with a particular focus on the selective isolation of para-xylene. The separation of xylene isomers is a challenging process due to their similar physical properties, which necessitates the use of energy-intensive and complex separation techniques. To address these challenges, we employed a dual approach combining molecular dynamics (MD) simulations and grand-canonical Monte Carlo (GCMC) simulations. The Computation-Ready Experimental Metal-Organic Framework (CoRE MOF) 2019 database served as the foundation for our computational screening, allowing us to identify promising materials for xylene isomer separation based on diffusion and adsorption characteristics. Our findings highlight specific metal-organic frameworks (MOFs) that exhibit superior selectivity and capacity for para-xylene, offering valuable insights into the design of efficient separation processes for industrial applications.
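To illustrate the grand-canonical idea behind such adsorption screenings, here is a toy GCMC sketch for a non-interacting lattice gas (a Langmuir model); the parameters are illustrative and unrelated to the actual MOF simulations:

```python
# Toy GCMC sketch: adsorption on a non-interacting lattice (Langmuir
# model), illustrating the insertion/deletion acceptance rules used in
# grand-canonical screenings. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
M = 1000              # number of adsorption sites
beta = 1.0            # 1 / (k_B T), reduced units
mu, eps = -1.0, -2.0  # chemical potential and site adsorption energy

occupied = np.zeros(M, dtype=bool)
for step in range(200_000):
    i = rng.integers(M)
    if not occupied[i]:
        # Insertion: accept with min(1, exp(beta * (mu - eps))).
        if rng.random() < np.exp(beta * (mu - eps)):
            occupied[i] = True
    else:
        # Deletion: accept with min(1, exp(-beta * (mu - eps))).
        if rng.random() < np.exp(-beta * (mu - eps)):
            occupied[i] = False

theta = occupied.mean()
langmuir = 1.0 / (1.0 + np.exp(-beta * (mu - eps)))
print(f"simulated coverage {theta:.3f} vs Langmuir {langmuir:.3f}")
```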
Using Virtual Machines (VMs) with dedicated rendering and remote-access capabilities, virtual workplaces can be created. If this is to happen on a large scale in the cloud, so-called Virtual Desktop Infrastructures (VDIs) become important for the dynamic provisioning of virtual desktops. A sustainable VDI should be scalable and should support desktop use cases with different resource requirements. Some use cases involve hundreds of similar VMs running in parallel, which requires proper resource planning in advance. Timed, long-term scheduling of VM placements on compute nodes is a major challenge. Further requirements arise from long-term reservations, the capabilities of the compute nodes and guest OS, and remote access. Summarizing the state of the art and outlining use cases for a VDI on OpenStack, this paper discusses the considerations and steps required to extend OpenStack services and develop scheduling components to operate an open-source VDI for various use cases in the academic field.
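As a minimal sketch of the kind of VM provisioning a VDI automates on OpenStack (assuming the openstacksdk Python bindings; the cloud, image, flavor, and network names are placeholders), one virtual desktop could be started like this:

```python
# Hedged sketch: starting a single virtual desktop VM with openstacksdk.
# Cloud, image, flavor, and network names are placeholders.
import openstack

conn = openstack.connect(cloud="my-cloud")  # entry from clouds.yaml

image = conn.compute.find_image("ubuntu-22.04-desktop")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("vdi-net")

server = conn.compute.create_server(
    name="vdi-desktop-001",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the VM reaches ACTIVE state.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

A VDI scheduler would issue many such requests, deciding placement and timing across compute nodes instead of creating VMs one by one.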
There is a huge demand for research data management (RDM), including the sharing of (large-scale) data with colleagues and partners outside one's own institution. Although there are established services in RDM for various purposes (GitLab, InvenioRDM, baselevel NFS/SMB, object storage, Nextcloud), one component has been missing, namely (large-scale, public) file exchange with self-administered user and access management.
An answer to this is SFTPGo in conjunction with the S3 object store. The S3 object store enables you to store large amounts of data efficiently, securely, and at a reasonable price. However, an object store does not behave like traditional file storage, and many users are overwhelmed when having to interact with such a storage system. This is where SFTPGo comes into play. SFTPGo enables the user to access an S3 object store using familiar access protocols, making the object store almost look like a traditional file system.
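To illustrate the "familiar access protocols" point, a minimal sketch using the Paramiko SFTP client follows; the hostname, port, and credentials are placeholders:

```python
# Sketch: accessing the S3-backed SFTPGo service with a standard SFTP
# client (Paramiko). Hostname, port, and credentials are placeholders.
import paramiko

transport = paramiko.Transport(("sftp.example.org", 2022))
transport.connect(username="alice", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)

# The S3 bucket behaves like an ordinary remote file system.
print(sftp.listdir("."))
sftp.put("results.csv", "shared/results.csv")

sftp.close()
transport.close()
```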
Computing resources play a crucial role in modern particle physics research, covering a wide range of use cases. These range from the enormous data-processing demands of large experimental collaborations such as ATLAS, CMS, or Belle II, through theory calculations and Monte Carlo simulations producing huge amounts of data, to the analysis code of PhD and Master's students. These workflows have already been established on the NEMO cluster using virtual worker nodes. This concept has undergone significant development in the past, is now based on container technology, and will be deployed for NEMO 2.
At the Institute of Experimental Particle Physics (ETP) in Karlsruhe, an Overlay Batch System (OBS) is used to provide a
unified access point to various computing resources for all institute members. A key feature of this system is
COBalD/TARDIS, a meta-scheduling tool that dynamically incorporates additional computing resources from external sites. In recent years, the bwForCluster NEMO has been the major computing resource for large-scale work at ETP.
This talk introduces the different problems particle physicists face in their day-to-day work and explains how the OBS at our institute works, including the process of provisioning resources on NEMO and running jobs. It concludes by emphasising the importance of flexible, scalable computing solutions in advancing particle physics research, with the bwForCluster NEMO playing a pivotal role in ETP's computing infrastructure.
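Overlay batch systems of this kind are commonly built on HTCondor. As a hedged sketch only (the executable name and resource requests are placeholders, not ETP's actual configuration), a user job could be submitted through the HTCondor Python bindings like this:

```python
# Hedged sketch: submitting an analysis job to an HTCondor-based overlay
# batch system via the Python bindings. All attributes are placeholders,
# not the actual ETP/NEMO configuration.
import htcondor

sub = htcondor.Submit({
    "executable": "run_analysis.sh",
    "arguments": "--dataset ttbar_2024",
    "request_cpus": "4",
    "request_memory": "8GB",
    "output": "job.out",
    "error": "job.err",
    "log": "job.log",
})

schedd = htcondor.Schedd()
result = schedd.submit(sub, count=1)
print("submitted cluster", result.cluster())
```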