Hartwig Anzt's research focuses on developing and optimizing numerical methods for efficient high-performance computing. In particular, he is interested in sparse linear algebra, iterative and asynchronous methods, Krylov solvers, and parallel preconditioning methods. He also works on fault tolerance, energy efficiency, and GPU computing. He is the head of the Junior Research Group Fixed-Point Methods for Numerics at Exascale (FiNE) at the Karlsruhe Institute of Technology. He is part of the CUDA Center of Excellence and is involved in the BEAST and MAGMA projects, where he contributes in particular functionality for sparse linear algebra. He is the managing lead of the Ginkgo numerical linear algebra package. Within the Exascale Computing Project, he is part of the effort "Production-ready, Exascale-Enabled, Krylov Solvers (PEEKS) for Exascale Computing".
Istvan is a developer at SAP SE. He is passionate about git and scalable distributed systems. He is a member of the team in Karlsruhe that provides observability services for the SAP Cloud Platform.
Paulo is an engineer passionate about emerging cloud services and high-performance applications. At CERN he works as a software engineer and DevOps specialist, responsible for the provisioning, automation, and management of HNSciCloud batch worker nodes using Terraform (OpenStack & CloudStack), Puppet, Rundeck & Python PSSH. He also develops concurrent adaptive load balancing in Go and is a member of the WhiteHat group at CERN.
Max Fischer is the representative for the ALICE collaboration at GridKa. At GridKa, he juggles responsibilities for the GridKa HTCondor batch system and XRootD storage services.
Previously, he gained his doctoral degree in particle physics with the CMS collaboration; there, his main focus was calibration studies and the development of new middleware solutions for high-performance, distributed data analysis.
Max has a habit of dabbling in a broad range of fields, with a focus on algorithms and software development for data analysis. Affiliated with several overlapping domains, he tends to favour efficiency and manageability over specialization and perfection. When he has to get his hands dirty, his tool of choice is the Python programming language. With years of experience in script and framework development, Max knows the strengths and weaknesses of the language all too well.
Hari is a developer at SAP SE. He is a member of the team in Karlsruhe that builds, deploys, and runs massive logging infrastructure for platform components and applications running on Cloud Foundry and other distributed environments.
He holds a Master's degree in Distributed Software Systems from the Technical University of Darmstadt, Germany. His Master's thesis was on the topic “Distributed partitioning Algorithm for large data-sets based on Geo-spatial information on Apache Spark”. Slides: https://goo.gl/M7Uwv8
His daily work focuses on running Elasticsearch: handling scale-in/out issues, synchronizing cluster nodes' lifecycles with BOSH's lifecycle, providing multi-tenancy with OAuth2, and log-based alerting. He is currently working on a WebSocket-based ingestion point for public log ingestion from other platforms such as Kubernetes and IoT services, in an open-service-broker-like setup.
Georg Hager holds a PhD in Computational Physics from the University of Greifswald. He is a senior researcher in the HPC Services group at Erlangen Regional Computing Center (RRZE) at the University of Erlangen-Nuremberg and an associate lecturer at the Institute of Physics of the University of Greifswald. His recent research includes architecture-specific optimization strategies for current microprocessors, performance engineering of scientific codes on the chip and system levels, and special topics in shared-memory and hybrid programming. His daily work encompasses all aspects of user support in High Performance Computing, such as tutorials and training, code parallelization, profiling and optimization, and the assessment of novel computer architectures and tools. His textbook “Introduction to High Performance Computing for Scientists and Engineers” is recommended or required reading in many HPC-related lectures and courses worldwide. In his teaching activities he puts a strong focus on performance modeling techniques that lead to a better understanding of the interaction of program code with the hardware.
Hilary Hanahoe is Secretary General of the Research Data Alliance (RDA). Her role is to support the RDA membership and its governance structures in achieving the goal of the Research Data Alliance: to accelerate international data-driven innovation and discovery by facilitating research data sharing and exchange. Hilary was appointed on 1 February 2018 and previously held multiple roles within the Alliance, including coordinator of the RDA Europe initiative and Communications and Plenary Manager for the RDA global secretariat. She also managed the process for the recognition of RDA recommendations as ICT technical specifications in Europe. More information at www.linkedin.com/in/hilary-hanahoe.
Andreas earned his PhD as an experimental particle physicist at Forschungszentrum Jülich/Ruhr University Bochum. He investigated the application of graphics processing units (GPUs) for track reconstruction in the online event selection system of the PANDA experiment. After graduating, he joined the NVIDIA Application Lab of the Supercomputing Centre of Forschungszentrum Jülich, where he enables scientific applications for GPUs and improves their performance.
Milosch Meriac is involved in hardware, embedded software, protocols and security projects around the Internet of Things and considers himself a white-hat hacker.
Milosch Meriac has over 20 years of professional experience in embedded programming, hardware development and the information security business. He enjoys breaking things, working on microcontroller security and improving IoT security.
His focus is on hardware design, embedded systems, RF designs, active and passive RFID hardware development, custom-tailoring of embedded Linux hardware platforms, real time systems, IT-security, hardware & software reverse engineering and security evaluations of embedded systems. 
Oliver Oberst earned his PhD in particle physics at KIT, focusing on the analysis of QCD data from the CMS experiment and the development of tools to manage virtualized worker nodes. After two years as a postdoc at KIT, managing the CMS Tier 1 contact team for GridKa as well as coordinating the German CMS grid computing group (DCMS), Oliver joined IBM as an Industry Solution Architect. Since then he has designed solutions for research and higher-education customers with an emphasis on technical computing (HPC, HTC), big data analytics, and cloud computing.
Gareth O'Neill is a doctoral candidate in linguistics at Leiden University and is president of the European Council of Doctoral Candidates and Junior Researchers (Eurodoc). He is interested in science policy for researchers and in improving the broad implementation and skills training of Open Science across Europe. Gareth was actively involved in the Dutch National Plan for Open Science, is an expert on Open Science for the European Commission, and is a member of the H2020 Advisory Group on Marie Skłodowska-Curie Actions at the European Commission. He appreciates decent pints of Guinness and sailing in traditional Galway Hookers.
Danilo is a particle physicist specialised in the design and development of high-performance scientific software. He earned his PhD at KIT in Germany and works at CERN in the Software group of the Experimental Physics Department. As a software performance expert, he is responsible for the modernisation and parallelisation of the CERN software suite and has recently focussed on the redesign of analysis and data I/O tools within the ROOT project.
The research of Prof. Dr. Gordon Pipa is focused on understanding how information processing and cognitive phenomena can arise from the collective self-organization of elements interacting across many spatial and temporal scales. In particular he studies, first, synchronization of neuronal activity in delay-coupled systems; second, information processing in self-organized complex systems in different dynamical states, such as self-organized criticality; and third, the use of time series analysis to understand how information flow can take place between neural activity occurring at different spatial and temporal scales. The long-term goal of his research is to identify principles that shape neuronal activity and are used to process information in a multi-scale system like the brain.
Alexander Schug has led a research group at the Steinbuch Centre for Computing at KIT since 2011. He obtained his PhD in 2005 at the University of Dortmund and worked as a postdoctoral scholar in Kobe (Japan) and San Diego (US) before returning to Europe as an Assistant Professor (Umeå, Sweden). His research interests include Theoretical Biophysics, Biomolecular Simulations, and High Performance Computing. His work has received multiple awards, including the FIZ Chemie Berlin Preis from the German Chemical Society (GDCh) and a Google Faculty Research Award 2016.
Christian leads communications for BASF’s digitalization initiative “BASF 4.0”. In this role he orchestrates internal and external communication activities and builds up communities to drive the digital change.
Before joining BASF 4.0, Christian held various positions in Business Management, Marketing and Sales for different businesses located in Ludwigshafen, Singapore and Basel.
Christian studied business administration at the Universities of Hamburg (Diplom-Kaufmann) and Miami (MBA).
Dirk von Suchodoletz is the head of the eScience department of the computer center of the University of Freiburg. The department is responsible for the operation of the bwForCluster NEMO, for strategic developments in computer center operations, and for co-organizing the university's future research data management strategy. It also runs the science cloud instance in Freiburg (bwCloud-SCOPE) and bwLehrpool, a flexible PC-pool environment for easily managing hundreds of campus machines and providing virtual teaching and learning environments. The department is also involved in various eScience projects supported by the state of Baden-Württemberg, such as the ViCE project on Virtual Research Environments. Dirk received his diploma degree in Mathematics at the University of Göttingen and his PhD from the computer science department of the technical faculty in Freiburg.
Todd Tannenbaum is a Researcher in the Center for High Throughput Computing at the University of Wisconsin-Madison (UW-Madison) Department of Computer Sciences with over 19 years of experience developing production distributed computing environments. He directs the development staff and serves as the Technical Lead for the HTCondor Project, a distributed computing research group that produces the award-winning HTCondor software. Prior to his involvement with HTCondor, Todd served as the Director of the Model Advanced Facility, a high-performance computing center in the UW-Madison College of Engineering, and also as a Technology Editor for Network Computing magazine. He received B.S. and M.S. degrees in computer science from UW-Madison.
Peter Tröger is a professor for distributed systems at the Beuth University of Applied Sciences in Berlin. He received his doctoral degree in computer science during his time at the Hasso Plattner Institute in Potsdam. Peter also spent some time at the Blekinge Institute of Technology (Sweden) and at Chemnitz University of Technology. He works on different research questions related to the increasing uncertainty of reliability analysis for modern IT systems.
Peter has a long-standing relation to high-performance and high-throughput computing due to his work in the Distributed Resource Management Application API (DRMAA) group in the Open Grid Forum. He developed and maintained the Condor DRMAA library for several years, and still acts as maintainer for the current DRMAAv2 specification.
Mian Usman is the Network Architect at GÉANT. He received his BSc in Network Management and Design from the University of Portsmouth in 2007 and an MBA from Manchester Business School in 2017. Mian's work focuses on GÉANT network evolution, architecture, and design; he led the technical IP team responsible for designing and deploying GÉANT's IP/MPLS platform and for migrating EoSDH services to EoMPLS. He is also the lead author of the GÉANT Network Evolution plan.
Karsten Wendland holds a degree in computer science and a doctorate in human sciences. He is Professor of Media Informatics at Aalen University and was Visiting Professor at the Institute for Technology Assessment and Systems Analysis (ITAS) at the Karlsruhe Institute of Technology. He is currently researching "Machine Consciousness", which is considered the holy grail in the field of Artificial Intelligence.
For nearly two decades, he has worked on IT ethics, the humane use of information and communication technology, and forward thinking strategies in the areas of artificial consciousness and artificial intelligence (AI).
Since 2000 he has been on the road as a professional speaker and is also the author of four books. He says, "If you don't have a grip on your own future, the future will have its grip on you."
With his talk "Beyond Algorithms. Even Better Systems by Understanding Human Special Features", he shows us how to get a better grip on our future.