GridKa School 2019 - The Art of Data

Europe/Berlin
KIT, Campus North, FTU

Description

[GridKa School 2019 group picture]

The International GridKa School 2019 is one of the leading summer schools for advanced computing techniques in Europe. The school provides a forum for scientists, technology leaders, experts, and novices to facilitate knowledge sharing and information exchange. The target audience includes graduate and PhD students, advanced users, and IT administrators. GridKa School is hosted by the Steinbuch Centre for Computing (SCC) of the Karlsruhe Institute of Technology (KIT).

Workshops - Plenary Talks - Social Events

Hands-on sessions and workshops give participants an excellent and unique chance to gain practical experience with cutting-edge technologies and tools.

Plenary talks presented by experts cover theoretical aspects of school topics and focus on innovative features of data science on modern architectures.

Two social events are an important part of the school. Participants can network and have fun, getting in touch with interesting people in a relaxed atmosphere.

Topical Highlights

  • State-of-the-art programming and modern programming languages
  • Scalable and extensible computing infrastructure for scientific computing
  • Machine Learning

Partners

The GridKa School is proudly supported by

    • 8:55 AM – 12:00 PM
      Plenary Aula (FTU)

      Convener: Max Fischer (Karlsruhe Institute of Technology)
      • 9:00 AM
        Why Rust? 40m
        Speaker: Mr Oliver Scherer (KIT - Karlsruhe Institute of Technology)
      • 9:40 AM
        Kubernetes meets Data Scientists - An Experience Report 40m
        Speaker: Prof. Peter Tröger (Beuth Hochschule für Technik Berlin)
      • 10:20 AM
        News 5m
        Speaker: René Caspart (KIT - Karlsruhe Institute of Technology (DE))
      • 10:25 AM
        Coffee Break 15m
      • 10:40 AM
        Make Your Data FABulous 40m

        The CAP theorem is widely known for distributed systems, but it's not the only tradeoff you should be aware of. For datastores there is also the FAB theory, and just like with the CAP theorem you can only pick two:
        * Fast: Results are real-time or near real-time instead of batch oriented.
        * Accurate: Answers are exact and don't have a margin of error.
        * Big: You require horizontal scaling and need to distribute your data.
        While Fast and Big are relatively easy to understand, Accurate is a bit harder to picture. This talk shows some concrete examples of accuracy tradeoffs Elasticsearch can take for terms aggregations, cardinality aggregations with HyperLogLog++, and the IDF part of full-text search, and how to trade some speed or distribution for more accuracy.
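
        As a flavour of these accuracy tradeoffs, here is a minimal sketch (not part of the talk; the index and field names are made up, and it assumes the 8.x Elasticsearch Python client) that requests a terms aggregation and a HyperLogLog++-based cardinality aggregation, whose precision_threshold trades memory for accuracy:

          from elasticsearch import Elasticsearch

          es = Elasticsearch("http://localhost:9200")

          resp = es.search(
              index="logs",   # hypothetical index
              size=0,         # aggregations only, no hits
              aggs={
                  # top terms: counts can be approximate because each shard
                  # only reports its local top N
                  "top_users": {"terms": {"field": "user.id", "size": 10}},
                  # distinct count: approximated with HyperLogLog++
                  "unique_users": {
                      "cardinality": {"field": "user.id", "precision_threshold": 3000}
                  },
              },
          )
          print(resp["aggregations"]["unique_users"]["value"])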

        Speaker: Philipp Krenn (Elastic)
      • 11:20 AM
        Quantum Computing and IBM Q: An Introduction 40m
        Speaker: Dr Oliver Oberst (IBM)
    • 12:00 PM – 1:15 PM
      Lunch Break FTU

    • 1:15 PM – 6:00 PM
      Tutorials
      • 1:15 PM
        Concurrent Programming in C++ 4h 45m 162 (FTU)

        In this course we will introduce how to program for concurrency in C++, taking advantage of modern CPUs' ability to run multi-threaded programs on different CPU cores. We will briefly review the native C++ concurrency features for
        asynchronous execution, thread spawning and locking, as well as a few other features useful for concurrent programming. The tutorial will then show you how to use Intel's Threading Building Blocks (TBB) library as a much higher-level
        abstraction over concurrency that allows concurrent scientific applications to be developed much more quickly. We will examine TBB's basic templates for parallel programming, controlling loops and reductions in 1 and 2 dimensions. Then we will see how TBB's graph execution facilities allow more sophisticated parallel workflows, expressed as a DAG, to be run. We will look briefly at the TBB task manager, which allows arbitrary workloads to be executed in parallel when injected into the system by a higher-level component. Our goal will be to build up a simple application that exploits multi-level concurrency, executing different tasks, some of which exploit inner-loop optimisations.

        Students should be familiar with C++ and the standard template library. Some familiarity with makefiles and/or CMake would be useful.

        Speaker: Graeme A Stewart (CERN)
      • 1:15 PM
        Docker Container Hands-On 4h 45m Aula (FTU)

        Container technologies are rapidly becoming the preferred way to distribute, deploy, and run services by developers and system administrators. They provide the means to start a light-weight virtualization environment, i.e., a container, based on Linux kernel namespaces and control groups (cgroups). Such a virtualization environment is cheap to create, manage, and destroy, requires a negligible amount of time to set up, and provides performance comparable to that of the host. Docker offers an intuitive way to manage containers by abstracting and automating the low-level configuration of namespaces and cgroups, ultimately enabling the development of an entire ecosystem of tools and products around containers.

        This workshop covers aspects ranging from the basic concepts of Docker (e.g., set up of a Docker environment on your machine, run a container interactively, build-tag-publish images) to the deployment of complex service stacks using container clusters and orchestration software (e.g., Docker Compose and Kubernetes). The workshop will discuss in detail the concepts of network, volume, and resource management, demonstrating that containers are suitable for a variety of applications and their actual advantages over traditional virtual machines.
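
        As a minimal taste of how containers can also be driven programmatically (not part of the workshop material, which works with the Docker tooling directly; the image and command are arbitrary examples), the Docker SDK for Python can start a throw-away container:

          import docker

          client = docker.from_env()    # talk to the local Docker daemon
          output = client.containers.run(
              "python:3.11-slim",       # image is pulled if not present locally
              ["python", "-c", "print('hello from a container')"],
              remove=True,              # clean up the container afterwards
          )
          print(output.decode())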

        Note: The workshop includes hands-on exercises. To get the most out of the tutorial part, you should bring your own laptop and have an Internet connection. You should also be comfortable working with the Linux terminal, editing files with common editors (e.g., vi, nano, emacs), and installing packages over the command line.

        Speaker: Enrico Bocchi (CERN)
      • 1:15 PM
        Elasticsearch and Elastic Stack: Search and Beyond 4h 45m 157 (FTU)

        Elasticsearch is the most widely used full-text search engine, but is also very common for logging, metrics, and analytics. This exercise shows you what all the rage is about:
        1. Overview of Elasticsearch and how it became the Elastic Stack.
        2. Full-text search deep dive:
        * How full-text search works in general and how it differs from databases.
        * How the score or quality of a search result is calculated.
        * How to handle languages, search for terms and phrases, run boolean queries, add suggestions, work with ngrams, and more with Elasticsearch.
        3. Going from search to logging, metrics, and analytics:
        * System metrics: Keep track of network traffic and system load.
        * Application logs: Collect structured logs in a central location from your systems and applications.
        * Uptime monitoring: Ping services and actively monitor their availability and response time.
        * Application metrics: Get the information from the applications such as nginx, MySQL, or your custom Java applications.
        * Request tracing: Trace requests through an application and show how long each call takes and where errors are happening.
        And we will do all of that live, since it is so easy and much more interactive that way.
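
        For a first impression of the search part, here is a minimal sketch (not the tutorial's material; index, field and query text are made up, assuming the 8.x Python client) of a full-text match query whose hits come back ranked by relevance score:

          from elasticsearch import Elasticsearch

          es = Elasticsearch("http://localhost:9200")

          resp = es.search(
              index="talks",   # hypothetical index
              query={"match": {"abstract": "distributed search"}},
          )
          for hit in resp["hits"]["hits"]:
              # _score is the relevance value discussed in the deep dive
              print(hit["_score"], hit["_source"]["title"])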

        Speaker: Philipp Krenn (Elastic)
      • 1:15 PM
        Introduction to Go 4h 45m 164 (FTU)

        In this workshop, we will introduce the basics of programming in Go and
        then work our way up to concurrency programming with this relatively new
        language.

        We'll start with the usual "Hello World" program, introduce functions,
        variables, packages and then interfaces.
        Then, we will tackle the two main tools at the disposal of the Go programmer
        (colloquially known as a gopher): channels and goroutines.
        This will be done by implementing a small peer to peer application transmitting
        text messages over the network.

        The workshop wraps up with a whirlwind tour of scientific and non-scientific
        libraries readily available, and prospects/news about the next Go version.

        Speaker: Sebastien Binet (IN2P3/CNRS)
      • 1:15 PM
        Quantum Computing 4h 45m 163 (FTU)

        This Quantum Computing tutorial will enable participants to access and run calculations on real quantum computers from IBM. The course gives an introduction to the IBM Q Experience as well as to the Quantum Information Science Kit (Qiskit), an open-source quantum computing framework for leveraging today's quantum processors and conducting research. Basic knowledge of Quantum Mechanics, Linear Algebra and Python is helpful but not mandatory.
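
        As a taste of what running a circuit looks like, here is a minimal sketch (not the course material; it follows the 2019-era Qiskit interface and uses a local simulator) that prepares and samples a Bell state:

          from qiskit import QuantumCircuit, Aer, execute

          qc = QuantumCircuit(2, 2)
          qc.h(0)                      # put qubit 0 into superposition
          qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
          qc.measure([0, 1], [0, 1])

          backend = Aer.get_backend("qasm_simulator")
          counts = execute(qc, backend, shots=1024).result().get_counts()
          print(counts)                # roughly half '00' and half '11'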

        Speaker: Dr Oliver Oberst (IBM)
      • 1:15 PM
        Rust Workshop - Day 1: Introduction to Rust 4h 45m 156 (FTU)

        Learning how to develop software for embedded devices has been a treacherous road for a long time. Dangers of undefined behavior coupled with quirks of embedded devices and “beginner” fallacies cause many promising developers to shy away from embedded development. This workshop is made up of two parts:

        The first day covers Rust's safety and usability features. The focus lies on ownership semantics for memory safety, multithreading and thread safety, and closes with applying these to writing hardened networked software that can be exposed to the internet without requiring additional firewalls or intrusion detection software.

        Speaker: Mr Oliver Scherer (KIT - Karlsruhe Institute of Technology)
    • 6:30 PM – 10:00 PM
      Tarte Flambée Evening SCC terrace (KIT Campus North, building 449 (SCC))

    • 9:00 AM – 12:00 PM
      Plenary Aula (FTU)

      Convener: Andreas Heiss (KIT)
    • 12:00 PM – 1:15 PM
      Lunch Break FTU

    • 1:15 PM – 6:00 PM
      Tutorials
      • 1:15 PM
        Advanced Go: writing concurrent and distributed programs 4h 45m 164 (FTU)

        In this workshop, we'll implement a distributed peer-to-peer chat application, involving WebSockets, HTTP servers and the famous goroutines and channels that Go programmers can wield to achieve and tame concurrency.
        We'll first start with simple command-line applications that marshal and unmarshal JSON messages and then connect all these building blocks to eventually create a web-based peer-to-peer chat application.

        The workshop wraps up with a whirlwind tour of scientific and non-scientific libraries readily available, and prospects/news about the next Go version.

        Speaker: Sebastien Binet (IN2P3/CNRS)
      • 1:15 PM
        ELK log analysis hands-on 4h 45m 163 (FTU)

        This tutorial is aimed at intermediate Linux users who are interested in a modern way of collecting and analysing logs.
        You will learn how to set up an Elasticsearch cluster, with a few tuning recommendations. Next you will learn how to collect and process log data with Filebeat. Then we will focus on advanced log processing with Logstash, where you will learn how to categorise, transform and store data. Kibana-aided data visualization will guide us through the whole workshop.
        You will need a laptop with an SSH client (PuTTY on Windows) and a web browser. Knowledge of the basic Linux CLI is an advantage but not a necessity.

        Speaker: Alexandr Mikula (Czech Academy of Sciences)
      • 1:15 PM
        Introduction to HTCondor 4h 45m 156 (FTU)

        How to distribute your compute tasks and get results with high performance, keeping machines and site admins joyful

        HTCondor is an open source workload management system for High Throughput Computing designed to collect many different resources (servers from different computing centres, desktops or cloud services) into one common computing environment. These resources are transparently exposed to the users.
        HTCondor is not only used in the High Energy Physics community and in the CERN batch services, but is also widely adopted in other science areas and in industry. It integrates support for several container runtimes, which allows the use of software stacks defined by the user or offered by a site or the community.

        Compared to other well-known workload managers, it does not make use of the concept of different queues or partitions, but applies a fair-share algorithm to distribute resources dynamically according to the users' requests.
        A flexible mechanism called "ClassAds", used to represent the characteristics and constraints of machines and jobs, allows for very dynamic configuration for both users and administrators.

        In this tutorial, we will start with simple job submissions to illustrate how jobs are matched and data is transferred by HTCondor, continue with more complex batch submission examples, and also discuss DAGs, which can be used to express complex inter-job dependencies and full analysis workflows.
        Care will be taken to illustrate different models of how HTCondor may be operated at various sites and how to use it in a well-performing way depending on that. We will also briefly discuss how containers can simplify or complicate your workflow in the context of HTCondor.
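
        For a first impression of what a submission looks like, here is a minimal sketch (not the tutorial's material; the executable, resource requests and file names are placeholders) using the HTCondor Python bindings of the 8.x era; the classic route is an equivalent plain submit file passed to condor_submit:

          import htcondor

          job = htcondor.Submit({
              "executable": "/bin/sleep",   # placeholder payload
              "arguments": "60",
              "request_cpus": "1",
              "request_memory": "128MB",
              "output": "job.out",
              "error": "job.err",
              "log": "job.log",
          })

          schedd = htcondor.Schedd()        # talk to the local scheduler
          with schedd.transaction() as txn:
              cluster_id = job.queue(txn)   # returns the new cluster id
          print("submitted cluster", cluster_id)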

        Participants are assumed to be already familiar with Linux. Prior contact with analysis workflows or a local computing cluster is also a very welcome ingredient.

        Speaker: Oliver Freyermuth (University of Bonn)
      • 1:15 PM
        Introduction to OpenMP and MPI 4h 45m Aula (FTU)

        In this tutorial, we will introduce parallel programming with MPI and OpenMP, and the differences between the two parallel programming paradigms, with hands-on exercises. The hands-on session will start with some basic OpenMP directives. More focus will be put on parallel programming with MPI, where we will cover blocking/non-blocking communication and collective communication.
        For more advanced participants, we will have exercises on topics such as groups, communicators, derived datatypes, one-sided communication and MPI 3.0 shared memory. The course material used in this tutorial was developed by HLRS.
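
        As a rough illustration of the communication patterns covered (the tutorial itself uses C/C++ or Fortran; this Python/mpi4py sketch is only an analogy and the script name is arbitrary), here is a blocking point-to-point exchange plus a collective reduction, launched with something like "mpirun -n 4 python sketch.py":

          from mpi4py import MPI

          comm = MPI.COMM_WORLD
          rank = comm.Get_rank()

          if rank == 0:
              comm.send({"greeting": "hello"}, dest=1, tag=0)   # blocking send
          elif rank == 1:
              msg = comm.recv(source=0, tag=0)                  # blocking receive
              print("rank 1 received", msg)

          total = comm.reduce(rank, op=MPI.SUM, root=0)         # collective reduction
          if rank == 0:
              print("sum of all ranks:", total)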

        Prerequisites: C/C++ or Fortran programming

        Speaker: Zheng Meyer-Zhao (ASTRON)
      • 1:15 PM
        Machine Learning with Neural Networks 4h 45m 157 (FTU)

        Machine learning, and especially deep learning, is one of the current hot topics in computer science and engineering. It has not only experienced tremendous advancements in its theoretical foundations during the last few years, but is now also the state-of-the-art method in a broad range of applications. In this course, you will learn the basic terms and approaches in machine learning, understand the fundamental concepts of logistic regression and neural networks as well as build your own first deep learning models.

        Using small to mid-sized application use cases from science and computer vision, you are going to experience how to put the gained knowledge into practice.
        As the machine learning framework of choice, we are going to use the TensorFlow library as the computational back-end to the deep learning library Keras in the Python programming language (some prior Python knowledge is necessary). Using modern GPU computing resources in a cluster computing system, we are going to have a look at typical machine learning applications, such as classification problems and numerical regression analysis.
        Please make sure to bring your own laptop and refresh your basic knowledge of vectors and matrices. We are looking forward to having you!
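
        For orientation, here is a minimal sketch of a Keras model of the kind the course builds up (not the course material; the toy data and layer sizes are arbitrary):

          import numpy as np
          from tensorflow import keras

          # toy data: 1000 samples with 20 features and binary labels
          x = np.random.rand(1000, 20).astype("float32")
          y = (x.sum(axis=1) > 10).astype("float32")

          model = keras.Sequential([
              keras.layers.Dense(32, activation="relu", input_shape=(20,)),
              keras.layers.Dense(1, activation="sigmoid"),   # logistic-regression-style output
          ])
          model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
          model.fit(x, y, epochs=5, batch_size=32, validation_split=0.2)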

        Speakers: Markus Götz (KIT/SCC), Oskar Taubert (KIT-SCC)
      • 1:15 PM
        Rust Workshop - Day 2: Using Rust on embedded systems 4h 45m 162 (FTU)

        Learning how to develop software for embedded devices has been a treacherous road for a long time. Dangers of undefined behavior coupled with quirks of embedded devices and “beginner” fallacies cause many promising developers to shy away from embedded development. This workshop is made up of two parts:

        The second day builds on top of these features, showing how they can be (and are) applied to embedded development. It will be shown how common embedded problems like ordering hardware initialization, reusing hardware drivers across platforms and using high-level abstractions can be handled safely and conveniently in Rust.

        Speaker: Mr Oliver Scherer (KIT - Karlsruhe Institute of Technology)
    • 6:30 PM – 8:00 PM
      Evening Lecture
      Convener: Ugur Cayoglu (D3A)
      • 6:30 PM
        Welcome Reception 30m Foyer (FTU)

      • 7:00 PM
        Why the future of weather and climate prediction will depend on supercomputing, big data handling and artificial intelligence 1h Aula (FTU)

        Weather and climate prediction are high-performance computing applications with outstanding societal and economic impact, ranging from the daily decision-making of citizens to that of civil services for emergency response, and from predicting weather drivers in food, agriculture and energy markets to risk and loss management by insurances.

        Forecasts are based on millions of observations made every day around the globe and physically based numerical models that represent processes acting on scales from hundreds of metres to thousands of kilometres in the atmosphere, the ocean, the land surface and the cryosphere. Forecast production and product dissemination to users is always time critical and forecast output data volumes already reach petabytes per week.

        Meeting the future requirements for forecast reliability and timeliness needs 100-1000 times bigger high-performance computing and data management resources than today – towards what’s generally called ‘exascale’. To meet these needs, the weather and climate prediction community is undergoing one of its biggest revolutions since its foundation in the early 20th century.

        This revolution encompasses a fundamental redesign of mathematical algorithms and numerical methods, the adaptation to new programming models, the implementation of dynamic and resilient workflows and the efficient post-processing and handling of big data. Due to these enormous computing and data challenges, artificial intelligence methods offer significant potential for gaining efficiency and for making optimal use of the generated information for European society.

        Speaker: Dr Peter Bauer (European Centre for Medium-Range Weather Forecasts (ECMWF))
    • 9:00 AM – 11:40 AM
      Plenary Aula (FTU)

      Convener: Manuel Giffels (KIT)
    • 11:40 AM – 1:15 PM
      Lunch Break FTU

    • 1:15 PM – 6:00 PM
      Tutorials
      • 1:15 PM
        Databases for large-scale science 4h 45m 157 (FTU)

        In this workshop, the students will learn (a) how to efficiently use relational and non-relational databases for modern large-scale scientific experiments, and (b) how to create database workflows suitable for analytics and machine learning.

        First, the focus of the workshop is to teach efficient, safe, and fault-tolerant principles for dealing with high-volume and high-throughput database scenarios. This includes, but is not limited to, systems such as PostgreSQL, Redis, or Elasticsearch. Topics include query planning and performance analysis, transactional safety, SQL injection, and competitive locking.

        Second, we focus on how to actually prepare data from these databases to be usable for analytics and machine learning frameworks such as Keras.
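
        To make this concrete, here is a minimal sketch (not the workshop material; the connection string, table and column names are made up) of a parameterized PostgreSQL query, safe against SQL injection, loaded straight into a pandas DataFrame for a later analytics or Keras pipeline:

          import pandas as pd
          import psycopg2

          conn = psycopg2.connect("dbname=experiment user=analysis")   # hypothetical DSN
          run_number = 1234

          # parameters are passed separately, never formatted into the SQL string
          query = "SELECT event_id, energy FROM events WHERE run = %s"
          df = pd.read_sql_query(query, conn, params=(run_number,))

          print(df.describe())   # quick sanity check before extracting features
          conn.close()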

        An intermediate understanding of Python, SQL, and Linux shell scripting is recommended to follow this course. An understanding of machine learning principles is not required.

        Speaker: Mario Lassnig (CERN)
      • 1:15 PM
        Enhance Machine Learning Performance with Intel® Software tools 4h 45m 156 (FTU)

        The use of data analytics techniques, such as Machine Learning and Deep Learning, has become key to gaining insight into the incredible amount of data generated by scientific investigations (simulations and observations). It is therefore crucial for the scientific community to incorporate these new tools into its workflows in order to make full use of modern and upcoming data sets. In this tutorial we will provide an overview of the best-known machine learning algorithms for supervised and unsupervised learning. With small example codes we show how to implement such algorithms using the Intel® Distribution for Python*, and what performance benefit can be obtained with minimal effort from the developer's perspective.
        Furthermore, the demand for Deep Learning techniques in many scientific domains is rapidly emerging, and the requirements for large compute and memory resources are increasing. One of the consequences is the need for high-performance computing capability to process and infer the valuable information inherent in the data.
        We also cover how to accelerate the training of deep neural networks with TensorFlow, thanks to the highly optimized Intel® Math Kernel Library (Intel® MKL), and we demonstrate techniques for leveraging deep neural network training on multiple nodes of an HPC system.
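
        As an illustration of the kind of tuning involved (not the tutorial's material; the thread counts are placeholders to be tuned per node, and the TensorFlow calls follow the 2.x API), here are a few typical knobs for MKL-enabled TensorFlow on a multi-core node:

          import os

          # threading knobs read by the MKL/OpenMP runtime
          os.environ["OMP_NUM_THREADS"] = "16"   # threads used by MKL kernels
          os.environ["KMP_BLOCKTIME"] = "1"      # how long threads spin before sleeping
          os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

          import tensorflow as tf

          # TensorFlow's own thread pools
          tf.config.threading.set_intra_op_parallelism_threads(16)
          tf.config.threading.set_inter_op_parallelism_threads(2)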

        Speaker: Fabio Baruffa (Intel)
      • 1:15 PM
        Productive GPU Programming with OpenACC 4h 45m 163 (FTU)

        OpenACC is a directive-based programming model for highly parallel systems, which allows for automated generation of portable GPU code. In this tutorial, we will get to know the programming model with examples, learn how to use the associated tools environment, and incorporate first strategies for performance optimization into our programs. Finally, we will integrate OpenACC with other GPU programming strategies.

        Speaker: Andreas Herten (FZ Jülich)
      • 1:15 PM
        Scalable Scientific Analysis in Python using Pandas and Dask 4h 45m Aula (FTU)

        Pandas is a Python package that provides data structures to work with heterogeneous, relational/tabular data. It provides fundamental building blocks for powerful and flexible data analysis. Pandas provides functionality to load a wide set of data formats, manipulate the resulting data and also visualize it using various plotting frameworks. We will show in the workshop how to clean and reshape data in Pandas and use the concept of split-apply-combine to do exploratory analysis on it. Pandas provides powerful tooling to do data analysis on a single machine but is mostly constrained to a single CPU. To parallelize and distribute these tasks, one can use Dask.

        Dask is a flexible tool for parallelizing Python code on a single machine or across a cluster. We can think of Dask at a high and a low level: Dask provides high-level Array, Bag, and DataFrame collections that mimic NumPy, lists, and Pandas but can operate in parallel on datasets that don't fit into main memory; these high-level collections are alternatives to NumPy and Pandas for large datasets. At the low level, Dask provides dynamic task schedulers that execute task graphs in parallel. These execution engines power the high-level collections mentioned above but can also power custom, user-defined workloads. In the tutorial, we will cover the high-level use of dask.array and dask.dataframe.
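
        A minimal sketch of the split-apply-combine idea in both libraries (not the workshop notebooks; the CSV files and column names are made up):

          import pandas as pd
          import dask.dataframe as dd

          # Pandas: the data must fit in memory, most operations use a single core
          df = pd.read_csv("measurements.csv")
          print(df.groupby("sensor")["value"].mean())

          # Dask: same API, but data is partitioned across files/cores and computed lazily
          ddf = dd.read_csv("measurements-*.csv")
          print(ddf.groupby("sensor")["value"].mean().compute())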

        Speakers: Mr Florian Jetter (Blue Yonder GmbH), Mr Sebastian Neubauer (Blue Yonder GmbH)
      • 1:15 PM
        Thrill: High-Performance Algorithmic Distributed Batch Data Processing with C++ 4h 45m 164 (FTU)

        In this tutorial we first present our new distributed Big Data processing framework called Thrill [1,2]. It is a C++ framework consisting of a set of basic scalable algorithmic primitives like mapping, reducing, sorting, merging, joining, and additional MPI-like collectives. This set of primitives goes beyond traditional Map/Reduce and can be combined into larger, more complex algorithms, such as WordCount, PageRank, and suffix sorting. These complex algorithms can then be run on very large inputs using a distributed computing cluster with external memory. Among the main design goals of Thrill is to lose very little performance when composing primitives such that small data types are well supported. Thrill thus raises the questions of a) how to design algorithms using the scalable primitives, b) whether additional primitives should be added, and c) if one can improve the existing ones using new ideas to reduce communication volume and latency. Our performance evaluations show that Thrill is always faster than Apache Spark and Apache Flink on a set of five microkernel benchmarks.

        After introducing the audience to Thrill we continue by guiding participants through the initial steps of downloading and compiling the software package. After a short sight-seeing tour of the framework's internal source code structure, the tutorial group will together go through the steps to develop and run a simple K-means clustering implementation. As an intermezzo, we will present more details on the implementations of Sort and Reduce in Thrill. In the last part, participants are then given a set of free exercises to choose from and to work on together.

        Pre-requisites

        • Participants need a computer to follow the hands-on parts of the tutorial.
        • Knowledge of C++ is required for enjoyment of this tutorial.
        • Thrill's primary platform is Linux or MacOS, but Windows may also work.

        References

        [1] Timo Bingmann, Michael Axtmann, Emanuel Jöbstl, Sebastian Lamm, Huyen Chau Nguyen, Alexander Noe, Sebastian Schlag, Matthias Stumpp, Tobias Sturm, and Peter Sanders. "Thrill: High-Performance Algorithmic Distributed Batch Data Processing with C++". In: IEEE International Conference on Big Data, IEEE, Dec. 2016, pages 172–183. Preprint: arXiv:1608.05634.

        [2] http://project-thrill.org

        Speaker: Timo Bingmann (KIT)
    • 8:00 PM – 11:00 PM
      School Dinner
    • 9:00 AM – 12:00 PM
      Plenary Aula (FTU)

      Convener: Dr Eileen Kühn (Karlsruhe Institute of Technology)
      • 9:00 AM
        DevOps for Machine Learning in Academia 40m
        Speaker: Valentin Kozlov (Karlsruhe Institute of Technology)
      • 9:40 AM
        Continuous Benchmarking 40m
        Speaker: Dr Hartwig Anzt (Karlsruher Institut für Technologie)
      • 10:20 AM
        Coffee Break 20m
      • 10:40 AM
        Modern Classification Tasks in LHC Physics 40m
        Speaker: Tilman Plehn (Heidelberg University)
      • 11:20 AM
        Conclusions 40m
        Speaker: René Caspart (KIT - Karlsruhe Institute of Technology (DE))