Fundamentals of Accelerated Computing with CUDA C/C++

On-Site at NHR@FAU

The course will take place in room 02.135-113 (CIP-Pool Computer Science) at Martensstraße 3, 91058 Erlangen.
Description

Prerequisites

  • Basic C/C++ competency, including familiarity with variable types, loops, conditional statements, functions, and array manipulations
  • No previous knowledge of CUDA programming is assumed
  • A free NVIDIA developer account is required to access the course material. Please register before the training at https://courses.nvidia.com/join/.


Learning Objectives

At the conclusion of the workshop, participants will have an understanding of the fundamental tools and techniques for GPU-accelerating C/C++ applications with CUDA and will be able to:

  • Write code to be executed by a GPU accelerator (a minimal sketch follows this list)
  • Expose and express data and instruction-level parallelism in C/C++ applications using CUDA
  • Utilize CUDA-managed memory and optimize memory migration using asynchronous prefetching
  • Leverage command-line and visual profilers to guide your work
  • Utilize concurrent streams for instruction-level parallelism
  • Write GPU-accelerated CUDA C/C++ applications, or refactor existing CPU-only applications, using a profile-driven approach
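
As a first taste of these objectives, the following minimal CUDA C++ sketch shows code executed by the GPU; the kernel name and launch configuration are illustrative placeholders, not course material.

    #include <cstdio>

    // __global__ marks a function (a "kernel") that runs on the GPU
    // and is launched from host code.
    __global__ void helloFromGpu()
    {
        printf("Hello from block %d, thread %d\n", blockIdx.x, threadIdx.x);
    }

    int main()
    {
        // Execution configuration: 2 blocks of 4 threads each.
        helloFromGpu<<<2, 4>>>();

        // Kernel launches are asynchronous; wait for the GPU to finish
        // before the program exits.
        cudaDeviceSynchronize();
        return 0;
    }

Such a file is typically compiled with NVIDIA's nvcc compiler, e.g. nvcc -o hello hello.cu.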


Certification

Upon successful completion of the assessment at the end of the second day, participants will receive an NVIDIA DLI certificate to recognize their subject matter competency and support professional career growth.


Structure

Module 1 -- Accelerating Applications with CUDA C/C++

  • Writing, compiling, and running GPU code
  • Controlling the parallel thread hierarchy
  • Allocating and freeing memory for the GPU (see the sketch below)
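
The topics above could come together in a small sketch like the one below (an assumed example, not the actual course exercise): a kernel that doubles each array element, with the thread hierarchy controlled through the execution configuration and memory allocated for, and freed after, GPU use.

    #include <cstdio>

    // Each thread handles one element; its global index is derived from
    // the block index, the block size, and the thread index within the block.
    __global__ void doubleElements(int *a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)        // guard: the last block may contain extra threads
            a[i] *= 2;
    }

    int main()
    {
        const int n = 1 << 20;
        int *a;

        // Allocate memory that is accessible from both the CPU and the GPU.
        cudaMallocManaged(&a, n * sizeof(int));
        for (int i = 0; i < n; ++i) a[i] = i;

        // Execution configuration: enough 256-thread blocks to cover all n elements.
        int threadsPerBlock = 256;
        int numberOfBlocks  = (n + threadsPerBlock - 1) / threadsPerBlock;
        doubleElements<<<numberOfBlocks, threadsPerBlock>>>(a, n);

        cudaDeviceSynchronize();
        printf("a[42] = %d\n", a[42]);   // expected output: a[42] = 84

        cudaFree(a);                     // release the allocation
        return 0;
    }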

Module 2 -- Managing Accelerated Application Memory with CUDA C/C++

  • Profiling CUDA code with the command-line profiler
  • Details on unified memory
  • Optimizing unified memory management (see the sketch below)
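
One possible workflow for this module, sketched below under the assumption that it extends the Module 1 example (all names are illustrative): profile the executable with the command-line profiler, e.g. nsys profile --stats=true ./double-elements, and then reduce on-demand page migrations by prefetching the managed allocation before the kernel needs it.

    // Fragment that would slot into the main() of the Module 1 sketch,
    // replacing the plain kernel launch.
    int deviceId;
    cudaGetDevice(&deviceId);

    // Migrate the managed allocation to the GPU ahead of time instead of
    // relying on on-demand page faults during kernel execution.
    cudaMemPrefetchAsync(a, n * sizeof(int), deviceId);

    doubleElements<<<numberOfBlocks, threadsPerBlock>>>(a, n);

    // Prefetch the results back before the CPU touches them again.
    cudaMemPrefetchAsync(a, n * sizeof(int), cudaCpuDeviceId);
    cudaDeviceSynchronize();

Re-profiling after such a change typically shows fewer unified-memory migration events in the nsys statistics.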

Module 3 -- Asynchronous Streaming and Visual Profiling for Accelerated Applications with CUDA C/C++

  • Profiling CUDA code with NVIDIA Nsight Systems
  • Using concurrent CUDA streams (see the sketch below)
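
The second bullet might look roughly like the sketch below (an illustrative example, not the course code): independent chunks of work are issued into separate non-default streams so the GPU can overlap them, and the resulting timeline can then be inspected in Nsight Systems after profiling the run with nsys.

    #include <cstdio>

    __global__ void scaleAndShift(float *x, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] = 2.0f * x[i] + 1.0f;
    }

    int main()
    {
        const int n = 1 << 22;
        const int numStreams = 4;
        const int chunk = n / numStreams;

        float *x;
        cudaMallocManaged(&x, n * sizeof(float));
        for (int i = 0; i < n; ++i) x[i] = 1.0f;

        // Create non-default streams; work in different streams may overlap.
        cudaStream_t streams[numStreams];
        for (int s = 0; s < numStreams; ++s)
            cudaStreamCreate(&streams[s]);

        // Launch one kernel per stream, each on its own chunk of the array.
        for (int s = 0; s < numStreams; ++s)
            scaleAndShift<<<(chunk + 255) / 256, 256, 0, streams[s]>>>(x + s * chunk, chunk);

        cudaDeviceSynchronize();
        printf("x[0] = %f\n", x[0]);   // expected output: x[0] = 3.000000

        for (int s = 0; s < numStreams; ++s)
            cudaStreamDestroy(streams[s]);
        cudaFree(x);
        return 0;
    }

Profiling such a program (for example with nsys profile ./streams) and opening the generated report in Nsight Systems would show the kernels lined up on separate stream rows of the timeline.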


Program

All times are given in the Europe/Berlin time zone.

  • Welcome and Introduction
  • Module 1 -- Accelerating Applications with CUDA C/C++
  • Coffee Break (10:15 AM)
  • Module 1 continued
  • Module 2 -- Managing Accelerated Application Memory
  • Lunch Break (12:30 PM)
  • Module 2 continued
  • Module 3 -- Asynchronous Streaming and Visual Profiling
  • Coffee Break (3:30 PM)
  • Module 3 continued
  • Closing


Language

The course will be held in English.


Instructor

Dr. Sebastian Kuckuk, a certified NVIDIA DLI Ambassador.

The course is co-organized by NHR@FAU and the NVIDIA Deep Learning Institute (DLI).


Prices and Eligibility

The course is open and free of charge for participants from academia based in the Member States (MS) of the European Union (EU) or in Associated/Other Countries to the Horizon 2020 programme.


Withdrawal Policy

Please register only if you actually intend to attend. No-shows will be blacklisted and excluded from future events. To withdraw your registration, please send an e-mail to sebastian.kuckuk@fau.de.
