Together with Nvidia, we are offering a workshop on accelerated computing with CUDA in C/C++.
The workshop will take place virtually. Participants will be provided with access to a suitable GPU setup.
Connection details for the workshop, as well as the steps required to access the setup, will be sent to participants prior to the event.
Learning Objectives
By participating in this workshop, you’ll:
- Write code to be executed by a GPU accelerator
- Expose and express data and instruction-level parallelism in C/C++ applications using CUDA
- Utilize CUDA-managed memory and optimize memory migration using asynchronous prefetching (see the sketch after this list)
- Leverage command-line and visual profilers to guide your work
- Utilize concurrent streams for instruction-level parallelism
- Write GPU-accelerated CUDA C/C++ applications, or refactor existing CPU-only applications, using a profile-driven approach
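For orientation, the sketch below illustrates the style of code these objectives refer to: a grid-stride CUDA kernel operating on CUDA-managed memory, with asynchronous prefetching to reduce page-migration overhead. It is a minimal, illustrative example only and not part of the official course materials; the kernel name, array size, and launch configuration are chosen arbitrarily here.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: grid-stride loop that doubles each element.
__global__ void doubleElements(float *a, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = gridDim.x * blockDim.x;
    for (int i = idx; i < n; i += stride)
        a[i] *= 2.0f;
}

int main()
{
    const int n = 1 << 20;
    float *a;

    // CUDA-managed memory: accessible from both host and device code.
    cudaMallocManaged(&a, n * sizeof(float));
    for (int i = 0; i < n; ++i)
        a[i] = 1.0f;

    int device;
    cudaGetDevice(&device);

    // Asynchronously prefetch the data to the GPU before the kernel runs,
    // avoiding on-demand page migration during kernel execution.
    cudaMemPrefetchAsync(a, n * sizeof(float), device, 0);

    doubleElements<<<256, 256>>>(a, n);

    // Prefetch the results back to the host before they are read on the CPU.
    cudaMemPrefetchAsync(a, n * sizeof(float), cudaCpuDeviceId, 0);
    cudaDeviceSynchronize();

    printf("a[0] = %f\n", a[0]);
    cudaFree(a);
    return 0;
}
```

A file like this would typically be compiled with the CUDA compiler driver, e.g. `nvcc -o example example.cu`; profiling it with the command-line and visual profilers is one of the workflows covered in the workshop.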
Prerequisites
- Basic C/C++ competency, including familiarity with variable types, loops, conditional statements, functions, and array manipulations
- No previous knowledge of CUDA programming is assumed
This workshop is offered through the Nvidia Deep Learning Institute; further details can be found at https://www.nvidia.com/en-us/training/instructor-led-workshops/fundamentals-of-accelerated-computing-with-cuda/
Registration
Registration for this event is currently open.