Training on Multiple GPUs with PyTorch Distributed Data Parallel (DDP)

Nov 6, 2023, 12:15 PM
2h

Description

Learn to convert single-GPU training code to run on multiple GPUs using PyTorch Distributed Data Parallel (DDP).

  • Understand how DDP coordinates training among multiple GPUs.
  • Refactor single-GPU training programs to run on multiple GPUs with DDP.
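The refactoring covered by these objectives can be sketched as follows. This is a minimal illustration, not the session's own material: it uses the "gloo" backend so it runs on CPU-only machines; on real GPUs you would use "nccl" and move the model and batches to `rank`'s device. The master address/port values are placeholder assumptions.

```python
# Minimal DDP sketch: one process per GPU, gradients all-reduced in backward().
# "gloo" backend is used here so the example runs without GPUs (assumption);
# swap in "nccl" plus torch.cuda.set_device(rank) for multi-GPU training.
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def train(rank: int, world_size: int) -> None:
    # 1. Initialize the process group so the ranks can communicate.
    #    Address/port are placeholders for a single-node run.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # 2. Wrap the model in DDP; each rank holds a replica, and DDP
    #    synchronizes gradients across ranks during backward().
    model = DDP(torch.nn.Linear(10, 1))

    # 3. Shard the dataset with DistributedSampler so each rank
    #    trains on a distinct subset of the data.
    dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=8, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2  # one process per GPU in a real multi-GPU setup
    mp.spawn(train, args=(world_size,), nprocs=world_size, join=True)
```

The same loop structure carries over from single-GPU code; the DDP-specific changes are the process-group setup, the `DDP` wrapper, and the `DistributedSampler`.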

Presentation materials

There are no materials yet.