Description
Learn to convert a single-GPU training script to multi-GPU training with PyTorch Distributed Data Parallel (DDP).
- Understand how DDP coordinates training across multiple GPUs.
- Refactor single-GPU training programs to run on multiple GPUs with DDP.