Instructor-led workshop

Modern deep learning challenges involve ever-larger datasets and more complex models. As a result, significant computational power is required to train models effectively and efficiently.

In this course, you will learn how to scale deep learning training to multiple GPUs. Using multiple GPUs can significantly shorten the time required to train on large datasets, making it feasible to solve complex problems with deep learning. You'll learn:

• Approaches to multi-GPU training
• Algorithmic and engineering challenges to large-scale training
• Key techniques for overcoming these challenges

Upon completion, you'll be able to effectively parallelise the training of deep neural networks using Horovod.
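For a flavour of what this looks like in practice, here is a minimal data-parallel training sketch using Horovod with TensorFlow/Keras. The model and dataset are placeholders for illustration only, not course material:

```python
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

# Initialize Horovod and pin each worker process to a single GPU.
hvd.init()
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

# Placeholder data and model, standing in for a real training job.
x_train = np.random.rand(1024, 784).astype('float32')
y_train = np.random.randint(0, 10, size=1024)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Scale the learning rate by the number of workers, then wrap the
# optimizer so gradients are averaged across all workers each step.
opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
model.compile(loss='sparse_categorical_crossentropy', optimizer=opt)

# Broadcast initial weights from rank 0 so every worker starts in sync.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
model.fit(x_train, y_train, batch_size=64, epochs=1, callbacks=callbacks)
```

A script like this would be launched across, for example, four GPUs with `horovodrun -np 4 python train.py`.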

Prerequisites: Competency in the Python programming language and experience training deep learning models in Python.

Technologies: Python, TensorFlow

Agenda:
10:00-10:15 Intro
10:15-12:00 Stochastic Gradient Descent
12:00-13:00 Lunch Break
13:00-14:20 Introduction to Distributed Training
14:20-14:30 Coffee Break
14:30-15:00 Run:AI Presentation
15:00-16:15 Algorithmic Challenges of Distributed SGD
16:15-16:30 Q&A

Register now
Find out more