Abstract: Learn how to scale distributed training of TensorFlow and PyTorch models with Horovod. Frameworks like TensorFlow and PyTorch make it easy to design and train deep learning models. However, scaling a model to multiple GPUs in a server, or to multiple servers in a cluster, is usually difficult. In this talk, you will learn about Horovod, a library designed to make distributed training fast and easy to use, and see how to take a model written for a single GPU and train it on a cluster of GPU servers.