Getting started with distributed TensorFlow on GCP

For many in the world of data science, distributed training can seem like a daunting task. In addition to building and thoughtfully evaluating a high-quality ML model, you also need to know how to optimize the model for specific hardware and manage infrastructure. By Nikita Namjoshi.

In this tutorial-style article, you’ll get hands-on experience with GCP data science tools and train a TensorFlow model across multiple GPUs. You’ll also learn key terminology in the field of distributed training, such as data parallelism, synchronous training, and AllReduce.

The article then walks you through:

  • Why distributed training?
  • Single GPU training
  • Multi-GPU training
  • Long-running jobs on the DLVM
  • Take your distributed training skills to the next level

In this article, you learned how to use MirroredStrategy, a synchronous data parallelism strategy, to distribute a TensorFlow training job across two GPUs on GCP. You now know the basic mechanics of setting up your GCP environment and preparing your code, but there’s a lot more to explore in the world of distributed training. Good read!
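For a sense of what the MirroredStrategy pattern described in the article looks like, here is a minimal sketch. The dataset and the small Keras model below are stand-ins for illustration, not the data or model used in the article, and the article itself covers the GCP setup (GPUs, DLVM) that this snippet assumes is already in place.

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU and keeps the
# replicas in sync by all-reducing gradients each step (synchronous data parallelism).
strategy = tf.distribute.MirroredStrategy()
print("Number of devices:", strategy.num_replicas_in_sync)

# Scale the global batch size with the number of replicas so each GPU
# processes a full per-replica batch every step.
per_replica_batch_size = 64
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync

# Placeholder dataset: MNIST stands in for whatever data you actually train on.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
train_ds = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(10_000)
    .batch(global_batch_size)
)

# Variables must be created inside strategy.scope() so they are mirrored
# across the GPUs; Keras then distributes each training step automatically.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# model.fit splits each global batch across the replicas and aggregates results.
model.fit(train_ds, epochs=2)
```

On a machine with a single GPU (or only a CPU), the same code still runs with one replica, which is why this pattern is a convenient starting point before scaling out.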

[Read More]

Tags: big-data, data-science, software, gcp, google