October 29, 2019

Large Scale Distributed Deep Learning with Kubernetes Operators

Talk Title: Large Scale Distributed Deep Learning with Kubernetes Operators
Speakers: Yong Tang (Director of Engineering, MobileIron), Yuan Tang (Senior Software Engineer, Ant Financial)
Conference: KubeCon + CloudNativeCon Europe
Location: Barcelona, Spain
Date: May 19-23, 2019
URL: Talk Page
Slides: Talk Slides

The focus of this talk is the use of Kubernetes operators to manage and automate the training process for machine learning tasks. Two open source Kubernetes operators, tf-operator and mpi-operator, will be discussed. Both operators manage training jobs for TensorFlow, but they implement different distribution strategies. The tf-operator fits the parameter server strategy, which relies on centralized parameter servers for coordination. The mpi-operator, on the other hand, uses the MPI allreduce primitive. While the parameter server strategy requires the right ratio of CPUs (for parameter servers) to GPUs (for workers) to be network-optimal, the allreduce strategy can make network cost easier to optimize. We will share performance numbers in our talk comparing the two operators.
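To illustrate the difference between the two strategies, here is a minimal sketch of what job manifests for the two operators might look like. The API versions and field names vary across operator releases, and the job names, image names, and replica counts are hypothetical placeholders, not the configurations used in the talk.

```yaml
# Sketch of a TFJob (tf-operator): parameter server strategy.
# CPU-only PS replicas coordinate updates; GPU workers compute gradients.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: example-ps-job          # hypothetical name
spec:
  tfReplicaSpecs:
    PS:
      replicas: 2               # CPU-bound parameter servers
      template:
        spec:
          containers:
            - name: tensorflow
              image: example.com/train:latest   # hypothetical image
    Worker:
      replicas: 4               # GPU-bound workers
      template:
        spec:
          containers:
            - name: tensorflow
              image: example.com/train:latest
              resources:
                limits:
                  nvidia.com/gpu: 1
---
# Sketch of an MPIJob (mpi-operator): allreduce strategy.
# A launcher runs mpirun; workers exchange gradients peer-to-peer,
# so no dedicated parameter servers are needed.
apiVersion: kubeflow.org/v1alpha2
kind: MPIJob
metadata:
  name: example-allreduce-job   # hypothetical name
spec:
  slotsPerWorker: 1
  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      template:
        spec:
          containers:
            - name: launcher
              image: example.com/train:latest   # hypothetical image
              command: ["mpirun", "-np", "4", "python", "train.py"]
    Worker:
      replicas: 4               # all replicas are GPU workers
      template:
        spec:
          containers:
            - name: worker
              image: example.com/train:latest
              resources:
                limits:
                  nvidia.com/gpu: 1
```

Note how the TFJob must balance two pools of resources (PS and Worker), whereas the MPIJob devotes every replica to GPU work, which is one reason the allreduce strategy can be simpler to tune for network cost.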
