AI on Kubernetes
Kubernetes, the container orchestration engine used by all of the top technology companies, was built from the ground up to run and manage highly distributed workloads on huge clusters. Thus, it provides a solid foundation for model development. Daniel Whitenack demonstrates how to easily deploy and scale AI/ML workflows on any infrastructure using Kubernetes.
Talk Title | AI on Kubernetes |
Speakers | Daniel Whitenack (Pachyderm) |
Conference | Artificial Intelligence Conference |
Conf Tag | Put AI to Work |
Location | San Francisco, California |
Date | September 5-7, 2018 |
URL | Talk Page |
Slides | Talk Slides |
Video | |
It’s no secret that machine learning workflows are awkward to deploy, hard to maintain, and a frequent source of friction with engineering and IT teams. Too often, work done by data scientists and machine learning researchers is wasted because it never escapes their laptops or cannot be scaled to larger data.

Kubernetes, the container orchestration engine used by all of the top technology companies, including Google, Amazon, and Microsoft, was built from the ground up to run and manage highly distributed workloads on huge clusters. Thus, it provides a solid foundation for model development. Daniel Whitenack demonstrates how to easily deploy and scale AI/ML workflows on any infrastructure using Kubernetes. You’ll learn how to containerize and deploy model training and inference on Kubernetes using popular open source tools like Pachyderm and KubeFlow, and discover how to ingress and egress data, version models, utilize GPUs, and track and evaluate models.
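To make the scheduling side of this concrete, the sketch below submits a containerized training run as a Kubernetes Job that requests a single GPU, using the official Kubernetes Python client rather than the Pachyderm or KubeFlow tooling covered in the talk. The image name, training command, and namespace are hypothetical placeholders, and the GPU request assumes the cluster has the NVIDIA device plugin installed.

```python
# Minimal sketch: run a containerized training step as a Kubernetes Job
# that requests one GPU. Assumes local kubectl credentials and a cluster
# with the NVIDIA device plugin; image, command, and namespace are
# hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside the cluster

train_container = client.V1Container(
    name="train",
    image="registry.example.com/my-model-train:latest",  # placeholder image
    command=["python", "train.py", "--epochs", "10"],
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1"}  # schedule the pod onto a GPU node
    ),
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="model-training"),
    spec=client.V1JobSpec(
        backoff_limit=2,  # retry a failed training pod at most twice
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                containers=[train_container],
                restart_policy="Never",
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
print("Submitted training job:", job.metadata.name)
```

Inference follows the same pattern, with a Deployment and a Service in place of the Job so the model server stays up and can be scaled horizontally.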