December 8, 2019

Building GPU-Accelerated Workflows with TensorFlow and Kubernetes [I]

Talk Title: Building GPU-Accelerated Workflows with TensorFlow and Kubernetes [I]
Speakers: Daniel Whitenack (Lead Data Scientist and Advocate, Pachyderm)
Conference: KubeCon + CloudNativeCon North America
Location: Austin, TX, United States
Date: Dec 4-8, 2017
URL: Talk Page
Slides: Talk Slides

GPUs are critical to some artificial intelligence workflows. In particular, workflows that use TensorFlow or other deep learning frameworks need GPUs to efficiently train models on image data. These same workflows typically also involve multi-stage data pre-processing and post-processing. Thus, a unified framework is needed for scheduling multi-stage workflows, managing data, and offloading certain workloads to GPUs. In this talk, we will introduce a stack of open-source tooling, built around Kubernetes, that is powering these types of GPU-accelerated workflows in production. We will do a live demonstration of a GPU-enabled pipeline, illustrating how easy it is to trigger, update, and manage multi-node, accelerated machine learning at scale. The pipeline will be fully containerized, will be deployed on Kubernetes via Pachyderm, and will utilize TensorFlow for model training and inference.
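As a rough illustration of the kind of containerized training step such a pipeline would run (this is not the speaker's code, and the function and variable names here are invented for the sketch), the following TensorFlow 1.x script runs its ops on a GPU when the Kubernetes scheduler has placed the container on a GPU node, and falls back to the CPU otherwise. It assumes the GPU is exposed to the container through the pod's GPU resource limit.

```python
# Minimal sketch of a containerized TensorFlow 1.x training step.
# Assumes any GPU is exposed to this container by the pod's GPU resource limit.
import tensorflow as tf

def build_and_train():
    # Prefer a GPU if one is visible inside the container; otherwise use the CPU.
    device = "/gpu:0" if tf.test.is_gpu_available() else "/cpu:0"

    with tf.device(device):
        # Toy linear classifier on MNIST-sized inputs, purely for illustration.
        x = tf.placeholder(tf.float32, [None, 784])
        y = tf.placeholder(tf.float32, [None, 10])
        w = tf.Variable(tf.zeros([784, 10]))
        b = tf.Variable(tf.zeros([10]))
        logits = tf.matmul(x, w) + b
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
        train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # allow_soft_placement lets ops without a GPU kernel fall back to the CPU,
    # so the same container image runs on both GPU and CPU nodes.
    config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
    with tf.Session(config=config) as sess:
        sess.run(tf.global_variables_initializer())
        # ... feed training batches here, e.g. from data mounted into the
        # container by the pipeline's input repository ...

if __name__ == "__main__":
    build_and_train()
```

Because device selection and soft placement are handled inside the container, the pipeline layer (Pachyderm on Kubernetes, in the talk's setup) only needs to schedule the job onto a GPU node and mount the input data; the training code itself stays unchanged across stages.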
