December 30, 2019

247 words 2 mins read

Got a trained deep learning model? Now what? Deploying deep learning models

Talk Title Got a trained deep learning model? Now what? Deploying deep learning models
Speakers Hannes Hapke (SAP ConcurLabs)
Conference O’Reilly Open Source Convention
Conf Tag Put open source to work
Location Portland, Oregon
Date July 16-19, 2018
URL Talk Page
Slides Talk Slides
Video

TensorFlow and its community provide a variety of deep learning tools for developing novel deep learning models. Many talks have focused on tools like TensorBoard or on new TensorFlow features such as support for sequence-to-sequence networks. However, developing a deep learning model with TensorFlow is often only half of the story: to be useful to the public, the model needs to be deployed. Hannes Hapke explains how to deploy your TensorFlow model easily with TensorFlow Serving, introduces an emerging project called Kubeflow, and highlights deployment pitfalls such as model versioning and deployment flow. You'll learn when a deployment with TensorFlow Serving or Kubeflow makes sense, how to deploy trained TensorFlow models with TensorFlow Serving, how to install the required system dependencies, and the basic concepts of Kubeflow. You'll leave ready to deploy your TensorFlow models yourself or to guide your DevOps colleagues in deploying your models for your organization.
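One of the versioning pitfalls the talk alludes to comes from TensorFlow Serving's directory convention: each numeric subdirectory under a model's base path is treated as a model version, and by default the server loads the highest version number (numeric, not lexicographic, ordering). A minimal sketch of that resolution rule, using only the standard library (the helper name `latest_version` is illustrative, not part of TensorFlow Serving):

```python
import pathlib
import tempfile

def latest_version(model_dir):
    """Mimic TensorFlow Serving's default version policy: every
    numeric subdirectory is a model version, and the highest
    version number wins."""
    versions = [
        int(p.name)
        for p in pathlib.Path(model_dir).iterdir()
        if p.is_dir() and p.name.isdigit()
    ]
    return max(versions) if versions else None

# A SavedModel exported for serving typically lives under
# <base_path>/<version>/, e.g. /models/my_model/10/saved_model.pb.
with tempfile.TemporaryDirectory() as base:
    for v in ("1", "2", "10"):
        pathlib.Path(base, v).mkdir()
    print(latest_version(base))  # 10 -- a string sort would wrongly pick "2"
```

Forgetting this convention (for example, exporting straight into the base path with no version directory) is a common reason a freshly deployed model fails to load.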

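Once a model is running behind TensorFlow Serving, its REST API accepts prediction requests at `/v1/models/<name>:predict` with a JSON body containing an "instances" list. A small sketch of building such a request body with only the standard library (the function name and example inputs are illustrative, not from the talk):

```python
import json

def build_predict_payload(instances, signature_name=None):
    # TensorFlow Serving's REST predict endpoint expects a JSON
    # object with an "instances" list; "signature_name" is optional
    # and the server defaults it to "serving_default".
    body = {"instances": instances}
    if signature_name:
        body["signature_name"] = signature_name
    return json.dumps(body)

# Example: two input rows for a model expecting three features each.
payload = build_predict_payload([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(payload)
```

The payload would then be POSTed with `Content-Type: application/json` to `http://<host>:8501/v1/models/<name>:predict`, 8501 being TensorFlow Serving's default REST port.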