Introducing Kubeflow (with special guests TensorFlow and Apache Spark)
Modeling is easy; productizing models, less so. Distributed training? Forget about it. Say hello to Kubeflow with Holden Karau: a system that makes it easy for data scientists to containerize their models to train and serve on Kubernetes.
| Talk Title | Introducing Kubeflow (with special guests TensorFlow and Apache Spark) |
|---|---|
| Speakers | Holden Karau (Independent) |
| Conference | O’Reilly Artificial Intelligence Conference |
| Conf Tag | Put AI to Work |
| Location | San Jose, California |
| Date | September 10-12, 2019 |
Data science, machine learning, and artificial intelligence have exploded in popularity in the last five years, but the nagging question of how to put models into production remains. Engineers are typically tasked with building one-off systems to serve predictions, and those systems must be maintained amid a quickly shifting backend serving landscape that has moved from single machines to custom clusters to "serverless" to Docker to Kubernetes. Holden Karau presents Kubeflow, an open source project that makes it easy for users to move models from laptop to ML rig to training cluster to deployment. Learn exactly what Kubeflow is, why scalability is so critical for training and model deployment, and more. You'll leave able to deploy models written in Python's scikit-learn, R, TensorFlow, Spark, and more. The magic of Kubernetes allows you to write models on your laptop and deploy them to an ML rig, and then DevOps can move that model into production with all the bells and whistles, such as monitoring, A/B tests, multi-armed bandits, and security.
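To make the laptop-to-cluster workflow concrete, a minimal sketch of a Kubeflow distributed-training manifest is shown below. It uses the `TFJob` custom resource from Kubeflow's training operator; the job name, container image, and training script are hypothetical placeholders, not part of the talk.

```yaml
# Hedged sketch of a Kubeflow TFJob: two TensorFlow workers trained on
# Kubernetes. Image and script names are illustrative placeholders.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: example-train        # hypothetical job name
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2            # scale out training by raising this count
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow
              image: registry.example.com/my-model:latest  # placeholder image
              command: ["python", "/opt/train.py"]         # placeholder script
```

Applied with `kubectl apply -f`, a manifest like this is how the same containerized model moves unchanged from a laptop cluster to a production one, since only the Kubernetes context changes, not the job definition.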