February 2, 2020

265 words 2 mins read

The OS for AI: How serverless computing enables the next gen of machine learning

ML has been advancing rapidly, but only a few contributors focus on the infrastructure and scaling challenges that come with it. Jonathan Peck explains why ML is a natural fit for serverless computing, outlines a general architecture for scalable ML, and examines common issues that arise when implementing on-demand scaling over GPU clusters, offering practical solutions and a vision for the future of cloud-based ML.

Talk Title The OS for AI: How serverless computing enables the next gen of machine learning
Speakers Jonathan Peck (GitHub)
Conference O’Reilly Artificial Intelligence Conference
Conf Tag Put AI to Work
Location San Jose, California
Date September 10-12, 2019
URL Talk Page
Slides Talk Slides
Video

Machine learning has been advancing rapidly, but only a few contributors are focusing on the infrastructure and scaling challenges that come with it. When you have thousands of model versions, each written in a different mix of languages and frameworks (Python, R, Java, Ruby, PyTorch, scikit-learn, Caffe, TensorFlow, etc.), it's difficult to deploy them efficiently as elastic, scalable, secure APIs with around 10 ms of latency and GPU access. Algorithmia has seen many of the challenges in this area firsthand. Jonathan Peck explores how the company built, deployed, and scaled thousands of algorithms and machine learning models across every kind of framework. You'll gain insight into the problems you're likely to face and how to approach solving them. Jonathan examines the need for, and implementations of, a complete operating system for AI: a common interface through which different algorithms can be used and combined, and a general architecture for serverless machine learning that is discoverable, versioned, scalable, and sharable.
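The "common interface" pattern the talk describes can be sketched as a serverless handler that loads a versioned model once per container and serves predictions per request. This is a minimal illustrative sketch, not Algorithmia's actual API: the `Model` class, its `predict` method, and the `handler` event shape are all hypothetical stand-ins.

```python
import json


class Model:
    """Hypothetical stand-in for a trained model loaded from versioned storage."""

    def __init__(self, version: str):
        # In a real system this would fetch the weights for this version.
        self.version = version

    def predict(self, features):
        # Placeholder inference: sum the features. A real model runs here.
        return sum(features)


# Loaded once, outside the handler, so warm invocations of the same
# container skip the expensive model-loading step.
MODEL = Model(version="1.2.0")


def handler(event):
    """Entry point a serverless runtime would invoke once per request."""
    payload = json.loads(event["body"])
    prediction = MODEL.predict(payload["features"])
    return {
        "statusCode": 200,
        "body": json.dumps(
            {"model_version": MODEL.version, "prediction": prediction}
        ),
    }
```

Because every model, whatever its framework, is wrapped behind the same request/response contract, the platform can version, discover, and autoscale them uniformly, which is the core of the "OS for AI" idea.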
