December 6, 2019


Lessons learned from deploying the top deep learning frameworks in production


By building a marketplace for algorithms, Algorithmia gained unique experience in building and deploying machine learning models across a wide variety of frameworks. Kenny Daniel shares the lessons Algorithmia learned through trial and error, the pros and cons of the major deep learning frameworks, and the challenges involved in deploying them in production systems.

Talk Title: Lessons learned from deploying the top deep learning frameworks in production
Speakers: Kenny Daniel (Algorithmia)
Conference: O'Reilly Artificial Intelligence Conference
Location: New York, New York
Date: September 26-27, 2016
URL: Talk Page
Slides: Talk Slides

Algorithmia has a unique perspective, having worked with not just one but five different deep learning frameworks. Because users depend on Algorithmia to host and scale their algorithms, Algorithmia has had to deal with the idiosyncrasies of the many deep learning frameworks out there. Kenny Daniel covers the pros and cons of popular frameworks like TensorFlow, Caffe, Torch, and Theano. Hosting deep learning models in the cloud can be especially challenging due to complex hardware and software dependencies. GPU computing is not yet mainstream and is not as easy as spinning up an EC2 instance, but it is essential for making deep learning performant. Kenny explains why you should choose one framework over another and, more importantly, once you have picked a framework and trained a machine learning model to solve your problem, how to reliably deploy it at scale. Kenny also discusses the challenges Algorithmia faced when it moved beyond simple demos and used deep learning in real production systems, and shares what Algorithmia learned from fighting these battles so that you don't have to fight them yourself.
