February 6, 2020


Deep learning at scale: Tools and solutions

Success with DL requires more than just TensorFlow or PyTorch. Angela Wu, Sidney Wijngaarde, Shiyuan Zhu, and Vishnu Mohan detail practical problems faced by practitioners and the software tools and techniques you'll need to address them, including data prep, GPU scheduling, hyperparameter tuning, distributed training, metrics management, deployment, mobile and edge optimization, and more.

Talk Title Deep learning at scale: Tools and solutions
Speakers Angela Wu (Determined AI), Sidney Wijngaarde (Determined AI), Shiyuan Zhu (Determined AI), Vishnu Mohan (Determined AI)
Conference O’Reilly Artificial Intelligence Conference
Conf Tag Put AI to Work
Location San Jose, California
Date September 10-12, 2019
URL Talk Page
Slides Talk Slides
Video

Building a sophisticated and successful deep learning (DL) practice involves far more than installing frameworks such as TensorFlow or PyTorch and then developing and deploying models. As DL research teams grow and model complexity increases, a new set of challenges begins to mount. Research teams will find themselves needing to share GPU infrastructure efficiently as they train many models, tune hyperparameters, explore many neural network architectures, and exploit parallel and distributed training techniques to reduce training time. They'll grow to depend on an effective model lifecycle and metadata management system to ensure reproducibility of results and foster collaboration among researchers within and across teams. They'll be asked to improve the inference performance of their DL models, particularly for resource-constrained mobile and edge deployments. Tackling these challenges typically requires extensive research and engineering talent.

Angela Wu, Sidney Wijngaarde, Shiyuan Zhu, and Vishnu Mohan provide an overview of these challenges, present state-of-the-art solutions, and discuss popular software and tools, with a focus on hyperparameter tuning, distributed training, and model serving. You'll work through hands-on examples of how to solve practical DL challenges and walk away with an understanding of best practices and tools to smooth your organization's adoption of DL.
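
The talk covers these topics in depth; as a rough illustration of just one of them, here is a minimal, hypothetical sketch of hyperparameter tuning via random search. The function names and search space are placeholders, not code from the talk, and the toy objective stands in for a real training run.

```python
import random

# Hypothetical stand-in for a real training run: returns a validation
# metric for a given hyperparameter configuration. In practice this would
# train a TensorFlow or PyTorch model and evaluate it on held-out data.
def train_and_validate(learning_rate, batch_size):
    # Toy objective with a known optimum, used only to make the sketch runnable.
    return -((learning_rate - 0.01) ** 2) - ((batch_size - 64) ** 2) * 1e-6

def random_search(num_trials=20, seed=0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(num_trials):
        config = {
            "learning_rate": 10 ** rng.uniform(-4, -1),  # log-uniform sample
            "batch_size": rng.choice([16, 32, 64, 128, 256]),
        }
        score = train_and_validate(**config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = random_search()
    print(f"Best config: {config}, validation score: {score:.6f}")
```

At scale, dedicated tuning systems replace a manual loop like this with smarter search strategies, early stopping of poor trials, and shared GPU scheduling across experiments.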
