December 27, 2019

345 words 2 mins read

Analytics Zoo: Distributed TensorFlow in production on Apache Spark


Yuhao Yang and Jennie Wang demonstrate how to run distributed TensorFlow on Apache Spark with the open source software package Analytics Zoo. Compared to other solutions, Analytics Zoo is built for production environments and encourages more industry users to run deep learning applications with the big data ecosystems.

Talk Title Analytics Zoo: Distributed TensorFlow in production on Apache Spark
Speakers Yuhao Yang (Intel), Jiao (Jennie) Wang (Intel)
Conference Strata Data Conference
Conf Tag Big Data Expo
Location San Francisco, California
Date March 26-28, 2019
URL Talk Page
Slides Talk Slides

Building a model is fun and exciting, but putting it into production is always a different story. While TensorFlow focuses on model building, a complete DL/ML system also needs a robust infrastructure platform for data ingestion, feature extraction, and pipeline management, and Apache Spark is a perfect candidate. Recent TensorFlow releases have added support for distributed learning and HDFS access, and several community projects wire TensorFlow onto Apache Spark clusters. While these approaches are a step in the right direction, they usually require complicated deployment steps or error-prone interprocess communication.

Yuhao Yang and Jennie Wang offer an overview of Analytics Zoo, a unified analytics and AI platform for distributed TensorFlow, Keras, and BigDL on Apache Spark. The framework enables easy experimentation with algorithm designs and supports training and inference on Spark clusters with ease of use and near-linear scalability. Compared with other frameworks, Analytics Zoo is designed for production environments: it requires minimal or even no deployment effort on vanilla Spark clusters; offers high performance through intraprocess communication and optimized parameter synchronization; provides a rich variety of inference patterns, including low-latency local POJO, high-throughput batching, and streaming; and supplies a variety of reference use cases and preprocessing utilities.

Join Yuhao and Jennie to explore the technical details behind Analytics Zoo and walk through multiple examples that highlight its key capabilities. Along the way, you'll discover how, with a few extra lines of code, an existing TensorFlow algorithm can be transformed into a Spark application and integrated with the big data world.
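As a rough illustration of what "a few extra lines of code" looks like, the sketch below follows the shape of Analytics Zoo's TFPark-style API from the 0.x releases of that era: an ordinary TensorFlow 1.x model is built on tensors provided by a `TFDataset`, and the loss is handed to a `TFOptimizer` for data-parallel training on Spark. Module paths, signatures, and the `training_rdd` input are assumptions for illustration, and running this requires a Spark cluster with Analytics Zoo and BigDL installed, so treat it as a sketch rather than a definitive recipe.

```python
# Hedged sketch of distributed TensorFlow training on Spark with Analytics Zoo.
# Requires a Spark environment with Analytics Zoo and BigDL installed;
# `training_rdd` (an RDD of (feature, label) NumPy pairs) is assumed to exist.
import tensorflow as tf
from zoo import init_nncontext
from zoo.tfpark import TFOptimizer, TFDataset
from bigdl.optim.optimizer import Adam, MaxEpoch

# Get a SparkContext with Analytics Zoo initialized on it.
sc = init_nncontext("tf_on_spark_demo")

# Wrap the distributed data as a TFDataset, declaring feature/label
# dtypes and shapes so TensorFlow placeholders can be created.
dataset = TFDataset.from_rdd(training_rdd,
                             features=(tf.float32, [784]),
                             labels=(tf.int32, []),
                             batch_size=256)

# Build a plain TensorFlow 1.x model on the dataset's input tensors...
logits = tf.layers.dense(dataset.feature_tensors, 10)
loss = tf.reduce_mean(
    tf.losses.sparse_softmax_cross_entropy(labels=dataset.label_tensors,
                                           logits=logits))

# ...then the "few extra lines": hand the loss to TFOptimizer, which
# runs data-parallel training across the Spark cluster via BigDL.
optimizer = TFOptimizer.from_loss(loss, Adam(1e-3))
optimizer.optimize(end_trigger=MaxEpoch(5))
```

The point of the design is that everything above the last two lines is unmodified TensorFlow; only the data wrapping and the optimizer hand-off are Spark-specific.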
