nGraph: Unlocking next-generation performance with deep learning compilers
| Talk Title | nGraph: Unlocking next-generation performance with deep learning compilers |
| --- | --- |
| Speakers | Adam Straw (Intel), Adam Procter (Intel AI), Robert Earhart (Intel) |
| Conference | O’Reilly Artificial Intelligence Conference |
| Conf Tag | Put AI to Work |
| Location | New York, New York |
| Date | April 16-18, 2019 |
| URL | Talk Page |
| Slides | Talk Slides |
| Video | |
The rapid growth of deep learning in demanding, large-scale real-world applications has driven a surge in demand for high-performance training and inference solutions. This demand is reflected in growing investment in deep learning performance by major hardware manufacturers, including a proliferation of new application-specific accelerators. But performance isn’t driven by hardware alone. In the software realm, a new class of deep learning compilers has emerged, bringing both classic and novel compiler techniques to bear to maximize the performance of deep learning systems. Recently developed deep learning compilers include NNVM/TVM from the University of Washington and Amazon, Glow from Facebook, XLA from Google, and nGraph from Intel.

These deep learning compilers unlock a wealth of optimizations that take a view of the whole dataflow graph. This approach achieves substantial speedups over the approach favored by existing frameworks, in which an interpreter orchestrates the invocation of per-op compute kernels that must be optimized specifically for each framework and hardware target.

Adam Straw, Adam Procter, and Robert Earhart offer a comprehensive overview of Intel’s nGraph deep learning compiler. Topics include:
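The contrast between per-op interpretation and whole-graph compilation can be sketched in a few lines. The snippet below is an illustrative toy, not the nGraph API: it shows a per-op interpreter that materializes a fresh intermediate buffer after every node, next to a simple graph-level fusion pass that composes a chain of elementwise ops into a single kernel that traverses the data once.

```python
def interpret(graph, data):
    """Per-op interpreter: each op produces a full intermediate buffer."""
    buf = data
    for op in graph:
        buf = [op(v) for v in buf]  # one new buffer (and one loop) per op
    return buf

def fuse(graph):
    """Toy graph-level pass: compose elementwise ops into one fused kernel."""
    def fused_kernel(v):
        for op in graph:
            v = op(v)
        return v
    return fused_kernel

def run_fused(graph, data):
    """Single traversal of the data, no intermediate buffers."""
    kernel = fuse(graph)
    return [kernel(v) for v in data]

# A tiny dataflow graph: out = 2 * relu(x + 1), as a chain of elementwise ops.
graph = [lambda v: v + 1.0, lambda v: max(v, 0.0), lambda v: v * 2.0]
data = [-2.0, -0.5, 0.0, 3.0]

assert interpret(graph, data) == run_fused(graph, data)
```

Real compilers such as nGraph apply far richer transformations (layout selection, kernel selection, memory planning) across the whole graph, but fusion of this kind is the canonical example of an optimization that an op-by-op interpreter cannot see.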