December 25, 2019


Intel Nervana Graph: A universal deep learning compiler


With the chaotic and rapidly evolving landscape around deep learning, we need deep learning-specific compilers to enable maximum performance in a wide variety of use cases on a wide variety of hardware platforms. Jason Knight offers an overview of the Intel Nervana Graph project, which was designed to solve this problem.

Talk Title Intel Nervana Graph: A universal deep learning compiler
Speakers Jason Knight (Intel)
Conference Artificial Intelligence Conference
Conf Tag Put AI to Work
Location San Francisco, California
Date September 18-20, 2017
URL Talk Page
Slides Talk Slides
Video

Intel Nervana Graph establishes a hardware-independent intermediate representation (IR) for deep learning that any deep learning framework can target, allowing models to execute seamlessly and efficiently across present and future platforms with minimal effort. In addition to this IR, the project offers connectors to popular frameworks such as TensorFlow and Intel’s reference framework neon, along with backends for compiling and executing the IR on CPUs, GPUs, and emerging deep learning accelerators.
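The architecture described above can be sketched in miniature: frontends build a hardware-independent graph of operations, and interchangeable backends execute it. This is a minimal illustrative sketch, not the actual Intel Nervana Graph API; the names `Node`, `const`, and `NumpyBackend` are hypothetical.

```python
# Toy sketch of a hardware-independent deep learning IR (hypothetical,
# not the Intel Nervana Graph API): frontends emit a graph of ops,
# and a pluggable backend walks the graph to execute it.
import numpy as np

class Node:
    """One operation in the IR graph."""
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, tuple(inputs), value

def const(v):
    return Node("const", value=np.asarray(v, dtype=float))

def add(a, b):
    return Node("add", (a, b))

def matmul(a, b):
    return Node("matmul", (a, b))

class NumpyBackend:
    """A CPU backend: recursively evaluates each op with NumPy.
    A GPU or accelerator backend would compile the same graph instead."""
    def run(self, node):
        if node.op == "const":
            return node.value
        args = [self.run(i) for i in node.inputs]
        if node.op == "add":
            return args[0] + args[1]
        if node.op == "matmul":
            return args[0] @ args[1]
        raise NotImplementedError(node.op)

# A framework connector (e.g. for TensorFlow or neon) would emit a graph like:
x = const([[1.0, 2.0]])
w = const([[3.0], [4.0]])
y = add(matmul(x, w), const([[1.0]]))  # y = x @ w + 1

print(NumpyBackend().run(y))  # [[12.]]
```

Because the graph itself carries no hardware details, swapping `NumpyBackend` for another backend changes where the computation runs without touching the frontend code, which is the core idea behind a universal deep learning compiler.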
