February 18, 2020

ROCm and Hopsworks for end-to-end deep learning pipelines

The Radeon Open Ecosystem (ROCm) is an open source software foundation for GPU computing on Linux. ROCm supports TensorFlow and PyTorch using MIOpen, a library of highly optimized GPU routines for deep learning. Jim Dowling and Ajit Mathews outline how the open source Hopsworks framework enables the construction of horizontally scalable end-to-end machine learning pipelines on ROCm-enabled GPUs.
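
As a rough illustration of that portability (a minimal sketch, not code from the talk), the same TensorFlow model code runs unchanged whether the framework was built against ROCm (MIOpen) or CUDA; only the installed package differs, for example the tensorflow-rocm wheel instead of the stock tensorflow one:

```python
# Illustrative sketch: identical TensorFlow code on a ROCm build (MIOpen
# kernels) or a CUDA build; only the installed wheel changes.
import tensorflow as tf

print("GPUs visible:", tf.config.list_physical_devices("GPU"))
print("CUDA build:", tf.test.is_built_with_cuda())
print("ROCm build:", tf.test.is_built_with_rocm())

# A tiny model trains identically on either backend.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(tf.random.normal((256, 32)), tf.random.normal((256, 1)), epochs=1)
```

The same portability argument applies to PyTorch, whose ROCm builds likewise rely on MIOpen for their deep learning kernels.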

Talk Title ROCm and Hopsworks for end-to-end deep learning pipelines
Speakers Jim Dowling (Logical Clocks), Ajit Mathews (AMD)
Conference O’Reilly Artificial Intelligence Conference
Conf Tag Put AI to Work
Location London, United Kingdom
Date October 15-17, 2019
URL Talk Page
Slides Talk Slides
Video

The Radeon Open Ecosystem (ROCm) is an open source software foundation for GPU computing on Linux. ROCm supports TensorFlow and PyTorch using MIOpen, a library of highly optimized GPU routines for deep learning. Jim Dowling and Ajit Mathews outline how Hopsworks, an open source platform for machine learning infrastructure, enables the training and operation of deep learning models on ROCm in horizontally scalable end-to-end machine learning pipelines. They present performance benchmarks for ROCm on new GPU hardware (AMD MI50 and MI60 GPUs) and show how Hopsworks enables distributed deep learning with both ROCm and CUDA in both TensorFlow and PyTorch. You’ll see a live demonstration of training and inference for an end-to-end machine learning pipeline written as a series of Jupyter notebooks orchestrated by Airflow.
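
To make the pipeline shape concrete, here is a generic Airflow 2.x sketch (not Hopsworks' own operators; the notebook names, parameters, and paths are hypothetical): each stage is a Jupyter notebook executed with papermill and chained into a DAG.

```python
# Generic Airflow 2.x sketch of a notebook-per-stage pipeline: feature
# engineering, training, and deployment notebooks run with papermill and
# chained in order. Notebook names and parameters are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="end_to_end_dl_pipeline",
    start_date=datetime(2019, 10, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    feature_engineering = BashOperator(
        task_id="feature_engineering",
        bash_command="papermill feature_engineering.ipynb out/feature_engineering.ipynb",
    )
    train = BashOperator(
        task_id="train_model",
        bash_command="papermill train.ipynb out/train.ipynb -p epochs 10",
    )
    deploy = BashOperator(
        task_id="deploy_model",
        bash_command="papermill deploy.ipynb out/deploy.ipynb",
    )

    feature_engineering >> train >> deploy
```

In Hopsworks the equivalent orchestration runs on its managed Airflow; the sketch above only shows the general pattern of keeping each pipeline stage as a notebook while Airflow handles scheduling and ordering.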
