January 18, 2020


Cognitive Workloads on the Cloud: What Clouds Need To Do



Talk Title Cognitive Workloads on the Cloud: What Clouds Need To Do
Speakers Larry Brown (IBM), Khoa Huynh (Senior Technical Staff Member, IBM), Brian Wan (Software Engineer, IBM)
Conference Open Source Summit North America
Location Los Angeles, CA, United States
Date Sep 10-14, 2017
URL Talk Page
Slides Talk Slides

With the emergence of powerful new hardware accelerators such as Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs), artificial intelligence (AI) has become part of daily life, from Siri, Alexa, language translation, and image recognition to self-driving cars. The cognitive era has truly begun. In this presentation, we will look at the key characteristics of cognitive workloads, including deep-learning workloads that employ neural networks to solve complex AI problems. We will discuss popular software stacks and frameworks that support these workloads, such as TensorFlow and Caffe. We will then present performance data showing how well typical cognitive workloads, such as neural-network model training, run on the GPUs and FPGAs available on public clouds such as IBM SoftLayer, Amazon Web Services (AWS), and Microsoft Azure. Finally, we will offer recommendations on what public clouds should provide, in terms of hardware and software images, to optimize for cognitive workloads.
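To make "neural-network model training" concrete, here is a minimal sketch of the kind of compute loop such workloads repeat at enormous scale. It is written in plain NumPy for illustration only (the talk's actual benchmarks use frameworks like TensorFlow and Caffe on GPUs/FPGAs); the network shape, learning rate, and step count are arbitrary choices for this toy example.

```python
import numpy as np

# Toy two-layer neural network trained on XOR: a tiny stand-in for the
# dense matrix-multiply workloads that accelerators are built to speed up.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for step in range(2000):
    # Forward pass: two matrix multiplies plus nonlinearities.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)
    losses.append(loss)

    # Backward pass: manual gradients of the mean-squared error.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

In production training runs, the same forward/backward/update pattern is applied to models with millions of parameters, which is why throughput on GPU- and FPGA-equipped cloud instances matters so much.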
