February 19, 2020


How to deploy large-scale distributed data analytics and machine learning on containers (sponsored by HPE)

Join Thomas Phelan to learn whether the combination of containers with large-scale distributed data analytics and machine learning applications is like combining oil and water or like peanut butter and chocolate.

Talk Title How to deploy large-scale distributed data analytics and machine learning on containers (sponsored by HPE)
Speakers Thomas Phelan (HPE BlueData)
Conference O’Reilly Artificial Intelligence Conference
Conf Tag Put AI to Work
Location London, United Kingdom
Date October 15-17, 2019
URL Talk Page
Slides Talk Slides

AI, ML, and data analytics are transforming every industry. When Gartner recently asked CIOs which technologies were game-changers for their organizations, AI and ML topped the list, with data analytics at number two. At the same time, containerization is taking the enterprise by storm, driven by the benefits of greater agility, efficiency, and portability across any infrastructure. By 2022, more than 75% of global organizations will be running containerized applications in production, up from less than 30% today. As a result, many companies are now exploring whether it is possible to deploy complex distributed data analytics and ML applications (like Cloudera, Spark, Kafka, H2O, and TensorFlow) at scale on containers, with enterprise-grade security and performance in production. Thomas Phelan explains how to make it work. This session is sponsored by HPE.
