December 1, 2019


Migrating petabyte-scale Hadoop clusters with zero downtime



Talk Title: Migrating petabyte-scale Hadoop clusters with zero downtime
Speakers: Alon Elishkov (Outbrain)
Conference: Strata Data Conference
Conf Tag: Making Data Work
Location: London, United Kingdom
Date: May 23-25, 2017
URL: Talk Page
Slides: Talk Slides

Migrating a petabyte-scale Hadoop installation to a new cluster with hundreds of machines, several thousand daily jobs, and countless ecosystem integrations while maintaining a stable production environment is a challenging task. Add the need to support active, fast-paced continuous deployment with dozens of daily commits while keeping production stable, and it becomes a truly herculean endeavor. Alon Elishkov discusses the techniques and tools Outbrain has developed to achieve this goal. This session is sponsored by MapR Technologies.
