From flat files to deconstructed databases: The evolution and future of the big data ecosystem
Big data infrastructure has evolved from flat files in a distributed filesystem into an efficient ecosystem and, ultimately, a fully deconstructed and open source database built from reusable components. Julien Le Dem discusses the key open source components of the big data ecosystem and explains how they relate to each other, and how they make the ecosystem more of a database and less of a filesystem.
Talk Title | From flat files to deconstructed databases: The evolution and future of the big data ecosystem |
Speakers | Julien Le Dem (WeWork) |
Conference | Strata Data Conference |
Conf Tag | Big Data Expo |
Location | San Francisco, California |
Date | March 26-28, 2019 |
URL | Talk Page |
Slides | Talk Slides |
Over the past 10 years, big data infrastructure has evolved from flat files in a distributed filesystem into an efficient ecosystem and, ultimately, a fully deconstructed and open source database built from reusable components. With Hadoop, we started from a system that was good at looking for a needle in a haystack using snowplows: we had plenty of horsepower and scalability but lacked the subtlety and efficiency of relational databases. Because Hadoop offered far more flexibility than the more constrained and rigid RDBMSs, we didn’t mind and plowed through. However, machine learning, recommendations, matching, abuse detection, and data-driven products in general require a more flexible infrastructure.

Over time, we started applying everything the database world had known for decades to this new environment. We had been told, loudly enough, that Hadoop was a huge step backward, and to some degree that was true. The key difference was the flexibility of the Hadoop stack. A relational database bundles many highly integrated components, and decoupling them took some time. Today, we see key components, such as optimizers, columnar storage, in-memory representation, table abstractions, and batch and streaming execution, emerging as standards that provide the glue between the options available to process, analyze, and learn from our data. We have been deconstructing the tightly integrated relational database into flexible, reusable open source components: storage, compute, multitenancy, and batch or streaming execution are all decoupled and can be changed independently to fit each use case.

Julien Le Dem discusses the key open source components of the big data ecosystem, including Apache Calcite, Parquet, Arrow, Avro, and Kafka, as well as batch and streaming systems, and explains how they relate to each other and how they make the ecosystem more of a database and less of a filesystem. Parquet is the columnar data layout that optimizes data at rest for querying; Arrow is the in-memory representation for maximum-throughput execution and overhead-free data exchange; and Calcite is the optimizer that makes the most of our infrastructure’s capabilities. Julien also explores the emerging components that are still missing, or haven’t yet become standard, and that are needed to fully materialize the transformation into an extremely flexible database that lets you innovate with your data.
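To make the Parquet/Arrow relationship concrete, here is a minimal sketch, not taken from the talk, that assumes the `pyarrow` Python bindings and an illustrative file name `events.parquet`. It shows data moving between the at-rest columnar format (Parquet) and the in-memory columnar representation (Arrow):

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Build an Arrow table in memory (Arrow is the columnar in-memory representation).
table = pa.table({
    "user_id": pa.array([1, 2, 3], type=pa.int64()),
    "event": pa.array(["view", "click", "view"]),
})

# Persist it as Parquet, the columnar at-rest layout optimized for querying.
# "events.parquet" is an illustrative path, not one referenced by the talk.
pq.write_table(table, "events.parquet")

# Read it back, materializing only the columns we actually need.
subset = pq.read_table("events.parquet", columns=["event"])
print(subset.column("event"))
```

Because both formats are columnar, the read can skip the columns it does not need, which is the kind of efficiency the deconstructed stack borrows from relational databases while keeping each component replaceable.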