November 1, 2019


The future of column-oriented data processing with Arrow and Parquet

In pursuit of speed, big data is evolving toward columnar execution. The solid foundation laid by Arrow and Parquet for a shared columnar representation across the ecosystem promises a great future. Julien Le Dem and Jacques Nadeau discuss the future of columnar processing and the hardware trends it takes advantage of, such as RDMA, SSDs, and nonvolatile memory.

Talk Title: The future of column-oriented data processing with Arrow and Parquet
Speakers: Julien Le Dem (WeWork), Jacques Nadeau (Dremio)
Conference: Strata + Hadoop World
Conf Tag: Big Data Expo
Location: San Jose, California
Date: March 14-16, 2017
URL: Talk Page
Slides: Talk Slides

In pursuit of speed and efficiency, big data processing is continuing its logical evolution toward columnar execution. A number of key big data technologies, including Kudu, Ibis, and Drill, either have or will soon have in-memory columnar capabilities. Jacques Nadeau, vice president of Apache Arrow, and Julien Le Dem, vice president of Apache Parquet, discuss the future of columnar data processing and the hardware trends it can take advantage of.

Modern CPUs achieve higher throughput by applying SIMD instructions and vectorization to Apache Arrow's columnar in-memory representation. Similarly, Apache Parquet provides storage- and I/O-optimized columnar data access using statistics and appropriate encodings. By contrast, the common approach to interoperability, row-based encodings (CSV, Thrift, Avro) combined with general-purpose compression algorithms (gzip, LZO, Snappy), is inefficient.

This solid foundation for a shared columnar representation across the big data ecosystem promises great things for the future. The Arrow and Parquet Apache projects define standard columnar representations that allow interoperability without the usual cost of serialization. Arrow-based interconnection between the various big data tools (SQL engines, UDFs, machine learning, big data frameworks, etc.) enables them to be used together seamlessly and efficiently, without overhead. When processes are collocated on the same node, read-only shared memory and IPC avoid communication overhead; when they are remote, scatter-gather I/O sends the memory representation directly to the socket, avoiding serialization costs, and RDMA will soon allow exposing data remotely as well.

As in-memory processing becomes more popular, the traditional two-tier model of RAM as working space and HDD as persistent storage is outdated. More tiers are now available, such as SSDs and nonvolatile memory, that provide much higher data density and latency close to that of RAM at a fraction of the cost. Execution engines can take advantage of this finer-grained tiering to avoid the traditional spilling to disk, which degrades performance by an order of magnitude when the working dataset does not fit in main memory. Minimal sketches of the three ideas above follow.
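To make the vectorization point concrete, here is a minimal sketch using the pyarrow Python bindings (an assumption; the talk does not prescribe a language or API). Each column of an Arrow table is stored as a contiguous buffer, which is what lets compute kernels operate column-at-a-time rather than row-by-row:

```python
import pyarrow as pa
import pyarrow.compute as pc

# Build an in-memory columnar table; the values of each column
# sit contiguously in memory, which is SIMD-friendly.
table = pa.table({
    "price": [9.99, 24.50, 3.75, 12.00],
    "quantity": [3, 1, 10, 2],
})

# Column-at-a-time (vectorized) arithmetic instead of a per-row loop.
revenue = pc.multiply(table["price"], table["quantity"])
print(pc.sum(revenue))  # total revenue across all rows
```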
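Parquet's statistics and encodings show up on the read path as column projection and predicate pushdown: only the requested columns are decoded, and row groups whose min/max statistics cannot match the filter are skipped. A minimal sketch, again with pyarrow; the file name and column names are illustrative:

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "user_id": list(range(1_000)),
    "score": [i % 100 for i in range(1_000)],
})
# Written in columnar form, with per-column encodings and statistics.
pq.write_table(table, "example.parquet")

# Column projection: only the 'score' column is read from disk.
# Predicate pushdown: row-group statistics prune non-matching data.
subset = pq.read_table("example.parquet",
                       columns=["score"],
                       filters=[("score", ">", 90)])
print(subset.num_rows)
```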
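Finally, the "interoperability without serialization" claim rests on Arrow's IPC format, which writes the in-memory column buffers to a stream as-is, so a reader can reconstruct the batch directly over the received bytes. A minimal sketch, using an in-memory buffer as a stand-in for shared memory or a socket:

```python
import pyarrow as pa

batch = pa.RecordBatch.from_pydict({"id": [1, 2, 3],
                                    "name": ["a", "b", "c"]})

# Write the batch's raw column buffers plus schema metadata to a stream.
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, batch.schema) as writer:
    writer.write_batch(batch)
buf = sink.getvalue()

# The reader rebuilds the columnar data without per-value decoding.
reader = pa.ipc.open_stream(buf)
roundtrip = reader.read_all()
print(roundtrip.column("name"))
```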
