January 18, 2020

233 words 2 mins read

Building Robust Streaming Data Pipelines with Apache Spark

Talk Title: Building Robust Streaming Data Pipelines with Apache Spark
Speakers: Zak Hassan (Senior Software Engineer - AI/ML CoE, CTO Office, Red Hat Inc.)
Conference: Open Source Summit North America
Location: Los Angeles, CA, United States
Date: Sep 10-14, 2017
URL: Talk Page
Slides: Talk Slides

There are challenges in architecting a solution that allows developers to stream data into Kafka while managing dirty data, which is always an issue in ETL pipelines. I’d like to share lessons learned and demonstrate how Apache Kafka, Apache Spark, and Apache Camel can be put together to provide developers with a continuous data pipeline for their Spark applications. Without data it is very difficult to take advantage of Spark’s full capabilities. Companies often have their data stored in many different systems, and Apache Camel allows developers to extract, transform, and load that data into many targets; Apache Kafka is one example. Apache Kafka is great for aggregating data in a centralized location, and Apache Spark already comes with a built-in connector for Kafka. I’ll also explain lessons learned from running these technologies inside Docker.
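
To make the Spark side of such a pipeline concrete, here is a minimal sketch of a Structured Streaming job that consumes records from a Kafka topic using Spark’s built-in Kafka connector and drops empty payloads as a simple stand-in for dirty-data handling. The broker address, topic name, and cleaning rule are illustrative assumptions, not details from the talk.

```scala
import org.apache.spark.sql.SparkSession

object KafkaToSparkPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-streaming-pipeline")
      .getOrCreate()
    import spark.implicits._

    // Read a continuous stream from a Kafka topic using Spark's built-in connector.
    // The bootstrap server and topic name below are placeholders.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "ingest-topic")
      .load()

    // Kafka records arrive as binary key/value pairs; cast the value to a string
    // and filter out null or empty rows as a simple example of cleaning dirty data.
    val cleaned = raw.selectExpr("CAST(value AS STRING) AS payload")
      .filter($"payload".isNotNull && $"payload" =!= "")

    // Write the cleaned stream to the console for demonstration purposes;
    // a real pipeline would write to a durable sink instead.
    val query = cleaned.writeStream
      .format("console")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```

In this setup, Camel routes would sit upstream, pulling data out of the various source systems and publishing it to the Kafka topic that the Spark job subscribes to.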
