December 15, 2019


Big data architectural patterns and best practices on AWS



Talk Title: Big data architectural patterns and best practices on AWS
Speakers: Siva Raghupathy
Conference: Strata + Hadoop World
Conf Tag: Make Data Work
Location: New York, New York
Date: September 27-29, 2016
URL: Talk Page
Slides: Talk Slides
Video:

The world is producing an ever-increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. The Hadoop ecosystem and AWS provide a plethora of tools for solving big data problems, but which tools should you use, why, and how? Siva Raghupathy demonstrates how to use Hadoop innovations in conjunction with Amazon Web Services innovations, showing how to simplify big data processing as a data bus comprising several stages: collect, store, process/analyze, and consume. Siva then discusses how to choose the right technology for each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, and durability, before providing a reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
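The collect/store/process-analyze/consume data bus described above can be sketched as a minimal in-memory pipeline. The stage names follow the talk, but the implementation below is purely illustrative: in a real AWS deployment each stage would map to a managed service (e.g. a stream such as Kinesis for collect, S3 or DynamoDB for store, EMR or Redshift for process/analyze).

```python
# Illustrative sketch of the collect -> store -> process/analyze -> consume
# data bus. All functions and data here are hypothetical stand-ins, not an
# actual AWS implementation.

def collect(events):
    """Collect: ingest raw events (stand-in for a streaming service)."""
    return list(events)

def store(records, datastore):
    """Store: persist records (stand-in for an object store or database)."""
    datastore.extend(records)
    return datastore

def process(datastore):
    """Process/analyze: aggregate events per user (stand-in for a batch
    or warehouse query)."""
    counts = {}
    for record in datastore:
        counts[record["user"]] = counts.get(record["user"], 0) + 1
    return counts

def consume(counts):
    """Consume: surface results, most active users first (stand-in for a
    dashboard or downstream application)."""
    return sorted(counts.items(), key=lambda kv: -kv[1])

events = [{"user": "a"}, {"user": "b"}, {"user": "a"}]
report = consume(process(store(collect(events), [])))
```

The point of the decomposition is that each stage can be swapped independently based on the selection criteria the talk lists (latency, cost, request rate, and so on) without redesigning the whole pipeline.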
