January 21, 2020

249 words 2 mins read

Lessons learned building a scalable and extendable data pipeline for Call of Duty

Talk Title: Lessons learned building a scalable and extendable data pipeline for Call of Duty
Speakers: Yaroslav Tkachenko (Activision)
Conference: Strata Data Conference
Conf Tag: Make Data Work
Location: New York, New York
Date: September 11-13, 2018
URL: Talk Page
Slides: Talk Slides
Video:

What’s easier than building a data pipeline? You add a few Apache Kafka clusters and a way to ingest data (probably over HTTP), design a way to route your data streams, add a few stream processors and consumers, and integrate with a data warehouse...wait, this looks like a lot of things, doesn’t it? And you probably want it to be highly scalable and available too. Join Yaroslav Tkachenko to learn best practices for building a data pipeline, drawn from his experience at Demonware/Activision. Yaroslav shares lessons learned about scaling pipelines, not only in terms of messages per second but also in terms of supporting more games and more use cases, and covers message schemas, Apache Kafka organization and tuning, topic naming conventions, structure, and routing, building reliable and scalable producers and an ingestion layer, and stream processing.
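The talk itself covers these topics in depth; as a flavor of what "reliable producers" and "topic naming conventions" can look like in practice, here is a minimal sketch using the confluent-kafka Python client. The broker addresses, topic name, naming scheme, and event payload are illustrative assumptions, not taken from the talk.

```python
import json
from confluent_kafka import Producer

# Reliability-focused producer configuration (illustrative values).
producer = Producer({
    "bootstrap.servers": "kafka-1:9092,kafka-2:9092",
    "enable.idempotence": True,   # safe retries without duplicate writes
    "acks": "all",                # wait for all in-sync replicas
    "compression.type": "lz4",    # reduce network and broker load at scale
    "linger.ms": 20,              # batch messages for throughput
})

def on_delivery(err, msg):
    # Surface failures instead of dropping events silently.
    if err is not None:
        print(f"delivery failed for {msg.topic()}: {err}")

# Hypothetical topic following a <source>.<game>.<event-type> convention,
# which keeps routing rules simple as more games and use cases are added.
event = {"player_id": 42, "event": "match_start"}
producer.produce(
    "telemetry.game-a.match-events",
    key=str(event["player_id"]),          # keyed for partition affinity
    value=json.dumps(event).encode("utf-8"),
    on_delivery=on_delivery,
)
producer.flush()
```

The idempotent, acks=all configuration trades a little latency for delivery guarantees, which matters when a pipeline fans out to many downstream stream processors and consumers.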
