December 17, 2019


A hands-on data science crash course for modeling and predicting the behavior of (large) distributed systems


Data science is a hot topic. Bart De Vylder offers a practical introduction that goes beyond the hype, exploring data analysis, visualization, and machine-learning techniques using Python for modeling the behavior of distributed systems. You'll leave with a solid starting point to implement data science techniques in your infrastructure or domain of interest.

Talk Title A hands-on data science crash course for modeling and predicting the behavior of (large) distributed systems
Speakers Bart De Vylder (CoScale)
Conference O’Reilly Velocity Conference
Conf Tag Build Resilient Distributed Systems
Location San Jose, California
Date June 20-22, 2017
URL Talk Page
Slides Talk Slides
Video

Data science is a hot topic, but the sheer number of available software libraries, languages, and platforms can be overwhelming for those who want to get started in the field. Bart De Vylder offers a practical introduction that goes beyond the hype, exploring data analysis and modeling techniques applied to the behavior of distributed systems. Using hosted IPython notebooks and a real-world dataset of monitoring data from a nontrivial distributed application (one consisting of both stateful and stateless services communicating over a message bus), Bart walks you through the Python scientific ecosystem (NumPy, SciPy, and scikit-learn) and demonstrates data visualization techniques that aid the interpretation of the data and of the models built from it.

Bart discusses clustering techniques, such as automatically discovering which servers or containers are running in a load-balanced fashion, and shows how to apply correlation analysis and dimensionality reduction. Modern monitoring systems easily capture tens of thousands of metrics, but many of these metrics are highly correlated and convey little extra information. Applying dimensionality reduction to discover these correlations automatically helps in understanding and visualizing the data and is a step in preparing and modeling it.

Bart also outlines supervised machine-learning techniques for modeling data and touches on the important concepts of overfitting and cross-validation, weighing the advantages and disadvantages of simple linear techniques against more advanced ones. He then shows how to put these models into action and make predictions, discussing techniques for what-if analyses related to capacity planning (e.g., which resource will be the next bottleneck if the number of web requests keeps increasing?) and robustness (e.g., what is the impact on a service's SLA if a node drops out?).
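The clustering idea can be sketched with scikit-learn's KMeans: cluster normalized metric time series so that servers following the same traffic pattern (i.e., load-balanced replicas) land in the same cluster. The server names and synthetic CPU series below are invented for illustration; the talk's actual dataset is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)

# Hypothetical CPU-usage series: web-1..3 share a load-balanced traffic
# pattern, while db-1 and cache-1 follow different workloads.
base_web = 50 + 20 * np.sin(t)
metrics = {
    "web-1": base_web + rng.normal(0, 1, t.size),
    "web-2": base_web + rng.normal(0, 1, t.size),
    "web-3": base_web + rng.normal(0, 1, t.size),
    "db-1": 30 + 10 * np.cos(2 * t) + rng.normal(0, 1, t.size),
    "cache-1": 70 - 15 * np.sin(t / 2) + rng.normal(0, 1, t.size),
}

names = list(metrics)
X = np.array([metrics[n] for n in names])
# Normalize each series so clustering compares shape, not absolute level.
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for name, label in zip(names, labels):
    print(name, "-> cluster", label)
```

The three web servers end up in the same cluster because their normalized series are nearly identical, which is exactly the signal used to detect load balancing.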
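The dimensionality-reduction point can be sketched with PCA: when many metrics are driven by a few underlying factors, a handful of principal components captures nearly all the variance. The "drivers" and metric mix below are assumptions made up for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n = 300

# Hypothetical metrics: five series that are all linear functions of two
# underlying drivers (request rate and batch-job load) plus small noise,
# so they are highly correlated with one another.
requests = rng.normal(0, 1, n)
batch = rng.normal(0, 1, n)
noise = lambda: rng.normal(0, 0.05, n)
X = np.column_stack([
    requests + noise(),        # web CPU
    0.9 * requests + noise(),  # web network out
    0.8 * requests + noise(),  # load-balancer connections
    batch + noise(),           # worker CPU
    0.7 * batch + noise(),     # disk I/O
])

pca = PCA().fit(X)
explained = np.cumsum(pca.explained_variance_ratio_)
# How many components are needed to retain 95% of the variance?
k = int(np.searchsorted(explained, 0.95)) + 1
print("components needed for 95% of the variance:", k)
```

Two components suffice here, mirroring the point that tens of thousands of correlated metrics often hide a much smaller number of independent signals.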
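The supervised-modeling and what-if steps can be sketched as follows: fit models of differing complexity, use cross-validation to see how each generalizes (guarding against overfitting), then extrapolate with the chosen model for capacity planning. The request-rate/latency relationship and all numbers are invented for the sketch.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)

# Hypothetical monitoring sample: request rate (hundreds of req/s) vs.
# response time (ms), roughly linear with measurement noise.
req_rate = rng.uniform(1, 10, 80).reshape(-1, 1)
latency = 5 + 8 * req_rate.ravel() + rng.normal(0, 3, 80)

# Compare a simple linear model with a high-degree polynomial one:
# cross-validated R^2 shows how well each generalizes to held-out data.
scores = {}
for degree in (1, 9):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores[degree] = cross_val_score(model, req_rate, latency, cv=5).mean()
    print(f"degree {degree}: mean cross-validated R^2 = {scores[degree]:.3f}")

# What-if analysis: extrapolate latency at 1500 req/s with the linear model.
linear = LinearRegression().fit(req_rate, latency)
pred = linear.predict([[15.0]])[0]
print(f"predicted response time at 1500 req/s: {pred:.1f} ms")
```

The same pattern (fit, cross-validate, extrapolate) underlies the capacity-planning questions the talk raises, such as which resource saturates first as request volume grows.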
Bart ends with a challenging problem on the given dataset using one of the discussed techniques, with a prize for the attendee with the best solution.
