December 21, 2019

317 words 2 mins read

Model interpretation guidelines for the enterprise: Using Jupyter's interactiveness to build better predictive models (sponsored by DataScience.com)


Pramit Choudhary offers an overview of DataScience.com's model interpretation library, Skater, explains how to use it to evaluate models in the Jupyter environment, and shares how it can help analysts, data scientists, and statisticians better understand their model behavior without compromising on the choice of algorithm.

Talk Title Model interpretation guidelines for the enterprise: Using Jupyter's interactiveness to build better predictive models (sponsored by DataScience.com)
Speakers Pramit Choudhary (h2o.ai)
Conference JupyterCon in New York 2017
Conf Tag
Location New York, New York
Date August 23-25, 2017
URL Talk Page
Slides Talk Slides
Video

The adoption of machine learning and statistical models to solve real-world problems has grown rapidly, yet practitioners still struggle to realize the full potential of their predictive models. Choosing an algorithm often involves a trade-off between explainability and performance: when operationalizing models, linear models and simple decision trees are frequently preferred over more complex models such as ensembles or deep learning for ease of interpretation, often at a cost in accuracy. But is it necessary to accept a trade-off between model complexity and interpretability?

Being able to faithfully interpret a model globally, using partial dependence plots (PDPs) and relative feature importance, and locally, using local interpretable model-agnostic explanations (LIME), helps in understanding how features contribute to predictions and how the model behaves in a nonstationary environment. This builds trust in the algorithm, which in turn drives better collaboration and communication among peers. The need to understand the variability in a model's predictive power in a human-interpretable way is even more pressing for complex models (e.g., for text, images, and machine translation).

Pramit Choudhary offers an overview of DataScience.com's model interpretation library, Skater, explains how to use it to evaluate models in the Jupyter environment, and shares how it can help analysts, data scientists, and statisticians better understand their model behavior without compromising on the choice of algorithm. This session is sponsored by DataScience.com.
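To illustrate the global-interpretation idea behind partial dependence, the curve for one feature can be computed by hand: pin that feature to each value on a grid, average the model's predictions over the dataset, and collect the averages. The sketch below uses a toy prediction function and dataset invented for this example; it is a conceptual illustration only and does not use Skater's actual API.

```python
def partial_dependence(predict, X, feature_idx, grid):
    """Average model predictions with one feature pinned to each grid value."""
    curve = []
    for value in grid:
        total = 0.0
        for row in X:
            modified = list(row)
            modified[feature_idx] = value  # pin the feature of interest
            total += predict(modified)
        curve.append(total / len(X))  # marginal effect at this grid value
    return curve

# Toy model: depends quadratically on feature 0 and linearly on feature 1.
def toy_model(x):
    return x[0] ** 2 + 0.5 * x[1]

X = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0], [3.0, 4.0]]
pdp = partial_dependence(toy_model, X, feature_idx=0, grid=[0.0, 1.0, 2.0])
print(pdp)  # [1.25, 2.25, 5.25] -- the quadratic effect of feature 0
```

Plotting such a curve in a Jupyter notebook is what makes the interactive workflow the talk describes useful: the analyst can vary the feature and grid and immediately see how the model's average response changes.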
