November 25, 2019


Human in the loop: Bayesian rules enabling explainable AI

Pramit Choudhary explores the usefulness of a generative approach that applies Bayesian inference to generate human-interpretable decision sets in the form of "if...and else" statements. These human-interpretable decision lists with high posterior probabilities might be the right way to balance model interpretability, performance, and computation.

Talk Title Human in the loop: Bayesian rules enabling explainable AI
Speakers Pramit Choudhary (h2o.ai)
Conference Strata Data Conference
Conf Tag Big Data Expo
Location San Jose, California
Date March 6-8, 2018
URL Talk Page
Slides Talk Slides
Video

The adoption of machine learning to solve real-world problems has increased exponentially, but users still struggle to derive the full potential of their predictive models. It is no longer sufficient to evaluate a model solely by its prediction accuracy on a validation set using error metrics. Yet there remains a dichotomy between explainability and model performance when choosing an algorithm: linear models and simple decision trees are often preferred over more complex models such as ensembles or deep learning models for ease of interpretation, but this often comes at a loss in accuracy. Is it actually necessary to accept a trade-off between model complexity and interpretability?

Pramit Choudhary explores the usefulness of a generative approach that applies Bayesian inference to generate human-interpretable decision sets in the form of "if...and else" statements. These human-interpretable decision lists with high posterior probabilities might be the right way to balance model interpretability, performance, and computation. This work is an extension of DataScience.com's ongoing effort to enable trust in predictive algorithms and to drive better collaboration and communication among peers. Pramit also outlines DataScience.com's open source model interpretation framework, Skater, and explains how it helps practitioners better understand model behavior without compromising on the choice of algorithm.
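To give a concrete (purely illustrative) sense of the output, a fitted decision list reads along the lines of: IF feature_a > t1 AND feature_b = x THEN predict class 1 (with some posterior probability), ELSE IF feature_c < t2 THEN predict class 0, ... ELSE predict the default class. As a complementary, post hoc angle, the sketch below shows roughly how Skater is used to probe an already-trained black-box model; the skater import paths and call signatures follow the project's public examples from around this period and are assumptions that may differ in other versions.

```python
# Illustrative sketch: probing an opaque model with Skater (not the talk's
# exact code). The skater API usage below is assumed from its documented
# examples of this era and may have changed in later releases.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from skater.core.explanations import Interpretation
from skater.model import InMemoryModel

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

# Train a model that is accurate but hard to interpret directly.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Wrap the prediction function so Skater can treat the model as a black box.
model = InMemoryModel(clf.predict_proba, examples=X_train)

# The Interpretation object holds the data used to probe the model.
interpreter = Interpretation(X_test, feature_names=iris.feature_names)

# Model-agnostic global feature importance, independent of the algorithm choice.
print(interpreter.feature_importance.feature_importance(model))
```

Post hoc probes like this explain an existing complex model, whereas the Bayesian rule-list approach discussed in the talk produces a model that is interpretable by construction; the two are complementary rather than competing.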
