January 12, 2020

378 words 2 mins read

Explaining machine learning models

What does it mean to explain a machine learning model, and why is it important? Armen Donigian addresses those questions while discussing several modern explainability methods, including traditional feature contributions, LIME, and DeepLIFT. Each of these techniques offers a different perspective, and applying them well can reveal new insights and meet business requirements.

Talk Title Explaining machine learning models
Speakers Armen Donigian (ZestFinance)
Conference Artificial Intelligence Conference
Conf Tag Put AI to Work
Location San Francisco, California
Date September 5-7, 2018
URL Talk Page
Slides Talk Slides
Video

Machine learning models are often complex, with representations so large and abstract that the relationship between their inputs and outputs can seem like a black box. A modern neural network, for example, might look at thousands of features and perform millions of additions and multiplications to produce a prediction. But how do we explain that prediction to someone else? How do we tell which features are important and why? And if we can’t understand how a model makes a prediction, can we really trust it to run our business, draw medical conclusions, or make an unbiased decision about an applicant’s eligibility for a loan?

Explainability techniques clarify how models make decisions, offering answers to these questions and giving us confidence that our models are functioning properly (or not). Each technique applies to a different set of models, makes different assumptions, and answers a slightly different question, but used properly, they can meet business requirements and improve model performance.

Armen Donigian shares examples of two of the main families of explainability techniques. The first directly relates inputs to outputs, a naturally intuitive approach that includes local interpretable model-agnostic explanations (LIME), axiomatic attributions, VisualBackProp, and traditional feature contributions. The second makes use of the data the model was trained on: DeepLIFT, for example, attributes a prediction to its features by comparing the model’s behavior against a reference input drawn from the training data, while scrambling and prototype methods offer overviews of the decision-making process. Along the way, Armen discusses how ZestFinance approaches explainability, offering a practical guide for your own work. While there is no perfect “silver bullet” explainability technique, understanding when and how to use these approaches enables you to explain many useful models and gives you a broad view of current explainability best practices and research.
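To make the first family concrete, here is a minimal, from-scratch sketch of the idea behind LIME for tabular data: perturb the instance being explained, query the black-box model on those perturbations, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature contributions. This is an illustrative simplification rather than the lime library or the exact method presented in the talk; the function and parameter names (local_explanation, predict_fn, kernel_width) are hypothetical.

import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(predict_fn, instance, training_data,
                      num_samples=5000, kernel_width=0.75, seed=0):
    # Perturb the instance with Gaussian noise scaled to each feature's spread
    # in the training data, so perturbations stay on a plausible scale.
    rng = np.random.default_rng(seed)
    scale = training_data.std(axis=0) + 1e-12
    noise = rng.normal(0.0, 1.0, size=(num_samples, instance.shape[0]))
    perturbations = instance + noise * scale

    # Query the black-box model on the perturbed points.
    predictions = predict_fn(perturbations)

    # Weight each sample by its proximity to the original instance
    # (an RBF kernel on the scaled distance, as in LIME).
    distances = np.linalg.norm((perturbations - instance) / scale, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2 * instance.shape[0]))

    # Fit an interpretable linear surrogate that mimics the model locally;
    # its coefficients act as local feature contributions.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbations, predictions, sample_weight=weights)
    return surrogate.coef_

For a scikit-learn classifier, predict_fn might be lambda X: model.predict_proba(X)[:, 1], so the surrogate explains the predicted probability of the positive class for that one instance.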
