Interpretable machine learning products
Interpretable models result in more accurate, safer, and more profitable machine learning products, but interpretability can be hard to achieve. Mike Lee Williams examines the growing business case for interpretability, explores concrete applications in churn prediction, finance, and healthcare, and demonstrates the use of LIME, an open source, model-agnostic tool you can apply to your models today.
| Talk Title | Interpretable machine learning products |
|---|---|
| Speakers | Mike Lee Williams (Cloudera Fast Forward Labs) |
| Conference | Strata Data Conference |
| Conf Tag | Making Data Work |
| Location | London, United Kingdom |
| Date | May 22-24, 2018 |
| URL | Talk Page |
| Slides | Talk Slides |
| Video | |
A model you can interpret and understand is one you can more easily improve. It is also one you, regulators, and society can more readily trust to be safe and nondiscriminatory. An accurate, interpretable model can also offer insights that can be used to change real-world outcomes for the better. There is a central tension, however, between accuracy and interpretability: the most accurate models tend to be the hardest to understand. Mike Lee Williams examines the growing business case for interpretability, explores concrete applications in churn prediction, finance, and healthcare, and discusses LIME, an open source, model-agnostic tool that sidesteps the tension between accuracy and interpretability by letting you peer inside black-box models. Mike concludes by sharing a practical prototype that brings these concepts to life: a working web application that uses LIME to explain why customers churn and suggests how to intervene to prevent their loss.
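For readers who want to try this before the talk, here is a minimal sketch of explaining a single churn prediction with the open source `lime` package. The dataset, model, and feature names below are hypothetical stand-ins, not the ones from the talk's prototype; only the `LimeTabularExplainer` and `explain_instance` calls come from the lime library itself.

```python
# A minimal sketch: explain one black-box churn prediction with LIME.
# Requires: pip install lime scikit-learn numpy
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical tabular churn data: rows are customers, columns are features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + X_train[:, 2] > 0).astype(int)  # toy churn label
feature_names = ["monthly_charges", "tenure_months", "support_calls"]

# A black-box model: accurate, but hard to interpret directly.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME fits a simple, interpretable surrogate model around one prediction.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["stays", "churns"],
    mode="classification",
)
explanation = explainer.explain_instance(
    X_train[0],               # the customer whose prediction we explain
    model.predict_proba,      # LIME only needs the model's predict function
    num_features=3,
)

# Each (feature, weight) pair shows how that feature pushed this
# customer's prediction toward or away from "churns".
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")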