December 18, 2019

Programming your way to explainable AI

As interactive and autonomous systems make their way into nearly every aspect of our lives, it is crucial that we can trust the intelligent systems behind them. Mark Hammond explores the latest techniques and research in building explainable AI systems. Join in to learn approaches for building explainability into control and optimization tasks, including robotics, manufacturing, and logistics.

Talk Title: Programming your way to explainable AI
Speakers: Mark Hammond (Microsoft)
Conference: O’Reilly Artificial Intelligence Conference
Conf Tag: Put AI to Work
Location: New York, New York
Date: June 27-29, 2017
URL: Talk Page
Slides: Talk Slides

Greater interpretability is crucial to greater adoption of applied AI, yet today’s most popular approaches to building AI models don’t allow for it. Explainability of intelligent systems has run the gamut from traditional expert systems, which are fully explainable but inflexible and hard to use, to deep neural networks, which are effective but virtually impossible to see inside. Developing trust between consumers of AI applications and the algorithms that power them will require the ability to understand how intelligent systems reach their conclusions.

Mark Hammond explores the latest techniques and cutting-edge research currently underway to build explainability into AI models. Mark dives into two approaches, learning deep explanations and model induction, and discusses how effective each is at explaining classification tasks. He then explains how a third category, learning more interpretable models with recomposability, uses building blocks to build explainability into control tasks. To keep it fun and engaging, Mark demonstrates these approaches by solving the game Lunar Lander more effectively on the Bonsai platform.
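
To make the "building blocks" idea concrete, here is a minimal, hand-written sketch of a recomposable Lunar Lander controller in Python. It is my own illustration of the approach rather than anything from the talk or the Bonsai platform: the Obs fields, sub-skill names (level_craft, kill_drift, control_descent), thresholds, and action labels are all assumptions chosen for readability. The point is that each building block is small enough to read on its own, and the selector that composes them can report why it chose each action.

```python
from dataclasses import dataclass

# Abstract action labels for this sketch; the signs and thresholds below are
# illustrative assumptions, not tuned values and not any platform's API.
NOOP, TILT_LEFT, TILT_RIGHT, FIRE_MAIN = "noop", "tilt_left", "tilt_right", "fire_main"


@dataclass
class Obs:
    x: float            # horizontal offset from the landing pad
    y: float            # altitude above the pad
    vx: float           # horizontal velocity
    vy: float           # vertical velocity (negative means falling)
    angle: float        # tilt in radians (0 is upright)
    angular_vel: float  # rate of tilt change
    leg_contact: bool   # True once either leg touches the ground


def level_craft(obs: Obs) -> str:
    """Building block: keep the lander upright."""
    if obs.angle > 0.1 or obs.angular_vel > 0.2:
        return TILT_LEFT
    if obs.angle < -0.1 or obs.angular_vel < -0.2:
        return TILT_RIGHT
    return NOOP


def kill_drift(obs: Obs) -> str:
    """Building block: cancel sideways drift by tilting against it."""
    if obs.vx > 0.2:
        return TILT_LEFT
    if obs.vx < -0.2:
        return TILT_RIGHT
    return NOOP


def control_descent(obs: Obs) -> str:
    """Building block: keep the fall rate gentle, braking harder near the ground."""
    max_fall = 0.4 if obs.y > 0.5 else 0.15
    return FIRE_MAIN if obs.vy < -max_fall else NOOP


def landing_policy(obs: Obs):
    """Selector: arbitrate among the building blocks and return (action, reason).

    Because each block is tiny and the arbitration order is explicit, every
    action the composed policy takes comes with a human-readable explanation.
    """
    if obs.leg_contact:
        return NOOP, "a leg has touched down, so cut the engines"
    for name, skill in (("level_craft", level_craft),
                        ("control_descent", control_descent),
                        ("kill_drift", kill_drift)):
        action = skill(obs)
        if action != NOOP:
            return action, f"{name} chose '{action}'"
    return NOOP, "all building blocks are satisfied"


if __name__ == "__main__":
    # Falling a bit fast while slightly tilted: the policy explains its choice.
    obs = Obs(x=0.1, y=0.8, vx=0.05, vy=-0.6, angle=0.15,
              angular_vel=0.0, leg_contact=False)
    action, reason = landing_policy(obs)
    print(action, "->", reason)
```

In a real system, any individual block could be replaced by a learned controller while the explicit arbitration keeps the composed behavior inspectable, which is the spirit of building explainability in from the start rather than bolting it on afterward.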
