December 6, 2019

327 words 2 mins read

Deeply active learning: Approximating human learning with smaller datasets combined with human assistance


Natural-language assistants are the emergent killer app for AI. Getting from here to there with deep learning, however, can require enormous datasets. Christopher Nguyen and Binh Han explain how to shorten the time to effectiveness and the amount of training data that's required to achieve a given level of performance using human-in-the-loop active learning.

Talk Title: Deeply active learning: Approximating human learning with smaller datasets combined with human assistance
Speakers: Christopher Nguyen (Arimo), Binh Han (Arimo)
Conference: O’Reilly Artificial Intelligence Conference
Location: New York, New York
Date: September 26-27, 2016
URL: Talk Page
Slides: Talk Slides

Natural-language assistants are the emergent killer app for AI. An important use case is mapping natural-language questions to answers, expressed as a sequence of API calls. A business user wants to ask for some analysis, but traditional user interfaces hard-wire UI events to application-level APIs, offering only a predefined set of operations and forcing business users to phrase their requests through mechanical, well-formed, fixed interactions (choosing options, clicking buttons, filling out wizards, etc.).

Deep learning, and recurrent neural networks in particular, shows surprisingly good performance on text understanding and natural language processing. By taking advantage of recurrent networks, we can create a smart assistant that understands plain English out of the box and maps English phrases to API calls. However, getting from here to there can require enormous datasets.

Christopher Nguyen and Binh Han explain how to shorten the time to effectiveness and the amount of training data required to achieve a given level of performance using human-in-the-loop active learning. In the smart-assistant model, the assistant actively learns over time simply by interacting with a user. Active learning gradually pushes the pretrained general deep model underneath toward a customized model that responds much better to the user's requests: a personalized smart assistant that adapts to each user's style.
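The human-in-the-loop idea described above can be sketched as a simple uncertainty-sampling loop: the model picks the utterance it is least confident about, asks the human for the correct API call, and folds the answer back into its training data. The sketch below is a minimal, hypothetical illustration; the talk's actual system uses a recurrent network, whereas here a trivial keyword scorer stands in for the model, and all function and variable names are invented for the example.

```python
# Hypothetical sketch of human-in-the-loop active learning for mapping
# utterances to API calls. A keyword-count "model" stands in for the
# recurrent network described in the talk.

def confidence(model, utterance):
    """Return the model's confidence (0..1) in its best API-call guess."""
    scores = {api: sum(1 for kw in kws if kw in utterance.lower())
              for api, kws in model.items()}
    total = sum(scores.values())
    return max(scores.values()) / total if total else 0.0

def select_query(model, pool):
    """Uncertainty sampling: pick the utterance the model is least sure about."""
    return min(pool, key=lambda u: confidence(model, u))

def active_learning_loop(model, pool, oracle, rounds=3):
    """Repeatedly query the human (oracle) on the least-confident utterance."""
    for _ in range(rounds):
        if not pool:
            break
        query = select_query(model, pool)
        pool.remove(query)
        api_call = oracle(query)  # the human supplies the correct mapping
        # "Train": associate the utterance's words with the chosen API call
        model.setdefault(api_call, set()).update(query.lower().split())
    return model

# Example: the assistant knows one API call and two unlabeled requests
model = {"plot_sales": {"plot", "sales"}}
pool = ["show revenue by region", "plot sales for Q3"]
oracle = lambda u: "group_by_region"  # stand-in for a real user's answer
trained = active_learning_loop(model, list(pool), oracle, rounds=1)
```

Because the loop spends each human interaction on the example the model understands least, the pretrained general model is nudged toward the individual user's phrasing with far fewer labeled examples than training from scratch would need.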
