December 21, 2019


Adding meaning to natural language processing


Jonathan Mugan surveys the field of natural language processing (NLP) from both a symbolic and a subsymbolic perspective, arguing that the current limitations of NLP stem from computers' lack of grounded understanding of our world. Jonathan then outlines ways that computers can achieve that understanding.

Talk Title: Adding meaning to natural language processing
Speakers: Jonathan Mugan (DeepGrammar)
Conference: O’Reilly Artificial Intelligence Conference
Conf Tag: Put AI to Work
Location: New York, New York
Date: June 27-29, 2017
URL: Talk Page
Slides: Talk Slides

Jonathan Mugan surveys two paths in natural language processing that move from meaningless tokens toward artificial intelligence. The first is the symbolic path. Jonathan explores the bag-of-words and tf-idf models for document representation and discusses topic modeling with latent Dirichlet allocation (LDA). He then covers sentiment analysis, representations such as WordNet, FrameNet, and ConceptNet, and the importance of causal models for language understanding. The second is the subsymbolic path: the neural networks (deep learning) that you’ve heard so much about. Jonathan begins with word vectors, explaining how they are used in sequence-to-sequence models for machine translation, before demonstrating how machine translation lays the foundation for general question answering. Jonathan concludes with a discussion of how to build deeper understanding into your artificial systems.
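To make the symbolic starting point concrete, here is a minimal pure-Python sketch of the bag-of-words and tf-idf representations mentioned above. The toy corpus and function names are illustrative assumptions of mine, not material from the talk, and the tf-idf variant shown (raw count times log inverse document frequency) is just one common formulation.

```python
import math
from collections import Counter

# Toy corpus (an assumption for illustration, not from the talk).
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are animals",
]

# Bag-of-words: each document becomes a term-count vector,
# discarding word order entirely.
bags = [Counter(doc.split()) for doc in docs]

def tf_idf(term, bag, bags):
    """tf-idf weight of `term` in one document: raw term count
    times log of (number of documents / documents containing term)."""
    tf = bag[term]
    df = sum(1 for b in bags if term in b)
    idf = math.log(len(bags) / df)
    return tf * idf

# "the" occurs in two of three documents, so its idf is low;
# "cat" occurs in only one, so it scores higher despite a lower count.
print(round(tf_idf("cat", bags[0], bags), 3))  # → 1.099
print(round(tf_idf("the", bags[0], bags), 3))  # → 0.811
```

The example also shows the limitation Jonathan's talk is driving at: the vectors capture which words co-occur, but nothing about what a cat or a mat actually is.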
