November 19, 2019

218 words 2 mins read

Word embeddings under the hood: How neural networks learn from language


Word vector embeddings are everywhere, but relatively few understand how they produce their remarkable results. Patrick Harrison opens up the black box of a popular word embedding algorithm and walks you through how it works its magic. Patrick also covers core neural network concepts, including hidden layers, loss gradients, backpropagation, and more.

Talk Title: Word embeddings under the hood: How neural networks learn from language
Speakers: Patrick Harrison (S&P Global)
Conference: Strata Data Conference
Conf Tag: Big Data Expo
Location: San Jose, California
Date: March 6-8, 2018
URL: Talk Page
Slides: Talk Slides
Video:

Since their introduction in the early 2010s, word vector embedding models have exploded in popularity and use. They are one of the key breakthroughs that have enabled a new, state-of-the-art approach to natural language processing based on deep learning. But despite their impact, relatively few practitioners understand how word vector models work under the hood to capture the semantic relationships within natural language data and produce their remarkable results. Patrick Harrison opens up the black box of a popular word embedding algorithm and walks you through how it works its magic. Patrick also covers core neural network concepts, including hidden layers, loss gradients, backpropagation, and more. This talk is based on an excerpt from the forthcoming book Deep Learning with Text from O’Reilly Media.
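
The talk itself does not publish code, but to make the listed concepts (hidden layers, loss gradients, backpropagation) concrete, below is a minimal NumPy sketch of skip-gram with negative sampling, the word2vec-style family of algorithm the description alludes to. The tiny corpus, embedding size, and hyperparameters are illustrative assumptions, not material from the talk.

```python
# Minimal skip-gram with negative sampling, for illustration only.
# Corpus, dimensions, and hyperparameters are toy values.
import numpy as np

rng = np.random.default_rng(0)

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
word2id = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                        # vocabulary size, embedding dimension

W_in = rng.normal(scale=0.1, size=(V, D))   # "input" embeddings (hidden-layer weights)
W_out = rng.normal(scale=0.1, size=(V, D))  # "output" (context) embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, n_neg, epochs = 0.05, 2, 3, 200

for _ in range(epochs):
    for pos, word in enumerate(corpus):
        center = word2id[word]
        lo, hi = max(0, pos - window), min(len(corpus), pos + window + 1)
        for ctx_pos in range(lo, hi):
            if ctx_pos == pos:
                continue
            context = word2id[corpus[ctx_pos]]
            # One true context word plus a few randomly drawn negative samples.
            targets = [context] + list(rng.integers(0, V, size=n_neg))
            labels = np.array([1.0] + [0.0] * n_neg)

            v = W_in[center]            # center word vector
            u = W_out[targets]          # context / negative vectors
            scores = sigmoid(u @ v)     # predicted "is real context" probabilities
            errors = scores - labels    # gradient of the logistic loss w.r.t. the scores

            # Backpropagate: update output vectors, then the center vector.
            W_out[targets] -= lr * errors[:, None] * v
            W_in[center] -= lr * errors @ u

# Words appearing in similar contexts end up with similar vectors;
# cosine similarity gives a quick check.
def most_similar(word, k=3):
    v = W_in[word2id[word]]
    sims = W_in @ v / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(v) + 1e-9)
    return [vocab[i] for i in np.argsort(-sims)[1:k + 1]]

print(most_similar("quick"))
```

The update rule follows from the logistic loss: for each (center, target) pair the gradient with respect to the score is simply the predicted probability minus the 1/0 label, which is then propagated back into both embedding matrices.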
