December 29, 2019

419 words 2 mins read

Regularization of RNNs through Bayesian networks


While deep learning has shown significant promise for model performance, it can quickly become untenable, particularly when data is scarce. RNNs can quickly memorize and overfit. Vishal Hawa explains how a combination of RNNs and Bayesian networks (PGMs) can improve the sequence modeling behavior of RNNs.

Talk Title Regularization of RNNs through Bayesian networks
Speakers Vishal Hawa (Vanguard)
Conference O’Reilly Artificial Intelligence Conference
Conf Tag Put AI to Work
Location New York, New York
Date April 16-18, 2019
URL Talk Page
Slides Talk Slides

Deep learning has shown significant promise for model performance, but most DL techniques require large volumes of data. Without enough of it, DL models can quickly become untenable, particularly when the data size falls short of the problem space, a common challenge when training RNNs: on small to medium datasets, RNNs can quickly memorize and overfit. Bayesian techniques, particularly Bayesian networks, are more robust to missing data, noise, and limited data size, but they lack order or sequence information. By combining the two modeling techniques, you can harness the power of RNNs even when data is limited.

Drawing on a marketing channel attribution use case, Vishal Hawa exposes the shortcomings of RNNs and demonstrates how a combination of RNNs and Bayesian networks (PGMs) can not only overcome them but also improve the sequence modeling behavior of RNNs. When attributing credit to a channel, it's important to account for channel interactions, the number of impressions each channel makes on a lead, and the order in which channels are touched over the lead's journey.

First, each lead's journey (path) is processed through a Bayesian network, which produces a posterior distribution; this posterior can then be trained alongside the RNN, a stacked LSTM and GRU architecture, to capture how the order in which channels are touched affects the marketing campaign. Because the posterior distribution contains both positive and negative cases, the solution uses a regularization hyperparameter to best separate the positive and negative distributions. The sequence length (the number of channel touches) also needs to be trimmed so that the combined architecture generalizes the impact of order and sequence on attribution.

The combined, trained architecture can then be used to score each lead (its path journey) and arrive at the odds of its becoming a client. This technique not only assesses the effectiveness of a path but also surfaces optimal points of interception, even with missing or limited data. A rough sketch of the pipeline follows.
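
To make the steps concrete, here is a minimal, assumption-laden Python sketch using pgmpy for the Bayesian network and Keras for the stacked LSTM/GRU. The channel set, the network structure (each channel as a parent of a conversion node), the synthetic data, and the use of an L2 penalty as the regularization hyperparameter are all illustrative guesses, not the speaker's actual implementation.

```python
# Hypothetical end-to-end sketch of the BN + RNN attribution pipeline.
# pgmpy supplies the Bayesian network; Keras supplies the stacked
# LSTM/GRU. Channel names, graph structure, and hyperparameters are
# invented for illustration.
import numpy as np
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination
from tensorflow.keras import layers, models, regularizers
from tensorflow.keras.preprocessing.sequence import pad_sequences

CHANNELS = ["email", "display", "search", "social"]  # assumed channel set
MAX_TOUCHES = 10  # journeys trimmed/padded to this many touches

# Synthetic stand-in data: one row per lead, binary channel exposures.
rng = np.random.default_rng(0)
n = 500
leads_df = pd.DataFrame({c: rng.integers(0, 2, n) for c in CHANNELS})
leads_df["converted"] = rng.integers(0, 2, n)
journeys = [[c for c in CHANNELS if leads_df.loc[i, c]] or ["email"]
            for i in range(n)]  # ordered touches per lead (toy version)

# Step 1: Bayesian network over channel exposures. The structure
# (every channel as a parent of "converted") is a placeholder guess.
bn = BayesianNetwork([(c, "converted") for c in CHANNELS])
bn.fit(leads_df, estimator=MaximumLikelihoodEstimator)
infer = VariableElimination(bn)

def posterior_feature(row):
    """P(converted=1 | this lead's channel exposures)."""
    evidence = {c: int(row[c]) for c in CHANNELS}
    return infer.query(["converted"], evidence=evidence).values[1]

posteriors = np.array([posterior_feature(r) for _, r in leads_df.iterrows()])

# Step 2: encode each journey as a trimmed, padded channel-ID sequence.
def encode(journey):
    return [CHANNELS.index(c) + 1 for c in journey[-MAX_TOUCHES:]]

seqs = pad_sequences([encode(j) for j in journeys], maxlen=MAX_TOUCHES)

# Step 3: stacked LSTM/GRU that also consumes the BN posterior. The L2
# penalty stands in for the talk's regularization hyperparameter.
seq_in = layers.Input(shape=(MAX_TOUCHES,))
post_in = layers.Input(shape=(1,))
x = layers.Embedding(len(CHANNELS) + 1, 8, mask_zero=True)(seq_in)
x = layers.LSTM(32, return_sequences=True)(x)
x = layers.GRU(16)(x)
x = layers.concatenate([x, post_in])
out = layers.Dense(1, activation="sigmoid",
                   kernel_regularizer=regularizers.l2(1e-3))(x)
model = models.Model([seq_in, post_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit([seqs, posteriors.reshape(-1, 1)],
          leads_df["converted"].values, epochs=5, verbose=0)

# Step 4: score a lead's path and convert probability to odds.
p = float(model.predict([seqs[:1], posteriors[:1].reshape(-1, 1)])[0, 0])
print("odds of becoming a client:", p / (1 - p))
```

Concatenating the posterior with the final recurrent state is just one plausible way to couple the two models; the posterior could equally be injected per time step or used to reweight the loss.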
