How to build privacy and security into deep learning models

In recent years, we've seen tremendous improvements in artificial intelligence, driven by advances in neural-based models. However, as these algorithms and techniques grow more popular, the consequences for data and user privacy become more serious. Yishay Carmiel reviews these issues and explains how they impact the future of deep learning development.
| Talk Title | How to build privacy and security into deep learning models |
| Speakers | Yishay Carmiel (IntelligentWire) |
| Conference | O’Reilly Artificial Intelligence Conference |
| Conf Tag | Put AI to Work |
| Location | New York, New York |
| Date | April 16-18, 2019 |
| URL | Talk Page |
| Slides | Talk Slides |
| Video | |
In recent years, we’ve seen tremendous improvements in artificial intelligence, driven by advances in neural-based models. However, as these algorithms and techniques grow more popular, the consequences for data and user privacy become more serious. These issues will drastically impact the future of AI research, specifically how neural-based models are developed, deployed, and evaluated. Yishay Carmiel shares techniques and explains how data privacy will shape machine learning development and how future training and inference will be affected. Yishay first dives into why training on private data must be addressed, covering federated learning and differential privacy. He then turns to inference on private data, covering homomorphic encryption for neural networks, polynomial approximation of neural networks, protecting data inside neural networks, data reconstruction attacks on neural networks, and methods and techniques for defending against such reconstruction. The sketches below illustrate three of the techniques the talk names.
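Differential privacy during training is commonly realized as DP-SGD: each example's gradient is clipped to bound its influence, and Gaussian noise calibrated to that bound is added before the update. The talk doesn't include code, so here is a minimal sketch on logistic regression with NumPy; the toy data and the `clip_norm` and `noise_multiplier` hyperparameters are illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data (assumed for illustration).
X = rng.normal(size=(256, 10))
w_true = rng.normal(size=10)
y = (X @ w_true + 0.1 * rng.normal(size=256) > 0).astype(float)

def per_example_grads(w, X, y):
    """Per-example logistic-loss gradients, shape (n_examples, n_features)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted probabilities
    return (p - y)[:, None] * X         # gradient of log loss, per example

def dp_sgd(X, y, epochs=20, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        g = per_example_grads(w, X, y)
        # 1. Clip each example's gradient norm to bound its influence.
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        g = g / np.maximum(1.0, norms / clip_norm)
        # 2. Add Gaussian noise scaled to the clipping bound.
        noise = rng.normal(scale=noise_multiplier * clip_norm, size=w.shape)
        w -= lr * (g.sum(axis=0) + noise) / len(X)
    return w

w = dp_sgd(X, y)
acc = ((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"train accuracy with DP-SGD: {acc:.2f}")
```

The noise multiplier trades accuracy for privacy: larger values give stronger (epsilon, delta) guarantees at the cost of a noisier model.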
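Federated learning addresses training on private data differently: raw data never leaves the client, and the server only aggregates model updates. A minimal sketch of federated averaging (FedAvg), assuming a shared logistic-regression model and a few simulated clients; the client data, round count, and local step count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, steps=5):
    """A client trains on its own data; only the weights leave the device."""
    w = w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(X)
    return w

# Simulated private datasets, one per client (assumed for illustration).
d, n_clients = 10, 4
w_true = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(100, d))
    y = (X @ w_true > 0).astype(float)
    clients.append((X, y))

# FedAvg: each round, clients train locally and the server averages weights.
w_global = np.zeros(d)
for _ in range(20):
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)

print("global model trained without centralizing any client's data")
```

In practice the two approaches are often combined: clients add differentially private noise to their updates before the server averages them.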
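For inference on encrypted data, most homomorphic encryption schemes can only evaluate additions and multiplications, which is why the talk mentions polynomial approximation: non-polynomial activations such as sigmoid must be replaced by low-degree polynomials. A minimal sketch, assuming a degree-3 least-squares fit over [-4, 4]; the degree and interval are illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fit a degree-3 polynomial to sigmoid on [-4, 4]. An HE scheme can evaluate
# the polynomial (adds and multiplies only), but not exp, so this stands in
# for the activation inside an encrypted network.
xs = np.linspace(-4, 4, 200)
coeffs = np.polyfit(xs, sigmoid(xs), deg=3)
poly = np.poly1d(coeffs)

max_err = np.max(np.abs(poly(xs) - sigmoid(xs)))
print(f"degree-3 fit, max error on [-4, 4]: {max_err:.3f}")
# An HE-friendly layer would then compute poly(W @ x + b) using only ring
# operations, accepting this approximation error in exchange for privacy.
```

Higher-degree polynomials reduce the approximation error but deepen the multiplicative circuit, which makes homomorphic evaluation slower and noisier, so the degree is itself a privacy-engineering trade-off.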