Adversarial network for natural language synthesis
Rajib Biswas outlines the application of AI algorithms like generative adversarial networks (GANs) to natural language synthesis tasks. Join in to learn how AI can accomplish complex tasks such as machine translation, writing poetry with style, reading a novel, and answering your questions.
| Talk Title | Adversarial network for natural language synthesis |
|------------|-----------------------------------------------------|
| Speakers   | Rajib Biswas (Ericsson) |
| Conference | O’Reilly Artificial Intelligence Conference |
| Conf Tag   | Put AI to Work |
| Location   | London, United Kingdom |
| Date       | October 15-17, 2019 |
| URL        | Talk Page |
| Slides     | Talk Slides |
| Video      | |
The key issue with generative tasks is deciding what a good cost function should be. A GAN addresses this by introducing two networks: a generator that creates fake samples and a discriminator that distinguishes them from real ones. GANs have predominantly been applied to image generation and augmentation, and they are particularly good at producing continuous samples; for that very reason, they can’t be used directly for text generation, since text is a sequence of discrete tokens. Rajib Biswas outlines recent breakthroughs in applying adversarial networks to language generation, such as SeqGAN (policy gradient reinforcement learning), LeakGAN (long text generation with leaked information), and the reparameterization trick for latent variables. You’ll learn about a variety of applications and tasks, including GAN for machine translation, GAN for dialogue generation, and GAN for style transfer, and see a demonstration of a language generation application with code.
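The discreteness problem and the reparameterization workaround can be made concrete with a short sketch. The PyTorch example below is illustrative only (not code from the talk) and uses the Gumbel-Softmax relaxation, one common instance of the reparameterization trick for discrete tokens: the generator emits "soft" one-hot token distributions rather than hard samples, so the discriminator's gradient can flow back into the generator. The vocabulary size, layer dimensions, GRU architecture, and `<BOS>` token id are assumptions made for the sketch.

```python
# Minimal sketch: text GAN with a Gumbel-Softmax relaxation so that the
# generator stays differentiable despite producing (soft) discrete tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID = 5000, 64, 128   # illustrative sizes


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, noise, steps=20, tau=1.0):
        # noise: (batch, HID) latent vector used to seed the hidden state
        h = noise.unsqueeze(0)
        tok = torch.zeros(noise.size(0), dtype=torch.long)     # assume <BOS> id 0
        x = self.emb(tok).unsqueeze(1)
        soft_tokens = []
        for _ in range(steps):
            o, h = self.rnn(x, h)
            logits = self.out(o.squeeze(1))
            # A hard argmax/multinomial sample here would block gradients;
            # gumbel_softmax yields a differentiable soft one-hot instead.
            y = F.gumbel_softmax(logits, tau=tau, hard=False)   # (batch, VOCAB)
            soft_tokens.append(y)
            x = (y @ self.emb.weight).unsqueeze(1)              # soft embedding lookup
        return torch.stack(soft_tokens, dim=1)                  # (batch, steps, VOCAB)


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.cls = nn.Linear(HID, 1)

    def forward(self, soft_tokens):
        # soft_tokens: (batch, steps, VOCAB); multiplying by the embedding
        # table acts as a differentiable embedding lookup.
        x = soft_tokens @ self.emb.weight
        _, h = self.rnn(x)
        return self.cls(h.squeeze(0))          # real/fake logit per sequence


# One adversarial generator step: try to fool the discriminator.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
noise = torch.randn(8, HID)
fake = G(noise)
g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(8, 1))
opt_g.zero_grad()
g_loss.backward()   # gradients flow through the soft samples into G
opt_g.step()
```

SeqGAN takes a different route to the same problem: it keeps hard sampling and treats the generator as a reinforcement-learning policy, using the discriminator's score as a reward in a policy-gradient update instead of backpropagating through the samples.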