January 6, 2020

278 words · 2 min read

Synthetic video generation: Why seeing should not always be believing

The advent of "fake news" has led us to doubt the truth of online media, and advances in machine learning give us an even greater reason to question what we are seeing. Despite the many beneficial applications of this technology, it's also potentially very dangerous. Alex Adam explains how synthetic videos are created and how they can be detected.

Talk Title: Synthetic video generation: Why seeing should not always be believing
Speakers: Alexander Adam (Faculty)
Conference: Strata Data Conference
Conf Tag: Making Data Work
Location: London, United Kingdom
Date: April 30-May 2, 2019
URL: Talk Page
Slides: Talk Slides
Video:

We often find ourselves questioning the meaning of “truth” in the virtual world of imagery online. It’s well known that images can be tampered with using tools like Photoshop. What’s less well known is that recent advances in deep learning and computer vision make it possible to manipulate videos as well. In just a few years, it will likely be possible to create synthetic video that is indistinguishable by eye from reality. Despite the many beneficial applications of this technology, whether in special effects or dubbing, it’s also potentially very dangerous. For example, it will become possible to manipulate videos of public figures in the run-up to an election and make them appear to say or do things they never did.

Alex Adam offers an overview of approaches to generating synthetic video, starting with simple face swaps using autoencoders (sketched below) and moving on to generative adversarial networks (GANs) and style transfer with CycleGAN and Recycle-GAN. He concludes by discussing Faculty’s work on building machine learning classifiers to detect face-swapped video.
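To make the face-swap idea concrete, here is a minimal PyTorch sketch of the shared-encoder autoencoder approach the talk opens with: one encoder learns a pose-and-expression representation common to both identities, and a separate decoder per identity renders that representation back into a face. Everything here, the layer sizes, hyperparameters, and function names such as `train_step` and `swap_a_to_b`, is an illustrative assumption, not the architecture used in the talk.

```python
# Hypothetical shared-encoder face-swap autoencoder (illustrative, not the
# talk's actual implementation).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: maps a 64x64 RGB face crop to a latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face from the shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """One reconstruction step: each decoder learns to rebuild its own identity."""
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()
    return loss.item()

def swap_a_to_b(faces_a):
    """The swap: encode a face of A, decode with B's decoder, so B's identity
    is rendered with A's pose and expression."""
    with torch.no_grad():
        return decoder_b(encoder(faces_a))
```

The key design point is that the swap costs nothing extra at inference time: because both decoders consume the same latent space, face-swapping is just routing one person’s encoding through the other’s decoder. Production deepfake pipelines add face alignment, masking, blending, and often adversarial losses on top of this skeleton.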
