
It is time to worry about deepfakes

Hindustan Times
Dec 13, 2019 07:48 PM IST

Don’t believe everything you see or hear. Verify it first

If you think the rampant spread of misinformation is a problem, things are about to get much worse. Researchers working on deepfakes believe the fakes are becoming so convincing that even artificial intelligence systems like the ones that create them cannot reliably detect them. “Deepfake” combines “deep learning” and “fake”. Deepfakes are typically images, audio or videos in which one person’s likeness or voice is superimposed on another’s to make it seem that someone said or did something they never did.


Recently, researchers at the Massachusetts Institute of Technology’s Center for Advanced Virtuality created a deepfake video of former American president Richard Nixon giving a televised speech about the astronauts on board the Apollo 11 spacecraft being stranded on the moon. Essentially, this is a half-a-century-old video of an event that never happened. The speech had in fact been written in case the moon mission went wrong, but was never delivered. Earlier this year, distorted videos of the speaker of the United States House of Representatives, Nancy Pelosi, went viral on social media. The videos had been altered to make her sound as if she were drunk and slurring her words. Deepfakes are already being used for targeted misinformation, making individualised propaganda possible, and their improving quality raises new concerns. The problem is exacerbated by the possibility of amplification through social media. Facebook famously refused to take down the Pelosi videos because it didn’t “have a policy that stipulates that the information you post on Facebook must be true”.


Election manipulation, revenge porn, fake news, and even financial fraud are just some of the problems deepfakes have created in the recent past. While technological fixes should be, and are being, investigated, the solution also lies in the social and political domains. How traditional and social media deal with misinformation, and whether individuals take responsibility for not sharing and amplifying suspect content, will be crucial in stemming the tide of deepfakes. As algorithms become capable of identifying our weaknesses and directing us to content most likely to exploit our vulnerabilities, it will be every individual’s responsibility to stay alert to the possibility of manipulation. As the Nixon deepfake shows, nothing can be trusted blindly, not even the purported past.
