Local NPR for the Cape, Coast & Islands 90.1 91.1 94.3
Algorithms Could Help Ferret Out "Deepfakes"

Deepfake techniques have brought us Elon Musk's face on a baby. Experts are concerned about more sinister uses.
The Fakening, YouTube, https://tinyurl.com/y5t58fvc

Doctored photos and videos are nothing new. But “deepfake” videos generated by artificial intelligence are causing a new wave of concern.


These high-tech manufactured videos are often indistinguishable from reality to the human eye. The House Intelligence Committee recently held a hearing on the risks of deepfakes, and experts say the technology could be used to influence the 2020 elections.

In May, a deepfake video of House Speaker Nancy Pelosi, which made her appear impaired, picked up more than 2.5 million views on Facebook.

Computer deep learning techniques are what set deepfakes apart from old-fashioned photoshopping, said John Villasenor.

“What that enables is very sophisticated automated creation or modification of videos,” said Villasenor, a professor of electrical engineering, public policy and management, and a visiting professor of law at the University of California, Los Angeles.

“And because the A.I. is, in a sense, doing all the hard work, it makes it within the reach of really pretty much anybody who's got a computer to create these videos.”

Villasenor is also a nonresident senior fellow in Governance Studies and the Center for Technology Innovation at the Brookings Institution, a member of the World Economic Forum’s Global Agenda Council on Cybersecurity, and a member of the Council on Foreign Relations.

He points out that the technology lets people create a video in which a candidate or public official appears to say something they never actually said. This is likely to create even deeper problems in public discourse.

“It also calls into question genuine videos,” said Villasenor. “Up until now, visual imagery has had a veneer of reliability… We're all going to have to recalibrate in light of the new technology environment we're in.”

But algorithms may also help us detect A.I.-generated fakes, Villasenor said. There are a few telltale signs that computers can watch for.

“Sometimes people don't blink their eyes in the same way that they would in a natural unmodified video,” he said. “That was identified as an area of potential detection opportunity.”
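The blink cue Villasenor mentions can be sketched in code. A common heuristic in the research literature uses the "eye aspect ratio" (EAR) of six eye landmarks to decide whether an eye is open, then counts how often it closes; a video with implausibly few blinks might warrant a closer look. The landmark layout, threshold, and function names below are illustrative assumptions, not any specific detector's implementation:

```python
# Illustrative blink-count heuristic. Assumes some upstream face-landmark
# model has already produced six (x, y) points per eye per frame, ordered
# p1..p6 around the eye. The 0.2 threshold is a common but arbitrary choice.
from math import dist

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); small values mean a closed eye."""
    p1, p2, p3, p4, p5, p6 = pts
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(frames, closed_thresh=0.2):
    """Count open-to-closed transitions across a sequence of landmark frames."""
    blinks, closed = 0, False
    for pts in frames:
        is_closed = eye_aspect_ratio(pts) < closed_thresh
        if is_closed and not closed:
            blinks += 1
        closed = is_closed
    return blinks
```

A real detector would compare the resulting blink rate against the range expected for natural video, rather than using a fixed cutoff.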

Algorithms can also look at inconsistencies across multiple frames of video, Villasenor said. These can be as subtle as inconsistent lighting or lighting color.
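One toy version of that cross-frame check: compare each frame's mean brightness to the previous frame's and flag sudden jumps that natural lighting rarely produces. The frames here are just lists of grayscale pixel values, and the tolerance is an illustrative assumption; real detectors decode actual video and examine far richer signals, including lighting color:

```python
# Illustrative frame-to-frame lighting consistency check. "Frames" are
# flat lists of grayscale pixel values; the 15% tolerance is arbitrary.
def mean_brightness(frame):
    return sum(frame) / len(frame)

def inconsistent_frames(frames, tolerance=0.15):
    """Return indices of frames whose mean brightness changes by more than
    `tolerance` (as a fraction) relative to the previous frame."""
    flagged = []
    prev = mean_brightness(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = mean_brightness(frame)
        if prev > 0 and abs(cur - prev) / prev > tolerance:
            flagged.append(i)
        prev = cur
    return flagged
```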

Normally when we watch video, we don’t think about whether it’s on TV or online. But Villasenor says we should rethink that in the age of deepfakes.

“It's the online piece of it that's got to be dominant in our internal evaluation of trust,” he said. “It's not something we should automatically assume is representing the truth.”

Elsa Partan is a producer for Living Lab Radio. She first came to the station in 2002 as an intern and fell in love with radio. She is a graduate of Bryn Mawr College and the Columbia University Graduate School of Journalism. From 2006 to 2009, she covered the state of Wyoming for the NPR member station Wyoming Public Media in Laramie. She was a newspaper reporter at The Mashpee Enterprise from 2010 to 2013. She lives in Falmouth with her husband and two daughters.