Algorithms Could Help Ferret Out "Deepfakes"
Doctored photos and videos are nothing new. But “deepfake” videos generated by artificial intelligence are causing a new wave of concern.
These high-tech manufactured videos are often indistinguishable from reality to the human eye. The House Intelligence Committee recently held a hearing on the risks of deepfakes, and experts say the technology could be used to influence the 2020 elections.
In May, a deepfake video of House Speaker Nancy Pelosi, which made her appear impaired, picked up more than 2.5 million views on Facebook.
Deep learning techniques are what set deepfakes apart from old-fashioned photoshopping, said John Villasenor.
“What that enables is very sophisticated automated creation or modification of videos,” said Villasenor, a professor of electrical engineering, public policy and management, and a visiting professor of law at the University of California, Los Angeles.
“And because the A.I. is, in a sense, doing all the hard work, it makes it within the reach of really pretty much anybody who's got a computer to create these videos.”
Villasenor is also a nonresident senior fellow in Governance Studies and the Center for Technology Innovation at the Brookings Institution, a member of the World Economic Forum’s Global Agenda Council on Cybersecurity, and a member of the Council on Foreign Relations.
He points out that the technology lets people create videos in which a candidate or public official appears to say something they never actually said. This is likely to create even deeper problems in public discourse.
“It also calls into question genuine videos,” said Villasenor. “Up until now, visual imagery has had a veneer of reliability… We're all going to have to recalibrate in light of the new technology environment we're in.”
But algorithms may also help us detect A.I.-generated fakes, Villasenor said. There are a few telltale signs that computers can watch for.
“Sometimes people don't blink their eyes in the same way that they would in a natural unmodified video,” he said. “That was identified as an area of potential detection opportunity.”
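One published approach to the blink signal Villasenor describes (not his own tooling, and simplified here) computes an "eye aspect ratio" (EAR) from facial landmarks around each eye: the ratio collapses toward zero when the eye closes, so a clip in which it never dips can be flagged as suspicious. The landmark coordinates below are synthetic stand-ins for what a face-landmark detector would produce.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six eye landmarks, ordered:
    left corner, upper-left, upper-right, right corner,
    lower-right, lower-left. Low values indicate a closed eye."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Synthetic landmarks: an open eye and a nearly closed one.
open_eye = np.array([(0, 0), (1, 1), (3, 1), (4, 0), (3, -1), (1, -1)], dtype=float)
closed_eye = np.array([(0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1)], dtype=float)

print(eye_aspect_ratio(open_eye))    # ~0.5 — eye open
print(eye_aspect_ratio(closed_eye))  # ~0.05 — eye closed (a blink frame)
```

In practice the ratio would be computed per frame from a real landmark detector, and a clip whose EAR never drops below a blink threshold over many seconds would be flagged for review.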
Algorithms can also look at inconsistencies across multiple frames of video, Villasenor said. These can be as subtle as inconsistent lighting or lighting color.
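The cross-frame consistency check can be illustrated with a deliberately crude sketch (an assumption for illustration, not any production detector): track the mean brightness of each frame and flag consecutive frames where it jumps abruptly, since a spliced or synthesized segment often fails to match the lighting of its neighbors.

```python
import numpy as np

def lighting_inconsistency(frames, threshold=15.0):
    """Return indices of frame transitions whose mean brightness
    jumps by more than `threshold` — a crude tampering signal.
    frames: sequence of 2-D grayscale uint8 arrays."""
    means = np.array([f.mean() for f in frames])
    jumps = np.abs(np.diff(means))
    return [i for i, j in enumerate(jumps) if j > threshold]

# Synthetic clip: uniform frames with one artificially brightened frame,
# standing in for an inconsistently lit, tampered segment.
frames = [np.full((64, 64), 100, dtype=np.uint8) for _ in range(5)]
frames[2] = np.full((64, 64), 160, dtype=np.uint8)

print(lighting_inconsistency(frames))  # [1, 2] — jumps into and out of frame 2
```

Real detectors model far subtler cues (lighting direction, color temperature, sensor noise), but the principle is the same: statistics that should vary smoothly across frames change abruptly at a manipulation boundary.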
Normally when we watch video, we don’t think about whether it’s on TV or online. But Villasenor says we should rethink that in the age of deepfakes.
“It's the online piece of it that's got to be dominant in our internal evaluation of trust,” he said. “It's not something we should automatically assume is representing the truth.”