What Will US Do to Combat ‘Deepfakes’ Ahead of 2020 Election?
WASHINGTON — Imagine a scenario where, on the eve of next year’s presidential election, the Democratic nominee appears in a video where he or she endorses President Donald Trump.
Now, imagine it the other way around. Or that each endorses the other.
It may sound unimaginable. But it’s exactly what a British think tank, Future Advocacy, showed is possible — as a warning of sorts — in the weeks leading up to the Dec. 12 election in the United Kingdom. It created two videos that appeared entirely authentic despite depicting endorsements that never happened: one with British Prime Minister Boris Johnson endorsing his opponent, Jeremy Corbyn, and another with Corbyn endorsing Johnson.
It’s an example of what are known as “deepfakes”: advanced technology that allows video and audio to be manipulated and re-created so convincingly that the unreal becomes real — or at least appears to be. And in Congress, legislators are trying to wrap their heads around how best to combat it.
U.S. Sen. Gary Peters, D-Mich., and U.S. Rep. Haley Stevens, D-Mich., are among those who have backed bills aimed at prodding the U.S. government to help root out deepfakes. The Johnson and Corbyn videos were never presented as genuine, but they hint at the sort of disruption such technology could cause.
With clear evidence that Russian-backed operatives interfered with the 2016 presidential election in the U.S., those threats loom even larger going into 2020.
“We already know the potential for manipulation exists in social media platforms and digital media,” said Peters, the top-ranking Democrat on the Senate Homeland Security and Governmental Affairs Committee, referring to evidence of bots and fake accounts on Twitter, Facebook and other social media that are often used to confuse or coerce viewers into believing one thing or another. “Deepfakes take that to a whole new level.”
By using advanced technology — precise video and audio editing equipment, specialized software, impersonators and artificial intelligence, software that learns from its own mistakes and improves with each attempt — creators of deepfakes can produce material that makes it seem like something happened that never did.
Peters makes the point in a video of his own, in which he seems to voice support for — horrors — Ohio State ahead of its annual game with the University of Michigan, which is something he assures the viewer he would never do. And in 2018, comedian Jordan Peele voiced a deepfake of Barack Obama as a public service announcement about the dangers of such material.
The prospect of sowing chaos and confusion isn’t limited to politics either. Consider:
— Say a video is created suggesting a military threat, or a biological one, like an infectious disease, in a given area. Peters notes that in 2014, when there were widespread media reports of the Ebola virus, “people stopped going out,” despite the fact the risk of contagion was small. Such a deepfake, if believed, could hurt whole economies or, in a worst-case scenario, cause an armed response.
— There have been reports of fake pornographic videos being distributed in which the face of one woman is seamlessly superimposed on another woman’s body, appearing to be far more genuine than such efforts in the past. It’s also apparently not just celebrities coming in for this sort of harassment. According to The Washington Post, a “number of deepfakes target women far from the public eye” as the price of such technology comes down and applications allowing for such fakery become more readily available.
— Some of the most significant concerns about deepfakes come down to personal security and safety, with worries that technological advances and an increasingly interconnected network of devices could lead to, say, someone telling your computerized home assistant to open the doors to your house to let a stranger in, or to make a purchase on your behalf. In one related case, a hacker recently managed to access a Mississippi family’s home network and, through a camera and microphone installed there, talk to an 8-year-old girl, making racial slurs and asking her to misbehave.
Part of the problem at present, however, is that the technological, scientific and legal communities are not as far along as they need to be in determining what is a deepfake, what is not and how to address it. That’s where Peters’ and Stevens’ legislation comes in. Peters’ bill, co-sponsored with Sen. Rob Portman, R-Ohio, has passed the Senate; it would push the Department of Homeland Security to produce annual reports on the state of “digital content forgery” and how to combat it.
Stevens, meanwhile, who is chairwoman of the Research & Technology Subcommittee in the House, co-sponsored a bill which has passed the House that directs the National Science Foundation to begin committing funding to learning all it can about detecting and stopping “generative adversarial networks” — otherwise known as GANs — the underlying computing networks which help to create deepfakes.
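The adversarial loop behind a GAN can be sketched in a few dozen lines. The toy below is only an illustration of the general idea, not any system named in this story: it uses a one-dimensional number as a stand-in for an image or voice, with invented hyperparameters. A generator learns to mimic samples from a target distribution while a discriminator learns to tell real from fake, and each one’s progress forces the other to improve:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# "Real" data the generator must learn to mimic: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: x_fake = a*z + c, driven by noise z ~ N(0, 1).
a, c = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + b), the probability that x is real.
w, b = 0.1, 0.0
lr = 0.05  # invented learning rate for this demo

for step in range(2000):
    z = rng.normal(0.0, 1.0, 32)
    x_fake = a * z + c
    x_real = real_batch(32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(w * x_real + b)
    p_fake = sigmoid(w * x_fake + b)
    grad_w = np.mean(-(1 - p_real) * x_real + p_fake * x_fake)
    grad_b = np.mean(-(1 - p_real) + p_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator update: push D(fake) toward 1 (non-saturating GAN loss).
    p_fake = sigmoid(w * x_fake + b)
    dx = -(1 - p_fake) * w          # d(loss_G) / d(x_fake)
    a -= lr * np.mean(dx * z)
    c -= lr * np.mean(dx)

samples = a * rng.normal(0.0, 1.0, 1000) + c
print(round(samples.mean(), 2))  # should drift toward the real mean of 4
```

The same tug-of-war, scaled up to deep networks over pixels and waveforms, is what makes deepfakes both convincing and hard to detect: the generator is trained specifically to fool the best available detector.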
“What we know about deepfakes is we don’t know a lot,” said Stevens. “When I talk to the social media companies, they’ve started to embrace a set of procedures that put in place some standards, some flags for things that appear to be manipulative or are not true. We could do a lot more.”
At Oakland University in Rochester, Khalid Malik, an assistant professor in the computer science and engineering department, has been working with a $200,000 grant from the National Science Foundation on how to detect deepfakes and what to do about them.
Malik has been working on forged audio, which, by using deepfake technology to re-create a voice within the frequency range of a given speaker, can be hard to prove fake. He has been looking, however, at ways of picking up on other sounds that should be detectable on such a recording if it were real — like the subtle self-noise of a microphone — to determine whether it’s authentic.
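That general idea, checking for sounds a genuine recording ought to contain, can be shown in a small sketch. The example below is an invented demonstration, not Malik's actual method: the signals, frequency band and threshold are all made up for the demo. It flags a recording as suspicious when it lacks the faint broadband noise floor a physical microphone always adds:

```python
import numpy as np

SR = 44_100  # assumed sample rate in Hz for this demo
rng = np.random.default_rng(1)

def high_band_energy_ratio(x, sr=SR):
    """Share of spectral energy above 10 kHz, well outside the voice band."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return spectrum[freqs > 10_000].sum() / spectrum.sum()

# Stand-in for a voice: a few low-frequency harmonics over one second.
t = np.arange(SR) / SR
voice = sum(np.sin(2 * np.pi * f * t) for f in (220, 440, 660))

real = voice + 0.01 * rng.normal(size=t.size)  # a microphone adds a noise floor
fake = voice                                    # a clean synthesis has none

THRESHOLD = 1e-6  # invented cutoff for this demo
real_ratio = high_band_energy_ratio(real)
fake_ratio = high_band_energy_ratio(fake)
print("real flagged as fake:", real_ratio < THRESHOLD)  # False
print("fake flagged as fake:", fake_ratio < THRESHOLD)  # True
```

Real forgeries are far subtler than this, which is why, as Malik notes below, detection is a moving target rather than a solved problem.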
“Based on what types of forgeries are out there … we’re getting very reliable results,” he said. “But having said that … it’s a long battle.”
For instance, he said that even if he can reliably detect a deepfake, someone who creates them could use AI to produce even harder-to-detect material.
Nor will it stop there, he said, as policymakers, legislators, businesses and the courts will all be pushed to respond in some way in the years to come.
For instance, he said there could be efforts to hold people in the U.S. legally responsible for creating deepfakes, but that may have little effect on foreign governments or agents doing the same.
Then there are questions about constitutionally guaranteed freedoms of speech and expression, and whether the producer of a fake video enjoys those rights, he said.
He expects growing issues around these kinds of videos, not just in the political sphere but in people’s connected homes.
“We should be thinking about how it’s going to affect every individual’s life,” he said.
Peters said he believes Congress is already behind the curve in terms of responding to the threat.
“It’s always the challenge we have with government,” he said. “Technology is moving at a rapid pace. You’ve got to lean in and try to get ahead of it.”
Areeq Chowdhury, who heads Future Advocacy, the British think tank that created the Johnson-Corbyn videos, said governments around the world are going to have to take action or risk losing their citizens’ trust.
“Despite warnings over the past few years, politicians have so far collectively failed to address the issue of disinformation online,” he said. “Instead, the response has been to defer to tech companies to do more. The responsibility for protecting our democracy doesn’t lie with executives in Silicon Valley but with elected representatives in Congress and parliaments across the world.”
©2019 Detroit Free Press
Visit the Detroit Free Press at www.freep.com
Distributed by Tribune Content Agency, LLC.