What Will US Do to Combat ‘Deepfakes’ Ahead of 2020 Election?
WASHINGTON — Imagine a scenario where, on the eve of next year’s presidential election, the Democratic nominee appears in a video where he or she endorses President Donald Trump.
Now, imagine it the other way around. Or that both endorse the other.
It sounds far-fetched. But it's exactly what a British think tank, Future Advocacy, showed was possible, as a warning of sorts, in the weeks leading up to the Dec. 12 election in the United Kingdom. It created two videos that appeared entirely authentic despite depicting endorsements that never happened: one with British Prime Minister Boris Johnson endorsing his opponent, Jeremy Corbyn, and another with Corbyn endorsing Johnson.
It's an example of what are known as "deepfakes": advanced technologies that allow video and audio to be manipulated and re-created so convincingly that the unreal appears real, or at least seems to. And in Congress, legislators are trying to wrap their heads around how best to combat them.
U.S. Sen. Gary Peters, D-Mich., and U.S. Rep. Haley Stevens, D-Mich., are among those who have backed bills aimed at prodding the U.S. government to help root out deepfakes. The Johnson and Corbyn videos were never presented as genuine, but they hint at the sort of disruption such technology could cause.
With clear evidence that Russian-backed operatives interfered in the 2016 U.S. presidential election, the threat is even greater heading into 2020.
“We already know the potential for manipulation exists in social media platforms and digital media,” said Peters, the top ranking Democrat on the Senate Homeland Security and Governmental Affairs Committee, referring to evidence of bots and fake accounts on Twitter, Facebook and other social media that can often be used to attempt to confuse or coerce viewers into believing one thing or another. “Deepfakes take that to a whole new level.”
By using precise video and audio editing equipment, specialized software, impersonators and AI (artificial intelligence, through which software can review its mistakes and learn not to repeat them), creators of deepfakes can produce material that makes it seem as if something happened that never did.
Peters makes the point in a video of his own, in which he seems to voice support for — horrors — Ohio State ahead of its annual game with the University of Michigan, which is something he assures the viewer he would never do. And in 2018, comedian Jordan Peele voiced a deepfake of Barack Obama as a public service announcement about the dangers of such material.
The prospect of sowing chaos and confusion isn’t limited to politics either. Consider:
— Say a video is created suggesting a military threat, or a biological one, like an infectious disease, in a given area. Peters notes that in 2014, when there were widespread media reports of the Ebola virus, “people stopped going out,” despite the fact the risk of contagion was small. Such a deepfake, if believed, could hurt whole economies or, in a worst-case scenario, cause an armed response.
— There have been reports of fake pornographic videos being distributed in which the face of one woman is seamlessly superimposed on another woman’s body, appearing to be far more genuine than such efforts in the past. It’s also apparently not just celebrities coming in for this sort of harassment. According to The Washington Post, a “number of deepfakes target women far from the public eye” as the price of such technology comes down and applications allowing for such fakery become more readily available.
— Some of the most significant concerns about deepfakes come down to personal security and safety, with worries that technological advances and an increasingly interconnected network of devices could lead to, say, someone telling your computerized home assistant to open the doors to your house to let a stranger in, or make a purchase on someone else’s behalf. In a related kind of case, a hacker recently managed to access a Mississippi family’s home network and, through a camera and microphone installed there, talk to an 8-year-old girl, making racial slurs and asking her to misbehave.
Part of the problem at present, however, is that the technological, scientific and legal communities are not as far along as they need to be in determining what is and is not a deepfake, or how to address one. That's where Peters' and Stevens' pieces of legislation come in: Peters' bill, which he co-sponsored with Sen. Rob Portman, R-Ohio, and which has passed the Senate, would direct the Department of Homeland Security to produce annual reports on the state of "digital content forgery" and how to combat it.
Stevens, meanwhile, who is chairwoman of the Research & Technology Subcommittee in the House, co-sponsored a bill, passed by the House, that directs the National Science Foundation to commit funding to learning all it can about detecting and stopping "generative adversarial networks," otherwise known as GANs, the underlying computing networks that help create deepfakes.
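A GAN pairs a "generator" network, which learns to produce fakes, against a "discriminator" network, which learns to tell them from real data, with each improving against the other. The toy sketch below illustrates that tug-of-war on a one-dimensional "dataset"; every name and hyperparameter in it is invented for the demonstration, and it is nothing like the large networks behind actual video forgery.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # The "real data" the generator tries to imitate: draws from N(4, 1.25).
    return rng.normal(4.0, 1.25, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: noise z -> g_w * z + g_b (a single affine unit).
g_w, g_b = rng.normal(size=(1, 1)), np.zeros((1, 1))
# Discriminator: logistic regression, x -> sigmoid(d_w * x + d_b).
d_w, d_b = rng.normal(size=(1, 1)), np.zeros((1, 1))

lr, batch = 0.01, 32
for step in range(2000):
    # Discriminator step: push real samples toward label 1, fakes toward 0.
    z = rng.normal(size=(batch, 1))
    fake = z @ g_w + g_b
    real = real_samples(batch)
    p_real = sigmoid(real @ d_w + d_b)
    p_fake = sigmoid(fake @ d_w + d_b)
    # Gradients of binary cross-entropy with respect to d_w and d_b.
    grad_w = (fake.T @ p_fake - real.T @ (1 - p_real)) / batch
    grad_b = (p_fake - (1 - p_real)).mean(keepdims=True)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # Generator step: push the discriminator's verdict on fakes toward 1.
    z = rng.normal(size=(batch, 1))
    fake = z @ g_w + g_b
    p_fake = sigmoid(fake @ d_w + d_b)
    # Chain rule through the discriminator into the generator's parameters.
    upstream = -(1 - p_fake) * d_w.item() / batch
    g_w -= lr * (z.T @ upstream)
    g_b -= lr * upstream.sum(keepdims=True)
```

As training runs, the generator's outputs drift toward the real distribution, which is exactly the dynamic that makes detection an arms race.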
“What we know about deepfakes is we don’t know a lot,” said Stevens. “When I talk to the social media companies, they’ve started to embrace a set of procedures that put in place some standards, some flags for things that appear to be manipulative or are not true. We could do a lot more.”
At Oakland University in Rochester, Khalid Malik, an assistant professor in the computer science and engineering department, has been working with a $200,000 grant from the National Science Foundation on how to detect deepfakes and what to do about them.
Malik has been working on forged audio, which can be hard to expose as fake because deepfake technology can re-create a voice within a given speaker's frequency range. He has been looking, however, for other sounds that a genuine recording should contain, such as the subtle noise of the microphone itself, to determine whether a clip is authentic.
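Malik's actual detectors are far more sophisticated, but the underlying idea, that a genuine recording carries faint traces such as a microphone's noise floor that a synthesized voice may lack, can be illustrated with a toy spectral check. Everything below (the signals, the 6 kHz cutoff, the threshold) is invented for the demonstration, not drawn from his research.

```python
import numpy as np

rng = np.random.default_rng(1)
SR = 16_000                  # sample rate in Hz
t = np.arange(SR) / SR       # one second of audio

def high_band_energy_fraction(signal, cutoff_hz=6_000):
    """Fraction of the signal's spectral energy above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / SR)
    return spectrum[freqs >= cutoff_hz].sum() / spectrum.sum()

# "Genuine" recording: a 220 Hz voice-like tone plus faint broadband mic noise.
real = np.sin(2 * np.pi * 220 * t) + 0.01 * rng.standard_normal(SR)
# "Forged" audio: the same tone, but with no noise floor at all.
forged = np.sin(2 * np.pi * 220 * t)

real_fraction = high_band_energy_fraction(real)      # small but nonzero
forged_fraction = high_band_energy_fraction(forged)  # essentially zero
```

On this contrived pair, the genuine clip shows a measurable noise floor above 6 kHz while the clean synthetic one does not; real forgeries, of course, can add fake noise, which is part of why Malik calls it a long battle.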
“Based on what types of forgeries are out there … we’re getting very reliable results,” he said. “But having said that … it’s a long battle.”
For instance, he said that even if he can reliably detect a deepfake, someone who creates them could use the potential of AI to create even more hard-to-detect material.
Nor will it stop there, he said, as policymakers, legislators, businesses and the courts will all be pushed to respond in some way in the years to come.
For instance, he said there could be efforts to hold people in the U.S. legally responsible for creating deepfakes, but that may have little effect on foreign governments or agents doing the same.
Then there are questions about constitutionally guaranteed freedoms of speech and expression, and whether the producer of a fake video enjoys those rights, he said.
He expects growing issues around these kinds of videos, not just in the political sphere but in people’s connected homes.
“We should be thinking about how it’s going to affect every individual’s life,” he said.
Peters said he believes Congress is already behind the curve in terms of responding to the threat.
“It’s always the challenge we have with government,” he said. “Technology is moving at a rapid pace. You’ve got to lean in and try to get ahead of it.”
Areeq Chowdhury, who heads Future Advocacy, the British think tank that created the Johnson-Corbyn videos, said governments around the world are going to have to take action or risk serious damage to their citizens' trust.
“Despite warnings over the past few years, politicians have so far collectively failed to address the issue of disinformation online,” he said. “Instead, the response has been to defer to tech companies to do more. The responsibility for protecting our democracy doesn’t lie with executives in Silicon Valley but with elected representatives in Congress and parliaments across the world.”
©2019 Detroit Free Press
Visit the Detroit Free Press at www.freep.com
Distributed by Tribune Content Agency, LLC.