Congress Wants Laws to Confront Deepfakes That Deceive Consumers
WASHINGTON — Congress is trying to catch up with the deepfake technology that is showing up on social media and could influence who gets elected to political office.
Proposed changes to intellectual property and criminal laws are pending in Congress to confront abuses of deepfake technology.
Deepfakes are visual and audio content generated with artificial intelligence that can be indistinguishable from the people or material they imitate.
At best, the technology can be used to create music and animations for the entertainment industry. More often, it is used for financial fraud, pornography and fake news.
It also could be used for disinformation to sway voters toward specific political candidates.
“There is no precedent for how these things are applied,” David Doermann, chairman of the computer science and engineering department at State University of New York at Buffalo, told The Well News.
He was an expert witness during a House Oversight and Accountability subcommittee hearing Wednesday on how to stop abuses of the powerful and emerging technology.
Deepfakes are difficult to trace, and their authors sometimes remain unidentified in foreign countries. U.S. intelligence agencies are raising concerns that deepfakes from China, Iran or Russia might affect who becomes an American political leader.
Doermann said there’s a need for “laws that hold people accountable for using other people’s personas. They don’t have a mandate to stop this.”
The leading bill is the DEEP FAKES Accountability Act introduced by Rep. Yvette Clarke, D-N.Y. It would provide prosecutors, regulators and victims with detection technology and other resources.
The bill would require creators to label deepfakes on online platforms and to provide notifications of alterations to a video or other content. Failing to label “malicious deepfakes” would incur criminal penalties.
“Malicious deepfakes” refers to sexual content, criminal exploitation, incitement to violence and foreign interference in U.S. elections.
“This sends a signal to bad actors that they won’t get away with deceiving people,” Clarke said.
Rep. Nancy Mace, R-S.C., chairwoman of the Subcommittee on Cybersecurity, Information Technology, and Government Innovation, said the federal government is working with technology companies to develop new software to prevent consumers from being deceived.
The software could embed data into legitimate digital images, video and other content to alert viewers when that content has been converted into a deepfake.
She advocated for swift action by Congress and tech companies because of the rapid proliferation of generative artificial intelligence that produces the deepfakes.
“Creating these images is easier than ever,” Mace said.
Commercially available software apps can create deepfakes after a user types in a few words describing the desired image or video.
Spencer Overton, a George Washington University law professor, said deepfakes can be much worse than a nuisance.
“They also threaten democratic values,” he said.
In some cases, they have undercut the credibility of prominent professional women by superimposing their faces onto pornographic images that were then posted on the internet, he said.
He said “bad actors” could create a deepfake video showing a White police officer shooting and killing a Black man, thereby inciting racial tension.
In making a plea for congressional intervention, he said, “The market alone will not solve all these problems.”