Senate Told Promise of AI Comes With Dangers to Civil Rights
WASHINGTON — A civil rights leader on technology policy told a Senate panel Tuesday that artificial intelligence is creating risks to privacy and democracy that the U.S. government has not yet brought under control.
She and other witnesses said artificial intelligence also could enable criminals to commit identity theft and extortion with little chance of being caught.
“The technical capabilities exist here and we do not have adequate legal frameworks to address them,” said Alexandra Reeve Givens, president of the Washington, D.C.-based Center for Democracy and Technology.
Without government intervention soon, the technology could be misused to squelch innocent political protests and to disrupt elections, she said.
She cited the example of facial recognition technology, which could allow police to identify people standing in crowds or walking along sidewalks without their knowledge.
The technology already is used widely for political repression in Iran and China, and on a smaller scale it has led to protesters being arrested by police in the United States, Givens said.
In 2020, police in Florida cities used facial recognition to identify and catalog activists engaging in peaceful civil rights protests supporting the Black Lives Matter movement, she said.
In Baltimore, Maryland, in 2015, law enforcement officers used facial recognition technology in real time to target people protesting the death of Freddie Gray in police custody. Police scanned the crowd to identify people with outstanding warrants for unrelated offenses and arrested them on the spot, she said.
“When face recognition is used in this way, it violates people’s rights to freedom of expression and peaceful assembly,” Givens said in her testimony. “Congress must act to rein it in.”
Facial recognition is one of many new uses for artificial intelligence, which refers to the ability of computerized systems to perceive situations and learn from them. It could provide huge economic benefits by helping with energy storage, medical diagnosis, driverless vehicles, internet search engines and supply chain management.
The Senate Judiciary Subcommittee on Human Rights and the Law was more concerned during its hearing Tuesday with the potential misuses of artificial intelligence.
They could include smart spyware to identify political dissenters, deepfakes for disinformation campaigns, biological terrorism, digital warfare and online exploitation of vulnerable persons.
“This conduct should be criminal and be punished,” said Sen. Jon Ossoff, D-Ga., the subcommittee’s chairman.
Machine learning with artificial intelligence algorithms could design thousands of toxic molecules within hours. Without adequate protections, it also could alter elections through widespread disinformation campaigns that interfere with voters’ ability to know the truth about candidates.
Members of Congress are seeking such protections through several legislative proposals.
“We do need to think carefully about how we deploy AI technologies in the absence of a national privacy law,” said Sen. Marsha Blackburn, R-Tenn.
Among proposals Congress is considering is a new agency to oversee artificial intelligence through rules, standards and enforcement.
Other proposals would limit data available publicly about individuals that could be collected by artificial intelligence developers. A third proposal would use the courts for aggressive legal action against artificial intelligence abuse, such as for prosecution of fraud, harassment and intellectual property infringement.
Without better protections, lawmakers and witnesses said, there will be many more cases like that of a Scottsdale, Arizona, mother who testified at the Senate hearing.
Jennifer DeStefano described a telephone extortionist who used artificial intelligence to clone a voice she said was indistinguishable from her daughter’s.
The extortionist told DeStefano he had kidnapped her 15-year-old daughter, then played a fake recording of the girl calling for her and sobbing on the phone. He said he would hurt her unless she paid a ransom, which started at $1 million.
She quoted her daughter’s cloned voice saying, “Mom, these bad men have me. Help me, help me.”
DeStefano confirmed within minutes that her daughter was safe upstairs in their home but said she remained outraged by the incident.
“There is no limit to the depth of evil AI can enable,” she said.