FCC Bans Robocalls Using Voices Generated by Artificial Intelligence
WASHINGTON — The Federal Communications Commission released a decision Thursday to ban robocalls with voices generated by artificial intelligence.
The FCC said such robocalls play a key role in a rise in sophisticated scams, including calls placed before last month’s New Hampshire primary that used an AI-generated imitation of President Joe Biden’s voice to discourage residents from voting.
The FCC said unsolicited robocalls are prohibited under the 1991 Telephone Consumer Protection Act. The law severely limits marketing calls that use artificial and prerecorded voice messages.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters,” FCC Chairwoman Jessica Rosenworcel said in a statement. “We’re putting the fraudsters behind these robocalls on notice.”
FCC officials accelerated the ban after the AI-generated phone calls that preceded the Jan. 23 New Hampshire primary. They sent a cease-and-desist letter to the Texas company behind the calls while their investigation continues.
The new regulation, which takes effect immediately, allows robocalls to be sent only to people who have consented to receive them. The calls also must disclose that the voice is artificially generated.
“In every case where the artificial or prerecorded voice message includes or introduces an advertisement or constitutes telemarketing, it must also offer specified opt-out methods for the called party to make a request to stop calling that telephone number,” the FCC decision says.
The FCC authorized state attorneys general to enforce the regulation against violators. Companies that use AI-generated voices could be fined as much as $23,000 per call and have their calls blocked.
The FCC authorized people who receive the automated messages to collect up to $1,500 in damages for each unsolicited call.
“Although voice cloning and other uses of AI on calls are still evolving, we have already seen their use in ways that can uniquely harm consumers and those whose voice is cloned,” the FCC decision said. “Voice cloning can convince a called party that a trusted person or someone they care about, such as a family member, wants or needs them to take some action that they would not otherwise take.”
The FCC’s decision is also a response to a request last month from 26 state attorneys general for the agency to broaden the definition of artificial voice messages prohibited by the Telephone Consumer Protection Act.
Artificial intelligence has advanced to the point that automated voices on robocalls can interact with the people they call in ways that mimic natural conversation.
Current FCC regulations could be interpreted to mean only robocalls that play recorded messages are banned, the attorneys general said in their Jan. 16 letter.
They asked for a rule that goes further by prohibiting robocalls that appear to create conversations with “live agents.”
Without a ban on interactive artificial intelligence, the FCC appears to give “a ‘stamp of approval’ for unscrupulous businesses seeking to employ AI technologies to inundate consumers with unwanted robocalls for which they did not provide consent to receiving, all based on the argument that the business’ advanced AI technology acts as a functional equivalent of a live agent because it has been programmed to interact with the called party,” the attorneys general wrote.
The FCC decision announced Thursday expands its ban to include fake “live agent” calls.
YouMail, a company that makes software to block spam messages, reported that Americans received more than 4 billion robocalls in January.