AI Has Turned Battleground States Into the Informational Wild West
COMMENTARY

With less than a week left until Election Day 2024, it’s clear this year’s presidential election could come down to the difference between a few thousand votes in a handful of swing states. And as key undecided voters work to make up their minds, multiple forces are battling for their attention: candidates, super PACs, news outlets and celebrities. Then there’s artificial intelligence.
In 2024, 31 states have no election laws on the books regulating how AI can be used in political content or requiring disclosure to warn voters that content is AI-generated. The 19 states that did manage to put laws in place following the debut of AI-generated content deserve recognition. But for most voters, including those in key swing states, this election is a Wild West of political information.
Want to launch an AI call to talk to voters in Pennsylvania? No disclosure required. Or promote a social media post in Nevada with AI audio of Vice President Kamala Harris? No problem.
Today’s no-rules election landscape has a couple of major consequences. First, in a razor-thin election, some number of voters are likely to be swayed by election misinformation. Second, because there is no transparency about AI and no consistency in AI laws across states, voters are more hesitant to trust what they see.
If you worry at all about bad actors (foreign or otherwise) interfering with elections, AI is an open door for them to walk through.
In a new poll, Americans for Responsible Innovation found that 55% of registered voters believe they definitely or probably have been exposed to AI-generated fake information during this presidential election. That’s more than half of all U.S. voters reporting AI misinformation in their news feeds.
Of course, many voters aren’t sure what’s real and what’s not. A third of voters in our poll said they can never or only rarely distinguish AI-generated material from authentic content. Another 43% said they think they can tell AI content apart some of the time, but not always. That adds up to more than seven in 10 voters with little confidence they can distinguish AI from the real thing.
That’s terrible for our elections. Voters aren’t sure the volunteers they talk with on the phone are real, and they can’t trust a candidate’s own voice or even video footage bearing their likeness. Some candidates have used that uncertainty to their advantage, claiming real photos are fake.
Asking voters to bear the burden of distinguishing real content from AI-generated misinformation isn’t fair or effective. The speed at which AI content has become indistinguishable from reality suggests that in a few years, simply eyeballing political content won’t tell us much without new tech tools to trace its provenance.
A patchwork of state laws isn’t very effective either, especially when more than half of all voters are left unprotected and most are unaware of which protections exist in their home state.
To be clear, there is no cure-all for the spread of misinformation in our elections. The Constitution strongly protects Americans’ right to free speech, and government efforts to identify or remove false speech must be carefully crafted to avoid running afoul of one of our most important constitutional rights.
But we can still do better by voters. Congress can set basic transparency requirements for the use of AI in political content, giving voters extra assurance that the ads they’re seeing are based in reality.
As a former member of Congress, I’ve seen firsthand the breakdown in voter trust in our election system and the negative consequences it’s had on our national dialogue.
Fortunately, there’s bipartisan legislation sitting in Congress right now to help address AI content in our elections. Sens. Amy Klobuchar, D-Minn., and Lisa Murkowski, R-Alaska, have introduced the AI Transparency in Elections Act requiring political ads to disclose when they use AI-generated content, and their bill passed committee with bipartisan support. In the House, Reps. Adam Schiff, D-Calif., Brian Fitzpatrick, R-Pa., and others have introduced the AI Ads Act to prohibit fraudulent misrepresentations of candidates through AI content and to codify the Federal Election Commission’s authority to regulate AI misrepresentations. Neither of these bills would eliminate malicious AI content online, but both are a first step toward rebuilding voter trust in our political system.
This year is the first AI election our country has witnessed, but it won’t be the last. AI’s impact on our elections is already raising alarm bells, and both experts and voters expect it to get worse unless we act. Congress may have missed the boat on passing safeguards in time for this federal election, but there’s still time before our next.
For voters, it’s a matter of knowing what’s real when they make their voices heard in our elections. For lawmakers, it should be a matter of not having their voices co-opted.
Brad Carson is president of Americans for Responsible Innovation. A former congressman representing Oklahoma’s 2nd District, he served as acting undersecretary of Defense and is the 21st president of the University of Tulsa. He can be reached on LinkedIn.