Why AI Did Not Upend the Super Year of Elections

How AI labs and public policies helped safeguard the 2024 elections—and what to learn in order to protect democracy from future threats.

An illustration of the Earth, digitally connected (Photo: TheDigitalArtist/Pixabay, https://tinyurl.com/bd5nyt7v, CC0)

A Super Year of Elections and New Super Tools

At the outset of 2024, the so-called “super year” for elections (so named for the number of major countries holding national elections), many election watchers and everyday people feared that artificial intelligence (AI) tools could flood public and private information channels with dis-, mis-, and mal-information. In the lead-up to the U.S. elections, for example, a majority of Americans worried AI would be used to “create and distribute fake information about the presidential candidates and campaigns.” These fears were not without cause. AI tools capable of creating and spreading convincing content had become broadly available and easier to use than ever.

Thankfully, most elections during this super year went off without a hitch. AI did not bring about the “death of truth” many feared.

The relatively ineffective use of AI by bad actors to disrupt the 2024 U.S. elections is worthy of study. AI will only become more widespread and more sophisticated in years to come. Public policies and private norms that prevented misuse of AI in November may prove valuable in safeguarding future elections. 

Those safeguards may mitigate the harms of misinformation generated and spread by AI in future elections. A few case studies of AI misinformation in global elections in 2024 demonstrate the disruptive potential of more sophisticated and widespread uses of the evolving technology. The Associated Press documented several such instances in March, including a fabricated video of Moldova’s pro-Western president throwing her support behind a political party friendly to Russia; audio clips of Slovakia’s liberal party leader discussing vote rigging and raising the price of beer; and a video of an opposition lawmaker in Bangladesh, a conservative Muslim-majority nation, wearing a bikini.

The first example warrants particular attention because it was far from the only use of AI by bad actors to sway voters in Moldova. Late in 2023, a video emerged of that same candidate banning people from drinking a popular tea. The fake video, likely created by forces tied to Russia, garnered significant public attention. While the pro-Western candidate won reelection, this targeted and sustained use of AI makes clear the technology's potential to, at a minimum, disrupt popular discourse around an election.

How Labs Prepared for and Responded to Electoral Threats

Many AI labs, which include nonprofit, for-profit, and academic entities dedicated to researching and developing new AI systems, have taken it upon themselves to reduce the odds of electoral disruption resulting from their tools. As the creators of the latest and most widely adopted models, labs have comparatively more control over how their tools may be used and by whom. Google, for instance, applied to its Gemini models many of the same election policies it enforces for other products, such as Search and YouTube. Relatedly, Meta has collaborated with the federal government to help officials use its Llama model for national and economic security purposes. Anthropic and OpenAI appear to have gone even further, developing robust checks for misuse of their models during elections.

Reports from those large AI labs indicate that bad actors had marginal to no success in using AI to shape the U.S. elections. The voluntary measures these labs relied on to identify and quash potential misuse ought to be scaled and mandated going forward. Dramatic shifts in the leadership of AI labs make clear that election observers should not count on the same executives being in place, nor taking the same voluntary measures, in two years, four years, and beyond. Given this reality, the lessons gleaned by these labs warrant close analysis and, perhaps, codification so that future elections similarly run without an AI-induced hitch.

Before setting out some broad recommendations for regulation, it is worth diving into the specifics of reports by Anthropic and OpenAI. The former responded to concerns about the misuse of generative AI tools to shape elections by adopting a four-pronged strategy: first, implementation of proactive safety measures; second, use of reliable tools to monitor how its models are being leveraged by users; third, provision of reliable election information to users seeking guidance; and, fourth, dissemination of the company’s successes and failures in mitigating electoral interference. Note that this article does not cover the entirety of the steps taken by either company but errs instead on the side of identifying the most important interventions.

On the first prong, Anthropic expanded its preexisting usage policy to more directly address the threats posed by AI in an electoral context. The prior iteration of the policy already placed substantial restrictions on the use of its models, prohibiting “campaigning and election interference, including promoting candidates or parties, soliciting votes or contributions, and generating misinformation.” Nearly halfway through the super year, the company bolstered the policy by specifying limitations on “influence campaigns, voter targeting, impersonation, and election interference.”

Though some observers may take issue with these additions coming after a number of national elections had been held, the timing was a reflection of learning on the go—this was, after all, “the first major election cycle with widespread access to generative AI[.]” Going forward, civil society and regulators should expect companies to set forth robust and comprehensive policies prior to elections, while also recognizing that some learning may occur as bad actors push the bounds of how to use AI tools in inventive ways. 

On the second prong, the team at Anthropic turned to an internal tool to closely monitor public use (and misuse) of its models. Known as Clio, this tool “takes raw conversations that people have with the language model and distills them into abstracted, understandable topic clusters.” Clio was first deployed during the U.S. elections (again, better late than never, but important to flag as an area for improvement). Its deployment gave Anthropic deep insight into how users were turning to its models for election-related information. The good news was that most users who engaged in election conversations with Claude, one of Anthropic’s models, were doing so to retrieve basic information about the election and the underlying policy debates. A “small proportion” of users seeking election information attempted to violate the usage policy. That bad news is not all that grim in light of the fact that “[e]lection-related interactions [with Claude] represent a very small percentage of overall Claude.ai usage with less than 1% of conversations touching on election-related topics.”
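To make that kind of monitoring more concrete, the sketch below shows one simple way conversations can be distilled into topic clusters. It is not Anthropic's implementation of Clio; the sample conversations, the TF-IDF features, and the k-means grouping are all illustrative assumptions meant only to convey the general idea of abstracting raw conversations into reviewable clusters.

```python
# Hypothetical sketch of conversation topic clustering, loosely inspired by the
# public description of Clio. This is NOT Anthropic's implementation; the data
# and method choices below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy stand-ins for (anonymized) user conversations.
conversations = [
    "Where is my polling place in Travis County?",
    "What time do polls close on election day?",
    "Explain the candidates' positions on tariffs.",
    "Write a script claiming mail-in ballots were destroyed.",
    "Summarize the two parties' immigration platforms.",
    "Generate 500 social media posts attacking a candidate.",
]

# Turn raw text into vectors, then group similar conversations together.
vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(conversations)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

# Print an abstracted view: cluster id -> member conversations.
for cluster_id in range(3):
    print(f"Cluster {cluster_id}:")
    for text, label in zip(conversations, kmeans.labels_):
        if label == cluster_id:
            print("  -", text)
```

In a production setting, clusters like these would be reviewed for policy-violating patterns (for example, mass persuasion or ballot misinformation) rather than inspected conversation by conversation.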

The utility of Clio and related tools should not be taken for granted. Meta once provided researchers with a similar tool—CrowdTangle—before unceremoniously throwing it into the dustbin. While Anthropic should be applauded for creating the tool in the first place, regulators should explore how best to make sure the tool is not only maintained and improved but also shared among trusted researchers and organizations. 

On the third prong, in light of the fact that models are necessarily trained on old information, Anthropic made sure to direct voters looking for the latest election information to “authoritative, nonpartisan sources[.]” This practice may have limited the extent to which users blindly accepted out-of-date or simply flawed information generated by Claude.
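As a rough illustration of what such redirection could look like, the sketch below appends a pointer to official election resources whenever a prompt appears election related. The keyword trigger, function name, and referral text are all invented for illustration; no lab's production system is this simple, and real deployments would rely on far more sophisticated classifiers.

```python
# Hypothetical sketch of an "authoritative source" referral for election queries.
# The keyword list and referral note are illustrative assumptions, not a
# description of any lab's actual system.
ELECTION_KEYWORDS = {
    "vote", "voting", "ballot", "polling place", "register to vote", "election day",
}
REFERRAL_NOTE = (
    "For the most current voting information, please check your official "
    "state or local election office website."
)

def add_referral_if_election_related(user_prompt: str, model_reply: str) -> str:
    """Append a pointer to authoritative sources when a prompt looks election related."""
    text = user_prompt.lower()
    if any(keyword in text for keyword in ELECTION_KEYWORDS):
        return f"{model_reply}\n\n{REFERRAL_NOTE}"
    return model_reply

if __name__ == "__main__":
    print(add_referral_if_election_related(
        "Where do I vote on election day?",
        "Polling locations vary by county.",
    ))
```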

Again, this step worked out well this year, but it is not hard to imagine this well-intentioned intervention sowing discord rather than directing voters to quality information. Anthropic under different leadership, for instance, might opt to send users to sites with a partisan skew. This possibility may justify state and federal lawmakers making more explicit where they would recommend labs send curious voters for more information. In particular, lawmakers should specify that at least one of the sites provided to potential voters be the official local government election administration website. 

On the fourth prong, Anthropic opted to share its observations and learnings. Though only a “4 min read,” the report gives external observers some valuable insights into the company’s work and goals. More information, though, would likely benefit lawmakers and election administrators. This is low-hanging fruit for lawmakers hoping to get ahead of future use of AI in elections: more robust postelection reports (not to mention reports during the election) could inform discussions about the need for reform after elections (as well as emergency measures in the middle of an election).

Turning to OpenAI, the lab followed a similar strategy as Anthropic. On the first prong, it mapped out its approach to protecting elections in a Jan. 15, 2024, blog post. On the second, it tapped into internal tools to detect and prevent bad actors from using its models. On the third, it also tried to meet the informational needs of users by nudging them to consider trusted sources for the latest information. On the fourth, it released a number of reports throughout the election about how it was identifying and subverting efforts by bad actors to use its models to cause harm. 

These positive steps, though, were not necessarily sufficient. Some state and federal officials, as well as civil society leaders, alleged that AI chatbots were a font of misinformation. The sheer number of deepfakes spread around the web, including videos created by Russian actors aiming to throw off the U.S. elections, also cuts against the idea that any lab can rest on its laurels for having prevented worst-case scenarios. Public distrust of AI-generated election information shows that many users also demand more from AI labs.

Readying Ourselves for the Next Round of Elections

The super year of elections is over. Yet the electoral stakes are as high as ever in places like France, Belarus, Ecuador, Germany, and Australia, which will hold major elections this year. Election watchdogs and civil society groups ought not assume that all AI labs will follow the examples of OpenAI and Anthropic. There are also no guarantees that those two labs will maintain their efforts. The annals of tech governance are filled with layoffs of staff working to protect users and the public writ large from abuse and misuse of new tools. 

Congress does not seem poised to take these preemptive steps. Its productivity has waned significantly in recent years, and AI regulation has so far not proved to be an exception. Leading officials in the Senate and the House have thought deeply about these issues but have yet to drive much AI legislation across the finish line. State legislatures may likewise find that political winds are pushing them to address what some may regard as more urgent matters. State attorneys general may fill this void. In fact, many of them are well practiced in calling out novel efforts to disrupt elections. By way of example, New Hampshire Attorney General John Formella, a Republican, took methodical steps to identify the source of an AI-generated robocall impersonating then-presidential candidate Joe Biden. Likewise, New York Attorney General Letitia James, a Democrat, set up a hotline for voters to share any concerns they had about participating in the election. These efforts demonstrate the power and capacity of state attorneys general to lead proactive initiatives related to election integrity.

State attorneys general and other officials at the state and federal levels should see the impending lull in electoral energy as the proper time to lock in the lessons learned from 2024. A good place to start is simply to make sure labs continue to do what they have been doing. A bolder vision that involves more onerous and specific requirements, such as obligating labs to provide public authorities with timely and detailed reports of large-scale interference efforts, should also be on the table. As AI continues to progress, so will efforts by bad actors to leverage its new capabilities toward deleterious ends.

– Kevin Frazier is an Assistant Professor at St. Thomas University College of Law and Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin. He is writing for Lawfare as a Tarbell Fellow.
