A national advisory referendum on AI could give Congress a clearer signal of the public’s priorities and help end the paralysis by analysis that has beset lawmakers.
Kevin Frazier is an Assistant Professor at St. Thomas University College of Law. He is writing for Lawfare as a Tarbell Fellow.
It’s Day 575 AGPT (after ChatGPT). OpenAI introduced ChatGPT, then powered by GPT-3.5, to the world on Nov. 30, 2022. Since then, Congress has embraced the posture of a bystander at a playground when someone else’s kid falls off the swings—concerned, but unsure of whether, when, and how to respond. Members from Sen. Mike Rounds (R-S.D.) to Rep. Ted Lieu (D-Calif.) have shown a willingness to theorize about how best to mitigate the risks posed by artificial intelligence (AI), but those theories have landed as hard as the kid, who is still waiting for someone to help them up.
On Day 324 AGPT, I called on Congress to hold a national advisory referendum on how best to regulate AI. Specifically, I urged federal lawmakers to issue the federal equivalent of a formal poll that would inform but not bind Congress. The whole election infrastructure need not be rolled out. Congress could and should provide Americans with a wide range of means to participate, such as through mail ballots and even an online ballot. The referendum could address a range of questions, though I would recommend focusing on policy questions tied to the day-to-day lives of Americans—their work, their data, and the long-term well-being of their communities.
A traditional poll, akin to a Pew Research Center survey, would not do. The first national advisory referendum would prompt an overdue voter education effort on the potential and perils of AI. The formality and significance of the referendum would give stakeholders, including AI labs and civil society groups like the Center for AI Safety (which aims to reduce societal-scale risks from artificial intelligence), a chance to inform voters about the current capabilities of this emerging technology. Such education has been lacking: Only a third of Americans have “heard a lot about AI.”
A traditional poll would also fail to create the political pressure necessary to spur congressional action. There is no shortage of polling already; those efforts have not moved Congress, which is running out of time to regulate AI. As of June 20, the Senate has just 30 session days before the 2024 election. Passing substantive AI regulation in such a short window will require popular support, perhaps as expressed through a national referendum.
What may have been radical on Day 324 AGPT is necessary now. Numerous outlets report that GPT-5 may launch in the coming weeks. The ongoing introduction of AI agents that proactively take actions on behalf of users has opened a new regulatory front. All the while, the risks of AI tools causing significant disruption to our information ecosystem or financial systems, to name just two areas, have been increasing. Congress cannot continue to defer to states, the executive branch, and other nations to fill the regulatory void.
A quick review of some of the regulatory theories contemplated by Congress bolsters the case for seeking direction from the people. The bills introduced so far appear to have low odds of passing, have been rendered more or less moot by executive orders that incorporate their provisions, or, if enacted, would likely face substantial legal challenges that could undermine their usefulness given the rapid pace of AI advances. Other congressional actions, namely the publication of a policy memo authored by the Bipartisan Senate AI Working Group and the convening of numerous hearings on AI, have done little more than signal a general willingness to regulate. A national referendum would help Congress move from brainstorming to implementing policy and iterating on that policy as conditions and AI evolve.
A Brief Overview of the Current Regulatory Landscape
Congress seems to be experiencing paralysis by analysis when it comes to AI regulation. That indecision has turned what could have been a relatively short, intense study of AI’s risks and the proper regulatory response into an ongoing, slow-paced inquiry. Consider that Congress managed to pass sweeping coronavirus legislation within nine months of the pandemic’s onset. The pandemic presented a more immediate, dire threat than AI, but Congress’s comparatively swift response shows that it is capable of meaningful action even in an evolving, complex policy area.
Despite consensus that AI requires regulation, members have expressed divergent regulatory goals. On the one hand, officials including AI Working Group member and Senate Majority Leader Chuck Schumer (D-N.Y.) have identified innovation as the “North Star” of AI regulation. On the other hand, officials such as Rep. Ro Khanna (D-Calif.) have called attention to the risks posed by that innovation, such as labor displacement. A number of other concerns have likewise drawn attention. Schumer has also framed the need to regulate AI as a national security imperative. In particular, he warned that policymakers cannot allow China to take the lead in the race to develop and deploy the most sophisticated AI.
The AI Working Group’s recently released “roadmap” reinforces the idea that Congress lacks direction when it comes to regulating AI. The roadmap reflects what senators learned from the AI Insight Forums, a series of sessions held over the past year on the promises and perils of AI. Its coverage of everything from discrimination brought on by biased algorithms to existential risks demonstrates awareness of the manifold regulatory issues AI poses. Yet the dearth of specific legislative proposals in the roadmap suggests officials are still unsure about which issues to prioritize and how to address them.
A high-level look at specific legislative proposals strengthens the argument that Congress does not know how, when, and why to regulate AI. Dozens of bills have been drafted. Some, like one sponsored by Sen. Amy Klobuchar (D-Minn.) that addresses the use of “deceptive AI” in elections, deal with only a small (albeit important) part of the regulatory battlefield. Others would create new entities tasked with tackling a broader set of AI’s anticipated effects. A closer look at a few pending bills illustrates the wide range of regulatory ideas.
AI Leadership to Enable Accountable Deployment Act or AI LEAD Act
The AI LEAD Act, introduced by Sen. Gary Peters (D-Mich.), would create a “Chief Artificial Intelligence Officers Council” tasked with “coordination regarding agency practices relating to the design, acquisition, development, modernization, use, operation, sharing, risk management, and performance of artificial intelligence technologies,” among other tasks. The director of the Office of Management and Budget would form the council and serve as its chair. This proposal reflects congressional concern with how the federal government itself adopts and implements AI.
Chief artificial intelligence officers representing federal agencies would make up the rest of the council, according to the proposal. These officers would be appointed by the heads of their respective agencies and take on a litany of tasks, including promoting AI innovation, setting the agency’s AI policies, working with agency officials to ensure responsible use of AI by agency staff, and creating an “Artificial Intelligence Governance Board” within their agency.
Though the AI LEAD Act has yet to gain traction, the Biden administration recently issued an executive order that more or less requires executive agencies to adopt the act’s provisions. If there is a change in administration come November, this order could be rescinded—paving the way for Congress to take up the act. A national referendum that included questions on the need for safeguards around government use of AI could inform Congress’s decision whether to formalize the AI officer mandate via legislation.
National AI Commission Act
One of the Hill’s most tech-savvy legislators, Rep. Lieu, authored the National AI Commission Act. The bill was introduced in the House in June 2023 and referred to the House Committee on Science, Space, and Technology; no further action has been taken. The commission the act proposes would be housed within Congress and made up of 20 commissioners, with each party appointing half of the members. To be appointed, members would have to demonstrate relevant expertise, ranging from computer science and AI to national security. Members would serve for the entire duration of the commission, which is currently proposed to last approximately 1.5 years. Within six months, the commission would need to submit a report to Congress and the president outlining its recommendations for how best to:
mitigat[e] the risks and possible harms of artificial intelligence, protect[] the United States leadership in artificial intelligence innovation and the opportunities such innovation may bring, and ensur[e] that the United States takes a leading role in establishing necessary, long-term guardrails to ensure that artificial intelligence is aligned with values shared by all Americans.
A year later, the commission would submit a follow-up report with new findings and updated recommendations before it expires.
This bill brings to mind the AI Insight Forums organized by the AI Working Group. On a number of occasions, the group invited a carefully selected set of experts to share with lawmakers strategies for limiting AI’s harms and accentuating its benefits. Schumer, a member of the group, described those selected as “balanced and diverse.” Sen. Josh Hawley (R-Mo.), by contrast, dismissed the forums as an off-the-record opportunity for “monopolists” to steer regulation. Sen. Elizabeth Warren (D-Mass.) took issue with the proceedings being closed to the media.
Lieu’s proposal would address some of those concerns and create a more formal approach to providing Congress with regulatory guidance. Though some may argue that the need for such a commission has dissipated given the number of bills already pending before Congress, the case for the National AI Commission will likely remain strong so long as lawmakers are still weighing the proper regulatory approach. Lawmakers would likely benefit from the in-depth analysis provided by a bipartisan bunch of AI experts, especially given the possibility that new, even more advanced AI models may be deployed in the coming weeks. An expert commission could help lawmakers understand the marginal risks presented by AI advances and develop regulation that targets the most likely and significant sources of harm.
Digital Platform Commission Act
Transparency concerns aside, the AI Insight Forums influenced Sens. Michael Bennet (D-Colo.) and Peter Welch (D-Vt.) to update their Digital Platform Commission Act to more explicitly regulate the use of AI by platforms. Modeled after the Food and Drug Administration and the Federal Communications Commission (FCC), the proposed commission would be made up of five commissioners.
In brief, the commission would possess “rulemaking, investigative, and related authorities to regulate access to, competition among, and consumer protections for digital platforms.” It would also play a role in preventing unacceptable concentration among platforms: The act would mandate that the commission receive pre-merger notifications concerning designated platforms and would empower it to flag issues with proposed mergers to the Justice Department and Federal Trade Commission. Pursuant to the updated version of the bill, the commission’s regulatory bailiwick would extend to digital platforms that provide “content primarily generated by algorithmic processes.” Platforms designated by the commission as “systemically important” would be subject to algorithmic audits and public risk assessments of their tools.
Standing up a whole new agency would require considerable financial and political capital. An AI referendum would give voters a chance to signal whether, in this case, the treasure is worth the trek or whether relying on preexisting regulators is their preferred approach.
On the whole, these proposals provide a glimpse into Congress’s diverse regulatory preferences. Here, that diversity is not a strength: It suggests Congress is unlikely to rally behind a specific regulatory approach anytime soon.
Other Proposals
Congress is not the only actor unsure of how to proceed on AI. The think tanks, scholars, and civil society groups that Congress commonly leans on for regulatory ideas also have yet to reach a consensus on the best means to address AI.
Former FCC Chair Tom Wheeler has endorsed a new AI agency responsible for identifying and quantifying AI risks, developing a code of conduct that is flexible enough to respond to AI’s rapidly changing risk profile, and enforcing compliance with that code. However, he has also questioned the feasibility of launching such an agency.
Anton Korinek of Brookings has urged the creation of an AI body akin to the National Transportation Safety Board or Federal Aviation Administration—two entities spun up in response to novel technology. Korinek’s “AI Control Council” would have an expansive mandate “to ensure that the ever more powerful AI systems we are creating act in society’s interest.” This mandate would include oversight of the use of AI across the entire economy. He also expects that the council would have the authority and resources to support research related to directing AI toward the public interest—something Korinek doubts will be undertaken by private researchers. The council’s power would not end there.
Korinek insists that for the council to be “truly effective,” it must be able to “oversee AI development by private and public actors[.]” This would include the authority to monitor AI development, require impact assessments of advanced AI systems, and act to prevent any risks revealed by those assessments from materializing.
Stakeholders, myself included, have introduced a broad set of other proposals. Though such brainstorming has benefits, the impending release of new models and the approaching election suggest that the window for meaningful regulation may be closing. Continued delay may further expose Americans to the very real risks posed by AI, such as rapid labor displacement due to automation and AI-facilitated cyberattacks on critical infrastructure.
A Referendum on AI Regulation
A national advisory referendum on AI could help Congress and others rally behind a specific response to AI. Congress possesses the authority to pass a statute to place a nonbinding advisory question on the ballot. Under the Necessary and Proper Clause, Congress may exercise all implied and incidental powers “conducive” to the “beneficial exercise” of its enumerated powers, such as regulating interstate commerce and promoting the general welfare. A nonbinding referendum on AI would fall within that expansive power because of AI’s substantial and long-lasting impacts on our economy, culture, and politics.
Members of Congress have weighed using this power on several occasions. Officials debated asking the American public whether they supported joining the League of Nations following World War I. In 1964, Rep. Charles Gubser (R-Calif.) sponsored a resolution calling for an annual nationwide opinion poll on key policy questions. Prominent officials have also recognized the value of a national referendum. For instance, in 1980, Rep. Richard Gephardt (D-Mo.), who later became House majority leader, “introduced a bill to poll citizens on three designated issues every two years during the federal election cycle.”
As briefly discussed, an AI referendum would serve several purposes. First, the extensive impact of AI on various aspects of daily life—from education and health care to the economy and transportation—renders it an issue too significant to be shaped by a single state. California arguably set the nation’s privacy law when it enacted the first comprehensive state privacy legislation, which took effect in 2020 and which several states went on to closely emulate. No one state should have that sway in the context of regulating AI, because the odds are high that the citizens of any one state think differently than the average American about AI. While the referendum would be nonbinding, it would nevertheless give officials an indication of how the whole of the American public thinks about AI, a marked improvement on the unrepresentative AI Insight Forums.
Second, posing specific questions about the values and goals that should inform AI regulation would stimulate more detailed public discourse on the subject. In other words, the very process of asking Americans for feedback on AI regulation would drive broader public education on AI’s risks and benefits. That information campaign would not only add weight to the results of the referendum but also allow for more robust public participation in future AI policy debates.
Third, this approach would reduce the odds of Congress enacting legislation that diverges from public opinion and carries long-term, irreversible unintended consequences. Significant regulatory undertakings are challenging to redirect once established.
Which questions to include on the referendum is a tricky matter in and of itself. This may be where the AI Working Group’s roadmap actually comes in handy. The roadmap provides an overview of the key regulatory topics raised by AI: U.S. Innovation in AI; AI and the Workforce; High Impact Uses of AI; Elections and Democracy; Privacy and Liability; Transparency, Explainability, Intellectual Property, and Copyright; Safeguarding Against AI Risks; and National Security. A referendum that addressed these topics and gave the people a means to express their preferences and priorities could motivate and steer Congress.
***
On Nov. 30, 2022, OpenAI introduced ChatGPT, marking a pivotal shift in AI’s societal impact. Congress, despite recognizing the urgency of regulating AI, remains indecisive about the appropriate approach. To end this paralysis by analysis, Congress should consider holding a national advisory referendum to guide legislative action.