Nations are racing toward AI sovereignty, prioritizing military control over human welfare. We need a new path before it’s too late.

A throughline connects recent statements by the Trump administration, AI labs, and others with a vested interest in U.S. AI policy: a call for AI sovereignty. In particular, it is a call for strong AI sovereignty, meaning complete domestic control over essential AI inputs. Policy discussions in other countries, such as India, China, Japan, and Canada, have accelerated in the same direction. Emerging technical and resource considerations may likewise encourage more nations to develop their AI at home. This is a troubling trend. Widespread pursuit of strong AI sovereignty is the worst of both worlds: It increases the odds of an all-out AI arms race, in which nations focus their development strategies on national security interests, while crowding out the development of AI tools intended to serve the broader public interest.
This article briefly explains AI sovereignty, explores the ramifications of a growing number of major AI players seeking strong AI sovereignty, and reviews proposals to slow this trend.
Explaining AI Sovereignty
AI sovereignty refers to domestic control over the data used to train models, the compute relied on to power them, the talent necessary to refine them, and the energy essential to their diffusion—collectively known as the “AI stack.” Taking that definition as a starting point, strong AI sovereignty would describe a stack made up solely of domestic sources, and weak AI sovereignty would permit some degree of control over that stack by trusted allies.
Up to this point in the nascent age of AI, many nations have treated weak AI sovereignty as an acceptable level of control, in many cases surrendering some aspect of the stack to the U.S. (by relying on NVIDIA chips, for example) or to China (by using its open-source models). At the tail end of the Biden administration, however, the U.S. pivoted away from being a reliable part of other nations’ respective AI stacks. Its diffusion rule effectively divided the world into compute-haves, compute-have-nots, and compute-have-not-as-much-as-they’d-likes. More precisely, the rule created three tiers of countries: tier one countries (of which there are 18, including the United Kingdom, France, and Germany) enjoy more or less unrestricted access to U.S. chips; tier two countries (most of the world) face caps on the total number of chips they may receive; and tier three countries (a select group of 25 adversaries already subject to U.S. arms embargoes) are effectively banned from importing U.S. chips. If this approach marked a step toward strong AI sovereignty, then the Trump administration seems poised to jump in that direction.
Bolstered by AI labs and civil society organizations, the administration has leaned into a go-it-alone AI strategy. In a recent speech at the American Dynamism Summit, Vice President Vance spelled out what I regard as the “America First, America Only” approach to AI. He also faulted prior administrations for “sending so much of our industrial base to other countries” and called for manufacturing, designing, and diffusing new technologies, especially AI, in the United States. In filings responding to the Office of Science and Technology Policy’s Request for Information regarding the AI Action Plan, OpenAI urged the administration to adopt policies that would increase the United States’s domestic chip production capacity, warning that China currently holds a strategic advantage in that regard. Anthropic advocated for a “hardening” of existing export controls as well as for expanding the types of chips subject to controls to include even less advanced chips that may nevertheless aid adversaries in the development and deployment of AI. Prominent filers, such as the Center for a New American Security and the Business Roundtable, stressed the need for the U.S. to develop critical AI infrastructure at home rather than rely on foreign states for key AI inputs.
Critically, these statements suggest a move toward not just AI sovereignty but strong AI sovereignty. And other nation-states appear poised to follow the United States’s lead.
The Spread of Strong AI Sovereignty
One manifestation of strong AI sovereignty is a proliferation of state-backed AI initiatives that prioritize national interests over international collaboration. Initiatives include supporting “national champion” AI laboratories—companies directly singled out by a government for financial and regulatory support or companies that simply benefit from a general desire to encourage their growth. The European Union and France exemplify this trend with their backing of Mistral AI, which has emerged as France’s answer to American AI dominance.
In mid-February, France hosted the Paris AI Action Summit. Unlike prior summits hosted by the U.K. and South Korea, which emphasized global governance, shared regulatory frameworks, and safety protocols, the Paris summit dedicated substantial resources to highlighting Mistral’s capabilities and attracting capital. French President Emmanuel Macron contributed to that effort by courting venture capitalists and tech executives with pledges to “simplify” EU regulations.
China, meanwhile, has leaned into the surprising success of DeepSeek and other homegrown upstarts with the aim of advancing its own AI capabilities. The Chinese government has channeled immense resources into AI efforts through both direct funding mechanisms and preferential access to computational infrastructure. These investments reflect a calculated strategy to ensure that Chinese AI development aligns with national security priorities while competing with American and European models in technical sophistication.
The integration of AI into military systems and critical infrastructure has further accelerated the drive toward strong AI sovereignty, extending beyond the economic competition seen in France’s promotion of Mistral and China’s backing of DeepSeek. Whereas some allies, such as Canada, may once have accepted some degree of U.S. control over AI-equipped weapons and defense systems, times have changed. Recent developments within the U.S. Department of Government Efficiency (DOGE) stand out as a particular cause for alarm among nations once reliant on U.S. tech in their national security systems. DOGE has rapidly deployed AI systems for sensitive government functions with minimal transparency regarding oversight mechanisms or ethical guardrails. Reports that DOGE granted significant operational autonomy to junior employees with unclear accountability structures have alarmed traditional U.S. allies. As Sen. Mark Warner (D-Va.) has noted, allies have cause to question whether American AI systems can be trusted in joint defense initiatives, given the possibility that lax security standards may allow even low-ranking DOGE members to access and alter those systems. These concerns have materialized in tangible ways: Canada and others have grown skeptical of contracting with the U.S. to purchase F-35 fighter jets that would be maintained and serviced in the U.S. This growing wariness about dependency on U.S. military technology represents a marked shift from previous decades, when access to American technological superiority was generally a boon for U.S. allies.
To be sure, the spread of strong AI sovereignty is not driven solely by economic and military considerations. Other factors suggest that some nations, despite intending to pursue strong AI sovereignty, may nevertheless find themselves reliant on others for at least one part of their AI stack.
Technical barriers, namely language limitations, may inform whether a country pursues strong AI sovereignty and whether it can achieve that aim. Nations whose primary languages are not English face significant disadvantages in model training absent deliberate intervention. The dominance of English in existing training datasets means that Arabic, Thai, or Punjabi speakers experience AI systems that perform markedly worse in their native tongues. This performance gap creates a practical incentive for such nations to curate language-specific datasets that capture the unique vocabulary, cultural references, and semantic structures of their languages. Until they build sufficient datasets, these nations will likely have to incorporate data gathered elsewhere in the short term. Over the long term, however, control over ever-more data will likely sustain this incipient push for strong AI sovereignty.
Data has grown much more valuable with the emergence of foundation models. While previous data sovereignty concerns focused primarily on protecting citizens’ privacy and preventing foreign exploitation of consumer information, the stakes of maintaining control over data collected and stored in a jurisdiction have become far higher. The data that nations now seek to control represents not just individual privacy concerns but the raw material from which AI systems derive patterns, relationships, and predictive capabilities that extend far beyond the original information. A nation that controls vast repositories of financial transactions, health care records, or telecommunications metadata can develop AI systems capable of identifying macroeconomic trends, disease outbreaks, or social movements before they become apparent to human analysts. This amplification of data’s value through inferential capabilities means that previous data sovereignty frameworks, which focused on controlling storage and processing locations, are rapidly being expanded to encompass the entire AI value chain, from raw data collection to model training to inference deployment.
The staggering energy requirements of large-scale AI deployment present another key sovereignty factor. As countries project the electrical load needed for widespread AI adoption, from data centers to edge computing applications, many are discovering that reliance on foreign energy infrastructure to train frontier models introduces unacceptable vulnerabilities. Nations from South Korea to Saudi Arabia are consequently allocating massive sums toward power generation facilities specifically intended to handle AI workloads. These infrastructure projects often incorporate renewable energy sources, allowing countries to address climate commitments while reducing dependence on imported fossil fuels that could be leveraged against them in times of geopolitical tension.
By contrast, nations that lack reliable and abundant energy resources—whether due to geographic limitations or inadequate infrastructure—find themselves unable to power the data centers necessary for independent AI development. Countries like France and Italy have already confronted this reality, establishing joint AI research initiatives with energy-rich partners rather than attempting self-sufficiency at prohibitive costs. This energy dependency fundamentally undermines aspirations for strong AI sovereignty, forcing pragmatic compromises that balance national control with operational feasibility.
One final factor, access to talent, may prove the most significant obstacle to strong AI sovereignty. Many nations lack the educational infrastructure and specialized workforce needed to independently develop and manage sophisticated AI systems. While countries like Singapore and Israel have implemented aggressive educational reforms to cultivate domestic AI expertise, the reality remains that the global distribution of AI talent is highly concentrated. Nations outside the traditional technology hubs, particularly in developing regions, face severe limitations in assembling teams capable of building competitive foundation models or implementing complex AI applications in government and military contexts. For these countries, permitting foreign experts to assist with AI development is not merely a preference but a necessity, even when it introduces potential vulnerabilities.
Nations’ reliance on external sources for key AI inputs, however, is unlikely to persist in the long term. Any dependency on other nations, whether for talent, energy, or other critical AI inputs, represents a strategic vulnerability that will become increasingly untenable as AI’s centrality to national power grows. National security hawks, such as Sen. Josh Hawley (R-Mo.), are already framing these dependencies as existential risks, providing potent political fuel to mobilize public sentiment toward domestic control over the entire AI stack. AI companies, such as Siam AI in Thailand and Naver Corp. in South Korea, regard AI sovereignty as a business imperative and have sought state support for that aim.
Political and economic pressure for strong AI sovereignty will presumably accelerate investments in domestic education programs, specialized pathways for homegrown AI talent, and energy infrastructure dedicated specifically to computational needs within a nation’s borders. Nations currently operating under weak AI sovereignty arrangements will likely transition gradually toward stronger forms of autonomous control, suggesting that the present landscape of collaboration represents not a stable equilibrium but merely a transitional phase in the global AI sovereignty movement.
Problems Arising From Strong AI Sovereignty
The framing of AI as primarily a matter of state power carries significant consequences for how these technologies will develop and who will benefit from them. The securitization and centralization of AI under a strong AI sovereignty paradigm is a complex geopolitical development that warrants deeper examination.
As nations prioritize strong AI sovereignty, centralization becomes far more likely due to two primary mechanisms. First, the national security imperative creates powerful justifications for concentrated control, with governments establishing regulatory frameworks and oversight bodies that funnel AI development through approved channels and institutions with proper security clearances. Second, the massive capital requirements for advanced AI infrastructure—from specialized chip fabrication to enormous computing clusters—naturally favor large state-backed entities or public-private partnerships with deep ties to defense establishments.
This centralization process creates a self-reinforcing cycle in which military applications receive disproportionate attention and funding. The focus on weaponized AI, autonomous defense systems, and intelligence capabilities tends to eclipse civilian applications for health care, climate science, and poverty reduction. When AI becomes conceptualized primarily as a strategic asset rather than a societal resource, the technology’s development path narrows toward applications that enhance state power rather than addressing broader human needs.
The ultimate risk is that the securitization of AI becomes a self-fulfilling prophecy. As more nations adopt defensive AI sovereignty postures, international cooperation diminishes, creating precisely the competitive and potentially adversarial international AI landscape that each country feared in the first place. This dynamic threatens to transform what could be humanity’s most powerful tool for collective problem-solving into a technology optimized primarily for competition between nation-states.
Under such a dynamic, open-source AI projects, which have historically democratized access to cutting-edge capabilities, may falter. Open-source initiatives like Llama, Mistral, and Stable Diffusion have enabled researchers, entrepreneurs, and public institutions to develop specialized applications addressing local needs without massive resource investments. These collaborative approaches have produced remarkable innovations in public health, including more accurate disease diagnostic tools, and in education, where personalized learning systems have shown promise in addressing achievement gaps among underserved populations.
As governments increasingly view AI through a national security lens, however, funding priorities and regulatory frameworks inevitably shift to favor classified, state-controlled development. Research that might have flowed freely across borders becomes siloed within national laboratories operating under strict security protocols. The redirection of talent, compute resources, and research funding toward sovereign AI projects creates a vacuum in the public and open-source domains, effectively relegating societal-benefit applications to secondary status. This reallocation of resources toward state interests rather than public welfare represents a profound opportunity cost that remains largely invisible in national security discussions about AI sovereignty.
Mitigating the Shift Toward Strong AI Sovereignty
The dual challenges of concentrated control and massive capital requirements in AI development require creative policy interventions that can balance legitimate national security concerns with the benefits of collaborative innovation. The following proposed mechanisms aim to create alternative pathways that maintain security while democratizing access to AI’s transformative potential.
Public-Interest AI Endowments: A Feasible Path Forward
Public-interest AI endowments represent the most immediately feasible approach to balancing sovereignty concerns with broader societal benefits. These endowments would establish financially independent institutions dedicated to developing AI applications that address global challenges while maintaining domestic control over the fundamental AI stack—a key requirement for nations unwilling to slow their pursuit of strong AI sovereignty.
The critical innovation of this approach is that it preserves domestic control over AI infrastructure while redirecting a portion of development toward nonmilitary applications. Participating governments would commit a fixed percentage (e.g., 5 percent) of their military AI budgets to domestic public-interest endowments; technology companies would contribute through a combination of direct funding and in-kind compute resources; and philanthropic organizations would provide additional financial support. This diverse funding base would ensure both sufficient scale and relative independence from purely security-oriented objectives.
Unlike more ambitious international frameworks, public-interest endowments could be implemented unilaterally by nations without requiring complex diplomatic negotiations or surrendering any meaningful sovereignty. Each country would maintain its own endowment, operating under domestic laws but with governance structures designed specifically to insulate research priorities from short-term political pressures, similar to how central banks maintain independence while still serving public mandates.
These endowments would focus on developing AI capabilities for domains like health care, climate resilience, education, and economic development within their respective national contexts. The resulting applications would serve domestic needs first, addressing the political imperative to demonstrate national benefits, while still advancing knowledge that could benefit humanity broadly.
As trust develops, nations with similar endowment structures could establish limited collaboration agreements that preserve core sovereignty while allowing selective knowledge sharing. This incremental approach acknowledges current geopolitical realities while creating institutional structures that could gradually evolve toward greater collaboration.
Tellingly, the U.S. may be among the leaders on this front. Reps. Jay Obernolte (R-Calif.) and Don Beyer (D-Va.) have made the case for the CREATE AI Act. Though different from the endowment proposed here, the act would make the National AI Research Resource (NAIRR) pilot permanent. The NAIRR operates as a national AI infrastructure that provides researchers and academics with access to essential AI resources. Making the pilot permanent would send a clear signal that strong AI sovereignty need not over-index on national security AI use cases.
Tiered Technology Access Agreements: An Alternative to Restrictive Diffusion Rules
While public-interest endowments work within the strong AI sovereignty paradigm, tiered technology access agreements represent a more ambitious attempt to reshape international AI cooperation. Unlike the current diffusion rule that defaults to restrictions for much of the world and creates sharp divisions between have and have-not nations, tiered access agreements would establish presumptive sharing as the baseline, with restrictions applied only where specifically necessary for security.
This approach inverts the current logic by requiring nations to justify withholding AI technologies rather than requiring others to justify access. The tiered system would function through multiple cooperation levels that countries could opt into based on their willingness to accept reciprocal commitments. At the foundation tier, participating nations would share basic research infrastructure and nonsensitive datasets with minimal restrictions. At higher tiers, more advanced capabilities would become available to partners accepting additional transparency requirements.
The most significant challenge to this approach is that it necessarily involves surrendering some degree of domestic control—a concession that undermines strong AI sovereignty. However, by structuring the agreements with clear security carve-outs and verification mechanisms, this framework could potentially overcome sovereignty concerns through carefully calibrated trust-building measures.
Compliance would be ensured through a combination of technical monitoring systems, regular inspections, and meaningful consequences for violations. Unlike conventional arms control regimes, these agreements would incorporate positive incentives—including preferential access to scarce resources like advanced chips—that reward consistent adherence to shared norms.
While more ambitious than public-interest endowments, tiered access agreements could potentially emerge from bilateral arrangements between closely aligned nations before expanding to broader participation. The European Union, with its existing frameworks for technology sharing and regulatory coordination across sovereign states, provides a potential model for how such systems might evolve.
Conclusion
A shift toward strong AI sovereignty marks a critical juncture in the trajectory of artificial intelligence. Nations are increasingly framing AI as a tool of geopolitical competition, risking its potential to address shared global challenges. The securitization of AI development threatens to divert resources toward military applications and state control at the expense of open innovation and public benefit. However, this path is not inevitable. Through mechanisms such as public-interest AI endowments and tiered technology access agreements, nations can balance sovereignty with collaborative progress. Excessive compartmentalization will only fragment AI research, slow technological advancement, and produce inefficiencies that delay critical breakthroughs. Innovation thrives in environments of openness and cross-pollination; stifling it through isolationist policies undermines AI’s transformative potential. The true challenge is not AI itself but how we choose to govern it: as a weapon of state power or a force for collective progress.
– Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Contributing Editor at Lawfare. Published courtesy of Lawfare.