Algorithmic Foreign Influence: Rethinking Sovereignty in the Age of AI

Code now governs what users see, say, and know—across borders, without consent. It’s time to rethink what foreign influence really means.

Use of cyber attacks in Ukraine (UCSD Jacobs School of Engineering, https://www.sdu.dk/-/media/cws/cws/images/cws-dossiers/ukraine+cyber+attacks.jpeg, CC BY-NC 3.0 US)

In early 2022, TikTok users in Kenya saw their feeds flooded with political disinformation—including content laced with ethnic hate speech and violent threats ahead of the general elections. Researchers at Mozilla documented how the platform’s recommendation system amplified ethnic tensions, suppressed dissenting voices, and subtly promoted pro-government narratives. They found no foreign directive, no cyber operation—just the platform’s algorithm, trained abroad, optimized for engagement, and operating without oversight.

This isn’t an isolated case. Recommender systems, large language models, and machine translation tools now shape civic discourse around the world. They promote certain narratives, erase others, and define what information is available to users—often in ways that reinforce inequality or favor dominant voices. Crucially, they do this without any intention to interfere. They act through infrastructure, not ideology.

This raises a difficult question: If artificial intelligence (AI) can reshape a nation’s public sphere without direction from a foreign power, is it foreign interference?

At first glance, the answer seems obvious: Interference requires intent. Under international law, the principle of non‑intervention is grounded in the assumption that harmful acts are purposeful and attributable to a state. Algorithms are neither. They lack agency, identity, and motive.

But if the outcomes—distorted political discourse, marginalized languages, eroded cultural autonomy—are functionally equivalent to classic interference, shouldn’t the law treat them as such?

Dominant frameworks of sovereignty and non-intervention are being outpaced by a new mode of global influence: one that is stateless, ambient, and infrastructural. As AI systems trained and deployed across borders come to shape the terms of public life, they constitute a new class of foreign actors—not because they intend harm, but because they systematically reorganize civic and epistemic space across jurisdictions. At present, international law is not prepared to respond.

Beyond Intent: The Rise of Algorithmic Foreign Influence

In traditional doctrine, interference is a human act—for example, a covert operation, a propaganda campaign, or a cyberattack. It is intentional, traceable, and attributable. But in the digital present, the most pervasive forms of influence come from systems that were never designed to act politically—and yet do.

TikTok’s recommendation engine is a case in point. In Kenya and elsewhere, it has suppressed critical voices and amplified state-aligned narratives—not through state capture, but through design choices optimized for attention and virality. A report by the Mozilla Foundation concluded that the platform’s algorithmic curation had a measurable effect on civic debate ahead of key protests and elections. Researchers identified more than 130 TikTok videos containing ethnic hate speech and inflammatory content; in total, these videos garnered more than 4 million views and were pushed to users by the recommendation system. No foreign command was necessary. The outcome was structural, not conspiratorial.
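The mechanism is simple enough to sketch. In the toy example below, the videos, fields, and scoring weights are invented for illustration and are not a description of TikTok’s actual system; the point is only that a ranker rewarded solely for predicted engagement will surface the most provocative item without any political instruction anywhere in the pipeline.

```python
# Minimal, hypothetical sketch of engagement-optimized ranking.
# The items, fields, and weights are illustrative assumptions,
# not a description of any real platform's recommender.

candidate_videos = [
    {"id": "calm_explainer", "watch_time": 0.40, "shares": 0.05, "comments": 0.10},
    {"id": "inflammatory_clip", "watch_time": 0.70, "shares": 0.30, "comments": 0.60},
    {"id": "local_language_news", "watch_time": 0.35, "shares": 0.04, "comments": 0.08},
]

def engagement_score(video: dict) -> float:
    """Score purely by predicted engagement; content and civic impact are invisible."""
    return 0.5 * video["watch_time"] + 0.3 * video["shares"] + 0.2 * video["comments"]

# Ranking by engagement alone puts the most provocative item first.
for video in sorted(candidate_videos, key=engagement_score, reverse=True):
    print(f'{video["id"]}: {engagement_score(video):.2f}')
```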

Machine translation tools offer another dimension. Google Translate, despite advances, has been documented to reinforce gender stereotypes—for example, translating “They are an engineer” as “He is an engineer”—and erase Indigenous linguistic structures. Certain languages—especially from postcolonial or minority communities—are misrepresented, underrepresented, or excluded entirely. Many low-resource African and Indigenous languages, for instance, receive little support in major AI systems. The result is what some scholars now call epistemic erasure: the quiet disappearance of cultures and perspectives from the informational commons.
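A toy example makes the statistical mechanism concrete. The corpus counts below are invented for illustration, and real translation models are far more complex; but a system that in effect falls back on the pronoun most frequently paired with an occupation in its training data will turn a gender-neutral sentence into a gendered one.

```python
# Toy illustration of how skewed training data produces gendered defaults.
# The co-occurrence counts are invented for illustration; real systems are
# far more complex, but the frequency-driven failure mode is similar in spirit.

from collections import Counter

# Hypothetical pronoun/occupation co-occurrence counts in a training corpus.
corpus_counts = {
    "engineer": Counter({"he": 9200, "she": 1300, "they": 400}),
    "nurse": Counter({"she": 8700, "he": 1100, "they": 350}),
}

def resolve_pronoun(occupation: str) -> str:
    """Pick the pronoun most frequently seen with the occupation: the statistical default."""
    return corpus_counts[occupation].most_common(1)[0][0]

# A gender-neutral source sentence acquires a gender on the way through the model.
print(resolve_pronoun("engineer"))  # -> "he"
print(resolve_pronoun("nurse"))     # -> "she"
```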

This influence is not uniquely foreign. American platforms such as Meta and X deploy global-scale systems that shape how politics is understood far beyond U.S. borders. Facebook’s algorithms have been linked to polarization and disinformation in Brazil, particularly during the 2022 election campaign. In the Philippines, troll networks amplified by engagement-optimized algorithms distorted civic narratives. Independent audits of Meta’s ad delivery system have shown how algorithmic personalization reinforces filter bubbles and accelerates polarization. Elon Musk’s Grok chatbot has surfaced conspiracy and antisemitic content in multiple languages, including references to “MechaHitler.” OpenAI’s models, built in the United States and deployed globally, have refused to generate content on politically sensitive topics, without public explanation or accountability.

What unites these cases is not malicious intent—but measurable consequence. Algorithms trained in one country, governed by another’s laws, are now embedded in the civic architecture of societies worldwide. They decide what is seen, said, and suppressed—not with purpose, but through infrastructure. This poses a legal and strategic challenge: If influence no longer requires a foreign agent, how should international law understand interference? As Simon Chesterman argues, private AI systems now rival states in scale and consequence—not through intentional interference, but through structural impact—undermining traditional legal assumptions about agency and sovereignty.

Law Without a Target: Rethinking Attribution in the Age of AI

International law is built around actors that are identifiable, intentional, and accountable. The non-intervention principle—a rule of customary international law, related to Article 2(7) of the UN Charter and articulated by the ICJ in Nicaragua v. United States (1986)—prohibits one state from intentionally coercing another in matters within its domestic jurisdiction. Article 2(4) separately bans the threat or use of force. The logic is clear: violations require an intent to coerce, a coercive method, and attribution to a responsible actor.

Artificial intelligence challenges this foundation—not because it has motive, but because it exerts influence without agency, undermining frameworks built on intent, attribution, and legal personality.

Foreign agent registration laws reflect the same structure. In the United States, the Foreign Agents Registration Act (FARA) requires disclosure from any entity acting “at the order, request, or under the direction or control” of a foreign principal. Similar laws exist in Australia, the United Kingdom, and Russia. But these frameworks all assume direction—that someone is pulling strings. They cannot account for influence emerging from distributed systems trained on global data and optimized for platform goals, not political ones.

This mismatch is already creating legal ambiguity. In a 2024 U.S. Senate hearing on AI and national security, lawmakers raised concerns about the potential for foreign AI systems to shape public opinion in the United States—without necessarily falling under existing laws. The issue wasn’t espionage or cyberwarfare, but the risk of civic manipulation via opaque, black-box infrastructure and the lack of legal tools to address such indirect influence.

But the problem isn’t just technical—it’s conceptual.

Legal scholars are now grappling with what it means to regulate consequence without intent. Can a system that systematically alters democratic discourse, suppresses linguistic communities, or restructures access to truth be treated as a foreign actor if no one gave an order? Or does this demand a new legal paradigm—one that shifts the locus of sovereignty from motive to impact?

One promising direction reframes AI systems not as agents, but as instruments of transnational consequence. From this view, legal accountability must evolve to recognize effects that mirror foreign interference, even if the source is infrastructural and the influence is ambient. The issue is not whether AI intends harm—but whether it enables systemic harm across borders, outside the control of democratic institutions.

As Chesterman argues, sovereignty today requires more than territorial control. It requires informational governance—the ability of a society to determine what knowledge circulates, what voices are heard, and how civic meaning is produced. In this sense, algorithmic influence may not fit classic definitions of coercion—but it does constitute a form of informational domination.

And that, too, demands legal recognition.

What Counts as Sovereignty in an Infrastructural World?

Sovereignty has long been defined by territory: the right of a state to govern what happens within its borders. But in the 21st century, power increasingly flows not through armies or treaties, but through infrastructure—systems that organize visibility, access, and belief. As legal theorist Julie Cohen explores, sovereignty today also entails control over informational environments and the conditions under which knowledge is produced and accessed.

Artificial intelligence intensifies this shift. Large language models, translation systems, and recommender engines do not deploy military force or issue propaganda. They reshape the informational field: what can be expressed, what remains invisible, and whose knowledge counts. And they do so across borders, at scale, and without consent.

Consider the politics of language. When a machine translation system refuses to process an Indigenous language—or distorts its grammar in favor of a colonial standard—it effectively erases a community’s epistemic presence. No policy mandated this outcome. The erasure is embedded in training data, model architecture, and platform priorities. And yet the result is unmistakably political: the undermining of cultural sovereignty by omission.

Or take recommender algorithms. When platforms suppress footage of protests, amplify state-aligned content, or prioritize commercial over civic speech, they alter public discourse without ever issuing a directive. This isn’t censorship in the classical sense. It’s algorithmic gatekeeping, guided by engagement metrics rather than ideology—but no less influential in shaping democratic life.

These examples point to a deeper transformation: from intentional acts of interference to structural acts of conditioning. In this world, sovereignty is not only about keeping others out—it’s about having the capacity to shape your own narrative, on your own terms, for your own people.

Legal frameworks have begun to recognize this shift in adjacent areas. The European Union’s Digital Services Act, for example, imposes obligations on large platforms to mitigate systemic risks to democratic discourse. The African Union’s Continental AI Strategy calls for algorithmic transparency and data sovereignty as essential conditions of digital sovereignty. Across jurisdictions, lawmakers are beginning to understand that when a nation cannot govern the infrastructures that govern its people, its sovereignty is compromised.

From this vantage point, the core question is not: Did a foreign actor intend to interfere?

It is: Does a cross-border system functionally restrict a state’s ability to shape its own civic and epistemic domain?

If the answer is yes, then the legal architecture of sovereignty must adapt—not by granting AI personhood, but by recognizing that algorithmic infrastructure has become a vector of foreign influence.

Algorithmic Foreign Influence

Why categorize algorithmic effects as foreign interference? Why not simply treat them as unintended consequences of global infrastructure—concerning, but distinct from sovereign violation?

Because the functional outcomes mirror the harms that sovereignty law was designed to prevent.

When a foreign-built algorithmic system influences what a population sees, says, or understands about its own political future—without oversight, consent, or reciprocity—it infringes on the same civic autonomy that traditional doctrines of non-intervention seek to protect. The difference is not in the method, but in the medium: not tanks or propaganda, but code.

To address this issue, algorithmic foreign influence (AFI) should be recognized as a distinct legal category—defined not by intent, but by structural effect across borders.

AFI refers to the measurable, cross-jurisdictional impact of algorithmic systems—trained, hosted, or governed abroad—that shape political discourse, suppress civic expression, or restructure cultural and linguistic visibility within a sovereign state, absent direct foreign intent.

This definition breaks with legal tradition in one crucial way: It decouples influence from agency. The relevant metric is not whether some actor meant to interfere, but whether the system systematically reconfigures the informational environment in a way that undermines democratic self-governance.

Critically, this does not mean equating AI with personhood. AFI treats algorithmic systems as infrastructures of consequence, not as legal subjects. The regulatory burden would fall on their developers, hosts, or deployers—not on the code itself.

This shift aligns with how other fields have adapted law to distributed, non-human harms. Environmental law, for example, often imposes strict liability for environmental harm—meaning operators can be held responsible for pollution even without proof of intent or negligence. Financial regulation addresses systemic risk emerging from market structure, not malice. AI, too, now requires a framework that reflects its ambient, infrastructural nature.

Policy design could draw on existing models: the EU Artificial Intelligence Act already recognizes “high-risk” systems that affect fundamental rights and democratic processes—regardless of intent.

In the United States, FARA could be expanded to cover algorithmic platforms trained or governed abroad that exert measurable civic influence.

The National Institute of Standards and Technology AI Risk Management Framework offers a starting point for operationalizing harm-based categories through transparency and accountability standards.

Establishing AFI as a formal designation would not criminalize algorithmic systems—it would simply create a legal trigger for disclosure, oversight, and jurisdictional safeguards when transnational AI platforms exert public-facing influence within another state’s civic sphere.

In a world where influence no longer wears a uniform, legal frameworks must evolve to respond to structure, not just to signal.

Policy Pathways for Democratic Resilience

The legal ambiguity surrounding algorithmic foreign influence creates a dangerous vacuum. States that fail to recognize this new mode of interference face two risks: overreaction and underprotection.

On the one hand, governments may respond to transnational AI systems with blunt instruments: blanket bans, data localization mandates, or algorithmic firewalls. Such moves can fracture the internet and entrench techno-nationalist silos, while doing little to solve the underlying problem.

On the other hand, liberal democracies may hesitate to act at all—unwilling to regulate “neutral” infrastructure for fear of politicizing technology. This passivity invites strategic exploitation. Authoritarian regimes can deploy AI systems abroad while shielding their own populations, exploiting legal blind spots to project influence with impunity.

To avoid these extremes, policymakers need a third path: one that acknowledges influence without intent, and responds with institutional measures rather than reactionary controls. Four steps offer a foundation: the creation of a legal category for AFI, mandated algorithmic impact assessments for transnational AI, the codification of informational sovereignty in international norms, and the establishment of procedural norms for algorithmic neutrality.

Create a Legal Category for AFI

As outlined above, the U.S. should recognize AFI as a distinct class of civic impact, thereby triggering disclosure and accountability when foreign-trained or foreign-governed systems shape domestic discourse. This would expand existing statutes such as FARA, not to criminalize AI, but to classify it when public-facing and politically consequential.

Like the “high-risk” label in the EU’s AI Act, an AFI category would help differentiate benign tools from systems that affect electoral information, cultural representation, or civic epistemology.

Mandate Algorithmic Impact Assessments for Transnational AI

AI platforms operating across borders should be required to disclose potential harms to political speech, linguistic autonomy, and informational equity—especially when deployed in high-stakes environments such as elections, protests, or minority-language contexts.

Even without intent, these systems can distort civic discourse and silence marginalized voices. Transparency is essential because algorithmic choices shape what people see, say, and believe—often with real political consequences.

The U.S. already requires environmental impact assessments for infrastructure projects and conducts risk audits for financial institutions. Algorithmic systems with national-scale effects deserve similar scrutiny because they can quietly influence elections, public discourse, and civil rights—without transparency or accountability. Such assessments could follow models proposed in the White House’s AI Bill of Rights Blueprint, which outlines key safeguards such as transparency, independent audits, and avenues for redress. While nonbinding, the framework offers a practical foundation for evaluating algorithmic harms—especially in contexts where AI systems shape access to information, public discourse, or democratic participation. International adaptation would help extend these protections across borders.
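As a sketch of what such an assessment might actually quantify, consider an exposure-disparity check: compare each language community’s share of recommended content with its share of uploaded content. The groups, numbers, and flagging threshold below are hypothetical and purely illustrative of one possible metric, not a prescribed standard.

```python
# Hypothetical sketch of one metric an impact assessment or audit might report:
# a group's share of recommendations divided by its share of uploads.
# Groups, numbers, and the flagging threshold are invented for illustration.

uploads = {"majority_language": 70_000, "minority_language": 30_000}
recommendations = {"majority_language": 940_000, "minority_language": 60_000}

def amplification_ratio(group: str) -> float:
    """Share of recommendations over share of uploads; 1.0 means neutral exposure."""
    upload_share = uploads[group] / sum(uploads.values())
    rec_share = recommendations[group] / sum(recommendations.values())
    return rec_share / upload_share

for group in uploads:
    ratio = amplification_ratio(group)
    flag = "flag for review" if ratio < 0.5 else "ok"
    print(f"{group}: amplification ratio {ratio:.2f} ({flag})")
```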

Codify Informational Sovereignty in International Norms

Digital sovereignty is often framed in terms of data localization or censorship. But a more constructive version would focus on a nation’s right to govern the algorithmic conditions of its public discourse.

Multilateral instruments—such as the Council of Europe’s Convention 108+ or a future UN digital governance compact—could embed informational self‑determination into international law by establishing binding norms around data ownership, algorithmic accountability, and the right of individuals and communities to control how their data is used and interpreted across borders.

These efforts should focus on platform obligations, not speech restrictions, because regulating systemic risks at the infrastructure level is more effective and rights-compatible than policing individual content. This mirrors approaches like the EU’s Digital Services Act, which mandates systemic risk mitigation from large platforms without directly interfering with lawful speech.

Establish Procedural Norms for Algorithmic Neutrality

Instead of regulating content, states can require algorithmic due process. Just as the Basel III framework imposed transparency and audit standards on systemic financial actors, AI platforms—especially those operating across borders—could be subject to statutory requirements ensuring their models do not systematically distort political visibility or suppress vulnerable populations.

Independent audits—carried out by accredited third parties—could assess the impact of recommendation engines, moderation algorithms, and generative AI tools on pluralism, linguistic representation, and democratic resilience. These audits would evaluate whether platforms disproportionately amplify certain voices, marginalize minority languages, or algorithmically suppress dissent. Such safeguards offer a middle ground between censorship and passivity by addressing structural harms without infringing on individual expression.

None of these measures would eliminate algorithmic interference. But they would give democracies the vocabulary and tools to recognize it—and to respond within a lawful framework, rather than resorting to fragmentation or denial.

The New Sovereignty Crisis

Sovereignty was once defined by armies, treaties, and borders. Today, it is shaped—often invisibly—by infrastructures that no state fully controls. Transnational AI systems now determine what information circulates, whose voices are heard, and which forms of knowledge are elevated or erased. And they do so without crossing a single physical border or issuing a single command.

The question is not whether artificial intelligence intends to interfere. It is whether states can still govern their civic spaces when those spaces are increasingly defined by foreign-built, privately controlled, opaque systems.

This is the sovereignty crisis of the digital age.

To treat AI merely as a neutral tool is to ignore its growing capacity to restructure political life. When a nation cannot control how its elections are framed, how its languages are translated, or how its people access civic knowledge—then it has lost more than autonomy. It has lost self-determination.

As various legal scholars have warned, waiting for intent, state sponsorship, or visible coercion misses the reality of algorithmic power: It is ambient, infrastructural, and often unintentional—but no less consequential. Its cumulative effect is to shift decision-making power away from democratic institutions and toward code optimized for metrics that serve neither truth nor equity.

This form of influence demands legal recognition—not to assign blame, but to create accountability. Without it, democracies will remain vulnerable to forms of interference they cannot name, measure, or contest.

Informational sovereignty in the 21st century is not just the right to speak—but the right to govern the systems that shape speech.

Not just the right to know—but the right to decide how knowledge is produced and circulated.

Not just the right to borders—but the right to algorithmic autonomy.

If sovereignty is to mean anything in the AI age, it must include the right to govern the infrastructures that govern societies and shape public life.

– Angelo Valerio Toma is a writer and international affairs analyst specializing in digital sovereignty, algorithmic governance, and emerging technologies in the Global South. Published courtesy of Lawfare
