A review of Anu Bradford, “Digital Empires: The Global Battle to Regulate Technology” (Oxford University Press, 2023)
A European digital privacy activist lodges an appeal with the High Court of Ireland, arguing that the Irish Data Protection Commissioner’s refusal to prohibit transfer of his personal data to the U.S. by global technology giant Facebook (now Meta) violates European Union regulatory and human rights law. A few years later, in intricate proceedings at the International Telecommunication Union, China seeks to ensure that a standard developed by global technology giant Huawei is endorsed to become the next version of the protocol that governs the transfer of packet-switched communications over the internet. Meanwhile, the U.S. Securities and Exchange Commission issues rules requiring Chinese companies listed on U.S. markets to provide detailed information about their ownership structure.
In “Digital Empires: The Global Battle to Regulate Technology,” Anu Bradford connects these incidents to a host of others and frames them as skirmishes in an intensifying, three-way struggle among preeminent world powers seeking to align the ongoing evolution of networked digital communication technologies with their own values and interests.
There is a lot to like in this rich and thought-provoking book. Bradford constructs an elegant model of competition and contestation among sovereign powers that includes giant technology firms as critical actors wielding significant power of their own. Within that model, the far-flung, digitally networked activities of the technology firms create persistent policy and enforcement challenges for government authority, while the conflicting demands of three especially powerful sovereigns create equally persistent policy and compliance dilemmas for global technology giants. Both sets of tensions are magnified as those sovereigns—the United States, the European Union, and China—compete among themselves to extend their particular visions of digital sovereignty globally. Although—as I will explain—I differ with Bradford on some important particulars, she is right to counsel attention to the fundamental conflicts among differing approaches to digital policy and to the resulting global tensions.
Another great strength of the book is the extent to which Bradford honors the complexity of the subject matter. The services provided by giant technology companies span the globe and touch nearly every facet of our social and economic existence. If one wants to understand current contests over digital sovereignty, one must consider a range of topics, from surveillance and digital privacy to digital content governance, from antitrust and competition policy to taxation of digital services, and from internet standards to smart transit systems. In dense yet readable prose, Bradford deftly draws connections among many seemingly disparate events occurring around the world. The text rewards rereading and cross-referencing. Bradford understands, as well, that it is important to consider the wide assortment of legal and economic mechanisms that create or fail to create favorable environments for investment in digital technologies and business models. She is able to explain, for example, why it matters that the leading global accounting firms have all set up operations in China, and she understands that patterns of global venture capital investment may help to determine whether the internet’s technical standards and the digitally networked services that rely on them will diverge into incompatible fragments.
Having embraced the complex mix of factors shaping both digital policy and ongoing contests for digital supremacy, Bradford wisely forgoes easy predictions about likely outcomes in favor of a more nuanced analysis of the entanglements that conflicting policies and incentives have produced and are likely to produce for the foreseeable future. She does have opinions about why certain values ought to prevail in these contests, and the book’s closing section makes those views clear. But the text is not a morality tale. So, for example, Bradford acknowledges the contradictions embedded in the free speech rhetoric that the U.S. has deployed to justify the actions of its own digital platform giants, even as their services have facilitated the spread of mis- and disinformation and ethnonationalist hate on an unprecedented scale. She notes the ways in which many of the same companies that proclaim the importance of digital civil rights and civil liberties have capitulated to the demands of authoritarian governments for censorship and surveillance.
Importantly, Bradford clearly communicates what is at stake in the ongoing struggles over digital sovereignty. Because of enhanced capabilities for monitoring and surveillance, populations worldwide face new types of threats to their fundamental rights and freedoms. Because of new, data- and algorithm-driven capabilities for manipulating audiences and amplifying disinformation and hate, newer and older democracies alike face threats to their continuing survival. Disagreements about digital policy create what Bradford characterizes as a “growing risk of technological and economic decoupling between China, the US, and the EU.” Readers whose expertise lies in fields such as national security and intellectual property law may disagree on the extent to which China, in particular, is coupled with the U.S. and Europe now. But Bradford is surely right to note that, as policy disagreements harden into competing legal mandates and incompatible technical standards, they undermine prospects for global economic and political cooperation and heighten geopolitical instability.
In seeking to weave these disparate threads into a coherent tapestry, Bradford—whose home field is international economic law rather than law and technology, communications law, data protection law, or national security law—has set herself a considerable challenge. In the balance of this review, I will raise three questions that readers (and Bradford herself) may want to consider. One concerns whether Bradford has accurately assessed each aspiring digital empire’s distinctive blend of values, commitments, and strategies. As I will explain, Bradford takes a few significant missteps. Another question is whether the list of sovereign regimes whose efforts will shape the “global battle to regulate technology” is complete. I will argue that Bradford’s group of three global powers contains an important omission: India. My third question has to do with whether framing the ongoing contest as a battle to “regulate technology” prematurely discounts the possibility that networked digital technologies and the powerful private actors that have shaped their development may also be reshaping the structure and rules of the governance game. Although Bradford declines to predict winners and losers in the various contests among sovereign empires and tech giants, she is committed to the primacy of nation-state sovereignty and to its aptness as an analytical lens. I will argue that considerable skepticism is in order on both points.
Mapping Digital Empires
Consider first whether Bradford has accurately assessed each of the three digital empires that she identifies. In any project of this sort, there are inevitable trade-offs between breadth and depth. At the same time, though, the details selected for inclusion must be right (enough) to inspire confidence in the larger patterns the author claims to have identified. Both relative unfamiliarity and deep familiarity may pose challenges for the aspiring global mapmaker.
Bradford’s global reputation as a preeminent scholar of international law rests, in part, on her work on the Brussels Effect, a kind of hydraulic process by which the EU’s legislated policy preferences spread globally. Her discussion of Europe’s efforts at digital empire-building leans heavily on this widely cited prior work. That the Brussels Effect sometimes exists is undeniable. But in the realm of digital policy, rapidly accumulating evidence suggests that European efforts to shape the linked landscapes of law and practice, globally, are falling short.
One prong of the Brussels Effect involves the deliberate use of regulatory mechanisms to create extraterritorial effects. Bradford devotes considerable attention to Europe’s General Data Protection Regulation (GDPR), which includes provisions allowing cross-border transfers of personal data only to jurisdictions affording adequate legal protections for personal data. For some time now, the EU and the U.S. have been locked in a struggle over the interpretation of those provisions. Three times, negotiators on both sides have painstakingly hammered out agreements designed to permit the continued flow of data from Europe to the United States. Twice, the Court of Justice of the European Union (CJEU) has invalidated the agreements, citing statutes and executive orders that permit the U.S. national security establishment broad access to personal data held by private communications providers and other entities. A challenge to the third agreement is already in preparation.
Bradford suggests that this long-running struggle jeopardizes security-related cooperation but that the two governments’ willingness to return repeatedly to the bargaining table is reason for optimism about possible future harmonization. Solving the problem of transatlantic commercial data transfers, however, is not nearly as important for security-related cooperation between the U.S. and Europe as Bradford suggests because the GDPR does not govern data collection and processing by the law enforcement and security services of EU member states, nor does it purport to govern cross-border sharing of information directly between those services and their U.S. counterparts. For the time being, at least, those relationships remain strong, and the increasingly volatile global landscape offers every incentive to maintain them.
Bradford also predicts that, as a result of the example set by the GDPR, the U.S. is poised to adopt meaningful regulation of commercial data harvesting within its own borders, even if only via state rather than federal legislation. In particular, she singles out California’s pioneering data protection legislation as strongly influenced by the GDPR. Neither conclusion is well founded. As Anupam Chander, Margot Kaminski, and Bill McGeveran have explained, California’s privacy law differs from the GDPR in a number of essential respects: The California law relies on an opt-out rather than an opt-in model for data sharing; it does not confer enforceable rights to limit data collection or processing; and it does not give the regulator it created real authority to require disclosures about corporate privacy practices or to impose and enforce obligations of privacy by design. And California’s law is stronger than those enacted in the many other states that have adopted industry-proposed model legislation. Meanwhile, California’s congressional delegation has helped block subsequent adoption of federal privacy legislation that would preempt some of the California law’s major provisions.
On net, then, evolving U.S.-EU relations with regard to data and data-driven activities seem likely to manifest both less uniformity and more stability than Bradford claims.
This, though, is where the second prong of the Brussels Effect is supposed to kick in, prompting transnational companies to change the ways they produce and sell goods and services for markets around the world regardless of what other legal systems require. If one must retool the manufacturing process for one’s widgets to satisfy European safety standards, the reasoning goes, it makes little sense to maintain different production lines just so that one can continue manufacturing riskier widgets for sale elsewhere. For technology products, consumer expectations about compatibility and uninterrupted cross-border operation also shape production decisions. For both of these reasons, for example, once the EU mandated a common charging standard for mobile devices, Apple chose to abandon its proprietary Lightning connector and adopt the industry-standard USB-C port on a worldwide basis.
Unlike hardware, however, services offered via digital platforms can be reconfigured for different regional or national markets at relatively low cost. This is a distinction with enormous implications for the success of European efforts to leverage domestic regulation of platform services to achieve more widespread changes in the way those services operate. Bradford confidently predicts that, because of the Brussels Effect, tech giants such as Google, Meta, and Amazon will begin to change the ways they interact with audiences outside the European Union. So far, at least, that just isn’t true. The GDPR has not produced meaningful changes in the design and implementation of privacy policies for non-Europeans. Information service providers designated as gatekeepers under the EU’s new Digital Markets Act (DMA) are implementing the DMA’s interoperability and portability requirements for European users only.
Disputes about privacy and data protection, moreover, are not the only instance in which a hypothesized Brussels Effect is failing to materialize. To take another example, Bradford’s optimistic predictions about the likely global effects of European hate speech laws are not borne out by the facts. At least as of this writing, the CJEU has not interpreted European law to require global blocking of embargoed content, as Bradford seems to suggest; instead, it ruled only that European law did not preclude Austria from attempting to use its national laws to do so. Mandates issued by individual countries are unlikely to prompt search and social media platforms to self-impose identical restrictions on content globally.
Bradford is correct to note that the large platforms block (or attempt to block) some content regardless of where on the planet it was uploaded—most notably, child sexual abuse material and live-streamed videos of terrorist and hate-related shootings in progress—but that list falls far short of the full range of material that the laws in European nations tend to cover. And even within Europe, the numbers counsel caution. Bradford notes approvingly that the largest platforms report removal of 63 percent of hate speech about which they are notified. That’s nice (though, for the moment, impossible to verify), but the threshold requirement of user notification creates a serious denominator problem for claims that such numbers are significant in any absolute sense. It is important to remember that social media are social and that some (much?) content that would violate European member state laws circulates in private groups among consenting recipients.
Turning to the U.S., Bradford is certainly right to observe that U.S. influence on global digital policy has waned and that this is—in part—the result of growing dissatisfaction with the freedom of expression rhetoric wielded by U.S. technology giants and their advocates. It’s strange that the term “neoliberalism” appears nowhere in Bradford’s description of either the U.S. digital policy regime or the Silicon Valley innovation ethos. The reader learns simply that the former is “market-driven” and exemplifies a commitment that regulation should “take a back-seat” and that the latter is countercultural and techno-optimistic. Both characterizations oversimplify considerably. As both I and others have explained, the U.S. digital policy regime is the product of decades of regulatory devolution that is ideological in origin. Neoliberal regulatory practice does not simply defer to private economic activity but, rather, actively superintends it—by defining markets, by granting essential kinds of immunity to market actors, by narrowly circumscribing ideas of unlawful conduct, and in many other ways both large and small. Presumptions in favor of the idea of permissionless innovation are woven deeply into the fabric of U.S. legal and regulatory culture, as is the notion that imposing regulatory “burdens” on designers and operators of data-driven digital processes would impair the operation of the “marketplace of ideas.” None of this is remotely countercultural; on the contrary, it exemplifies a mainstream, normalized commitment to clearing away the obstacles to private capital accumulation and private economic activity.
Inattention to the role of neoliberalism in shaping U.S. digital policy may help explain Bradford’s curious under-emphasis on the U.S. global trade agenda. In trade negotiations, the U.S. posture with regard to networked digital services has emphasized free flow of data far more heavily than freedom of speech. Until very recently, the U.S. has sought to insulate cross-border flows of data almost completely from interruptions imposed in the name of digital sovereignty. Recent developments may signal some modifications to that approach. As of this writing, however, the policy concerns expressed by those urging a course correction have been articulated in distinctly American ways that include needs for “anti-monopoly protections,” “policies to protect consumer privacy and our kids online,” and the ability to “pre-screen source code and even algorithms … for racial bias and other violations of civil liberties and rights.” There is little evidence suggesting new enthusiasm for the broader European data protection or content regulation agenda. And, so far, the set of countries holding a GDPR adequacy determination from the EU is much smaller than the set of those that have signed a bilateral or multilateral free trade agreement with the United States. Reliance on trade frameworks as mechanisms for vindicating rescoped digital sovereignty interests has other important implications; as Kristina Irion, Margot Kaminski, and Svetlana Yakovleva explain, trade dispute mechanisms are opaque, easily captured, and narrowly focused on minimizing exceptions to free-flow principles.
My knowledge of Chinese digital technology regulation is still a work in progress, and so I am mostly left to wonder what a reader with deeper expertise would make of Bradford’s summaries. It’s worth noting, though, that China is more of a pioneer in digital governance than Bradford acknowledges. Although the EU was first to finalize comprehensive AI legislation, China’s more piecemeal AI regulations contain several innovations that are being seriously debated elsewhere. These include requirements for the labeling of AI-generated content and for quality control of training data for foundation models. China’s emergent social credit system encompasses not only individuals and communities—as to which Bradford is right to note its affordances for omnipresent surveillance and authoritarian control—but also businesses. As to the latter, social credit mechanisms can and do serve purposes that are more congruent with rule-of-law ideals, reinforcing requirements such as building codes and environmental protection obligations that have proved difficult to enforce in China’s rapidly evolving economy.
More significant is that China’s strategy for digital preeminence includes important aspects to which Bradford devotes very little attention, including widely accessible payment systems, alternative credit provision, cross-border e-commerce systems, and digitalized public services. From the beginning, the Chinese technology giants focused on tapping the purchasing power of the many Chinese consumers without credit cards and credit histories, as well as both the consuming and producing potential of rural households and businesses. They are therefore well positioned to serve the many unbanked and underbanked users now coming online across the Global South. For example, both financial services and logistical support for petty capitalist production played central roles in the rise of e-commerce behemoth Alibaba Group, which now directs massive flows of goods and financial services globally. To facilitate provision of Chinese-produced goods to buyers all over the globe, China constructed special digital trade zones connected geographically and operationally to Alibaba’s logistics operations, which in turn are connected to overseas e-commerce subsidiaries such as AliExpress and Lazada and fintech subsidiaries such as Ant Financial. The Chinese party state and Chinese provincial and local authorities have devoted extensive resources to digitalizing not only identification documents but also access to essential public services, and Chinese technology firms—including especially communications and gaming giant Tencent, which also operates the WeChat and WeChat Pay apps used by nearly all Chinese citizens—have developed applications that enable those services to be accessed with relative ease.
How much should any of this affect the reader’s overall assessment of Bradford’s model and conclusions? To summarize, I have explained that the European approach to digital governance has far less extraterritorial bite than Bradford suggests; that, to the extent that the U.S. approach to digital governance is embedded in a large and widening network of reciprocal trade obligations, it may be more resilient than Bradford leads the reader to believe; and that, because of Chinese technology giants’ emphasis on financial inclusion, global e-commerce, and digitalized public services, their digital platform services enjoy competitive advantages over those of technology giants headquartered in the U.S. and Europe. Each of these conclusions has implications for the long-term stability of the three-sovereign contest that Bradford envisions. But Bradford is quite right that nothing is certain. Whether these empire-level miscalibrations ultimately net out to an equilibrium significantly different from the one she predicts is yet to be determined.
Counting Digital Empires: The Rise of India
Before placing bets, however, it is worth considering a fourth nascent digital empire, now under construction in India, that is largely absent from Bradford’s narrative. Understanding the Indian bid for digital sovereignty, in turn, requires a bit of background that is also relevant to understanding Chinese thinking about digital technologies and policy issues.
Both the Indian and the Chinese bids for digital sovereignty have decades-old roots in efforts by developing nations to craft a “new world information and communication order” responsive to their needs for both development and economic and political self-determination. As the promise of networked digital technologies coalesced around the business models of behemoth tech companies headquartered in the Global North—and as the Edward Snowden revelations about systematic U.S. communications surveillance created global shock waves—a more focused effort to counter emerging hegemonic models of digital capitalism and digital sovereignty emerged. At the 2014 NETmundial Conference in Brazil, participants from around the world called for an alternative approach to digital governance consistent with a broad, inclusive vision. Under the stewardship of the CyberBRICS project, researchers from the BRICS nations (Brazil, Russia, India, China, and South Africa) have worked systematically to advance that vision. Conceptually, sources of inspiration ranged from the well-established movement for “Third World Approaches to International Law” to the newer critique of data colonialism developed by media and communication scholars Nick Couldry and Ulises Mejias to the free and open-source software movement and other movements for participatory design in technology. (The fine volume on “Digital Sovereignty in the BRICS Countries,” edited by Luca Belli and Min Jiang and forthcoming from Cambridge University Press, provides a good introduction.)
For any number of reasons, it now seems safe to predict that a distinct BRICS internet governance regime unifying the disparate visions of China and Russia, on the one hand, with those of Brazil, India, and South Africa, on the other, won’t materialize anytime soon. But China is not the only BRICS sovereign that has resisted digital hegemony emanating from the Global North and set itself on a path toward digital empire-building. And although it has come to the table somewhat later than the other three contestants, India—now the world’s most populous nation, with more Facebook users than the U.S.—has arrived with an approach to digital governance that blends elements of the other three approaches but is also distinctly its own.
India’s bid for digital sovereignty has been motivated in part by resistance to perceived imperialism by both the U.S. and China and also complicated by domestic political shifts. In 2016, India’s Telecom Regulatory Authority rejected Facebook’s bid to offer its Free Basics service, which exempted from billed data usage a suite of apps curated by Facebook and designed with the aim of capturing the attention (and the behavioral data) of Indian audiences. The decision stressed the nonnegotiable importance of network neutrality and underscored the need to encourage homegrown innovation. For similar reasons, India has repeatedly declined to join China’s Digital Silk Road (part of its Belt and Road Initiative) on the ground that its projects require promoting the interests of Chinese companies over those of domestic firms. More recent decisions have emphasized other values and priorities. In 2021, the Ministry of Electronics and Information Technology introduced new rules requiring quick removal of content deemed to threaten Indian “sovereignty and integrity.” The Narendra Modi administration has invoked the rules repeatedly to require U.S. social media companies to remove posts critical of the government. And the ministry has banned over 300 Chinese apps, including most notably short-video juggernaut TikTok and a number of popular gaming apps, citing concerns about data sovereignty and about asserted threats to Indian national security that might result from harvesting and mining large amounts of data about Indian citizens.
But India also has systematically pursued an affirmative vision of digital sovereignty. At the core of that vision is digital public infrastructure constructed with the goal of enabling both delivery of government services and financial inclusion. The system’s building blocks include the Aadhaar system of biometric identifiers, through which the Indian federal government seeks to ensure that every citizen receives a unique identifier that can be used without a requirement of print literacy, and the India Stack, a set of APIs made available to providers of both public and private services wishing to authenticate user identities. Relying on Aadhaar, public agencies have worked toward developing ways of digitally tracking public benefits ranging from food allotments to health care provision. In ruling on a legal challenge to the Aadhaar system brought under the Indian Constitution, the Indian Supreme Court articulated a rich theory of personal privacy but also largely declined to intervene in the system’s operation. The India Stack has enabled the rapid development and spread of homegrown micropayment systems that rely on scannable QR codes, along with an assortment of other financial services designed specifically for the vast, multilingual population of Indian consumers. Elements of the Indian digital public infrastructure model are now being touted globally using platforms provided by influential transnational actors, and (at least in theory) the model may promise an alternative to full-surveillance Chinese models offered, with strings attached, as part of the Digital Silk Road.
I don’t mean to suggest that the Indian approach to digital governance is superior either in an absolute sense or in terms of competitive advantage. The Aadhaar biometric identifier system has been criticized on a variety of counts—for failing to live up to its promises of inclusivity, for excessive openness to privatization of personal information, and for excessive leakiness that has introduced new threats of fraud and identity theft. The Indian regime of content regulation, which draws inspiration from those adopted by many European countries, has proved no more successful at curbing the spread of ethnonationalist hate and harassment. To the contrary, it is routinely manipulated for ethnonationalist purposes by an incumbent and increasingly authoritarian regime seeking to bolster its own power.
My point, rather, is that the Indian approach both disrupts Bradford’s three-sovereign model and scrambles some of the categories that Bradford uses to set up her taxonomy of fundamental contrasts. Echoing the European approach, it reflects dignitarian and social democratic impulses for privacy and inclusion—but, like the Chinese approach, it facilitates authoritarian and nationalist strategies for content control. Echoing the Chinese approach, it facilitates broad-based development and financial inclusion—but, like the U.S. approach, its emphasis on rapid development of a privatized service ecosystem has fueled data security problems and enabled fraudsters to thrive. Echoing the U.S. approach, it prioritizes free-wheeling, homegrown innovation and entrepreneurship—but, like the European approach, it attempts to craft a prosocial balance of privileges and responsibilities. It is a different model for governing at scale via the digital transformation than any of the other three. At the very least, then, it suggests the need for some modifications to predictions about the shape of global struggles to define the digital future.
Giving Private Power Its Due
A final kind of challenge in a project like Bradford’s is conceptual and involves deciding what pieces of the puzzle to hold constant as one attempts to describe the many others that are in motion. Bradford situates nation-state sovereignty at the center of her project and frames it as challenged but essentially unchanged. From that perspective, technology businesses have a lesser, more constrained type of agency than (some) nation-states, and “technology” itself is something that laws and sovereignty act on, not something that alters the enterprise of governance in essential respects.
As I’ve shown in my own work, that framing is far too simple. Large-scale changes in technology and political economy have reshaped both governance challenges and governance institutions in foundational ways. And, for that reason, the exercise of charting digital futures is not and cannot be simply a project of “regulating technology.” It is more fundamentally about developing the public capacity to govern emergent patterns of economic and social activity using the capabilities that new technologies provide.
Some of the governance challenges created by networked digital technologies are challenges pertaining to communication. As Mireille Hildebrandt has explained, the legal and regulatory systems that developed over the course of the industrial era relied on fixed texts, generally applicable rules, and processes of public reason-giving—and, critically, developed theories about why those features are essential for the rule of law. The outputs of data-driven machine learning-based systems are emergent, typically unexplainable in cause-and-effect terms, and tailored to probabilistically determined user profiles. And, to borrow a turn of phrase, such systems tend to interpret externally generated interference as damage and route around it. So, for example, machine learning systems can be instructed not to use particular data points about race, ethnicity, or religion, but unless very great care is taken to train the systems differently, they will proceed to draw inferences that replicate the prohibited information and reproduce associated patterns of structural or systemic disadvantage. And, relying on population and social network data, they can easily draw inferences that substitute for the very same information that particular individuals opt not to have collected.
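The proxy-inference dynamic is easy to demonstrate. The following toy simulation is purely illustrative (my sketch, not Bradford's or Hildebrandt's, and every feature, name, and number in it is invented): a decision rule never sees a protected attribute, yet it reproduces a group-level disparity through a correlated neighborhood feature.

```python
# Illustrative only: a decision rule that never observes a protected
# attribute can still reproduce group disparities via a correlated proxy.
# All features and rates here are invented for demonstration purposes.
import random

random.seed(0)

# Synthetic population: group membership (the "protected attribute",
# withheld from the decision rule) correlates with neighborhood, which
# the rule *can* see. Group members live in neighborhood "A" 90 percent
# of the time; non-members, only 10 percent of the time.
population = []
for _ in range(10_000):
    group = random.random() < 0.5
    neighborhood = "A" if (random.random() < 0.9) == group else "B"
    population.append((group, neighborhood))

# A "model" that conditions only on neighborhood, approving at
# historical rates that (reflecting past disadvantage) are lower in "A".
approval_rate = {"A": 0.3, "B": 0.7}
approved = [
    (group, random.random() < approval_rate[neighborhood])
    for group, neighborhood in population
]

def rate(flag: bool) -> float:
    """Approval rate for one group, computed from the outcomes."""
    outcomes = [ok for group, ok in approved if group == flag]
    return sum(outcomes) / len(outcomes)

# The protected attribute was never an input, yet outcomes diverge.
print(f"approval rate, group members: {rate(True):.2f}")
print(f"approval rate, non-members:   {rate(False):.2f}")
```

Dropping the prohibited column changes nothing here, because the neighborhood feature carries nearly the same information; that is the sense in which such systems "route around" naive data-removal mandates.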
Other governance challenges are structural. The disaggregated data architectures and processes that make up current digital ecosystems, and that will underpin the coming artificial intelligence (AI) era, do not present isolated sites of accountability. Understanding and superintending the operations of contemporary digital services and business models requires the ability to scrutinize globally distributed data, labor, and algorithmic supply chains. These attributes, moreover, make disaggregated data architectures powerfully resistant to localized assertions of oversight authority.
Simply mandating greater respect for “rights” or restored “competition” in “markets” raises genuinely difficult questions about how such mandates are to be put into effect. Networked, data-driven, machine learning processes that operate at scale based on population inferences are resistant to localized anomalies—including assertions of fundamental rights by or on behalf of individual claimants who sit at the endpoints of such processes. By this I don’t mean to suggest that thick theories of fundamental rights are unimportant. The point is that reorienting the design and operation of such architectures to achieve results more consonant with a rights-based vision will require different institutional mechanisms for oversight and intervention. A “market,” meanwhile, isn’t a generic term for any economic system that privileges private, self-interested activity. Markets are systems in which supply and demand are mediated, more or less automatically, by more or less arms-length interactions among parties whose reservation prices, plans, and goals are more or less private. Self-evidently, these are questions of degree. In the real world, market participants often demand informational mechanisms for mitigating downside risks. But digital platforms, which use data-intensive, fine-grained, algorithmic auction methods to structure and tune interactions among their users, are inherently non-market allocation mechanisms. Reorienting their design to achieve results more consistent with notions of fairness may be achievable, but traditional competition remedies that presume the underlying existence of markets won’t get us there.
As I have explained elsewhere, the institutional design challenges that complicate governance of networked digital technologies are also more difficult because of a parallel “control revolution” in the structure and operation of legal institutions.
Some of the institutional design challenges now confronting law- and policymakers involve problems of scale in dispute resolution. Court systems, for example, cannot plausibly oversee every content takedown request, every data access request, or every consumer or third-party reseller dispute. Bradford takes up this issue relatively late in the book and, perhaps for that reason, largely confines herself to observing that digital platform firms are essential participants and partners in governance. That’s true, of course, but framing the issue that way suggests that the principal problem now confronting sovereign states is how to extract more enforcement using the same tools. As a result of the capabilities for fine-grained, scaled-up management that information technologies provide, processes of dispute resolution have already been shape-shifting for decades. New informational capabilities have facilitated both widespread outsourcing of small, low-dollar-value disputes (in areas such as consumer satisfaction and human resources) and new organizational mechanisms for producing and managing settlements (in consumer protection and mass tort litigation) in ways that do not produce citable opinions articulating rules about proper conduct and that are only nominally supervised by courts. The questions that policymakers should be considering involve how to structure and oversee scaled-up dispute resolution in ways sufficiently responsive to public values.
Scaling up dispute resolution, however, is far from the only important problem in digital governance. In some contexts—with debates about content moderation as perhaps the most important example—the dispute resolution lens is an active distraction because it tends to suggest that the most important problems involve how to effectuate post hoc removal of particular items of content accurately and fairly. This ignores all the ways in which data-driven, machine learning systems are deliberately tuned and optimized to serve particular values. And it papers over the fact that the construction, expansion, and maintenance of disaggregated data architectures and supply chains are deliberate acts.
Designing new institutional mechanisms for regulatory oversight of networked digital processes and business models is, therefore, essential. But that project must reckon with the fact that regulatory toolkits also have been shape-shifting for decades. New modes of interacting with highly information-intensive industries have emerged that deemphasize traditional rulemaking and enforcement proceedings; that foreground compliance rubrics, technical standards, and other managerial control practices; and that rely on an array of third-party auditors, systems vendors, and other compliance intermediaries. In practice, this has tended to mean devolving considerable self-regulatory authority to technology companies without effective public oversight. As Ari Waldman has documented, such self-regulatory processes are often performative, working principally to legitimate existing practices that serve industry interests. Regulators wanting to engage in more active oversight must also confront the fact that, although most dominant platforms operating in the U.S. and Europe produce a continual stream of “transparency reports” purporting to offer windows into certain governance matters, they carefully disclose only the most basic and superficial information about how their disaggregated data architectures and algorithmic supply chains work. Conducting effective regulatory oversight of digital technologies and businesses requires processes of regulatory innovation designed to create new, more effective capabilities for monitoring, policy formulation, and enforcement.
Still other dimensions of the problem concern the relationships between regulation, capital markets, and the activities and business models of giant technology enterprises. As noted above, Bradford understands the importance of these issues. At the same time, she fails to wrestle with the extent to which those activities and business models are fundamentally entangled with law in ways that necessitate moving the public conversation beyond an assumed inverse relationship between “regulation” and “innovation.” Regulation and innovation interact in complex and generative ways. Regulatory mandates can both catalyze innovation and channel it in particular directions—which is why, for example, there are now major research initiatives in areas such as clean energy, climate risk mitigation, and sustainable urban planning, due in no small part to Europe’s ambitious approach to sustainability mandates. As Bradford’s colleague Katharina Pistor has explained, the legal rules and institutions that govern ownership and transfer of capital also shape the workings of economic activity and economic power. There surely are connections to be drawn between the Silicon Valley ethos of “permissionless innovation,” the U.S. venture capital and private equity ecosystems, and the corporate governance structures selected by U.S.-based global technology firms, which tend to reserve outsized voting power, and hence continuing control, for founding “innovators” and venture investors. But the lessons may not be those that Bradford seems to assume.
* * *
I would have loved to know what Bradford makes of these arguments, which seem at least worth engaging. Of course, this request is not entirely fair, because it asks for additional complexity from a text that is already quite long. But questions about the changing shape of digital governance are important, and paying more careful attention to them could have helped Bradford work through her analysis in several places where it seems to bog down.
Forecasting the digital future is a fiendishly difficult exercise, to which each of us brings our own expertise and our own preconceptions and blind spots. I have identified certain aspects of that challenge that I believe require more detailed consideration, but I also have no crystal ball. I think Bradford is quite right to worry both generally about the risks of economic and technological fragmentation and more specifically about the human rights threats created by certain approaches to digital governance. I’m grateful to Bradford and to “Digital Empires” for prompting me to think more carefully about the large-scale structural dynamics of the ongoing contests over digital sovereignty and digital supremacy.
My thanks to Chinmayi Arun, Laura Donohue, Mark Jia, Smitha Krishna Prasad, Milton Regan, Greg Shaffer, and Ari Waldman for their very helpful comments.
Julie E. Cohen is the Mark Claster Mamolen Professor of Law and Technology and a faculty co-director of the Institute for Technology Law and Policy at Georgetown Law. Published courtesy of Lawfare.