The connection between the timing and the locus of a regulatory intervention should excite considerable interest in the study of ex ante regulation. To illustrate this argument, we draw on the example of the precautionary principle. It emerges that timing is important when legislation is being drafted. Time, however, is not the only relevant variable in decision-making: once the principle is ripe for application, both temporal and locus-of-regulation considerations matter. For instance, a question that policymakers should address is when it is the right time to regulate new or emerging technologies and at which governance level. Addressing such questions would allow them to strike a balance between facilitating the development of new technologies and addressing the legitimate concerns of their citizens.

By Alberto Quintavalla & Leonie Reins[1]

 

I. THE TIME AND LOCUS OF REGULATION

Two variables serve as the fulcra around which most academic studies revolve. The two are time and locus. Studies on law and governance are no exception. Time affects the effectiveness of the regulatory frameworks that governments design. Can old laws resolve new issues effectively? Several legal quandaries turn on this very question. The debate on the long-term applicability of regulatory provisions is a salient example, as is the controversy about sunset clauses, a cause célèbre in many an academic circle.[2] A regulatory framework that features such clauses would, the argument runs, be aligned more closely with the preferences of citizens without, however, running the risk of becoming anachronistic.[3]

Locus is the second key variable in the study of ex ante regulation. Generally, the term “locus” refers to the geographical scope of a given regulatory framework as well as to the level at which it is adopted. The questions that commonly arise are whether adoption at the local, the national, or the supranational level would be most effective and who should be bound by the regulatory framework in question. This type of inquiry is particularly important given the ever-expanding set of jurisdictions characterized by multi-level governance. Moreover, the emergence of transnational issues in the contemporary world, such as environmental challenges and the growing influence of multinationals, makes this discussion particularly prominent.

The locus at which regulation is adopted is of critical importance to its effectiveness. Indeed, while regulations that are adopted at the global or the supranational level may be more effective in addressing particular transboundary or global problems, such as climate change, they are arguably also further removed from the citizens that are subject to them. Questions may also arise about the enforcement of globally negotiated norms because, absent an authority with powers of enforcement, issues of free riding are bound to occur and recur. Their recurrence may prompt states to adopt unilateral measures that are intended to induce third parties to comply.[4] The Carbon Border Adjustment Mechanism, which seeks to shield EU industries from the carbon leakage that may result from the failure of the signatories to the Paris Agreement to abide by their commitments, is a case in point.[5]

Time and locus are not discussed in the same depth in all of the legal disciplines. There is an imbalance between the two in the domain of technological regulation. Scholars in that domain are concerned mainly with timing: temporal considerations are thought to be more pressing than considerations of locus.[6] The explanations for this imbalance range from the entrenchment of social practices to the perceived dangers that lurk behind novel technologies. In the present day, an innovation can dismantle established social and legal routines in the blink of an eye. For instance, the rapid emergence of artificial intelligence-related applications such as ChatGPT means that regulators perpetually lag behind developments. By the time legislation is promulgated to control one application of AI, dozens of others may have entered the social domain, and the regulatory framework may already be outdated.[7] Accordingly, time is becoming ever more important. When is the right time to regulate new or emerging technologies? Is a technology-specific approach to regulation desirable, or should general principles predominate?

 

II. THE ISSUE OF TIMING IN THE REGULATION OF TECHNOLOGY

The Collingridge dilemma is one of the most widely cited propositions of social science. It has to do with control over new technologies: “[a]ttempting to control a technology is difficult, and not rarely impossible, because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development. By the time these consequences are apparent, control has become costly and slow.”[8] In other words, timing is decisive for the regulation of new or emerging technologies.

The Collingridge dilemma originated in the social sciences. In the legal context, the question that it poses to practitioners concerns the point in time at which technological developments should be subjected to regulatory interventions. Early action is difficult because “insufficient, conflicting or confusing data about the nature and impact of the new technology” may render the intervention premature and ineffective.[9] When a technology is nascent, its adverse impact on society or the environment is neither clear nor even predictable. It may therefore be impossible to formulate an efficient regulatory regime at this point. If, however, the regulatory intervention arrives too late, the technology is liable to have become entrenched in society. At that stage, “influence and change become correspondingly more difficult [slow and expensive][10] to effect.”[11]

Another example of the salience of the time factor originates from the long-running debate about the need to develop a separate ad hoc body of law to regulate matters that the legislators of yore failed to anticipate. This issue was central to the academic discussion of the so-called “law of the horse.” The expression dates back to the mid-1990s, when Judge Easterbrook drew a parallel between the case for sectoral regulation of the then-incipient cyberspace and the case for creating a law of the horse.[12] According to Judge Easterbrook, the ad hoc regulation of cyberspace would have been undesirable.[13] Lessig challenged Easterbrook’s theory, and the course of events proved Lessig right.[14] Time was at the core of the discussion: the argument was not that cyberspace should not be regulated specifically but simply that the time was not yet ripe for ex ante regulation.

This theme features in other debates in technology regulation in which the link to timing is neither obvious nor particularly intimate. The debate about the type of regulation that should be applied to technology supplies an apposite example. Scholars distinguish between uncertain and risky technologies.[15] When a technology is uncertain, humans cannot anticipate the consequences of its deployment by attributing numerical probabilities to various eventualities; when a technology is merely risky, the risks that it entails are calculable.[16]

Despite being somewhat blurry, this distinction can be useful to policymakers. By examining types of technologies, policymakers can identify the regulatory bodies that are likely to be most capable of observing the impact of an innovation and, if necessary, of regulating its use. Uncertain technologies call for action on the part of legislators, whereas risky ones can be addressed more effectively by the courts. This is so because unpredictable consequences entail decisions that turn on subjective preferences, and the courts lack both the legitimacy and the competence to strike such balances.[17] Risky, that is, calculable, consequences, conversely, can be addressed through extensions of extant regulations, a task at which the judiciary evidently excels.[18]

In this framework, time plays a secondary, yet still prominent, role. Time is essential for the conversion of an uncertain technology into a risky one. Society collects observations that, over time, enable it to convert uncertainty into risk. This accumulation of information also changes the attitudes of legislators towards the regulation of technologies. This is so because an uncertain technology that eventually becomes risky gradually comes to require more attention from the judiciary than from the legislature. Accordingly, the need to regulate from scratch becomes less pressing over time.

 

III. EX ANTE REGULATION AND THE NEED TO ACCOUNT FOR LOCUS CONSIDERATIONS

The regulation of technological innovation calls for caution. Ex ante regulation occupies a prominent role in the legal literature. The precautionary principle, which enables intervention to occur at an early stage of the innovative process, is a crystal-clear example. The principle provides a general framework for those who make law and policy when they decide to intervene in a given domain. The framework, moreover, is robust to many uncertainties. At the same time, the precautionary principle does not imply that when one is in doubt, one should opt out.[19]

Despite its laudable objective of generating stronger protections for citizens and the environment, the precautionary principle has been said to be subject to a number of important limitations. For instance, Sunstein argues that the precautionary principle is paralyzing because risks are everywhere and the principle, taken literally, forbids every course of action.[20] On this view, the application of the principle is limited to the prioritization and allocation of risks: its content is vague and it therefore provides little effective guidance to the policymaker.[21] Another argument against its use is that its application may obstruct innovation and hinder progress in practice.[22]

Yet another criticism, which is central to the present purposes, has to do with locus, a variable that is mostly neglected in the literature on technology regulation. As things stand, the applicability of the precautionary principle is circumscribed, and its effectiveness is limited. Its legal force is confined to the specific jurisdictions in which it has been adopted; even there, its application and definition are sources of controversy. In the EU, for example, the precautionary principle is enshrined in the primary law on the environment.[23] By virtue of the integration principle,[24] it is also applicable to other policy areas such as trade, finance, agriculture, industrial policy, and the like. However, the guidance on its application is restricted to a non-binding Communication, which does not even attempt to define it.[25]

In general, few jurisdictions have implemented the precautionary principle, and it is not considered to be a principle of customary international law.[26] In consequence, the risks that could stem from the deployment of an innovation are neglected in a large number of polities. The influence of the precautionary principle thus varies, which is undesirable for two principal reasons. First, only a handful of individuals are protected. Second, and even worse, the individuals in question may suffer harm regardless of the protection that the precautionary principle affords to them.

An illustration may help. Let us suppose that the deployment of a ground-breaking innovation is harmful to the environment because it increases greenhouse gas emissions by a significant margin. The EU might then invoke the precautionary principle and regulate the innovation. Other jurisdictions may refrain from acting thus, either because scientific consensus is (inevitably) lacking at the point in time at which the innovation is rolled out to market or merely to boost corporate profits. The innovation, then, is deployed without reservations in some jurisdictions, while others either ban it or introduce novel regulatory requirements in order to mitigate its harmful effects to the environment. The consequences of such a development would be dire for the citizenry of the laissez-faire jurisdictions and even worse for those who live in the EU. The latter must bear the negative consequences of the deployment of the technology while reaping none of the benefits, be they pecuniary or otherwise.

The benefits in question accrue to the jurisdiction that does not regulate; the harms are distributed evenly across the globe. To adopt the economic jargon that is currently in vogue, this type of situation materializes whenever there is a possible harm to a global public good such as climate change mitigation.[27] In short, ex ante regulation and, even more specifically, the adoption of the precautionary principle in a single jurisdiction are not entirely effective in preventing harmful activities.

One corollary of the foregoing is that the locus of regulation matters. Therefore, there is at least an arguable case for elevating the precautionary principle to a more global level and for integrating it into customary international law, a proposal that has been ventilated for decades. What is clear at present is that the regulatory approaches that are adopted in various jurisdictions and the cultural norms that animate them differ widely. In consequence, no agreement has been reached on the locus of the precautionary principle.

Another issue that pertains to the locus of regulation is that, as mentioned previously, no institution has the authority to enforce rules on a global level. There is no world police. Responsibility for the enforcement of global norms ultimately rests with the states that negotiate them. Accordingly, there is always a risk of discrepancies between commitments that are made or obligations that are assumed on the global level and their municipal enforcement. The Nationally Determined Contributions of the Paris Agreement are a vivid illustration.[28] Although all of the parties to the agreement decided to limit their greenhouse gas emissions to certain levels by certain points in time, no-one except the signatories is authorized to enforce compliance with these undertakings.

 

IV. CONCLUSION

As noted at the outset, timing and the locus of regulation are linked inextricably in the context of ex ante regulation. We used the precautionary principle as an illustration in order to outline the problem. None of what we wrote should be taken to imply that the principle ought to be discarded. The precautionary principle can be effective when time is of the essence. The objective should be to integrate locus-of-regulation considerations into its application. It is important to determine when it is appropriate for a certain technology to be regulated at a certain level of governance.

A more fundamental question is whether technology should be regulated globally through uniform and harmonized laws and regulations or whether it would be more desirable to enable different polities to tailor their approaches to the particularities of specific technologies. Global norms and rules may be desirable from the standpoint of efficiency, but it is important not to overlook local preferences or considerations. The legitimacy of the resultant regulations is also a problem of considerable import. The further the locus of regulation is from the populace that is subject to it, or which is intended to benefit from its protection, the stronger the resistance that the deployment of the technology is likely to induce at the local level. When it comes to problems with global dimensions, of which climate change is but one, the rollout of novel and disruptive technologies can produce radical and sudden change at the local level. In such contexts, it is imperative that regulators strike an appropriate balance between legitimacy and effectiveness. Ex ante regulation should not serve primarily to hamper the development of new technologies, but the legitimate concerns of the citizens who are most directly exposed to the negative consequences must never be ignored.


[1] Assistant Professor, Erasmus School of Law, Erasmus University Rotterdam, and Full Professor, Erasmus School of Law, Erasmus University Rotterdam, respectively.

[2] Sofia Ranchordás, Sunset Clauses and Experimental Regulations: Blessing or Curse for Legal Certainty?, 36(1) Statute Law Review 28 (2015); Antonios E. Kouroutakis, Disruptive Innovation and Sunset Clauses: The Case of Uber and Other On Demand Transportation Networks, in Time, Law, and Change: An Interdisciplinary Study 291 (Sofia Ranchordás & Yaniv Roznai eds., 2020).

[3] Ranchordás, id.

[4] Philippe Sands, Principles of International Environmental Law (4th ed. 2018).

[5] Proposal for a Regulation of the European Parliament and of the Council establishing a carbon border adjustment mechanism, Brussels, 14.7.2021, COM(2021) 564 final, 2021/0214(COD).

[6] See, e.g., Lyria Bennett Moses & Monika Zalnieriute, Law and Technology in the Dimension of Time, in Time, Law, and Change: An Interdisciplinary Study 303 (Sofia Ranchordás & Yaniv Roznai eds., 2020); Lyria Bennett Moses, Recurring Dilemmas: The Law’s Race to Keep Up with Technological Change, University of Illinois Journal of Law, Technology and Policy 239 (2007); Gary E. Marchant, The Growing Gap Between Emerging Technologies and the Law, in The Growing Gap between Emerging Technologies and Legal-Ethical Oversight: The Pacing Problem 19 (Gary E. Marchant, Braden Allenby & Joseph Herkert eds., 2011).

[7] Ronald Leenes, Regulating new technologies in times of change, in Regulating New Technologies in Uncertain Times 3 (Leonie Reins ed., 2019).

[8] D. Collingridge, The Social Control of Technology 19 (1980).

[9] Graeme Laurie, Shawn H.E. Harmon & Fabiana Arzuaga, Foresighting Futures: Law, New Technologies, and the Challenges of Regulating for Uncertainty, 4 Law, Innovation and Technology 1, 5f (2012).

[10] Lyria Bennett Moses, How to Think about Law, Regulation and Technology: Problems with ‘Technology’ as a Regulatory Target, 5 Law, Innovation and Technology 1, 8 (2013).

[11] Laurie, Harmon & Arzuaga, supra note 9, at 5f.

[12] Frank H. Easterbrook, Cyberspace and the Law of the Horse, U. Chi. Legal F. 207 (1996).

[13] Id.

[14] Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113(2) Harvard Law Review 501 (1999).

[15] Marta K. Kołacz, Alberto Quintavalla & Orlin Yalnazov, Who Should Regulate Disruptive Technology?, 10(1) European Journal of Risk Regulation 4 (2019).

[16] The distinction builds on the difference between risk and uncertainty made by Knight. See Frank Knight, Risk, Uncertainty, and Profit (1921).

[17] Kołacz et al., supra note 15.

[18] Id.

[19] Geert Van Calster, Diana Megan Bowman & Joel D’Silva, ‘Trust me, I’m a Regulator’: the (In)adequacy of EU Legislative Instruments for Three Nanotechnologies Categories, in Dimensions of Technology Regulation 207, 230 (Morag Goodwin, Bert-Jan Koops & Ronald Leenes, eds., 2010).

[20] Cass R. Sunstein, Laws of Fear: Beyond the Precautionary Principle 26f (2005).

[21] Id.

[22] Daniel Castro & Michael McLaughlin, Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence (Information Technology and Innovation Foundation 2019). For an opposing view, see Oliver Todt & José Luis Luján, Analyzing Precautionary Regulation: Do Precaution, Science, and Innovation Go Together?, 34(12) Risk Analysis 2163 (2014).

[23] Article 191(2) Treaty on the Functioning of the European Union.

[24] Article 11 Treaty on the Functioning of the European Union.

[25] European Commission, Communication from the Commission on the precautionary principle, Brussels, 2.2.2000, COM(2000) 1.

[26] David Freestone & Ellen Hey, The Precautionary Principle and International Law: The Challenge of Implementation (1996).

[27] William D. Nordhaus, Paul Samuelson and Global Public Goods, in Samuelsonian Economics and the Twenty-First Century 88 (Michael Szenberg, Lall Ramrattan & Aron A. Gottesman eds., 2006).

[28] Article 4(2) of the Paris Agreement to the United Nations Framework Convention on Climate Change, Dec. 12, 2015, T.I.A.S. No. 16-1104.