New technologies bring with them many promises, but also a series of new problems. Even though these problems are new, they are not unlike the types of problems that regulators have long addressed in other contexts. The lessons from regulation in the past can thus guide regulatory efforts today. Regulators must focus on understanding the problems they seek to address and the causal pathways that lead to these problems. Then they must undertake efforts to shape the behavior of those in industry so that private sector managers focus on their technologies’ problems and take actions to interrupt the causal pathways. This means that regulatory organizations need to strengthen their own technological capacities; however, they need most of all to build their human capital. Successful regulation of technological innovation rests with top quality people who possess the background and skills needed to understand new technologies and their problems.

By Cary Coglianese1

 

Technology brings with it great promise for improving the quality of life. But it can create problems too. And when it does, society usually turns to regulators for help.

Although many of the problems with today’s newest technologies are themselves new, they still have much in common with the types of problems that regulators have long addressed. Moreover, even in this era of new tech, the main strategies available to regulators in the past will generally remain the same strategies available to them today. Regulators will continue to need to focus on understanding problems and the pathways that lead to them so that they can take action to shape the behavior of those in industry to avoid or reduce the problems that technology creates.

Most of all, regulatory agencies need to strengthen their organizational capacity to oversee new tech firms vigilantly and efficiently. Toward this end, they need to strengthen their own technological capacities. But most important of all, they need, perhaps somewhat counterintuitively, to focus on building the capacity of their people. The key to the successful regulation of technology is to find, train, and retain top quality people to fill the ranks of regulatory agencies, people who have the background and skills needed to understand the technologies they oversee and to regulate them effectively.

 

I. NEW TECH’S “PROBLEM” PROBLEM

Traditionally, the problems that regulators address have been defined in terms of market failures, such as imperfect competition, insufficient consumer information, and harmful spillovers. In addition, regulatory problems emanate from other normative concerns, such as fairness and equity. The problems created by technology still tend to fit within these longstanding categories of regulatory concern about market failures and other social values.2 As a result, the lessons learned in the past from both regulatory practice and scholarship can offer insight about overcoming the regulatory challenges created by technology today.

Yet one of the major challenges today stems from the diversity and dynamism inherent in an era of rapid innovation in technology and its application. The problems with today’s technologies are themselves highly varied, changing, and often ill-defined.

“New tech,” after all, is not a single, homogeneous product or process. It comprises a broad range of distinct technologies and applications, each of which may be transforming economic transactions and other activity in its own way — and each of which comes with its own social and economic concerns.3 This variability might be considered the “problem” problem with new tech.

You can choose your own label, but innovations today constitute what has been variously called a new “digital economy,”4 “networked economy,”5 “sharing economy,”6 “platform economy,”7 “optimizing economy,”8 or even “zero marginal cost economy.”9 The range of innovations today is stunningly broad, including cryptocurrency, artificial intelligence, social media, fintech, gig labor, autonomous vehicles, online retail, bioengineering, the internet of things, precision medicine, biometric identification, and more.

As varied as they are, today’s technologies admittedly bear certain common threads. To different degrees and in different ways, they have all been made possible by advances in digital computing. These advances, for example, allow for the processing of large quantities of data using powerful algorithms that can be highly effective at finding patterns in data — often at remarkable speeds. The analysis of big data can allow for existing tasks to be automated, distributed, or organized in new ways, and these new techniques allow for altogether new forms of economic and social activity.

But from the standpoint of what is needed to regulate new tech, these broad commonalities will rarely be enough to bring these technologies under a common, unified regulatory strategy. The heterogeneity and dynamic nature of new tech make for a diverse, and at times vaguely defined, set of problems to be solved.

Consider that computer scientists and statisticians, for example, do not even always agree on precisely what they mean by terms such as “artificial intelligence” and “machine learning.” Even when they agree on the scope of these terms, what travels under their banners can be extraordinarily varied: distinct categories of supervised, unsupervised, semi-supervised, and reinforcement learning algorithms, with many different types of algorithms and data architectures falling within each of these categories.

Moreover, although it is true that a certain broad set of concerns with machine-learning algorithms has been commonly characterized in terms of fairness, accountability, transparency, and ethics, how these general concerns manifest themselves, and exactly how they should be operationalized in specific contexts, have yet to receive any widely accepted, precise definition.

The resolution of the problem definition question for new technologies will undoubtedly vary widely from application to application. The regulatory problems raised by an algorithm used in a voice activation function in a smartphone will differ from those presented by an algorithm contained in life-support equipment used by hospitals. And these problems will differ altogether from the problems created by algorithms used in social media platforms. Even when it comes just to social media, the range of problems is highly diverse, including concerns over privacy intrusions, the propagation of misinformation, the facilitation of hate speech and cyberbullying, and various ill effects on children and teens.10

In Europe, there appears to be some effort to recognize such differences, as the EU’s proposed regulation on artificial intelligence distinguishes between high-risk and low-risk uses of the technology.11 But risk itself can be a slippery notion.12 Even when understood squarely as the probability of harm, the probabilities and the harms are often not yet clearly understood—an inherent problem with anything new. Even when the harms are known, they can vary widely across different applications. The harms that can arise from fintech, for example, are hardly the same kind of potential harms presented by precision medicine, even when they both are driven by machine-learning algorithms.

Moreover, with most types of regulation, risks are only part of the equation when it comes to defining the regulatory problem. The risks of new tech need to be considered in light of the benefits of these technologies. Autonomous vehicles, for example, will present risks of accidents, some of which might not have occurred with human drivers; however, autonomous transportation also promises to reduce the overall level of accidents and to decrease energy usage. Regulators need to take account of all these effects—the bad and the good.

Other technologies promise improvements too, even while they also create other potential side effects or spillovers. Part of the process of problem definition demands some appreciation for how tradeoffs should be made, such that a sufficient reduction in the harms from new technologies can be achieved without unduly undermining the beneficial effects of these innovations.

These are tough issues that, to be sure, have long vexed regulators in other settings. What is distinctively difficult about the regulatory challenges related to new tech, though, is that the definitions of the ultimate problems remain unsettled, if not still changing as technology changes. And those problems are highly varied. Regulating new tech means not merely recognizing that a one-size-fits-all regulatory solution will prove elusive; rather, it demands acknowledging that the regulatory problems themselves are varied and changing, both across and within different technologies and applications.

What we might consider new tech’s “problem” problem, then, is simply the fact that regulators face a plethora of diverse problems and that societal expectations about regulatory goals are often still emerging at the same time as new tech continues to evolve, with too little guidance over priorities and tradeoffs. Some of the problems with new tech also cut across existing regulatory jurisdictions and even at times may fail to fall within the ambit of any current regulatory body’s authority. And for many new tech problems, there exists too little understanding of the causes of these problems or of the potential for unanticipated consequences from regulation itself.

 

II. SOLVING NEW TECH PROBLEMS

The heterogeneity and dynamism of new technologies do not mean that nothing can or should be done today to regulate new and emerging technologies. Problems need not be defined permanently, fully, or with complete precision for government to intervene in markets. But the diverse, changing nature of new tech’s problems does certainly pose challenges for regulators, and ultimately it may drive their selection of regulatory strategies. The strategies that have proven workable and effective for older technologies and more static, better-studied sectors are not likely to work nearly as well for new tech.

A. Markets as Regulators?

One response to varied, and even vague, conceptions of new tech’s problems would be to seek to leverage market forces. Rather than having a government regulator define the problems with new tech and then put in place regulations to solve them all, the basic regulatory function could be left to consumers, who could pressure firms to reduce potential harms. Consumers could freely choose, from among competing firms and products, those that they think best address potential harms.

The desire to leverage market forces is certainly part of the impetus behind calls for greater antitrust scrutiny of big tech firms today.13 The thinking is that, if companies such as Amazon, Apple, Facebook, and Google faced more vigorous competition, then they might do more to protect consumers’ data or guard against other social and economic harms arising from their tech products and services.

This way of thinking certainly has some merit. Monopolists have less reason to deliver everything that consumers want. Market pressures from consumers and investors, on the other hand, can indeed lead companies to reduce certain types of problems that concern both consumers and regulators.14 And in some instances, self-regulation or “soft law” professional norms may well help moderate firm behavior.15

Yet in the face of genuine market failures or other regulatory problems, there seems little reason to be optimistic that market pressures by themselves can entirely eliminate the need for regulatory interventions.16

For one thing, for competitive pressures to work, the market actors — such as consumers and investors — need relevant and credible information on which to base their decisions. And yet information asymmetries — a classic market failure problem — surely exist with new technologies and will necessitate regulatory intervention to ensure, if nothing else, adequate and accurate disclosure of information to consumers and investors. Determining exactly what information needs to be disclosed, and then auditing to make sure disclosed information is accurate, will demand that regulators define problems clearly and assess how well disclosed information captures those problems.

But in addition, there is little reason to think that just the disclosure of information will always drive new tech firms to design and deploy their products and services in a sufficiently socially responsible manner. After all, with respect to other problems of information asymmetries, information disclosure is often not enough. Many consumers do not read the fine print or otherwise pay attention to the compelled disclosure of information — even when the disclosure is simple and readily available.17 With respect to modern technology, the relevant disclosures might well need to be complex or technical, making it difficult for consumers to base their decisions on the information. The regulation of pharmaceuticals, for example, is justified as a solution to an information asymmetry problem, but it does not rest solely on the disclosure of information. Instead, an entire system has been developed to test drugs for safety and efficacy that essentially relies on sophisticated regulators and their advisors to stand in for consumers.

Moreover, even if consumers did act on complete information, a competitive marketplace is not likely to prove sufficient to achieve the socially optimal resolution of all the problems with big tech. For example, when these problems are ones of true externalities — such as, say, with systemic risks to the economy that might conceivably be created by certain types of algorithmic transactions, cryptocurrencies, or fintech products — then by definition consumers are not going to put sufficient pressure on companies. In short, because a regulatory problem is inherently one that markets by themselves will not solve adequately, some kind of regulatory intervention will likely be needed even in a more competitive tech environment.

B. The Problem-Pathway Framework

A regulatory intervention seeks to change the behavior of firms and their managers in ways that reduce the targeted problems. In seeking to shape the behavior of those who design and deploy new technologies, regulators can certainly take advantage of new technologies themselves to improve their work.18 But even with the use of automated forms of regulatory oversight, regulators will still need to rely on the strategies upon which regulators have drawn in the past for shaping human behavior — although with some different emphases.

These strategies can be distilled to their essence. By either commanding action or results, regulators can seek to orient the behavior of regulated individuals and entities toward either (1) solving an ultimate problem themselves, or (2) adopting behavior that will interrupt specific causal pathways that lead to an ultimate problem.19

The first of these approaches demands, at a minimum, that the regulator be able to define a problem with sufficient clarity or know that it has arisen and caused someone harm. The second demands both clarity about the problem and a sound understanding of its causes.  By understanding the causes of problems, the regulator can identify the major pathways that lead to their generation and then impose, and monitor compliance with, rules demanding actions or results aimed at blocking off those pathways.

Take, as a simple example, the problem of injuries and fatalities from automobile accidents. The first approach focuses on the accidents themselves—such as by imposing an overall obligation on drivers to drive safely and holding them liable when they cause injuries to others. The second approach comprises various vehicle safety equipment standards and traffic laws, such as speed limits and stop signs, that can block the pathways leading to accidents and injuries in the first place.

The dichotomy between regulations directing attention at ultimate problems versus those directed at pathways to the ultimate problems helps reveal the basic strategies available to regulators in an era of new tech. These are the same strategies that have long been deployed by regulators; they are not necessarily mutually exclusive and can be combined to address the same problem or the different problems that technologies create.20 And as in any regulatory domain, and with respect to any regulatory problem, each of these strategies will have both advantages and disadvantages, especially relative to the others.

In the case of new tech, regulatory strategies that mandate action or results along specific pathways may be the least appealing option, simply because these pathways are still being understood and are likely changing as technology changes. Moreover, too much interference on the pathways may also risk stifling technological innovation, which could have its own ill effects.

C. Problem-Based Liability

A natural starting point, then, would be simply to impose liability on tech firms when problems develop from their technology—just as negligent drivers are held liable when they injure others. This is one of the oldest strategies for shaping behavior and solving regulatory problems, as it can help focus firms’ attention on avoiding an ultimate problem that causes harm. Such liability can be imposed either through general products liability rules or through what regulators sometimes call general duty clauses within legal codes.21

No matter the source of liability, under this strategy tech firms would have an obligation to avoid an ultimate problem, whether fatalities, the loss of funds, or other harms. When the ultimate problem manifests itself due to a firm’s actions (or inactions), the firm needs either to compensate for the harm, pay a penalty, or both. These financial costs can be imposed on the firm automatically whenever the firm causes harm, or only when the harm arises because the firm acted negligently by failing to exercise reasonable care. Either way, because firms know that they can be held liable after the fact when their products or services cause harm, they have some incentive to focus on avoiding that harm — a greater incentive than if they were not subject to the background risk of problem-based liability.

Of course, many new tech firms are in fact already exposed to problem-based liability. This shows how liability is a relatively tractable strategy from the standpoint of the regulator, for the problem need be stated in only the most general of terms. Once harm occurs, the problem has not only manifested but also practically defined itself — rather than the regulator needing to do so ex ante. As a result, in terms of feasibility for the government, the notion of ex post liability would seem a viable strategy to deploy in the context of new tech, where problems are varied and changing.

Businesses often balk at being held to such liability and they would certainly prefer to avoid it. Indeed, social media and other platform companies have successfully won immunity from much of this liability under Section 230 of the federal Communications Act.22 Others have suggested that autonomous vehicle manufacturers should similarly escape from normal liability rules.23 But as much as businesses may bristle at being held accountable after harms do occur, there is also the argument that such liability may actually treat them too softly.

Liability does have its limits as a regulatory strategy. It ultimately takes on faith that firms’ managers will internalize the possibility of being held liable at some future time and then will be motivated to change their firms’ current behavior in ways that sufficiently address the underlying regulatory problem. But for several reasons — including cognitive biases, insurance coverage, and bankruptcy — these future risks of liability are often not enough to induce sufficient behavioral change in the present.

D. Regulating Pathways

Because the backdrop of liability is often perceived as delivering less than the socially optimal level of protection, regulators have traditionally spent much effort seeking to identify the causes of regulatory problems and then imposing rules that seek to impede these causal pathways.

The longer a technology has been around, and the more stable it is, the more feasible it is for regulators to target pathways. Building codes, for example, are grounded in extensive general knowledge developed over centuries, as well as in specific engineering research, which together justify mandates that builders use fire-resistant materials and install fire suppression technologies. These mandates target the multiple pathways that lead to property damage, injuries, and fatalities from building fires. Much the same can be said for other regimes regulating older forms of technology and economic activity. As noted, traditional automobile safety regulation puts in place rules that address the multiple pathways that can lead to vehicle accidents: driver errors, vehicle malfunctions, and roadway hazards.

As much as it is feasible to target pathways when regulating buildings or automobiles, the same will not always be true when it comes to regulating new tech. New tech’s “problem” problem means that regulators will often be behind the curve in understanding the causes of regulatory problems and in being able sufficiently to target their pathways. This does not mean, of course, that regulators will never be able to impose pathway-related obligations on new tech firms. For example, it almost surely makes sense for regulators to consider imposing a requirement that all technology firms use differential privacy techniques to protect sensitive information contained in datasets that they use. Similarly, when it comes to cybersecurity risks, regulators can likely identify specific security measures that firms ought to implement, such as multi-factor authentication.
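
To make concrete what one such pathway-level mandate might involve, here is a minimal, purely illustrative Python sketch of the Laplace mechanism, a standard building block of differential privacy; the query, the counts, and the parameter values are hypothetical assumptions chosen for illustration, not requirements drawn from any actual or proposed regulation.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of a numeric query result.

    Adds Laplace noise with scale sensitivity / epsilon, the standard
    construction for satisfying epsilon-differential privacy.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Hypothetical example: report how many users in a dataset opted out of tracking.
# Adding or removing one user changes this count by at most 1, so sensitivity = 1.
raw_opt_out_count = 4213
private_count = laplace_mechanism(raw_opt_out_count, sensitivity=1.0, epsilon=0.5)
print(round(private_count))
```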

The more that regulators learn about a technology, the more able they will be to identify pathways to target with regulation. As such, regulators can and should invest in substantial research to learn more about the technologies they oversee and the causal pathways leading to their problems. Still, even with additional research, new tech will likely always present distinctive challenges for regulators when it comes to understanding pathways and regulating them. Regulators will know less than firms do about their technologies — and thus regulators will always be relatively disadvantaged when it comes to knowing what measures to require or what outputs to measure to interrupt the pathways to their problems.24

E. Mandating a Focus on Problems

Regulators can seek to leverage firms’ informational advantages for the public good through a type of regulatory strategy known as management-based regulation.25 Management-based regulation requires firms to engage in the study of their own operations, products, and services, all to get firms thinking harder about the risks they create and then identifying measures they can take to manage these risks better.

The management-based approach to regulation is used around the world to address problems where it is difficult to define or measure outcomes or where pathway prevention does not come neatly organized in a one-size-fits-all package. For example, management-based regulation has been applied to address issues of food safety, chemical accidents, toxic pollution, financial fraud, and the safety of offshore energy development—all regulatory domains with considerable heterogeneity in regulated entities and where outcomes, such as risk, are difficult to assess on a routine basis.26 For these same reasons, management-based regulation seems likely to be an oft-desired approach to regulating new tech given the diversity and dynamism within most technology markets today.

The aim of management-based regulation is to induce firms’ managers to address their own technologies’ problems. Rather than telling a firm exactly what measures to adopt to solve a regulatory problem, management-based regulation compels firms to assess how their own products and operations contribute to the problem and then to develop their own internal plans, procedures, and other steps aimed at solving the problem. This regulatory strategy does not by itself require firms to take any specific actions beyond the managerial actions of planning, analysis, and the establishment of internal procedures. In fact, some management-based regulations only require firms to identify internal actions to take to control risks, not even to implement these actions or the required internal plans and procedures that they develop. The threat of ex post liability, of course, gives firms a reason to implement the plans they develop.

Management-based regulation, which is sometimes called mandated or enforced self-regulation,27 has been shown to work in practice. One study compared toxic pollution from facilities in U.S. states with and without management-based pollution prevention laws, and it found that facilities located in states with these laws reduced their toxic pollution more than facilities in other states, at least for the first six years after management-based regulations had been adopted.28 Another study demonstrated a reduction in foodborne illnesses associated with the adoption of management-based food safety regulations.29

A management-based approach to regulation seems well-suited for new tech because, when different technologies can lead to different problems, this approach takes some of the pressure off regulators to identify and define problems with precision. It places more of an onus on firms, while keeping the regulator working at arm’s length to oversee the industry’s management efforts. It also gives firms flexibility to find the most cost-effective ways to solve the problems that they identify. Admittedly, it is not entirely flexible, as it is mandatory regulation; it does require compliance with specified management steps — often characterized under the quality management rubric of “plan-do-check-act.” But other than the required management steps, management-based regulation imposes on the firms themselves the responsibility of identifying their own specific risk control measures, procedures, and responses.30

When it comes to new tech, this flexibility that management-based regulation affords is important because it allows firms to innovate. It is thus hardly surprising to see proposals for requiring certain kinds of new tech firms to conduct algorithmic audits — an idea that fits well within the framework of management-based regulation.31 Similarly, it is not surprising that the National Highway Traffic Safety Administration (NHTSA) has recommended that manufacturers of automated driving systems (ADSs) adopt management-based “safety assessments” that are designed to ensure that their engineering teams are more fully focused on the ultimate problem of accident avoidance.32

The suitability of a management-based regulatory strategy for new tech does not mean it will not face some challenges. The regulator needs to ensure that firms take their required management responsibilities seriously. Especially with the passage of time, management-based requirements risk turning into empty paperwork exercises rather than serious attempts to identify, analyze, and manage problems.33 Access to information and ongoing vigilance by the regulator are thus necessary.34 Regulatory agencies must have auditors who know how to distinguish between firms that engage in meaningful management efforts and those that treat managerial requirements as simply a box-checking ritual.35 In short, regulating new tech via management-based regulation requires having the right kind of regulatory resources in place — especially the necessary human capital.

 

III. PEOPLE ARE KEY, EVEN WITH TECH

Finding the right kind of people should be a running theme in any discussion of the regulation of new tech.36 To regulate well, agencies need analytically sophisticated staff members. These staff members must work constantly to keep abreast of developments in their fields, especially if they hope to regulate any of the pathways to problems.

Even though management-based regulation leverages the information advantages of the firms, regulators still must know enough to be able to gauge how seriously firms take their management obligations. This requires personnel who know more than just how to check boxes on a checklist or inspection form. Regulatory staff members need to have strong skills in risk analysis as it applies to the technology they oversee.37

Given the pace of change with technology, regulatory personnel need to find ways to monitor and analyze innovations no matter what kind of regulatory strategy they adopt. To regulate well, they must understand technology markets and the pathways to the problems that different technologies create. And if regulators are themselves to rely on certain technologies — so-called regtech tools — their organizations need the right kind of people who can design and deploy those tools successfully within their specific regulatory settings.38

Unfortunately, government confronts serious shortfalls in its technology-oriented talent pool at present — and the competition with the private sector for technically sophisticated staff will remain fierce. The federal government currently faces dramatic turnover due to an aging workforce — a trend that is problematic for the regulation of older technologies, where experience can be at a premium. But perhaps this turnover affords an opportunity for building regulatory staffs capable of overseeing new tech markets. Regulatory agencies need to develop channels for bringing in new talent with the analytic capabilities needed to oversee today’s innovative market environment.39

Of course, government’s own technological infrastructure needs upgrading as well. Too many federal computer systems in the United States remain woefully out of date. The U.S. Government Accountability Office reported as recently as five years ago that three-quarters of federal spending on information technology supports old “legacy systems” which “are becoming increasingly obsolete” due to “outdated software languages and hardware parts that are unsupported.”40 In addition to updating antiquated hardware, steps are needed to build a robust, usable data infrastructure, such as by creating common identifiers that can link disparate datasets, building adequate data storage capabilities, and ensuring effective cybersecurity protections.41

With new and better technological capacities, regulatory agencies can then allocate their human capital more effectively. Machine-learning algorithms, for example, can help regulators improve the targeting of regulated firms to inspect or audit.42 Regulators may find that they can improve their performance by leveraging firms’ own data for analytical purposes too.43
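
As a purely illustrative sketch of this kind of risk-based targeting, the following Python snippet fits a simple model to hypothetical past inspection outcomes and then ranks uninspected facilities by predicted violation risk; the features, the synthetic data, and the choice of model are assumptions made for illustration, not a description of any agency’s actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical inspection records: three facility-level features
# (e.g. prior violations, complaint counts, facility age) and whether a past
# inspection found a violation. Real agencies would use their own records.
rng = np.random.default_rng(0)
X_past = rng.random((500, 3))
y_past = (X_past @ np.array([2.0, 1.0, 0.5]) + rng.normal(0, 0.5, 500) > 1.8).astype(int)

# Fit a simple risk model to past outcomes.
model = LogisticRegression().fit(X_past, y_past)

# Score currently uninspected facilities and prioritize the highest-risk ones.
X_current = rng.random((100, 3))
risk_scores = model.predict_proba(X_current)[:, 1]
priority_order = np.argsort(risk_scores)[::-1]
print("Ten highest-risk facilities to inspect first:", priority_order[:10])
```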

In addition to possessing technical sophistication, the people who staff regulatory agencies also must have the skills needed to interact productively with other people in their orbit, particularly the managers and employees within regulated technology firms, but also various interested members of the public and legislative overseers.

Successful regulation is ultimately more relational than technological. It is about changing human behavior, building credibility, and displaying the fairness and empathy that promote trust. It demands a workforce that is steadfast in its commitment to public service and eager to remain vigilant in seeking to solve problems, thereby making a meaningful, positive impact on society.44

 

IV. CONCLUSION

The present era of rapid innovation in technology promises to deliver improvements in both economic productivity and the quality of daily life. But just as with any type of change, innovations in new tech bring with them the potential for problems. Regulators will inevitably be given responsibility for solving these problems, and when they seek to intervene in the technological marketplace, they will need to draw on a toolkit that regulators have long used to change behavior and reduce harms.

That toolkit contains strategies that seek to induce firms to focus on the underlying problems their technologies create, as well as strategies that target specific pathways to these problems. Because new tech is new, the pathways will not always be well-understood, which will limit the ability to regulate in traditional ways. This means that regulators are increasingly likely to look to strategies such as ex post liability and management-based regulation. These strategies will seek to shape firms’ incentives and steer their managers’ attention toward the ultimate problems associated with different technologies, rather than forcing them to comply with discrete prescriptions aimed at the pathways to these problems. In this way, regulating new tech is likely to look a bit different from older domains of regulation, and to provide regulated firms with more flexibility.

No matter whether they regulate in ways oriented more toward ultimate problems or their pathways, though, regulatory agencies need to strengthen the skills and knowledge of their workforces. Even when agencies themselves rely on modern technologies to help with their work, they will need staffs with the technological sophistication to design and use these tools well.45 Perhaps ironically, the most important ingredient for success in regulating new tech will not be technology. It will be people.


1 Edward B. Shils Professor of Law and Political Science, and Director, Penn Program on Regulation, University of Pennsylvania Law School.

2 For an illustration outlining the market failures associated with online services, see Ofcom, Online Market Failures and Harms: An Economic Perspective on the Challenges and Opportunities in Regulating Online Services (2019), https://www.ofcom.org.uk/__data/assets/pdf_file/0025/174634/online-market-failures-and-harms.pdf.

3 Cary Coglianese, Optimizing Regulation for an Optimizing Economy, 4 U. Pa. J. L. & Pub. Affairs 1, 1-13 (2018).

4 See, e.g. OECD Digital Economy Outlook 2020, OECD (Nov. 27, 2020), https://www.oecd.org/sti/ieconomy/oecd-digital-economy-outlook-2020-bb167041-en.htm.

5 See, e.g. Augusto Lopez-Claros et al., The Global Information Technology Report 2006-2007: Connecting to the Networked Economy (6th ed. 2007).

6 See, e.g. Michael C. Munger, Tomorrow 3.0: Transaction Costs and the Sharing Economy (2018).

7 See, e.g. Martin Kenney & John Zysman, The Rise of the Platform Economy, 32 Issues in Science and Technology (2016), https://issues.org/rise-platform-economy-big-data-work/.

8 Coglianese, supra note 3.

9 Jeremy Rifkin, The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism (2015).

10 See, e.g. Social Media at Crossroads: 25 Solutions from the Social Media Summit @MIT, Social Media Summit @ MIT (2021) https://www.yumpu.com/en/document/read/65717082/the-smsmit-report.

11 European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts (April 4, 2021), https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206.

12 Cary Coglianese & André Sapir, Risk and Regulatory Calibration: WTO Compliance Review of the U.S. Dolphin-Safe Tuna Labeling Regime, 16 World Trade Rev. 327-348 (2017); Cary Coglianese, Listening Learning Leading: A Framework for Regulatory Excellence, Penn Program on Regulation, 44-46 (2015), https://kleinmanenergy.upenn.edu/wp-content/uploads/2020/08/Listening-Learning-Leading_Coglianese-1.pdf; Cary Coglianese & Gary Marchant, Shifting Sands: The Limits of Science in Setting Risk Standards, 152 U. Pa. L. Rev. 1255 (2004).

13 See, e.g. Amy Klobuchar, Antitrust: Taking on Monopoly Power from the Gilded Age to the Digital Age 175-214 (1st ed. 2021).

14 See, e.g. Forest L. Reinhardt, Down to Earth: Applying Business Principles to Environmental Management (2000).

15 See, e.g. Gary E. Marchant et al., Governing Emerging Technologies Through Soft Law: Lessons for Artificial Intelligence, 61 Jurimetrics, 1-18 (2020).

16 One of the market failures justifying regulation, of course, might well be a lack of sufficient market competition in the relevant technology sector. For an argument that regulation is needed to ensure adequate competition among digital platforms, see William P. Rogerson & Howard Shelanski, Antitrust Enforcement, Regulation, and Digital Platforms, 168 U. Pa. L. Rev. 1911 (2020).

17 See, e.g. Omri Ben-Shahar & Carl E. Schneider, More than You Wanted to Know: The Failure of Mandated Disclosure (2014).

18 See, e.g. Cary Coglianese & Alicia Lai, Antitrust by Algorithm, Stan. J. Computational Antitrust (forthcoming); Cary Coglianese & David Lehr, Regulating by Robot: Administrative Decision-Making in the Machine Learning Era, 105 Geo. L. J. 1147 (2017).

19 Cary Coglianese, Management-Based Regulation: Implications for Public Policy, in Risk and Regulatory Policy: Improving the Governance of Risk (Gregory Bounds & Nikolai Malyshev, eds., 2010); National Academies of Sciences, Engineering & Medicine, Designing Safety Regulations for High-Hazard Industries (2018), https://www.nap.edu/catalog/24907/designing-safety-regulations-for-high-hazard-industries.

20 National Academies of Sciences, supra note 19, at 23, 32, 90.

21 For an example of a general duty clause, see 29 U.S.C. § 654(a)(1) (“Each employer shall furnish to each of his employees employment and a place of employment which are free from recognized hazards that are causing or are likely to cause death or serious physical harm to his employees.”)

22 47 U.S.C. § 230(c).

23 James M. Anderson, et al., Autonomous Vehicle Technology: A Guide for Policymakers xxiii (2016), https://www.rand.org/pubs/research_reports/RR443-2.html.

24 Regulators do, of course, have some strategies and tactics available to them to try to elicit information from industry. See Cary Coglianese, Richard Zeckhauser, and Edward Parson, Seeking Truth for Power: Informational Strategy and Regulatory Policy Making, 89 Minn. L. Rev. 277 (2004).

25 Cary Coglianese & David Lazer, Management-Based Regulation: Prescribing Private Management to Achieve Public Goals, 37 L. & Soc’y Rev. 691 (2003).

26 Cary Coglianese & Shana Starobin, Management-Based Regulation, in Policy Instruments in Environmental Law 292-307 (Kenneth R. Richards and Josephine van Zeben, eds., 2020).

27 See, e.g. John Braithwaite, Enforced Self-Regulation: A New Strategy for Corporate Crime Control, 80 Mich. L. Rev. 1466 (1982); Bridget Hutter, Regulation and Risk: Occupational Health and Safety on the Railways (2001).

28 Lori S. Bennear, Are Management-based Regulations Effective? Evidence from State Pollution Prevention Programs, 26 J. Pol’y Analysis & Mgmt. 327 (2007).

29 Travis Minor & Matt Parrett, The Economic Impact of the Food and Drug Administration’s Final Juice HACCP Rule, 68 Food Pol’y 206 (2017).

30 Coglianese & Lazer, supra note 25.

31 See, e.g. Miles Brundage et al., Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims (April 2020); James Guszcza et al., Why We Need to Audit Algorithms, Harv. Bus. Rev. (Nov. 28, 2018), https://hbr.org/2018/11/why-we-need-to-audit-algorithms; Joshua Kroll et al., Accountable Algorithms, 165 U. Pa. L. Rev. 633 (2017).

32 NHTSA, Automated Driving Systems 2.0: A Vision for Safety 16 (2017), https://www.nhtsa.gov/sites/nhtsa.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf.

33 See, e.g. Garry C. Gray & Susan S. Silbey, Governing Inside the Organization: Interpreting Regulation and Compliance, 120 Amer. J. Soc. 96 (2014).

34 Cary Coglianese, Regulatory Abdication in Practice, 79 Pub. Admin. Rev. 794 (2019).

35 National Academies of Sciences, Engineering & Medicine, supra note 19, at 133-137.

36 Cary Coglianese, Regulatory Excellence as “People Excellence,” Reg. Rev. (Oct. 23, 2015), https://www.theregreview.org/2015/10/23/coglianese-people-excellence/.

37 Coglianese, supra note 19, at 179-180.

38 Coglianese, supra note 3, at 10-11; Coglianese & Lai, supra note 18.

39 Recently, the U.S. Department of Homeland Security announced an initiative to improve its ability to recruit cybersecurity talent. U.S. Department of Homeland Security, DHS Launches Innovative Hiring Program to Recruit and Retain World-Class Cyber Talent (Nov. 15, 2021), https://www.dhs.gov/news/2021/11/15/dhs-launches-innovative-hiring-program-recruit-and-retain-world-class-cyber-talent. In addition, the National Security Commission on Artificial Intelligence (NSCAI) has urged in a congressionally mandated report that the federal government create a U.S. Digital Service Academy that “should be modeled off of the five U.S. military service academies but produce trained and educated government civilians for all federal government departments and agencies.” NSCAI, Final Report 127 (2021), https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf. See also U.S. Government Accountability Office, Digital Services: Considerations for a Federal Academy to Develop a Pipeline of Digital Staff (Nov. 19, 2021), https://www.gao.gov/assets/gao-22-105388.pdf.

40 Federal Agencies Need to Address Aging Legacy Systems: Hearing Before the H. Comm. on Oversight and Gov’t Reform, 114th Cong. (2016) (testimony of David A. Powner, Director, Information Technology Management Issues), https://www.gao.gov/assets/680/677454.pdf.

41 Cary Coglianese & Alicia Lai, Assessing Automated Administration, in Oxford Handbook of AI Governance (Justin Bullock et al., eds., forthcoming).

42 See, e.g. Miyuki Hino, Elinor Benami, & Nina Brooks, Enhancing Environmental Monitoring Through Machine Learning, 1 Nature Sustainability 583, 583-584 (2018).

43 Coglianese & Lai, supra note 18.

44 See, e.g. Malcolm Sparrow, The Regulatory Craft: Controlling Risks, Solving Problems, and Managing Compliance (2000); Mark H. Moore, Creating Public Value: Strategic Management in Government (1995). See also Cary Coglianese, Regulatory Vigilance in a Changing World, Reg. Rev. (Feb. 25, 2019), https://www.theregreview.org/2019/02/25/coglianese-innovation-regulatory-vigilance/.

45 Coglianese, supra note 3.