As the application of artificial intelligence permeates an increasing number of businesses, ethical issues such as algorithmic bias, data privacy, and transparency have drawn growing attention, prompting renewed calls for policy and regulatory changes to address the potential consequences of AI systems and products. In this article, we build on original research to outline distinct approaches to AI governance and regulation and discuss the implications for firms and their managers in terms of adopting AI and ethical practices going forward. We examine how managers' perception of AI ethics increases with the prospect of AI-related regulation, but at the cost of AI diffusion. Such trade-offs are likely to be associated with industry-specific characteristics, which holds implications for how new and intended AI regulations could affect industries differently. Overall, we recommend that businesses embrace new managerial standards and practices that detail AI liability under varying circumstances, even before such practices are prescribed by regulation. Stronger internal audits, as well as third-party examinations, would provide more information for managers, reduce managerial uncertainty, and aid the development of AI products and services that are subject to higher ethical, legal, and policy standards.

By Benjamin Cedric Larsen & Yong Suk Lee[1]

 

I. INTRODUCTION

The application of artificial intelligence (“AI”) has expanded rapidly in the last decade, spurred by advances in machine learning and computing power as well as the increased availability of large datasets. But as AI permeates an increasing number of businesses, governments have started to focus on various ethical concerns. Issues such as algorithmic bias, data privacy, and transparency have gained increased attention, raising renewed calls for policy and regulatory changes to address the potential consequences of AI systems and products. The U.S. Office of Science and Technology Policy’s recent request for information on the application of biometric technologies, as well as the EU’s proposed AI Regulation, are both examples of increased regulatory scrutiny and of new forms of governance that target AI systems.

AI technologies may create or exacerbate negative externalities when firms develop or deploy AI products driven purely by profit and shareholder interest, without taking into account social costs such as aggravated social biases, violations of data privacy, or new forms of algorithmic dependency that change social behavior. Existing algorithms have, for example, been shown to aggravate racial and gender bias and discrimination in hiring, to raise safety and accountability issues in autonomous driving, and to raise data privacy issues in online retail.[2] The growing visibility of these forms of algorithmic impact has heightened interest in AI ethics in both the private and public sectors while prompting calls for new forms of AI-related regulation.

However, most countries currently have no clear guidelines on how to regulate or moderate AI adoption. Relying entirely on firms to self-regulate AI use and adoption is a flawed approach that often gets caught up in arguments over shareholder value maximization, which may neglect social and ethical considerations. This has, for example, been seen in the premature adoption of inaccurate or flawed facial recognition systems in law enforcement, or in the failure of Google’s AI Ethics Board. Relying on governments to produce regulations, on the other hand, will be slow: the first proposed AI bill in the U.S., the Algorithmic Accountability Act, has stalled since its introduction to Congress in 2019, while a new rendition of the Act was introduced in February 2022. In this article, we build on original research to outline distinct approaches to AI governance and regulation, before discussing the implications for firms and their managers in terms of adopting AI and ethical practices going forward.

 

II. APPROACHES TO AI REGULATION

Companies and governments are currently in the process of translating general principles of AI ethics into concrete practices.[3] This implies that two distinct but connected forms of AI governance are emerging. One is soft-law governance, which functions as self-regulation based on non-legislative policy instruments. This group includes private-sector firms issuing principles and guidelines for ethical AI, multi-stakeholder organizations such as the Partnership on AI, standard-setting bodies such as the International Organization for Standardization, and interest organizations such as the Association for Computing Machinery. Actionable mechanisms in the private sector usually focus on the development of concrete technical solutions, including internal audits, standards, or explicit normative encoding.

This means that soft-law governance and associated mechanisms already play an important part in setting the default for how AI technologies are governed.[4] Hard-law measures, on the other hand, entail legally binding regulations passed by legislatures to define permitted or prohibited conduct. Regulatory approaches generally refer to legal compliance, the issuing of certificates, or the creation or adaptation of laws and regulations that target AI systems.[5] Policymakers are currently contemplating several approaches to regulating AI, which can broadly be categorized as existing laws and legislation, new horizontal regulations, domain-specific regulations, and data-related regulations.

A. Existing Laws

AI technologies are implicitly regulated through common law doctrines such as tort and contract law, which affect liability risks and the nature of agreements among private parties. Existing law also entails statutory and regulatory obligations on the part of organizations, in areas such as emerging standards for autonomous vehicles. In the United States, the use of AI is implicitly governed by a variety of common law doctrines and statutory provisions, such as tort law, contract law, and employment discrimination law.[6] This means that official rulings on common law-type claims already play a vital role in how society governs AI. Federal agencies also engage in important governance and regulatory tasks, which may affect AI use and adoption across a variety of sectors of the economy.[7] Through tort, property, contract, and related legal domains, society already shapes how people utilize AI, while gradually clarifying what it means to misuse AI technologies. Tort law may, for example, require that a company avoid any negligent use of AI to make decisions or provide information that could result in harm to the public.[8] Likewise, current employment, labor, and civil rights laws imply that a company using AI to make hiring or termination decisions could face liability for decisions that involve human resources.

B. Horizontal Regulation

Several countries are currently devising new horizontal regulations that are sector-agnostic and aim to regulate systems and technologies at the algorithmic level. In the United States, for example, the Algorithmic Accountability Act was first introduced in the House of Representatives in April 2019 and was aimed at regulating large firms with gross annual receipts exceeding $50 million, or firms that possess or control personal information on more than 1 million consumers.[9] The Act proposed to regulate such firms through mandatory self-assessment of their AI systems, including disclosure of how the firm uses AI systems, their development process, system design, and training, as well as the data gathered and in use. The Act has since been amended and was reintroduced as the Algorithmic Accountability Act of 2022. In line with the originally proposed legislation, the 2022 Act requires greater transparency and accountability for automated decision systems.

The European Union’s AI Act (“AIA”) has advanced further and is expected to go into effect in 2023. The AIA imposes requirements for market entrance and certification of high-risk AI systems through a mandatory CE-marking procedure.[10] The EU’s comprehensive regulations aim to lay the foundations for a pre-market conformity regime guided by technological standards that apply to areas such as the training, testing, and validation of machine learning datasets. Providers of high-risk AI systems are, for example, expected to conduct “conformity assessments”[11] (internal audits) and to establish “post-market monitoring plans,”[12] which include documenting and analyzing the performance of high-risk AI systems throughout their lifecycles.

In China, new regulation aimed specifically at recommender algorithms takes effect in March 2022. Under the regulation, algorithmic recommendation services that provide news-related information need to obtain an official license, while companies that deploy recommender systems are obliged to inform users about the “basic principles, purpose and main operation mechanism” of the algorithmic recommendation service. Users will also be able to opt out of algorithmic recommendation services, and they must be able to select or delete the tags used to power individual suggestions and recommendations.

C. Domain Specific Regulation

In the United States, domain-specific AI regulations are currently being developed by federal regulators such as the Food and Drug Administration (“FDA”), the National Highway Traffic Safety Administration (“NHTSA”), and the Federal Trade Commission (“FTC”), among others. Domain-specific regulations tend to pay special attention to sector-based ways of utilizing various algorithms and AI systems. The FDA, for instance, aims to examine and pre-approve the underlying performance of a firm’s AI products before they are marketed, and to post-approve any algorithmic modifications. NHTSA, on the other hand, emphasizes the importance of removing unnecessary barriers to self-driving vehicles, leading the regulator to issue voluntary guidance rather than regulations that could dampen innovation in the sector. The FTC has engaged in hearings to safeguard consumers from unfair and deceptive practices related to algorithmic discrimination and bias. This includes AI systems that are used in online ads or that engage in micro-targeting of consumer groups, as well as establishing greater transparency around how and when product recommender algorithms are used.

D. Data Regulation

In terms of data, regulations include the European Union’s General Data Protection Regulation (effective May 2018), the California Consumer Privacy Act (effective January 2020), and China’s Personal Information Protection Law (effective November 2021). Data-related regulation generally affects all businesses that buy, sell, or otherwise trade “personal information,” including companies that use online-generated data from residents in their products. Data regulation thus adds another layer of oversight to the area of data handling and privacy, on which many AI applications are heavily contingent.

In short, AI regulation is emerging and is likely to materialize across several domains simultaneously: existing laws, new horizontal regulations, evolving domain-specific regulations, and data-related regulations.

The main goal of regulators is to limit negative externalities in the areas of competition, privacy, safety, and accountability while ensuring continued opportunity in the application and innovation of AI-based tools, products, and services. During this process, however, little is known about the interactions between new and incoming public-sector regulation and firm-level behavior and innovation. It is therefore important to understand how new rules and regulations interact with and guide firm-level behavior in areas of ethical development and implementation of new AI tools and systems.

 

III. AI REGULATION’S IMPLICATIONS FOR FIRM BEHAVIOR

Despite the increasing adoption of AI in businesses and the growing realization that AI should be regulated, very little is known about how AI-related regulation might affect firm behavior. The literature that examines the effects of technology-related regulations, especially privacy regulation, does offer some insight. Goldfarb & Tucker (2012) found that privacy regulation affects the rate and direction of innovation in data-driven industries.[13] Too little privacy protection means that consumers may be reluctant to participate in market transactions where their data are vulnerable; too much privacy regulation means that firms cannot use data to innovate. The evidence generally indicates that most attempts at government-mandated privacy regulation lead to slower technology adoption and less innovation. However, regulation can spur innovation as well. In the case of environmental regulation, such as laws targeting automobile emissions, regulation has in fact encouraged the development of more fuel-efficient vehicles, as well as hybrid and electric vehicles. Hence, it is not entirely clear how AI-related regulation could affect firm behavior, especially in terms of adoption and innovation. Furthermore, the ways in which governments intend to regulate AI are still unclear. As discussed in the previous section, AI regulation can come in the form of horizontal regulation, based on a centralized regulatory agency and authority, or in decentralized, sector-specific approaches based on existing agencies.

Very little is known, however, about how these different kinds of new or intended AI regulation – or even the prospect of regulation – might affect firm behavior. We have therefore examined the impact of actual and potential AI regulations on business managers. Together with two coauthors, we investigated how likely managers are to adopt AI technologies and alter their AI-related business strategies when faced with different kinds of AI regulation.[14] We conducted a randomized online survey experiment in which we randomly exposed managers to one of the following treatments: (1) a horizontal AI regulation treatment based on the Algorithmic Accountability Act, (2) an industry-specific regulation treatment based on the regulatory approaches of the FDA (healthcare), NHTSA (transportation), and the FTC (retail), (3) a common law treatment based on tort law, labor law, and civil rights law, and (4) a data privacy regulation treatment based on the California Consumer Privacy Act. In particular, we studied how these varying regulatory treatments affect managers’ decision-making in terms of AI adoption, as well as how managers are likely to revise their business strategies when reminded of each regulatory approach.
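As a stylized illustration of this kind of design, the sketch below shows how random assignment to treatment arms and a simple difference in means between a treatment arm and the control group might be computed. It is a hypothetical Python sketch; the arm labels, variable names, and data are illustrative assumptions and are not drawn from the actual study.

```python
import random
from statistics import mean

# Hypothetical treatment arms mirroring the four information treatments
# described above, plus a control group; labels are illustrative only.
ARMS = ["control", "horizontal", "industry_specific", "common_law", "data_privacy"]

def assign_arm(respondent_id: int, seed: int = 42) -> str:
    """Randomly assign a survey respondent to one treatment arm."""
    rng = random.Random(seed * 1_000_003 + respondent_id)
    return rng.choice(ARMS)

def treatment_effect(responses: list, arm: str, outcome: str) -> float:
    """Difference in mean outcome between a treatment arm and the control group."""
    treated = [r[outcome] for r in responses if r["arm"] == arm]
    control = [r[outcome] for r in responses if r["arm"] == "control"]
    return mean(treated) - mean(control)

# Made-up example: the outcome is the number of business processes in which a
# manager reports intent to adopt AI.
responses = [
    {"arm": "control", "adoption_intent": 6},
    {"arm": "control", "adoption_intent": 5},
    {"arm": "horizontal", "adoption_intent": 5},
    {"arm": "horizontal", "adoption_intent": 4},
]
print(treatment_effect(responses, "horizontal", "adoption_intent"))  # -1.0
```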

Our results indicate that exposure to information about regulation decreases managers’ reported intent to adopt AI technologies in the firm’s business processes, with the effect strongest for the horizontal regulation and common law treatments. We find that exposure to information about general AI regulation, such as the Algorithmic Accountability Act, reduces the reported number of business processes in which managers are willing to adopt and use AI by about 16 percent. We also find that exposure to information about AI regulation significantly increases expenditure intent on developing AI strategy. The increase in budget for developing AI business strategy is, however, offset by a decrease in the budget for training current employees to code and use AI technology, and for purchasing AI packages from external vendors. In other words, making the prospect of AI regulation more salient seems to force firms to “think,” inducing managers to report greater willingness to spend on strategizing, but at the cost of developing internal human capital.

Exposure to information about AI regulation also increased the importance managers assign to various ethical issues when adopting AI in their business. Each regulation treatment increased the importance managers place on safety and accident concerns related to AI technologies, and the common law and data privacy regulation treatments significantly increased manager perceptions of the importance of privacy and data security. The industry-specific regulation treatment also increased manager perceptions of the importance of bias and discrimination, and of transparency and explainability.

Interestingly, we find no significant impact of the regulation treatments on AI adoption in the automotive industry, which we believe reflects NHTSA’s generally positive sentiment towards the development of autonomous driving systems. The different manager responses we find across industries suggest that actual regulation is likely to affect industries differently in terms of AI adoption, ethical concerns, and business strategies, owing to varying industry-specific characteristics. For example, safety and accidents are the key ethical concern in automotive, whereas privacy and data security are the key concern in retail.

Overall, these results highlight some of the potential trade-offs between regulation and the diffusion of AI technologies in firms, as well as firms’ ethical concerns related to AI. Our results also indicate that such trade-offs are likely to be associated with industry-specific characteristics, which holds implications for how new and intended AI regulations could affect industries differently.

 

IV. IMPLICATIONS FOR MANAGERS

The perceived level of regulatory enforcement and other forms of algorithmic compliance is tied to specific legislation, regulation, and standards that exert varying forms of institutional pressure on actors to conform to best practice. Enforcement, therefore, is going to be context-specific, which means that managers will perceive varying levels of enforcement across industries such as transportation, retail, and healthcare. The AI systems used and deployed across industries may also look very different, which implies that ethical issues may be based on diverse, sector-specific concerns across areas such as privacy, transparency, safety, bias and discrimination, labor, and so on.

In areas that involve high-stakes decisions (e.g. autonomous driving, credit applications, judicial decisions, and medical recommendations), algorithmic accuracy alone may not be sufficient for adoption, as applications also require high levels of social trust in order to be implemented[15] and legitimized.[16] In high-stakes environments such as healthcare or autonomous vehicles, strict standards, e.g. surrounding privacy and safety, are also likely to create high expectations for basic levels of enforcement. In other areas where practices are less clear and where levels of enforcement have historically been more arbitrary (e.g. recommender algorithms used in online shopping, or the regulation of content on social media platforms), expectations about enforcement are mixed and harder for managers to ascertain, making it more difficult to devise actionable ethical mechanisms. In such cases, compliance is situated between social expectations, self-governance, and vague or missing legislation and regulation, which makes it harder for managers to develop sound forms of algorithmic governance.[17]

Though AI regulation may conceivably slow innovation or reduce competition through lower adoption, instituting regulation at the early stages of AI diffusion could improve consumer welfare through increased safety and by better addressing bias and discrimination issues. At the same time, there is an inherent need to distinguish between innovation at the level of the firm consuming AI technology and at the level of the firm producing such technology. Even if regulation indeed slows innovation in the former, it can still spur innovation in the latter.[18] The approach of regulating early, however, contrasts with the common approach of relying on competitive markets, at least in the U.S., to generate the best technology so that government only needs to regulate anticompetitive behavior to maximize social welfare.[19]

At this point, it is clear that the different regulatory regimes currently being debated in the EU, the U.S., and China, in particular, will have wide-ranging implications for how firms legitimately develop and adopt different systems, tools, and practices. Ultimately, this will trickle down and have important, wide-reaching effects on consumers in areas such as fairness, bias, trust, transparency, safety, privacy, and security, among others. As AI principles increasingly mature into practices, both internally within businesses and externally guided by new laws and regulations, it is important to consider that not all practices will be developed and implemented equally. In the coming years, there will be important national and international differences in areas such as consumer safety and privacy. Against this backdrop, we offer a few key recommendations for managers to consider when devising internal methods and tools that can meet new and external forms of AI regulation.

At a general level, managers need to ensure that the functional aspects of a model (i.e. accuracy, data, and performance) are soundly established through measures such as certification, testing, and auditing, as well as through the elaboration of technological standards.[20] Recommendations include documenting the lineage of AI products or services, as well as their behavior during operation.[21] Documentation could include information about the purpose of the product, the datasets used for training and while running the application, and ethics-oriented results on safety and fairness, for example. Large technology companies have already created and adopted workable documentation models, such as Google’s model cards[22] and End-to-End Framework for Internal Algorithmic Auditing, IBM’s AI FactSheets,[23] and Microsoft’s datasheets for datasets. Managers can also establish cross-functional teams consisting of risk and compliance officers, product managers, and data scientists, empowered to perform internal audits that assess ongoing compliance with existing and emerging regulatory demands.
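To make this concrete, the following is a minimal, hypothetical sketch in Python of the kind of structured documentation record a cross-functional team might maintain for each AI product. The fields and example values are illustrative assumptions, loosely inspired by the model-card and datasheet approaches cited above; they are not an official schema from Google, IBM, or Microsoft.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

@dataclass
class ModelDocumentation:
    """Illustrative documentation record for an AI product or service.

    All fields are hypothetical and loosely follow the spirit of model cards
    and datasheets for datasets; they do not reproduce any official format.
    """
    product_name: str
    intended_use: str                      # purpose and approved contexts of use
    out_of_scope_uses: List[str]           # uses the product is not designed for
    training_datasets: List[str]           # lineage of data used for training
    evaluation_datasets: List[str]         # data used for testing and validation
    performance_metrics: Dict[str, float]  # e.g. accuracy, error rates by subgroup
    fairness_findings: str                 # summary of bias/discrimination checks
    safety_findings: str                   # summary of safety and failure-mode tests
    last_internal_audit: date              # most recent cross-functional review
    open_issues: List[str] = field(default_factory=list)

# Example record a compliance team might keep alongside a deployed model
# (all values are made up for illustration).
card = ModelDocumentation(
    product_name="resume-screening-assistant",
    intended_use="Rank applications for recruiter review; humans make final decisions.",
    out_of_scope_uses=["automated rejection without human review"],
    training_datasets=["internal-hiring-2015-2020 (anonymized)"],
    evaluation_datasets=["held-out-hiring-2021"],
    performance_metrics={"auc": 0.81, "selection_rate_gap": 0.03},
    fairness_findings="Selection-rate gap across groups within internal threshold.",
    safety_findings="Not applicable; no physical actuation.",
    last_internal_audit=date(2022, 1, 15),
)
print(card.product_name, card.last_internal_audit)
```

Kept up to date through periodic internal audits, a record of this kind gives managers a single artifact that can be handed to regulators, third-party auditors, or internal risk teams when compliance questions arise.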

For businesses that develop or deploy AI products or services, this implies embracing a new set of managerial standards and practices detailing AI liability under varying circumstances, even before such standards are prescribed by regulation. As many of these practices have yet to emerge, stronger internal audits, as well as third-party examinations, would provide more information for managers, reduce managerial uncertainty, and aid the development of AI products and services that are subject to higher ethical, legal, and policy standards. As policymakers continue to grapple with the best way forward on regulation, managers and businesses that have developed standardized ways of internal algorithmic assessment can, in the meantime, be expected to be better equipped to handle any regulatory obstacles in the future.


[1] Copenhagen Business School/University of Notre Dame.

[2] Raub, M. (2018). Bots, Bias and Big Data: Artificial Intelligence, Algorithmic Bias and Disparate Impact Liability in Hiring Practices. Arkansas Law Review, 71(2). Koopman, P., & Wagner, M. (2017). Autonomous Vehicle Safety: An Interdisciplinary Challenge. IEEE Intelligent Transportation Systems Magazine, 9(1), 90–96. https://doi.org/10.1109/MITS.2016.2583491.

[3] AI Ethics Impact Group. (2020). From Principles to Practice – An interdisciplinary framework to operationalise AI ethics. VDE Association for Electrical Electronic & Information Technologies e.V., Bertelsmann Stiftung, 1–56. https://doi.org/10.11586/2020013.

[4] Wallach, W., & Marchant, G. (2018). An Agile Ethical/Legal Model for the International and National Governance of AI and Robotics. Proceedings of the AIES, 107(3), 7. https://doi.org/10.1109/JPROC.2019.2899422.

[5] Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2.

[6] Cuéllar, M. (2019). A Common Law for the Age of Artificial Intelligence: Incremental Adjudication, Institutions, and Relational Non-Arbitrariness. Working Paper.

[7] Barfield, W., Pagallo, U. (2018) Research Handbook on the Law of Artificial Intelligence. Edward Elgar Publishing. Northampton Massachusetts.

[8] Galasso, A. & Luo, H. (2019). Punishing Robots: Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence, in The Economics of Artificial Intelligence: An Agenda, Ajay Agrawal, Joshua Gans & Avi Goldfarb. University of Chicago Press.

[9] Congress. (2019). Algorithmic Accountability Act 2019, 1–15.

[10] Kop, Mauritz. (2021) EU Artificial Intelligence Act: The European Approach to AI. Stanford – Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021.

[11] AIA, Article 43.

[12] AIA, Article 61.

[13] Goldfarb, A., & Tucker, C. (2012). Privacy and Innovation. In Innovation Policy and the Economy (Vol. 12, pp. 65–89).

[14] Cuéllar, M., Larsen, B., Lee, Y., & Webb, M. (2021). Does Information About AI Regulation Change Manager Evaluation of Ethical Concerns and Intent to Adopt AI? Journal of Law, Economics, & Organization, forthcoming.

[15] Arnold, M. et al. (2019) “FactSheets: Increasing Trust in AI Services through Supplier’s Declarations of Conformity.” IBM Journal of Research and Development 63(4–5): 1–13.

[16] Larsen, B. (2021). A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized. Proceedings of the AIES. https://doi.org/10.1145/3461702.3462591.

[17] Ghosh, D. (2021). Are we entering a new phase for social media regulation? Harvard Business Review.

[18] Porter, M., & Van der Linde, C. (1995). Toward a New Conception of the Environment-Competitiveness Relationship. Journal of Economic Perspectives, 9(4): 97-118.

[19] Shapiro, C. (2019). Protecting Competition in the American Economy: Merger Control, Tech Titans, Labor Markets. Journal of Economic Perspectives, 33 (3): 69-93.

[20] Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society.

[21] Madzou, L., & Firth-Butterfield, K. (2020). Regulation could transform the AI industry. Here’s how companies can prepare. World Economic Forum, October 23, 2020.

[22] See https://arxiv.org/abs/1810.03993.

[23] See https://arxiv.org/abs/1808.07261.