Artificial intelligence using machine learning (“AI/ML”) is already providing countless benefits to society, but it is also presenting risks and concerns that require governance. Yet the rapid pace of AI/ML, the many diverse applications and industries across which it is being implemented, and the complexity of the technology itself challenge effective governance. At the international level, no binding treaties or conventions are likely anytime soon, but organizations such as the OECD and UNESCO have developed non-binding recommendations that can help guide AI/ML governance by governments and industry. Other major AI powers such as China and the European Union are putting in place legislative frameworks for AI with uncertain impacts and effectiveness, whereas the U.S. Congress has not enacted any substantive controls on AI/ML to date. Rather, various federal agencies have started producing guidance documents and recommendations, primarily focused on discouraging algorithm applications with biased or discriminatory impacts. Some state and local governments are also starting to adopt restrictions on problematic AI/ML applications and uses. At this time, most governance of AI/ML consists of a variety of “soft law” programs. Given the central role of these programs in AI/ML governance, it is important to make them more effective and credible.

By Gary E. Marchant[1]

 

Artificial intelligence (“AI”) has surged in its applications, public awareness, and policy priority in recent years. Several technical advances have driven this surge, including faster computer processors, unprecedented availability of massive sets of data and images on the internet, rapidly improved capabilities in optical recognition, and greatly improved abilities of computers to understand and interact with written and verbal human speech, a skill known as natural language processing. Yet the most important factor driving AI forward has been the rise of machine learning (“ML”). In contrast to previous models of AI, in which a human programmer codes a set of instructions for the AI to follow (rule-based AI), in ML the machine learns on its own by processing data and incrementally improving from that data (data-based AI).
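To make the distinction concrete, the short sketch below (in Python, using the widely available scikit-learn library) contrasts the two approaches on an invented spam-filtering task; the keyword list, messages, and labels are illustrative assumptions, not drawn from this comment or any cited source.

```python
# Illustrative sketch only: rule-based AI vs. data-based AI (machine learning).
# The task, keywords, and example messages are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Rule-based AI: a human programmer codes the decision logic explicitly.
def rule_based_spam_filter(message: str) -> bool:
    suspicious_words = {"winner", "prize", "free"}
    return any(word in message.lower() for word in suspicious_words)

# Data-based AI (ML): the model infers its own decision rule from labeled data.
train_messages = [
    "You are a winner, claim your free prize now",
    "Free entry, you have won a cash prize",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the draft contract today?",
]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_messages), train_labels)

new_message = "Claim your free prize today"
print("rule-based verdict:", rule_based_spam_filter(new_message))
print("learned verdict:", bool(model.predict(vectorizer.transform([new_message]))[0]))
```

In the rule-based version, changing the system’s behavior requires a human to rewrite the rules; in the ML version, the behavior is determined by whatever patterns the model extracts from its training data.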

ML AI has already achieved many valuable benefits, with many more to come. But it has also generated some concerns, which must be effectively governed if we are to enjoy the full benefits of this technology.[2] This comment summarizes the challenges and opportunities of governing ML AI. Part I discusses unique issues and problems in governing ML. Part II addresses the international framework and status of AI governance. Part III summarizes U.S. government efforts to regulate AI to date. Finally, Part IV discusses a “soft law” alternative to traditional government regulation of AI.

 

I. GOVERNANCE CHALLENGES OF AI MACHINE LEARNING

The capabilities of ML turn out to far surpass those of earlier AI models, which explains the recent proliferation of useful AI applications across virtually every industry sector and human activity. But ML also presents some unique policy and governance challenges. For one, because ML systems learn from large sets of data, they have an almost insatiable appetite for data, including data that may present significant privacy concerns. And unlike earlier products, ML algorithms continue to learn and thus evolve throughout their lifespan, rendering obsolete regulatory approval systems based on a “once and done” government review.
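This dynamic quality can be illustrated with a brief sketch. The Python fragment below (again using scikit-learn; the data and the shift in the underlying relationship are invented for illustration) shows a model that keeps updating after deployment, so the system a regulator reviewed at approval is no longer the system in use later.

```python
# Illustrative sketch only: a deployed ML model that keeps learning after its
# initial "approval," so its behavior can drift from what was reviewed.
# All data are synthetic and invented for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()  # supports incremental updates via partial_fit

# "Pre-approval" training on an initial batch of data.
X0 = rng.normal(size=(500, 3))
y0 = (X0[:, 0] > 0).astype(int)           # initial relationship in the data
model.partial_fit(X0, y0, classes=[0, 1])
probe = np.array([[1.0, 0.0, 0.0]])
print("decision at review time:", model.predict(probe)[0])

# Post-deployment: new data keep arriving, and the relationship has shifted,
# so the fielded system gradually moves away from its reviewed behavior.
for _ in range(20):
    X_new = rng.normal(size=(200, 3))
    y_new = (X_new[:, 0] < 0).astype(int)
    model.partial_fit(X_new, y_new)
print("decision after further learning:", model.predict(probe)[0])
```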

Another complication with ML systems is that the data they are trained on is derived from actual human experience, which often reflects various types of societal bias. The ML algorithms will often replicate or even amplify the biases hidden in the training data, which can result in discrimination against under-privileged groups in applications such as criminal justice or hiring.[3] ML systems also do not follow pre-set human-created instructions, but rather are capable of making their own decisions as they learn, creating unique issues of who is accountable when a machine makes a decision. Finally, ML systems currently cannot explain their decisions, so their reasoning remains a black box.[4]
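How bias in training data propagates into an ML system’s decisions can also be shown with a short sketch. The Python fragment below (scikit-learn again; the “hiring” scenario and all numbers are synthetic assumptions for illustration only) trains a model on invented historical decisions that favored one group and shows that the model reproduces the disparity.

```python
# Illustrative sketch only: a model trained on biased "historical hiring" data
# reproduces the disparity in its own recommendations. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)            # skill distributed identically in both groups
# Invented historical decisions: equally skilled group B applicants were hired less often.
hired = (skill + 1.0 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# The learned model recommends group A at a markedly higher rate,
# even though skill does not differ between the groups.
preds = model.predict(np.column_stack([skill, group]))
for g, name in ((0, "A"), (1, "B")):
    print(f"predicted selection rate, group {name}: {preds[group == g].mean():.2f}")
```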

In addition to the substantive aspects of ML, the dynamic adoption of ML also creates governance challenges. ML applications are developing and evolving at a frantic pace, much faster than traditional regulatory systems can keep up with, creating a pacing problem.[5] Moreover, even if new rules are enacted, they will quickly be out of date, and nations are understandably concerned about “freezing” in place their AI technology with outdated regulations in a highly competitive global economy. Another challenge is that AI is being applied across every industry in the economy, spanning almost every regulatory agency and creating a formidable coordination problem.[6] AI also presents a broad range of potential risks, going beyond the health and safety risks traditionally regulated by governments to include other concerns that agencies have less experience with and less delegated authority to regulate, such as privacy, bias, fairness, worker displacement, autonomy, lack of transparency, and more.[7] Finally, AI has international applications, making purely national regulation problematic.[8]

 

II. INTERNATIONAL FRAMEWORK FOR AI GOVERNANCE

While there has been much discussion about possible international regulatory instruments for AI, especially for lethal autonomous weapons, no international treaties or conventions on AI have been adopted. Various international organizations have adopted non-binding international guidelines on AI, including UNESCO[9] and the OECD.[10] Other international initiatives, such as the Global Partnership on AI, led by Canada and France,[11] have also considered international governance options, but nothing concrete has come of such efforts to date.

In the absence of any binding international AI regulation for the foreseeable future, many jurisdictions pursuing AI technology have also been developing their own regulatory frameworks. Most notable is the European Union (“EU”), which is actively developing a comprehensive regulatory program known as the AI Act,[12] anticipated to be completed in 2023 and to take effect in 2024. The draft EU AI Act takes a risk-based approach and applies different regulatory requirements to different risk tiers. The highest-risk applications, deemed to present an unacceptable threat to fundamental rights, are banned outright; high-risk applications are subject to conformity assessments; and lower-risk applications rely on industry standards and other soft law measures.[13]

The third major AI power, in addition to the U.S. and the EU, is China, which has promulgated a series of AI regulatory programs. Some of these requirements are unique to China, such as the requirement that recommendation algorithms must “vigorously disseminate positive energy,” but others address more common ML governance challenges such as transparency and accountability.[14] On March 1, 2022, another major set of AI regulations took effect in China that, among other things, prohibits companies from using ML algorithms to discriminate among users on price.[15] Many other countries, including Australia, Canada, the U.K., Japan, and Singapore, have adopted their own AI policy frameworks, but have generally not yet enacted enforceable requirements that apply to individual companies.

 

III. U.S. GOVERNMENT REGULATORY INITIATIVES

There have been several bills in the U.S. Congress to regulate AI, most notably the Algorithmic Accountability Act, the most recent iteration of which would direct the Federal Trade Commission (“FTC”) to require impact assessments for high-risk automated decision systems.[16] This proposed bill was not enacted, and although a similar bill is likely to be introduced in the new Congress, there is no evidence it will fare any better than previous versions. Absent a major accident or abuse, which is usually needed to trigger Congress to adopt new statutes, it is unlikely that Congress will undertake major legislative change on AI anytime soon. Instead, the U.S. government is likely to approach AI in the same way it has governed other emerging technologies such as the internet, biotechnology, and nanotechnology, relying primarily on existing regulatory agencies and statutes to apply oversight, supplemented by private governance initiatives. This results in a more decentralized, sector-specific, and incremental governance approach, quite distinct from the European approach of centralized, top-down control.[17]

U.S. government policy on AI through the Obama, Trump, and Biden (so far) administrations has consisted of a “light touch,” sector-specific approach that has become gradually more proactive as AI technology and applications have advanced over the past decade.[18] The U.S. government first started identifying AI as a policy priority in the latter days of the Obama Administration, when a subcommittee on ML/AI was created by the White House Office of Science and Technology Policy (“OSTP”) to coordinate government AI policy. The OSTP subcommittee held a series of public hearings across the country and issued reports, including one entitled Preparing for the Future of Artificial Intelligence.[19] This report raised several concerns about the implementation of AI/ML, such as the potential for discrimination based on biased data used to train ML systems, but noted that experts agreed “that broad regulation of AI research or practice would be inadvisable at this time” and instead called for relying on existing statutory authority to address problems created by AI.[20]

The Trump Administration was somewhat more active on AI, but continued the “light touch” approach of its predecessor. President Trump issued Executive Order 13859 in February 2019, which emphasized the need for the U.S. to retain global leadership in AI.[21] While much of the Executive Order focused on enhancing investment and innovation in AI, on the regulatory side it called upon the National Institute of Standards and Technology (“NIST”) to promote standard-setting on AI, and instructed the Office of Management and Budget (“OMB”) to produce a memorandum on regulatory principles for AI that federal agencies should follow. That guidance memorandum was finalized in November 2020 and identified ten principles for regulation of AI, with an emphasis on ensuring safety, but also advised U.S. regulatory agencies to consider “nonregulatory approaches for AI.”[22]

Just as the Biden administration was about to take office, Congress passed the National Artificial Intelligence Initiative Act of 2020, which took effect on January 1, 2021.[23] This bipartisan statute created the National AI Initiative, which “provides an overarching framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies, in cooperation with academia, industry, non-profits, and civil society organizations.”[24] The Initiative is structured around six “strategic pillars” – Innovation, Advancing Trustworthy AI, Education and Training, Infrastructure, Applications, and International Cooperation.[25] This Initiative created a framework for the incoming Biden administration to structure its AI activities.

To date, the Biden administration has continued the sector-specific approach that relies on existing statutory authorities, with no proposals or efforts to establish comprehensive regulation of AI. However, many federal agencies have ramped up their focus on AI under the Biden presidency. Perhaps the highest profile activity was the promulgation of a “Blueprint for an AI Bill of Rights” by the OSTP in October 2022.[26] The Blueprint set forth five principles for responsible AI: (1) safe and effective systems; (2) algorithmic discrimination protections; (3) data privacy; (4) notice and explanation; and (5) human alternatives, consideration, and fallback.[27] The proposed Bill of Rights received mixed reviews, with one frequent criticism being that the document was “toothless.”[28]

Several other agencies have started new AI guidance or enforcement initiatives for specific industry sectors, mostly driven by the potential for bias from ML systems. The FTC has been at the forefront of these efforts. In April 2021, the FTC issued a statement notifying stakeholders that it intends to use its authority under the 1970 Fair Credit Reporting Act, the 1974 Equal Credit Opportunity Act, and section 5 of the FTC Act to ensure that AI systems are fair, transparent, and truthful.[29] The FTC has used this authority to take enforcement action against a number of algorithmic AI products that violate its principles, including applying a new remedy of “algorithmic disgorgement” to require an offending company to destroy all records of the relevant algorithm.[30] Perhaps most significantly, the FTC published an advance notice of proposed rulemaking in August 2022 on possible new regulations “concerning the ways in which companies collect, aggregate, protect, use, analyze, and retain consumer data….”[31] Although this notice applied broadly to all types of commercial surveillance, it included a section specifically addressing automated decision-making systems (i.e., ML).[32]

The Food and Drug Administration (“FDA”) has been particularly proactive in considering the impact of AI and ML on its regulatory programs. Many medical devices are using AI and ML – the FDA has already approved over 500 such devices.[33] One problem with the traditional FDA regulatory model for medical devices is that it assumes products are static, and thus that once approved, they will remain the same for their useful life. AI devices using ML are dynamic in that they continue to learn and improve even after FDA approval, which the existing FDA oversight approach does not accommodate or address. The FDA released a discussion paper and then a follow-up action plan to create a revised regulatory approval pathway for AI/ML systems given their unique dynamic nature.[34] The FDA also explored the development of a software pre-certification program to allow more flexible approval of complex software programs such as those using AI/ML.[35] Unfortunately, the FDA determined that this model would not comport with its existing statutory authority and thus would not proceed further with the program,[36] a clear example of an outdated regulatory statute blocking an innovative governance approach.

The Department of Transportation has also actively engaged with the development of AI for autonomous vehicle driving systems, including publishing a series of major reports providing guidance for industry and state and local governments on the safe development of autonomous vehicles.[37] These reports primarily rely on private standards to ensure autonomous vehicle safety, but the agency has recently issued a request for comment on a governance framework for autonomous driving system safety.[38] NIST has also been very active in interacting with private standard-setting efforts, issuing a series of recommendations on topics such as explainable AI, AI bias, and risk management that can inform both standard-setting bodies and individual companies.[39] Other federal agencies are also taking action by issuing various types of guidance documents, including the Equal Employment Opportunity Commission (“EEOC”),[40] the Department of Health and Human Services (“DHHS”),[41] the Consumer Product Safety Commission (“CPSC”),[42] the Department of the Treasury,[43] the Consumer Financial Protection Bureau (“CFPB”),[44] and the Federal Housing Finance Agency (“FHFA”).[45]

In addition to these federal efforts, some state and local governments have also begun regulatory initiatives on AI/ML. At the state level, California has been most active and is pursuing a number of regulatory measures for AI. The State has recently proposed amendments to its employment anti-discrimination regulations that would impose liability on companies using AI tools that discriminate against protected groups.[46] California has also adopted a law that requires AI bots to disclose their non-human nature.[47] California’s data privacy statutes, specifically the California Consumer Privacy Act of 2018 as amended by the California Privacy Rights Act of 2020, will apply to many AI applications using ML, since they will often use consumer data. California and several other states are in the early stages of trying to adopt other measures relating to AI, although many such initiatives have been unsuccessful in previous years.[48] At the local level, New York City is leading the way by enacting Local Law 144, which will require employers to conduct a bias audit before using any algorithm in the hiring process and to notify job applicants before its use.[49] This law was originally scheduled to take effect on January 1, 2023, but has now been delayed to April 15, 2023.[50]

In summary, then, we have seen a significant ramping up of activity relating to AI by U.S. regulatory agencies in the past couple of years, primarily at the federal level but also at the state level, but this activity is limited to applying existing statutory authority to AI. Many of these statutes were enacted decades ago, long before the modern wave of AI/ML, and there does not appear to be any momentum in Congress toward adopting comprehensive AI legislation. As such, U.S. government regulation of AI will likely remain limited for the foreseeable future, and various soft law initiatives, discussed in the next section, are likely to continue to play a central role in AI governance.

 

IV. AI SOFT LAW

The light touch of AI regulation in the U.S. has been supplemented with soft law to fill the governance gaps, if not voids. Soft law consists of programs that set substantive expectations but that are not directly enforceable by governments.[51] Soft law comes in many different forms, including private standards, codes of conduct, best practices, statements of principles, certification programs, voluntary programs, and public-private partnerships.[52] A variety of different types of organizations can promulgate soft law, including governmental bodies, industry groups, individual companies, non-governmental organizations, or any combination of the above.[53]

Soft law is the most prominent form of AI governance today, both in the United States and elsewhere. A recent empirical survey by Carlos Ignacio Gutierrez identified and characterized over 600 AI soft law programs that had been adopted by the end of 2019.[54] These soft law programs were extremely diverse, varying in the issues they addressed, the form of the soft law instrument, the type of organization that promulgated them, the geographical origin and reach of the program, and whether they included any implementation or enforcement provisions. One of the most surprising findings was that government entities were the most frequent participants in developing soft law programs, serving more of a convening or coordinating role than a traditional coercive regulatory role.[55] Another significant finding was that only about one-third (31 percent) of the soft law programs analyzed publicly disclosed any type of implementation or enforcement provisions.[56]

Soft law is currently the dominant form of AI governance and is likely to continue to be so for some time, but as the empirical study by Gutierrez shows, the AI soft law environment is complex and multi-layered. At the international level, organizations such as the OECD and UNESCO have promulgated principles or codes of ethics for responsible AI, which many organizations in the private and public sectors attempt to integrate into their own practices. In addition, international standard-setting bodies such as the ISO and IEEE are issuing private standards on responsible AI and AI governance. For example, the IEEE P7000 series is a set of standards under development addressing various aspects of ethical AI.[57] IEEE is also developing a standard for governance of AI by entities that develop or use AI.[58] NIST is developing a series of documents to assist AI standard-setting bodies, or to assist companies directly in building their own AI governance programs, such as the recently released NIST framework for AI risk management.[59] A large variety of more focused AI soft law instruments have been produced by trade associations, professional societies, think tanks, non-governmental organizations, and individual companies.

In recent years there has been a “techlash” against technology companies as a result of incidents such as the Boeing crashes, Theranos’ fraud, and data handling scandals such as Facebook’s Cambridge Analytica debacle. This has translated into a backlash against self-regulatory and soft law approaches to technology governance. The lack of implementation and enforcement measures in the majority of AI soft law programs no doubt contributes to this unease.  We can learn from the history of soft law for AI and other technologies that accountability and indirect enforcement mechanisms can make soft law more effective and credible, without losing the important benefits of soft law in terms of flexibility, agility and diversity.[60] Since soft law will be essential for the safe and responsible development of beneficial AI, making it successful should be a common goal. To paraphrase Winston Churchill, “[Soft law] is the worst form of govern[ance], except for all the others.”


[1] Regents Professor and Faculty Director, Center for Law, Science & Innovation, Sandra Day O’Connor College of Law at Arizona State University.

[2] Wendell Wallach & Gary Marchant, Toward the Agile and Comprehensive International Governance of AI and Robotics, 107 Proceedings of the IEEE 505, 505-06 (2019).

[3] Nicol Turner Lee, Paul Resnick & Genie Barton, Algorithmic Bias Detection and Mitigation: Best Practices and Policies To Reduce Consumer Harms, Brookings Inst., May 22, 2019, available at https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/#footref-6.

[4] Will Knight, The Dark Secret at the Heart of AI, Technology Review, April 11, 2017, available at https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/.

[5] Gary E. Marchant, The Growing Gap Between Emerging Technologies and the Law, in The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight: The Pacing Problem 19, 22–23 (Gary E. Marchant et al. eds., 2011).

[6] Wallach & Marchant, supra note 2, at 505.

[7] Id. at 505-06.

[8] Id. at 506.

[9] UNESCO, Recommendation on the Ethics of Artificial Intelligence, available at https://www.unesco.org/en/artificial-intelligence/recommendation-ethics.

[10] OECD, OECD AI Principles, available at https://oecd.ai/en/ai-principles.

[11] Launch of the French-Canadian Initiative Global Partnership on AI (GPAI) (June 15, 2020), available at https://ai-regulation.com/launch-of-the-french-canadian-initiative-global-partnership-on-ai-gpai/.

[12] EUROPEAN COMMISSION, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, (2021), available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206.

[13] Id.

[14] See Matt Sheehan, China’s New AI Governance Initiatives Shouldn’t Be Ignored, Carnegie Endowment for International Peace, Jan. 4, 2022, available at https://carnegieendowment.org/2022/01/04/china-s-new-ai-governance-initiatives-shouldn-t-be-ignored-pub-86127.

[15] Jennifer Conrad & Will Knight, China Is About to Regulate AI – And the World is Watching, Wired (Feb. 22, 2022), available at https://www.wired.com/story/china-regulate-ai-world-watching/.

[16] Algorithmic Accountability Act of 2022, H.R. 6580, 117th Cong. (2021-22).

[17] See Adam Thierer, U.S. Artificial Intelligence Governance in the Obama-Trump Years, 2 IEEE Trans. Tech. & Soc’y 175, 179 (2021).

[18] Id. at 176.

[19] Executive Office Of The President National Science And Technology Council Committee On Technology, Preparing For the Future of Artificial Intelligence (Oct. 2016), available at https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.

[20] Id. at 17.

[21] President Donald Trump, Executive Order 13859: Maintaining American Leadership in Artificial Intelligence, 84 Fed. Reg. 3964 (Feb. 14, 2019).

[22] Russell T. Vought, OMB Director, Guidance for Regulation of Artificial Intelligence Applications (Nov. 17, 2020), available at https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf.

[23] See National Artificial Intelligence Initiative Act of 2020, §§ 5001 et seq., Division E of the National Defense Authorization Act for Fiscal Year 2021 (Jan. 1, 2021), available at https://www.congress.gov/116/crpt/hrpt617/CRPT-116hrpt617.pdf#page=1210.

[24] National Artificial Intelligence Initiative (NAII), About NAII (undated), available at https://www.ai.gov/about/#NAII-NATIONAL-ARTIFICIAL-INTELLIGENCE-INITIATIVE.

[25] Id.

[26] OSTP, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (Oct. 2022), available at https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.

[27] Id.

[28] See, e.g. Khari Johnson, Biden’s AI Bill of Rights Is Toothless Against Big Tech, Wired, Oct. 4, 2022, available at https://www.wired.com/story/bidens-ai-bill-of-rights-is-toothless-against-big-tech/.

[29] FTC, Aiming for Truth, Fairness, and Equity In Your Company’s Use of AI (April 19, 2021), available at https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.  

[30] See Kate Kaye, The FTC’s New Enforcement Weapon Spells Death for Algorithms, Protocol, March 14, 2022, available at https://www.protocol.com/policy/ftc-algorithm-destroy-data-privacy.

[31] FTC, Trade Regulation Rule on Commercial Surveillance and Data Security, 87 Fed. Reg. 51273 (Aug. 22, 2022).

[32] Id. at 51283-84.

[33] FDA, Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices (Oct. 5, 2022), available at https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices.

[34] FDA, Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (Jan. 2021), available at https://www.fda.gov/media/145022/download.

[35] FDA, Developing Software Precertification Program: A Working Model, v 2.0, June 2018, available at https://www.fda.gov/media/113802/download.

[36] FDA, The Software Precertification (Pre-Cert) Pilot Program: Tailored Total Product Lifecycle Approaches and Key Findings (Sept. 2022), available at https://www.fda.gov/media/161815/download.

[37] DOT, USDOT Automated Vehicles Activities, available at https://www.transportation.gov/AV.

[38] NHTSA, Framework for Automated Driving System Safety, 85 Fed. Reg. 78058 (Dec. 2, 2020).

[39] NIST, Trustworthy and Responsible AI, available at https://www.nist.gov/programs-projects/trustworthy-and-responsible-ai.

[40] EEOC, Draft Strategic Enforcement Plan, 88 Fed. Reg. 1379, 1381 (Jan. 10, 2023).

[41] DHHS, Nondiscrimination in Health Programs and Activities, 87 Fed. Reg. 47824 (Aug. 4, 2022).

[42] CPSC, Artificial Intelligence and Machine Learning In Consumer Products (May 19, 2021), available at https://www.cpsc.gov/s3fs-public/Artificial-Intelligence-and-Machine-Learning-In-Consumer-Products.pdf.

[43] Department of Treasury et al., Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, Including Machine Learning, 86 Fed. Reg. 16837 (March 31, 2021).

[44] CFPB, Consumer Financial Protection Circular 2022–03: Adverse Action Notification Requirements in Connection With Credit Decisions Based on Complex Algorithms, 87 Fed. Reg. 35864 (June 14, 2022).

[45] FHFA, Advisory Bulletin AB 2022-02: Artificial Intelligence/Machine Learning Risk Management (Feb. 10, 2022), available at https://www.fhfa.gov/SupervisionRegulation/AdvisoryBulletins/AdvisoryBulletinDocuments/Advisory-Bulletin-2022-02.pdf.

[46] Fair Employment & Housing Council, Draft Modifications to Employment Regulations Regarding Automated-Decision Systems (March 15, 2022), available at https://calcivilrights.ca.gov/wp-content/uploads/sites/32/2022/03/AttachB-ModtoEmployRegAutomated-DecisionSystems.pdf.

[47] California Business and Professions Code § 17940, available at https://codes.findlaw.com/ca/business-and-professions-code/bpc-sect-17940/.

[48] National Conference of State Legislatures, Legislation Related to Artificial Intelligence (Aug. 26, 2022), available at https://www.ncsl.org/technology-and-communication/legislation-related-to-artificial-intelligence.

[49] New York City Council, Law 2021/144, Local Law to Amend the Administrative Code of the City of New York, In Relation To Automated Employment Decision Tools (Dec. 11, 2021), available at https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9.

[50] Ryan Golden, NYC Delays Enforcement of AI in Hiring Law to April 2023, HR Dive (Dec. 14, 2021), available at https://www.hrdive.com/news/nyc-ai-in-hiring-law-delayed-enforcement-april-2023/638793/.

[51] Gary E. Marchant & Brad Allenby, Soft Law: New Tools for Governing Emerging Technologies, 73 Bull. Atomic Sci. 108, 108 (2017).

[52] Gary Marchant, Lucille Tournas & Carlos Ignacio Gutierrez, Governing Emerging Technologies Through Soft Law: Lessons for Artificial Intelligence- An Introduction, 61 Jurimetrics 1, 5 (2020).

[53] Kenneth W. Abbott, Gary E. Marchant & Elizabeth A. Corley, Soft Law Oversight Mechanisms for Nanotechnology, 52(3) Jurimetrics, The Journal of Law, Science, and Technology 279, 298-99 (2012).

[54] Carlos I. Gutierrez & Gary Marchant, A Global Perspective of Soft Law Programs for the Governance of Artificial Intelligence, SSRN (May 28, 2021), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3855171.

[55] Id. at 13-14.

[56] Carlos Ignacio Gutierrez, Transitioning From Ideas to Action: Trends in the Enforcement of Soft Law for the Governance of Artificial Intelligence, 2 IEEE Transactions on Technology and Society 210, 211 (2021).

[57] IEEE P7000 Projects, available at https://ethicsstandards.org/p7000/.

[58] IEEE P2863, available at https://sagroups.ieee.org/2863/.

[59] NIST, AI Risk Management Framework, available at https://www.nist.gov/itl/ai-risk-management-framework.

[60] Gary Marchant, Lucille Tournas & Carlos Ignacio Gutierrez, Governing Emerging Technologies Through Soft Law: Lessons for Artificial Intelligence- An Introduction, 61 Jurimetrics 1, 9-16 (2020); Gutierrez, supra note 56, at 211-15.