Different AI regulatory regimes are currently emerging across Europe, the United States, China, and elsewhere. But what do these new regulatory regimes mean for companies and their adoption of self-regulatory and compliance-based tools and practices? This article first outlines how and where AI regulations are emerging and how these, in some cases, appear to be on divergent paths. Second, it discusses what this means for businesses and their global operations. Third, it comments on a way forward amid the growing complexities of AI use and regulation, which sit between soft law practices and emerging hard law measures.

By Danni Yu & Benjamin Cedric Larsen[1]

 


I. AI GOVERNANCE CONCEPTUALIZED

Two distinct but connected forms of AI governance are currently emerging. One is soft law governance, which functions as self-regulation based on non-legislative policy instruments. This group includes private sector firms issuing principles, guidelines, internal audits, and assessment frameworks for developing ethical AI. Actionable mechanisms in the private sector usually focus on developing concrete technical solutions, including internal audits, standards, or explicit normative encoding.[2] Soft law governance also encompasses multi-stakeholder organizations such as the Partnership on AI, international organizations such as the World Economic Forum, standard-setting bodies such as ISO/IEC,[3] CEN/CENELEC,[4] and NIST,[5] and interest organizations such as the Association for Computing Machinery (“ACM”), among others. Taken together, soft law governance and its associated mechanisms are essential in setting the default for how AI technologies are governed.

Hard law measures, on the other hand, entail laws and legally binding regulations that define permitted or prohibited conduct. Regulatory approaches generally refer to legal compliance, the issuing of standards-related certificates, or the creation or adaptation of laws and regulations that target AI systems.[6] Policymakers are currently contemplating several approaches to regulating AI, which broadly can be categorized across AI-specific regulations (e.g. EU AI Act), data-related regulations (e.g. GDPR, CCPA, COPPA), existing laws and legislation (e.g. antitrust and anti-discrimination law), and domain or sector-specific regulations (e.g. HIPAA and SR 11-7).

 

II. EMERGING REGULATORY LANDSCAPES

According to the OECD AI Policy Observatory, the 69 countries and territories it tracks have already released more than 200 initiatives targeting AI governance and regulation. Initiatives are aimed at different areas such as antitrust concerns, interoperability standards, risk mitigation (including consumer and social protection), the delivery of public services, and the protection of public values.[7]

While many countries have implemented national AI strategies, not all countries and territories take the same approach to AI governance and regulation. Different approaches reflect a country’s existing institutions, including its culture and value systems, as well as economic considerations, e.g. regarding innovation. Before turning to what this means for businesses and their international operations, a few examples of emerging AI regulations are highlighted below.

In many ways, the European Union (“EU”) has been a frontrunner in data and AI regulation. The EU’s AI Act (“AIA”),[8] which is expected to gradually go into effect starting in 2024, establishes a horizontal set of rules for developing and using AI-driven products, services, and systems within the EU. The Act is modeled on a risk-based approach: AI systems that pose unacceptable risks are banned outright, while high-risk systems are subject to conformity assessments, including independent audits and new forms of oversight and control.[9] Limited-risk systems are subject to transparency obligations, and systems posing little or no risk remain unaffected by the Act. The EU has also proposed an AI Liability Directive, which targets the harmonization of national liability rules for AI.[10]
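To make the tiered structure concrete, the short sketch below shows how a compliance team might record the Act’s four risk tiers and the obligations attached to each. It is an illustrative simplification only: the tier names follow the categories described above, while the obligation labels and the example system name are hypothetical and do not reproduce the Act’s legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described in the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # conformity assessments, oversight, monitoring
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no new obligations

# Hypothetical internal mapping a compliance team might maintain while triaging systems.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not develop or deploy in the EU"],
    RiskTier.HIGH: ["conformity assessment", "independent audit",
                    "human oversight", "post-market monitoring"],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(system_name: str, tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a system's risk tier."""
    return [f"{system_name}: {item}" for item in OBLIGATIONS[tier]]

# Example: a hypothetical CV-screening model would likely land in the high-risk tier.
for line in obligations_for("cv-screening-model", RiskTier.HIGH):
    print(line)
```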

In the United Kingdom, the government released a proposal for regulating the use of AI technologies in June 2022, which focuses on a “light touch” sectoral approach where guidance, voluntary measures, and sandbox environments are encouraged as a means to assess and test AI technologies before they are marketed. The proposal is meant to reflect a less centralized approach than the EU AI Act.[11]

In Canada, the Directive on Automated Decision-Making came into effect in April 2019 to ensure that the government’s use of AI to make administrative decisions is compatible with core administrative values.[12] Canada’s Artificial Intelligence and Data Act (“AIDA”) was introduced in June of 2022 and, if approved, would be the first law in the country to regulate the use of AI systems. The objective of AIDA is to establish common requirements across Canada for the design, development, and deployment of artificial intelligence technologies that are consistent with national values and international standards.[13]

The United States’ approach to artificial intelligence is more fragmented and characterized by the idea that companies, in general, must remain in control of industrial development and governance-related criteria.[14] In terms of AI regulation, the U.S. Algorithmic Accountability Act,[15] a horizontal AI regulation, was reintroduced in 2022. Should the Act be passed, it would require companies that develop, sell, and use automated systems to be subject to new rules on when and how AI systems are used.[16] It would require organizations to perform impact assessments of automated decision-making systems (“ADS”) before deployment and of augmented decision-making processes after deployment. This approach mirrors the conformity assessments and post-market monitoring plans mandated by the EU AI Act. In the absence of national legislation, some states and cities have started implementing their own regulations, such as the California Consumer Privacy Act (“CCPA”) and New York City’s Law on Automated Employment Decision Tools (Local Law 144). Local Law 144 stipulates that any automated hiring system used on or after January 1, 2023, in NYC must undergo a bias audit consisting of an impartial evaluation by an independent auditor, including testing to assess the potential disparate impact on some groups.[17]
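To illustrate the kind of metric such a bias audit examines, the sketch below computes per-group selection rates and impact ratios (each group’s selection rate relative to the highest-selecting group), a common measure of potential disparate impact. The group names and data are hypothetical, and the calculation is a simplified illustration rather than the audit methodology prescribed by the city’s rules.

```python
from collections import defaultdict

def impact_ratios(records):
    """Compute each group's selection rate divided by the highest group's rate.

    records: iterable of (group, selected) pairs, where `selected` is True if the
    automated tool advanced the candidate.
    """
    counts = defaultdict(lambda: [0, 0])            # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1

    rates = {g: sel / total for g, (sel, total) in counts.items()}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Toy, hypothetical data: 40/100 of group_a and 25/100 of group_b are advanced.
sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 25 + [("group_b", False)] * 75)
print(impact_ratios(sample))  # {'group_a': 1.0, 'group_b': 0.625}
```

Under the widely used “four-fifths rule” in U.S. employment contexts, impact ratios below 0.8 are often treated as a signal of potential adverse impact warranting closer review.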

China’s approach to AI legislation is evolving rapidly and is heavily based on central government guidance.[18] China, for example, oversees recommender engines through the “Internet Information Service Algorithmic Recommendation Management Provisions,”[19] which went into effect in March 2022 as the first regulation of its kind worldwide. The law gives users new rights, including the ability to opt out of recommendation algorithms and to delete user data. The regulation goes further, however, with its content moderation provisions, which require private companies to actively promote “positive” information that follows the official line of the Communist Party.[20] Regarding generative AI, the Cyberspace Administration of China implemented regulations on AI-generated image, audio, and text-generation software (so-called synthetic media) on January 10, 2023, also marking the first regulation of its kind globally.[21]

In Singapore, A.I. Verify[22] was introduced in May of 2022 as the world’s first AI Governance Testing Framework and Toolkit for companies who want to demonstrate responsible AI (“RAI”) in an objective and verifiable manner. The toolkit, which remains voluntary, provides a governance testing framework that verifies the performance of an AI system against the developer’s claims – with respect to internationally accepted AI ethics principles.[23]

Many other countries are currently devising AI-related regulations. The Philippines, for example, enacted regulations on spreading false news in 2021.[24] In Brazil, a December 2022 proposal outlines a risk-based approach to AI regulation, which includes specifying new rights for individuals affected by AI systems.[25] In India, the Ministry of Electronics and Information Technology (“MeitY”) is considering incorporating Niti Aayog’s proposed Responsible AI #AIForAll approach into India’s AI mission,[26] and MeitY has also proposed new privacy legislation, the Digital Personal Data Protection Act, 2022.[27]

While there are too many national AI regulations to recount here, the point stands that such regulations are materializing across a variety of countries and contexts. Governments’ disparate approaches to AI application and regulation are likely to have varying consequences for businesses in terms of the perceived costs of compliance, which could result in diverging organizational practices.

 

III. BUSINESSES TAKE THE LEAD ON SELF-GOVERNANCE

As the regulatory landscape slowly evolves, companies increasingly take the lead on self-governance to ensure their development and use of AI systems comply with incoming regulations across regions of operation.

Early adopters of AI-related self-governance come from various sectors such as technology, media, and telecom (“TMT”), financial services, healthcare, and consumer goods. As AI is widely used in these sectors, some companies have adopted a global best practices approach to AI governance.

The first step in this approach is creating a list of principles that demonstrate the business’s commitment to responsible AI. These principles are usually created by the company’s senior leadership and are aligned with the company’s core values and culture. Microsoft,[28] Google,[29] Amazon,[30] Meta,[31] HSBC,[32] AstraZeneca,[33] Novartis,[34] and H&M,[35] among others, have publicly shared their responsible AI principles. Fairness, transparency, privacy, explainability, safety, controllability, and human-centeredness are among the most common themes and are generally in line with the OECD’s AI Principles.[36]

While AI principles are a good starting point, successful implementation rests on developing a cross-organization AI governance structure. One common approach is to place decision-making and oversight responsibilities at a centralized level, for example, in a hub or Center of Excellence (“CoE”). In this model, a board of senior business and functional leaders is responsible for decisions on AI, including creating and enacting associated governance mechanisms. To operationalize AI governance, the hub or CoE usually assembles a group of technical and subject matter experts tasked with increasing awareness and literacy, e.g. on sensitive use cases, while developing processes, tools, and best practices linked to responsible AI.

An example of this structure can be found at Microsoft. The Microsoft senior leadership team is the final decision maker accountable for the company’s direction on responsible AI and steers the company’s commitments to AI principles, values, and human rights. A committee called AETHER, made up of expert working groups, advises the senior leadership and practitioners on questions, challenges, and opportunities linked to the development and use of AI.[37] The leadership’s decisions are subsequently enacted by the Office of Responsible AI, which serves as a hub working with stakeholders across the company to define governance mechanisms and establish new best practices.[38]

While the above structure is effective for AI governance in some companies, it is not a one-size-fits-all solution. Companies must choose their AI governance model based on their culture, organizational structure, and existing governance model. For example, a company with highly autonomous business units may decentralize decision-making for individual use cases while creating a Center of Excellence to provide expertise and best practices across business units.

Despite differences in governance models, the global best practices approach usually features a group, hub, or CoE that embodies the following capabilities:

  • Understanding of the company’s values, culture, and operations.
  • Multi-disciplinary expertise on the topics of data & AI, risks, compliance, legal, public policy, and any sector- and business-specific knowledge relevant to key AI use cases.
  • Up-to-date knowledge of the RAI landscape, including regulations and best practices.
  • Sponsorship from the top management and ability to navigate the organizational structure to roll out communications, cultural change, and upskilling.
  • And, for companies that wish to take a lead role in responsible AI – R&D capabilities devoted to developing new frameworks and solutions.

This group / hub / CoE can, for example, facilitate the risk classification of AI systems, monitor high-risk AI use cases, and create resources such as guidelines and tools for the responsible assessment, development, and deployment of AI. Furthermore, this group can collaborate with external actors, such as policymakers and researchers, who are working to shape new laws and regulations around AI technology.

As an industry example, J.P. Morgan created the Explainable AI Center of Excellence to research the explainability and fairness of AI systems. The center aims to develop new techniques, tools, and frameworks that make AI/ML models more explainable and fair to advance the company’s AI vision.[39]

By setting up a rigorous self-governance approach to responsible AI, these first-mover companies aim not only to comply with the legal standards across the regions in which they operate but also to stay ahead of them. This avoids a patchwork approach to compliance and risk in the evolving regulatory landscape. By demonstrating sufficient and advanced self-governance practices, companies are better positioned to promote public and private collaboration on AI governance, for example, in support of more flexible regulatory arrangements.

Sector-specific regulations, by contrast, tend to differ considerably across countries and regions, calling for a more targeted approach to compliance for a company with global operations. Such a company will need to understand the varying jurisdictions in which it operates and decide whether a local path diverging from the global best practices approach is warranted. For example, China’s Internet Security Law and National Intelligence Law could require companies to share data with the Chinese government if requested,[40] which could conflict with a global best practices approach.

To address such conflicts, companies may opt for a customized/localized approach, adopting separate regional operations and governance structures to meet local regulatory requirements. This approach is currently embraced by many Chinese tech companies with large international customer bases to reconcile geopolitical implications and diverging regulatory requirements.

For instance, Bytedance carved out TikTok as a standalone business that operates independently from its Chinese counterpart Douyin.[41] Despite having almost the same user interfaces, TikTok and Douyin are allegedly separated in terms of user data and operations. The implications of the separation go beyond data and are directly linked to China’s specific vision for socio-technological governance, which, among other things, requires social media companies to promote “positive” content aligned with the Communist Party’s values. Consequently, social media companies operating in China must adopt content monitoring and moderation protocols that differ from the requirements placed on social media platforms in other countries.

The rise of digital sovereignty, defined as a nation’s ability to control and affect domestic information infrastructure,[42] is another challenge for companies, and one that compels a regional customization approach. For instance, Xiaomi, a Chinese consumer electronics company, moved its international user data and cloud services out of China to comply with data protection regulations in other markets.[43] Furthermore, Xiaomi developed different phone operating systems for the Chinese and international markets and built a version specifically for India after more than 100 Chinese apps and services were banned there.[44] Examples can also be found among American companies operating in China. To meet Chinese regulations, companies such as Apple, AWS, and Microsoft have all partnered with local Chinese entities, a legal requirement for providing data center services in the country.[45]

If geopolitical tensions in the digital space keep intensifying and regulatory requirements continue to diverge, we may see more multinational businesses customize, separate, or, in some cases, even shut down entire business units to remain compliant. In particular, the diverging governance approaches indicate increasing differences in socio-technological values among these countries, and aligning with all of these values at the same time may become increasingly difficult. Hence, this type of decision goes beyond sheer regulatory considerations and reflects a company’s core values. One prominent example is Google’s exit from the Chinese market in 2010 due to increased Internet censorship in China, along with recurring cyber-attack concerns.[46]

While many companies with global operations have adopted a best practices approach, sometimes with regional characteristics, this approach is not feasible for many small and medium-sized enterprises (“SME”). A local recruiting agency operating only in New York City likely has neither the resources nor the incentive to keep track of the highest global standards surrounding AI, but it still needs to meet local legal requirements, for example, on the use of automated employment decision tools. For many SMEs, a local approach to AI governance allows them to comply with regional and sector-specific regulations in a cost-effective way.

However, even for businesses that choose a local approach, there may still be significant costs associated with compliance. At a minimum, companies must establish an oversight process and sometimes work with external auditors. This involves building entirely new capabilities that most local businesses currently lack, such as understanding and evaluating the technical and social implications of the algorithmic systems and tools they use.

Last but not least, startups play a crucial role in creating new AI innovations. While most startups (except for those in the RegTech space) will not devote many resources to AI governance, for example due to cost constraints, it is vital that they have basic checks in place to ensure their innovations are responsible. One possible way is to incorporate existing oversight tools such as Model Cards,[47] as sketched below. Another mechanism is guidelines and checklists provided, for example, by investors to ensure the legality of the start-up’s products and the long-term viability of its business model.
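As an illustration of what such a basic check might look like, the sketch below captures the typical fields of a model card (model details, intended use, evaluation metrics, ethical considerations, and caveats) as a plain data structure that can be versioned and published alongside a model. It is a minimal, hypothetical example rather than an implementation of any particular model card toolkit, and all names and values are invented for illustration.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Lightweight stand-in for a model card capturing common documentation fields."""
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data: str = ""
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    ethical_considerations: str = ""
    caveats: str = ""

    def to_json(self) -> str:
        """Serialize the card so it can be published or attached to a model release."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical card for a fictitious start-up's screening model.
card = ModelCard(
    model_name="resume-screening-classifier",
    version="0.3.1",
    intended_use="Rank applications for recruiter review; not for automated rejection.",
    out_of_scope_uses=["credit decisions", "fully automated hiring decisions"],
    training_data="Internal applications, 2019-2022, anonymized.",
    evaluation_metrics={"auc": 0.87, "min_group_impact_ratio": 0.92},
    ethical_considerations="Reviewed periodically for disparate impact across groups.",
    caveats="Performance not validated for roles outside the original job families.",
)
print(card.to_json())
```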

 

IV. EMERGING RESPONSIBLE AI ECOSYSTEMS

As companies move from AI principles to adopting self-governance practices and new organizational processes, sometimes linked to external audits and services, they increasingly fill the institutional vacuum left by trailing AI regulations. However, as discussed at the beginning of this article, a growing body of legislation is slowly emerging globally. In many cases, these laws support the advancement of an entirely new ecosystem of third-party auditors, assessment bodies, and services at the intersection of soft and hard law measures.

In the case of the European Union, the EU AI Act delineates one vision for what an AI auditing ecosystem could look like.[48] The system would need two core components. First, a clear organizational structure would need to be established that assigns responsibilities to private companies, government agencies, and supranational organizations, and delineates accountability for different types of system failures. Second, these actors would all need access to effective auditing tools and expert knowledge to ensure that high-risk systems are safe and in compliance with the EU AI Act.[49]

Several private sector startups have been moving into the AI governance space, providing a range of services specifically linked to optimizing AI governance across enterprises. Companies such as Fiddler[50] and Vera,[51] for example, ask clients to provide access to their models, code, and data, potentially allowing them to adjust model features and find more equitable outcomes. This process can be accompanied by an algorithmic impact assessment that could be provided to third-party auditors and regulators. Credo AI[52] helps companies manage AI risk through a unified platform that standardizes AI governance efforts across an organization, and TruEra[53] similarly provides a platform for explaining and monitoring AI models according to quality and reliability.

Traditional consulting companies are also creating new services to assess AI. EY, for example, sells a service that turns responses to questions about AI systems into a score that quantifies risks.[54] BCG X created Rate.AI, a web-based self-administered tool to assess AI projects and benchmark companies across seven dimensions of responsible AI.[55] Accenture[56] provides an algorithmic assessment process that checks for disparities in potential outcomes of AI systems and monitors for future problems once a model is deployed. BSR[57] conducts human rights assessments without auditing the bias or accuracy of the algorithm itself.

For now, it remains clear that on the public side of the regulatory equation, the know-how needed to put words into practice is lagging, and the public sector has, in many cases, not yet built the institutional infrastructure necessary to operationalize new policies. This is also true of the underlying standards that are intended to serve as governance mechanisms.[58]

NYC Local Law 144 is a case in point. While the law went into effect on January 1, 2023, enforcement has been postponed to April 14, 2023. New York City’s Department of Consumer and Worker Protection will use this time to provide additional guidance on how companies can comply with the law before the new enforcement date.[59]

To ensure regulatory oversight in the case of the EU, the European Commission has proposed setting up a governance structure that spans both Union and national levels. At the Union level, a “European Artificial Intelligence Board” would be established to collect and share best practices among member states and to issue recommendations on uniform administrative practices. At the national level, member states would be required to appoint a competent agency to oversee the application and execution of the AI Act. This structure has similarities to the self-governance model in the private sector, as the European Artificial Intelligence Board’s role in recommending and operationalizing best practices is comparable to the functions of a corporate AI hub / CoE.

Going forward, the idea of creating AI Centers of Excellence (“CoE”) is therefore applicable not only to private sector organizations but also to the public sector. Establishing public and private AI-focused CoEs could prove to be a critical step in (1) strengthening and (2) harmonizing approaches to AI governance and regulation, both nationally and internationally.

One promising avenue toward building common capacity in the public sector could be creating an AI and Regulation Common Capacity Hub (“ARCCH”).[60] To act as a trusted partner for regulatory bodies, the Hub could be housed at a politically independent institution, established as a Center of Excellence in AI, drawing on multidisciplinary knowledge and expertise from across the national and international research community. The Hub would also act as an interface for regulators to interact with relevant stakeholders, including other regulators, industry, and civil society.[61] It would serve as an important source of expertise, especially for companies with fewer resources and less technical expertise to draw on in understanding and addressing the risks posed by AI. Singapore’s A.I. Verify is a good example of a publicly provided tool that promotes transparency and trust in AI products and services through voluntary adoption and disclosure by companies.[62] Additionally, a national hub or CoE could provide regulatory sandboxes that businesses could use to test their AI innovations, and it could work with sector-specific CoEs to advise on the interactions between horizontal AI regulations and sector-specific regulations.

When establishing a public sector AI hub / CoE, it is important to clarify its roles and interactions with other public agencies. In the UK, for example, a new Hub or AI CoE could interface with the Digital Regulation Cooperation Forum (“DRCF”) on cross-regulator collaboration, providing knowledge and expertise on AI regulations, while liaising with the Office for Artificial Intelligence (“OAI”) to receive the latest strategy updates and ensure a pro-innovation governance approach. The Hub could also collaborate with the Centre for Data Ethics and Innovation (“CDEI”) on best practices for operationalizing data and AI policies, and collect and curate research, e.g. that conducted by the Alan Turing Institute (“ATI”), to improve its policies and recommendations.

While national AI Hubs / Centers of Excellence would be able to work with the private sector, they could also work with other national and supranational AI CoEs, such as the European Artificial Intelligence Board and the OECD’s AI Policy Observatory. Over time, this networked approach to AI governance could form a new institutional arena for debating potential issues and areas of alignment between private sector practices and the growing complexities of emerging regulatory regimes.


[1] Project Fellow, Artificial Intelligence and Machine Learning, World Economic Forum, Consultant, Boston Consulting Group, and AI Lead, Centre for the Fourth Industrial Revolution, World Economic Forum, respectively.

[2] AI Ethics Impact Group. (2020). From Principles to Practice – An interdisciplinary framework to operationalise AI ethics. VDE Association for Electrical Electronic & Information Technologies e.V., Bertelsmann Stiftung, 1–56. https://doi.org/10.11586/2020013.

[3] “ISO/IEC JTC 1/SC 42 – Artificial Intelligence.” Accessed January 25, 2023. https://www.iso.org/committee/6794475.html.

[4] “CEN and CENELEC Launched a New Joint TC on Artificial Intelligence.” CEN-CENELEC. March 03, 2021. https://www.cencenelec.eu/news-and-events/news/2021/briefnews/2021-03-03-new-joint-tc-on-artificial-intelligence.

[5] “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” 2023. https://www.nist.gov/system/files/documents/2022/08/18/AI_RMF_2nd_draft.pdf.

[6] Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2.

[7] OECD.AI (2021), powered by EC/OECD (2021), database of national AI policies, accessed on 4/01/2023. https://oecd.ai/en/dashboards/policy-instruments/Emerging_technology_regulation.

[8] EUR-lex Access to European Union law, accessed on 4/01/2023. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206.

[9] European Commission. “Regulatory framework proposal on artificial intelligence.” https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

[10] European Commission, 28 September 2022, Brussels. https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807.

[11] O’Donoghue, Cynthia, Sarah O’Brien & Yunzhe Zhang. “UK Government Announces Its Proposals for Regulating AI.” Technology Law Dispatch. September 2, 2022. https://www.technologylawdispatch.com/2022/09/privacy-data-protection/uk-government-announces-its-proposals-for-regulating-ai/#:~:text=On%2018%20July%202022%2C%20the.

[12] Government of Canada. Directive on Automated Decision-Making. https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592.

[13] “Government of Canada’s Artificial Intelligence and Data Act: Brief Overview.” 2022. https://www.osler.com/en/resources/regulations/2022/government-of-canada-s-artificial-intelligence-and-data-act-brief-overview.

[14] Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M. & Floridi, L. “Artificial Intelligence and the ‘Good Society’: the US, EU, and UK approach.” Science and Engineering Ethics 24, no. 2: 505–528. https://doi.org/10.1007/s11948-017-9901-7.

[15] https://www.congress.gov/bill/117th-congress/house-bill/6580/text#:~:text=To%20direct%20the%20Federal%20Trade,Algorithmic%20Accountability%20Act%20of%202022%E2%80%9D.

[16] Vought, R. “Guidance for Regulation of Artificial Intelligence Applications Introduction.” Executive Office of the President, Office Of Management and Budget. November 17, 2020. https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf.

[17] Crowell. New York City Issues Proposed Regulations on Law Governing Automated Employment Decision Tools. October 14, 2022. https://www.crowell.com/NewsEvents/AlertsNewsletters/all/New-York-City-Issues-Proposed-Regulations-on-Law-Governing-Automated-Employment-Decision-Tools#:~:text=October%2014%2C%202022&text=Local%20Law%20144%2C%20which%20is,the%20use%20of%20such%20tool.

[18] Larsen, B. C. (2022). Governing Artificial Intelligence: Lessons from the United States and China. Copenhagen Business School [Phd]. PhD Series No. 29.2022. https://research.cbs.dk/en/publications/governing-artificial-intelligence-lessons-from-the-united-states-.

[19] Rogier C, Graham W. & Helen T. “Translation: Internet Information Service Algorithmic Recommendation Management Provisions – Effective March 1, 2022.” DigiChina. Stanford University, January 10, 2022. https://digichina.stanford.edu/work/translation-internet-information-service-algorithmic-recommendation-management-provisions-effective-march-1-2022/.

[20] Huld, A. China’s Sweeping Recommendation Algorithm Regulations in Effect from March 1. China Briefing. January 6, 2022. https://www.china-briefing.com/news/china-passes-sweeping-recommendation-algorithm-regulations-effect-march-1-2022/.

[21] Hao, Karen. n.d. “China, a Pioneer in Regulating Algorithms, Turns Its Focus to Deepfakes.” The Wall Street Journal. January 8, 2023. https://www.wsj.com/articles/china-a-pioneer-in-regulating-algorithms-turns-its-focus-to-deepfakes-11673149283.

[22] “Singapore’s A.I.Verify Builds Trust through Transparency.” OECD.ai. Accessed January 25, 2023. https://oecd.ai/en/wonk/singapore-ai-verify.

[23] “Singapore Launches World’s First AI Testing Framework and Toolkit to Promote Transparency; Invites Companies to Pilot and Contribute to International Standards Development.” Infocomm Media Development Authority. Accessed January 25, 2023. https://www.imda.gov.sg/content-and-news/press-releases-and-speeches/press-releases/2022/singapore-launches-worlds-first-ai-testing-framework-and-toolkit-to-promote-transparency-invites-companies-to-pilot-and-contribute-to-international-standards-development.

[24] Seventeenth Congress of the Republic of the Philippines. June 17, 2021. http://legacy.senate.gov.ph/lisdata/2624822593!.pdf.

[25] IAPP. (2022). Brazil’s AI commission to deliver final report. December 2, 2022. https://iapp.org/news/a/brazils-ai-commission-to-deliver-final-report/.

[26] https://www.niti.gov.in/sites/default/files/2022-11/Ai_for_All_2022_02112022_0.pdf.

[27] IAPP. (2022). India’s Digital Personal Data Protection Bill 2022: Does it overhaul the former PDPB? https://iapp.org/news/a/indias-digital-personal-data-protection-bill-2022-does-it-overhaul-the-former-pdpb/.

[28] “Responsible AI Principles from Microsoft.” Microsoft. Accessed January 25, 2023. https://www.microsoft.com/en-us/ai/responsible-ai.

[29] “Our Principles.” Google AI. Accessed January 25, 2023. https://ai.google/principle.

[30] “Responsible use of artificial intelligence and machine learning.” Amazon. Accessed January 25, 2023. https://aws.amazon.com/machine-learning/responsible-machine-learning.

[31] “Facebook’s Five Pillars of Responsible AI.” Meta AI. June 22, 2021. https://ai.facebook.com/blog/facebooks-five-pillars-of-responsible-ai/.

[32] “HSBC’s Principles for the Ethical Use of Data and AI.” Accessed January 25, 2023. https://www.hsbc.com/-/files/hsbc/our-approach/risk-and-responsibility/pdfs/220308-hsbc-principles-for-the-ethical-use-of-data-and-ai.pdf.

[33] “Astrazeneca Data and AI Ethics.” Accessed January 25, 2023. https://www.astrazeneca.com/sustainability/ethics-and-transparency/data-and-ai-ethics.html.

[34] “Our commitment to ethical and responsible use of AI.” Accessed January 25, 2023. https://www.novartis.com/about/strategy/data-and-digital/artificial-intelligence/our-commitment-ethical-and-responsible-use-ai.

[35] “Responsible AI, Is Better AI.” H&M Group, June 17, 2021. https://hmgroup.com/our-stories/responsible-ai-is-better-ai/.

[36] “The OECD Artificial Intelligence (AI) Principles” OECD.AI. Accessed January 25, 2023. https://oecd.ai/en/ai-principles.

[37] Green, Brian, Daniel Lim, and Emily Ratté. “Responsible Use of Technology: The Microsoft Case Study.” World Economic Forum. February 2021. https://www.weforum.org/whitepapers/responsible-use-of-technology-the-microsoft-case-study/#:~:text=The%20World%20Economic%20Forum%20Responsible,technology%20product%20design%20and%20development.

[38] “Putting principles into practice: How we approach responsible AI at Microsoft.” Microsoft AI. Accessed January 25, 2023. https://www.microsoft.com/cms/api/am/binary/RE4pKH5.

[39] “Explainable AI Center of Excellence.” J.P. Morgan. Accessed January 25, 2023. https://www.jpmorgan.com/technology/artificial-intelligence/initiatives/explainable-ai-center-of-excellence.

[40] Cimpanu, Catalin. “China’s Cybersecurity Law Update Lets State Agencies ‘Pen-Test’ Local Companies.” ZDNET, Feb. 8, 2019. https://www.zdnet.com/article/chinas-cybersecurity-law-update-lets-state-agencies-pen-test-local-companies/.

[41] Feng, Coco. “ByteDance carves out TikTok as world’s most valuable technology unicorn finds way to satisfy US-China regulatory demands.” South China Morning Post, November 2, 2021. https://www.scmp.com/tech/article/3154537/bytedance-carve-out-tiktok-worlds-sole-hectocorn-splits-six-units-delineating.

[42] Larsen, B. C. (2022). “The Geopolitics of AI and the Rise of Digital Sovereignty.” Brookings, December 8, 2022. https://www.brookings.edu/research/the-geopolitics-of-ai-and-the-rise-of-digital-sovereignty/.

[43] “Xiaomi Moving International User Data and Cloud Services out of Beijing.” ZDNET. Accessed January 25, 2023. https://www.zdnet.com/article/xiaomi-moving-international-user-data-and-cloud-services-out-of-beijing/.

[44] Wright, Arol. “Xiaomi Is Rebuilding MIUI for India without Any of Its Banned Apps.” XDA Developers. August 8, 2020. https://www.xda-developers.com/xiaomi-rebuilding-miui-for-india-without-banned-apps/.

[45] Swinhoe, Dan. 2021. “Apple Officially Opens Data Center in China.” DCD, May 28, 2021. https://www.datacenterdynamics.com/en/news/apple-officially-opens-data-center-in-china/.

[46] Dahiya, Rekha. “Google’s Exit from China – a Case Study.” Delhi Business Review, Vol. 11, No. 2 (July – December 2010). https://www.delhibusinessreview.org/V_11n2/v11n2case-study.pdf.

[47] “Model Card.” Google. Accessed January 25, 2023. https://modelcards.withgoogle.com/about.

[48] “The European Commission’s Artificial Intelligence Act Highlights the Need for an Effective AI Assurance Ecosystem – Centre for Data Ethics and Innovation Blog.” CDEI. May 11, 2021. https://cdei.blog.gov.uk/2021/05/11/the-european-commissions-artificial-intelligence-act-highlights-the-need-for-an-effective-ai-assurance-ecosystem/.

[49] Mökander, J., Axente, M., Casolari, F. et al. Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Minds & Machines 32, 241–268 (2022). https://doi.org/10.1007/s11023-021-09577-4.

[50] https://www.fiddler.ai/ Accessed January 25, 2023.

[51] https://www.askvera.io/ Accessed January 25, 2023.

[52] https://www.credo.ai/ Accessed January 25, 2023.

[53] https://truera.com/ Accessed January 25, 2023.

[54] “EY Trusted AI Platform.” Accessed January 25, 2023. https://www.ey.com/en_uk/consulting/trusted-ai-platform.

[55] Duranton, Sylvain, Mills, Steven. “Responsible AI: Leading by Example.” Medium. February 3, 2021. https://medium.com/bcggamma/responsible-ai-leading-by-example-c25a8a0a98ea.

[56] “Responsible AI | AI Ethics & Governance.” Accessed January 25, 2023. https://www.accenture.com/us-en/services/applied-intelligence/ai-ethics-governance.

[57] https://www.bsr.org/.

[58] “The EU’s AI Act Is Barreling toward AI Standards That Do Not Exist.” Lawfare. January 12, 2023. https://www.lawfareblog.com/eus-ai-act-barreling-toward-ai-standards-do-not-exist#:~:text=The%20EU.

[59] “New York City Proposes Regulations to Clarify Requirements for Using Automated Employment Decision Tools.” JD Supra. September 26, 2022. https://www.jdsupra.com/legalnews/new-york-city-proposes-regulations-to-3740630/.

[60] Aitken, M., Leslie, D., Ostmann, F., Pratt, J., Margetts, H., & Dorobantu, C. “Common Regulatory Capacity for AI.” The Alan Turing Institute. 2022. https://doi.org/10.5281/zenodo.6838946.

[61] Ibid.

[62] “Singapore’s A.I.Verify Builds Trust through Transparency.” OECD.ai. Accessed January 25, 2023. https://oecd.ai/en/wonk/singapore-ai-verify.