The topic of regulating artificial intelligence has gained momentum in the past few years, most recently with the European Union’s AI Act, which was released last year. At the heart of these discussions are the opacity of machine learning models, the risk of bias in AI systems, and issues such as agency and keeping humans in the loop. There has been a proliferation of principles related to ethical and responsible AI, including sector-specific approaches and guidance. But there is also increasing demand from stakeholder groups, especially civil society, to ensure that these principles are adopted and implemented. While the AI governance landscape continues to evolve, businesses will have to prepare for emerging regulation, which includes elements such as certifications and conformity assessments for high-risk use cases (e.g. automated hiring). Governments, the private sector, and civil society will have to work together on multistakeholder and agile approaches to governing AI, to strike the right balance between innovation and regulation.
By Jayant Narayan[1]
Consider these artificial intelligence and machine learning applications and use cases: an application trained on historical consumer data that can assess whether a loan should be disbursed to an individual, or that can detect financial fraud; or leveraging energy distribution and consumption data to better forecast energy demand. These and several other examples are not use cases on the horizon; they are current, real-world applications of artificial intelligence and machine learning (AI & ML). AI & ML applications and solutions have been rapidly penetrating industries and our lives. By one estimate, the global machine learning market is projected to grow from $15.50 billion in 2021 to $152.24 billion in 2028, a compound annual growth rate (“CAGR”) of 38.6 percent over the forecast period.
While several of these applications are delivering benefits and efficiency gains for businesses, they are fraught with risks and biases and have larger societal implications that must be taken into consideration, especially in applications that directly impact end users; an example is the use of AI solutions for automated recruitment and hiring. Amazon had to scrap its AI-based hiring tool after the tool reportedly discriminated against female candidates. In addition, bias issues cited in sensitive AI-powered applications like facial recognition have led big tech companies like IBM to rethink their strategy and approach. As a result of these emerging issues, the past few years have witnessed growing momentum on the topic of governing and regulating artificial intelligence. Several public and private sector leaders have called for regulation of artificial intelligence and for the responsible and ethical development and deployment of the technology.
I. PROLIFERATION OF AI PRINCIPLES/GUIDELINES AND EMERGING REGULATION
The topic of regulating or governing artificial intelligence is often driven by the potential risks arising from bias in AI systems, as well as concerns about the opacity of models (often referred to as black-box models), which leads to a lack of transparency, especially in self-learning models. Data governance is also a fundamental layer in the discussion of governing AI. A machine learning model’s accuracy and efficacy are highly influenced by the data used to train it, and any bias in the data can be reinforced by the model and propagated at scale. Taking the example cited above of AI systems disbursing loans: if the algorithm making this decision is trained on historical data that is biased against certain ethnicities or genders, the AI system will continue to propagate those biases in its decisions.
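To make the lending example concrete, the short sketch below (synthetic data and hypothetical column names, not from any real dataset) shows how a simple audit of historical training data can surface the kind of group-level disparity that a model trained on that data would otherwise learn and reproduce at scale.

```python
import pandas as pd

# Hypothetical historical lending data that might be used to train an approval model.
# Column names ("gender", "income", "approved") are illustrative only.
history = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "income":   [52, 61, 48, 75, 50, 63, 47, 74],
    "approved": [0,   0,   1,   0,   1,   1,   0,   1],
})

# Approval rate per group: if the historical process favoured one group,
# a model fit on this data will tend to reproduce that disparity at scale.
rates = history.groupby("gender")["approved"].mean()
print(rates)

# Ratio of the lowest to the highest group approval rate; values well below 1.0
# signal a disparity worth investigating before any model is trained.
print("disparity ratio:", rates.min() / rates.max())
```

The same check can be repeated on a trained model’s own predictions to see whether the historical disparity has been carried over or even amplified.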
Other important factors in AI governance are agency and accountability. The issue of agency is important both in terms of how much agency an AI system has to make autonomous decisions, and in terms of the agency of the end user who uses the AI system or is affected by it. The agency and autonomy of AI have led to considerations around keeping humans in the loop, particularly for sensitive use cases that are consumer-facing or in high-risk sectors like healthcare. End users must be provided with appropriate reasoning behind the decisions of an AI-based system and must have recourse in case of disparate impact – something that brings the explainability of AI systems into focus. AI systems should be able to provide a reasonable level of explanation for the decisions taken by their algorithms.
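As a minimal illustration of what such an explanation can look like, the sketch below (synthetic data, hypothetical feature names, scikit-learn assumed available) fits a simple linear credit-scoring model and breaks one decision into per-feature contributions – one basic form of explainability, not a prescription for how any particular system should do it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic, hypothetical credit-scoring data; feature names are illustrative only.
rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, a per-decision explanation can be read off as
# coefficient * (feature value - training mean) for each feature.
applicant = X[0]
contributions = model.coef_[0] * (applicant - X.mean(axis=0))
for name, value in zip(features, contributions):
    print(f"{name:>15}: {value:+.3f}")
print("model decision:", model.predict([applicant])[0])
```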
These discussions have led to the development of hundreds of principles and frameworks related to the governance of artificial intelligence, by governments as well as the private sector. Different policy levers have been explored, including high-level frameworks, principles, voluntary guidelines, and soft law, as well as enforceable regulation. The Asilomar AI Principles, released in 2017, comprise 23 guiding principles for the research and development of artificial intelligence and were endorsed by Stephen Hawking and Elon Musk. National AI strategy documents issued by countries feature sections on the responsible development and deployment of artificial intelligence.
Some countries have also done a deep dive on the topic, such as India’s approach document on Responsible AI, and others, including the private sector, have explored different levers and options, like setting up an AI ethics board (IBM) or internal audit frameworks (Google), among other efforts. In addition, standards bodies like the IEEE have been exploring several standards linked to artificial intelligence and human well-being, including standards for child and student data governance. Many international organizations and UN bodies have also released principles and frameworks related to AI, the ethics of AI systems, and their impact on society. These include the OECD’s AI Principles, UNESCO’s recommendation on the ethics of artificial intelligence, which was adopted by its member states, and UNICEF’s Generation AI, a program focused on AI and its impact on children that provides policy guidance on the topic.
However, there is a growing acknowledgment, as well as demand from civil society and other actors, that responsible AI principles and guidelines must be adopted and implemented. There is also a need for effective laws in addition to voluntary guidelines and principles, which fall outside the purview of regulation. The EU’s AI Act, released last year, has been one of the biggest steps in this direction. The act adopts a risk-based classification of AI systems: while some systems, such as social scoring, are banned outright, several others, such as recruitment, management of critical infrastructure, and law enforcement, are classified as high-risk.
High-risk AI systems must conform to stringent quality standards, including robustness, accuracy, cybersecurity, and appropriate data governance, and will be subject to other important requirements, including conformity assessments and certifications. The act is currently receiving feedback from within the EU and from other stakeholders, such as the private sector and vendors who would be directly impacted when it becomes law. Current discussion and feedback points include the definition of AI as stated in the act, the exact process for conducting conformity assessments, and, in terms of certification, the parameters across which systems would be certified (robustness, fairness, accuracy, transparency, etc.).
II. LINK TO EXISTING LAWS AND REGULATION BY INDUSTRY AND USE-CASES
AI governance is also closely linked to existing laws, some of which will cut across any legislation related to AI – for example, data privacy laws. In particular, any AI application whose models are trained on historical consumer or customer data needs to ensure an appropriate level of privacy and consent before that data is used, while also taking into consideration the potential bias in such data sets. In certain other use cases, such as AI systems for hiring or loan disbursement, AI governance would intersect with existing laws related to discrimination and consumer protection.
If we take an industry and use-case lens, not all uses of artificial intelligence require the same level of scrutiny, governance, or regulation. Applying machine learning to predict when a machine in a factory will break down is very different from applying it to assess whether a radiology image indicates a cancerous tumor. The latter has critical implications, since a wrong assessment could impact human life. Nuances vary across sectors, and some sectors already have several governance requirements.
For example, the banking and financial services sector already has existing governance for algorithms in trading and other use cases, so any discussion of AI governance should build on these existing mechanisms. In addition, at the use-case level, the considerations for governing financial trading platforms that could self-learn and collude, thereby distorting market fairness, would differ from those for systems assessing creditworthiness. Regulators are cognizant of these differences, and hence there has been an increase in the number of frameworks, efforts, and laws being considered at the industry level as well. Some examples are presented below:
- Finance: The Monetary Authority of Singapore (“MAS”) is addressing this through its project Veritas. Veritas aims to enable financial institutions to evaluate their AI solutions against the principles of fairness, ethics, accountability, and transparency (“FEAT”) that MAS co-created with the financial industry in late 2018 to strengthen internal governance around the application of AI and the management and use of data. MAS is also developing open-source tools that financial industry players can use for AI explainability, especially for consumer-facing services or applications. There is a big emphasis on keeping humans in the loop, as also highlighted in Humans keeping AI in check – emerging regulatory expectations in the financial sector, from the Financial Stability Institute at the Bank for International Settlements.
- Healthcare: In the past, algorithms or software code could be ‘locked’ or ‘frozen’ for healthcare or medical devices, thereby ensuring that a device performs to deliver tried and tested outcomes. With self-learning algorithms, the approach has shifted. In response, the Food and Drug Administration in the U.S. has released a Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device, aimed at ensuring that any software medical device with an embedded AI solution that could evolve through model training and tuning can demonstrate analytical and clinical validation. Other frameworks, like the World Economic Forum’s Chatbots RESET, provide a framework for governing the responsible use of conversational AI in healthcare.
- HR and recruitment: The EU’s AI Act has classified recruitment as a high-risk AI system. Recently, the New York City Council passed a local law on automated employment decision tools, a regulation that directly targets the rapidly growing market of AI solution providers in the recruitment space. Under NYC’s law, companies using automated solutions will have to notify candidates if an automated tool was used to make a hiring decision, and vendors will have to undergo a ‘bias audit’ before their tool can be permitted for use in the market (a rough sketch of one metric such an audit might examine follows this list).
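NYC’s law does not prescribe a specific calculation, but bias audits in employment selection often look at adverse-impact (“four-fifths”) ratios. The sketch below is a minimal, illustrative version of that check; the data, group labels, and the 0.8 threshold are hypothetical and for illustration only.

```python
from collections import Counter

# Hypothetical outcomes of an automated screening tool: (group, selected?).
# Group labels and results are illustrative only.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected = Counter(group for group, was_selected in outcomes if was_selected)
totals = Counter(group for group, _ in outcomes)
rates = {group: selected[group] / totals[group] for group in totals}

# Adverse-impact ratio: selection rate of the least-selected group divided by
# that of the most-selected group. A ratio below roughly 0.8 is a common red flag.
ratio = min(rates.values()) / max(rates.values())
print(rates, "adverse impact ratio:", round(ratio, 2))
```

A real audit would go well beyond this single ratio, but the example shows the kind of quantitative evidence vendors may be asked to produce.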
III. THE ROAD AHEAD
As the landscape of AI governance shifts and evolves, stakeholders should explore the following methods and issues to ensure that AI governance delivers on the dual goal of minimizing the risks of AI systems while allowing the technology to benefit end users.
- Sandboxes and evidence-based pilots. As agile regulation evolves to keep pace with developments in the AI space, regulatory sandboxes and pilots in AI – with evidence duly captured, along with the processes and steps involved in conformity assessments and certifications – can deliver the dual benefits of trust and clarity to researchers and the private sector. For example, in the UK, the Ada Lovelace Institute has released a detailed proposal for the use of an algorithmic impact assessment for data access in a healthcare context – the UK National Health Service (NHS)’s proposed National Medical Imaging Platform (“NMIP”).
- International alignment on governance. Countries and regions will always have some local laws. However, global alignment on AI governance can help bring some level of uniformity and fairness for vendors operating across regions – thereby ensuring the right balance between innovation and regulatory compliance and also facilitating ease of doing business, while protecting the rights and interests of consumers/end users.
- Public awareness and education. While regulation can help safeguard the interests of end-users and mitigate risk associated with AI systems, a critical enabler in this journey is public awareness and consumer education. Awareness and education can help consumers make more informed decisions during their interaction with AI agents or bots and also understand consumer rights in this context. This is especially important when the AI system has any level of automated decision-making capabilities.
- Being regulation-ready and responsible AI by design. In this new era of emerging technologies, trust and trustworthiness are important parameters for businesses, especially in consumer-facing industries. Companies should look beyond current fiduciary and regulatory requirements to ensure that responsible AI is not a compliance function but is inherent to their core values and well integrated into products, right from the design stage. As has been highlighted in numerous publications and articles, building multidisciplinary AI teams and ensuring appropriate metrics around explainability, fairness, robustness, transparency, etc. can help deliver trustworthy AI products to the market. Adopters of AI solutions, especially for sensitive use cases, should develop appropriate internal processes to ensure that there is a human in the loop, so that AI systems augment decision-making. AI governance shouldn’t be seen as detrimental to business growth, but rather as an opportunity for companies to build responsible AI practices and demonstrate trustworthy leadership.
- AI governance start-ups and reg-tech solutions. The evolving AI governance space also presents opportunities for businesses, as witnessed by emerging start-ups in the ethical and responsible AI space as well as a number of big-tech providers like IBM, which have rolled out AI fairness assessment and explainability tools such as AI Fairness 360 and AI Explainability 360 (a rough sketch of the former follows this list). Such start-ups could help companies adhere to regulatory and certification requirements by monitoring the quality of the data and algorithms that form the basis of an AI solution and by avoiding disparate outcomes in sensitive use cases.
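As a brief illustration of how such a tool is used, the sketch below computes a disparate-impact metric with the open-source AI Fairness 360 (aif360) package. The data, column names, and group encoding are hypothetical, and the snippet assumes aif360 and pandas are installed; it is a sketch of one metric, not a full fairness assessment.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Synthetic, hypothetical data: outcome 1 = favourable (e.g. loan approved).
df = pd.DataFrame({
    "gender":  [0, 0, 0, 0, 1, 1, 1, 1],   # 0 = unprivileged group, 1 = privileged group
    "outcome": [0, 1, 0, 0, 1, 1, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["outcome"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"gender": 0}],
    privileged_groups=[{"gender": 1}],
)

# Disparate impact: ratio of favourable-outcome rates between the groups;
# values far below 1.0 indicate the unprivileged group fares worse.
print("disparate impact:", metric.disparate_impact())
```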
While AI continues to be a race across countries and regions, harmonizing approaches to governance can help accelerate a market for AI that is built on trust, with the right safeguards in place. Companies will have to revamp their AI development and deployment practices, while governments will have to ensure that high-risk AI use cases are subject to appropriate laws, with due legal options and recourse in case of disparate outcomes. This will ultimately help in developing and deploying AI systems that are human-centered and keep the interests of society at their core.
[1] Manager, Global AI Action Alliance, World Economic Forum.