“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.”
– Elon Musk at MIT’s AeroAstro Centennial Symposium
Dear Readers,
We are living in an increasingly AI-driven world. As the quote above makes evident, even proponents of AI like Elon Musk agree that its various facets require regulatory oversight.
Yet the question is very complex. AI is not just one thing, and it permeates an increasing number of businesses. Firms from social media to consumer finance are integrating AI into the core of their operations. This raises myriad regulatory (not to mention ethical) issues across a number of domains, including antitrust, privacy, public sector transparency, credit regulation and many others.
Legislatures, courts and regulators around the world are grappling with these issues in real time, as AI deployment continues. The pieces in this Chronicle address the state of the art in these regulatory challenges from a number of perspectives.
From a regulatory perspective, as is not uncommon, the EU institutions are leading the charge. A piece by Katerina Yordanova explores the main features and evolution of the proposal for an EU AI Act, and critically assesses some shortcomings that still need to be addressed. It concentrates on regulatory sandboxes and standardization, exploring them in the context of the AI Act and querying whether they effectively protect EU fundamental rights and the public interest.
From a firm perspective, Benjamin Cedric Larsen & Yong Suk Lee outline distinct approaches to AI governance and regulation and discuss their implications for firms' adoption of AI and ethical practices. In particular, they explore the tradeoffs between enhanced AI ethics or regulation and the diffusion of the benefits of AI. In a similar vein, Mona Sloane & Emanuel Moss identify current trends in AI regulation and map out a Practice-Based Compliance Framework (“PCF”) for identifying existing principles and practices that are already aligned with regulatory goals. These can therefore serve as anchor points for compliance and enforcement initiatives.
Finally, from the public sector perspective, Jerry Ma explores the possibility of a “non-dispositive, human-first AI agenda.” This agenda would recognize the simultaneous limitations of standalone “black-box” AI and the potential of AI technology to empower humans. It proposes a form of AI that “rides shotgun” with human experts sitting in the driver’s seat.
In sum, as the philosopher Gray Scott asks, “[t]he real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?” The pieces in this Chronicle make a valuable contribution to this discussion.
As always, many thanks to our great panel of authors.
Sincerely,
CPI Team