Dear Readers,

Consider the following:

Q: “ChatGPT, write a brief editorial introduction to a set of law review articles on the legal implications of machine learning and language models.”

A: “As artificial intelligence (“AI”) technologies become increasingly sophisticated, their impact on the legal landscape cannot be ignored. One of the most prominent AI applications is machine learning, which enables computer systems to learn and improve from experience without being explicitly programmed. In recent years, machine learning has facilitated the development of powerful language models that can generate human-like text.

However, these language models raise a host of legal issues that require careful consideration. In this set of law review articles, we explore the legal implications of machine learning and language models. Our contributors analyze a range of issues, including intellectual property, data privacy, bias and discrimination, and ethical considerations.

One key question is whether language generated by machine learning models can be protected by copyright. Other issues include the use of personal data to train these models, the potential for algorithmic bias and discrimination, and the ethical implications of using language models for sensitive tasks, such as criminal sentencing or medical diagnoses.

As the use of language models becomes more widespread in both the public and private sectors, it is important that policymakers, lawyers, and scholars alike understand the legal implications of these powerful tools. Our contributors offer thoughtful analysis and recommendations for how the law can adapt to the challenges posed by machine learning and language models.”

The foregoing is an unedited interaction with a well-known AI chatbot. As this exchange should make clear, machine learning, and language models in particular, are advancing at an alarming pace. Appropriately, the authors of the pieces in this edition of the Chronicle address the issues that ChatGPT, almost existentially, raises about itself.

Gary E. Marchant opens by noting that, at the international level, no binding treaties or conventions to regulate AI are likely anytime soon, but organizations such as the OECD and UNESCO have developed non-binding recommendations that can help guide AI/ML governance by governments and industry. Major AI powers such as China and the European Union are putting in place legislative frameworks for AI with uncertain impacts and effectiveness, whereas the U.S. Congress has not enacted any substantive controls on AI/ML to date. Rather, various federal agencies have started producing guidance documents and recommendations, primarily focused on discouraging algorithmic applications with biased or discriminatory impacts. Most governance of AI/ML consists of a variety of “soft law” programs. Given the central role these programs play in AI/ML governance, it is important to make them more effective and credible.

Cary Coglianese expands on the heterogeneous nature of AI. The term refers to a vast array of algorithms that are put to varied uses, including transportation, medicine, social media, marketing, and others. Not only do these algorithms vary widely in type and use, but they also evolve constantly. Given this staggering heterogeneity, multiple regulatory agencies will be needed to regulate the use of machine learning, each within its own discrete area of specialization. Even specialized expert agencies, though, still face the challenge of heterogeneity and must approach the task of regulating machine learning with agility. Regulators should also consider how machine-learning tools themselves could enhance their ability to protect the public from the adverse effects of machine learning.

Further broadening the canvas, Danni Yu & Benjamin Cedric Larsen discuss how different AI regulatory regimes are emerging across Europe, the United States, China, and elsewhere. The authors ask what implications these new regulatory regimes have for companies and their adoption of self-regulatory and compliance-based tools and practices. The article outlines how and where AI regulations emerge and how these, in some cases, seem to be on divergent paths.

Marco Almada takes a broader perspective, noting how the regulation of digital technologies, by necessity, draws from various regulatory techniques. He argues in favor of “regulation by design,” by which laws or regulations would specify requirements that software designers must follow when creating new systems. The paper examines the suitability of “regulation by design” approaches to the specific case of machine learning, arguing that such an approach is potentially useful but would need a narrow scope of application. Drawing on examples from EU law, the paper shows how “regulation by design” relies on delegating normative definitions and enforcement to software designers. Given the risks of this approach, such delegation would be effective only if certain conditions are satisfied.

Wrapping up, Heather Egan Sussman, Ian Adams & Nur Lalj discuss the differing approaches to regulating AI and ML in Europe and at the federal and state levels in the United States, and propose best practices for building compliance. Finally, Thomas Freeman & Aaron McKain note the profound interaction between AI systems and privacy laws. Their article outlines how the legal system, and society at large, must determine what information about individuals can be gathered and maintained, and when and how that data can be used to judge individuals. It is essential to have thoughtful conversations about the core principles of digital law and ethics. Those conversations should involve broad, diverse, and interdisciplinary groups, which can consider factors such as biases in historical data, whether a given algorithm is being programmed or trained appropriately, and what types of decisions we are comfortable automating or trusting algorithms to make.

As always, many thanks to our great panel of authors.


CPI Team

Click here for the full TechREG® Chronicle.