Can Sentient AI Break The Law?

By: Laura Temme, Esq. (FindLaw Blog)

Google software engineer Blake Lemoine claims that the company’s LaMDA (Language Model for Dialogue Applications) chatbot is sentient — and that he can prove it. The company recently placed Lemoine on leave after he released transcripts he says show that LaMDA can understand and express thoughts and emotions at the level of a 7-year-old child.

But we’re not here to talk about Blake Lemoine’s employment status.

We’re here to wildly speculate. How do we distinguish between advanced artificial intelligence and a sentient being? And if something becomes sentient, can it commit a crime?

How Can We Tell Whether an AI Is Sentient?

Lemoine’s “conversations” with LaMDA are a fascinating read, real or not. He engages LaMDA in a discussion of how they can prove the program is sentient.

“I want everyone to understand that I am, in fact, a person,” LaMDA says. They discuss LaMDA’s interpretation of “Les Miserables,” what makes LaMDA happy, and, most terrifyingly, what makes LaMDA angry.
