By: Jeremy Kahn (Fortune)
We are all eagerly awaiting Tesla’s “A.I. Day” on Friday, where Elon Musk is expected to unveil a prototype of the humanoid Optimus robot that the company has been working on for the past year. My colleague Christiaan Hetzner and I interviewed robotics experts and Tesla-watchers to get a sense of what Musk may debut tomorrow—and how close it may come to the much-hyped vision of a robot butler that the billionaire entrepreneur has spun. (Spoiler alert: probably not very close.) You can check out our story here. And of course, I am sure we’ll cover whatever it is Musk unveils in next week’s regular edition of this newsletter.
In the meantime, there was a bit of news this week that has received none of the media attention lavished on Tesla’s “A.I. Day” but may nonetheless ultimately prove more significant for how businesses use A.I. On Wednesday, the European Union, which has led the way globally in regulating technology, issued a new set of proposed civil liability rules for those deploying A.I. systems. The proposal seeks to update and harmonize existing liability law across the 27-nation bloc to account for the fact that someone harmed by an A.I. system may find it difficult to figure out exactly how the software works and where it went wrong.
Remember, the EU already has a new A.I. Act working its way through the legislative process. That act establishes different requirements for the use of A.I. in high-risk use cases—those that affect a person’s health, basic freedoms, or financial well-being—and lower-risk use cases, such as marketing or personalization of retail offerings. Companies or organizations using A.I. for high-risk purposes must test the software far more extensively, try to ensure the A.I.’s output can be “interpreted” by the person using it, and maintain an audit trail.
The new proposed rules in the EU’s A.I. Liability Directive introduce several important concepts. For one thing, the rules note that data loss is a potential harm for which someone can seek civil liability damages. For another, the proposed rules make clear that software counts as a kind of “product” under the EU’s liability laws. Consumer rights and data protection advocates cheered both of these provisions.
The proposal also says that a court can now order a company using a high-risk A.I. system to turn over evidence of how the software works. (Any such disclosure must be “necessary and proportionate” to the nature of the claim, a balancing test that the EU says will help ensure that trade secrets and other confidential information are not needlessly disclosed. But, as always, it will be up to the courts to figure out how this applies in any specific case.) The EU warns that if a company or organization fails to comply with a court-ordered disclosure, the courts will be free to presume the entity using the A.I. software is liable…