A federal lawsuit claims OpenAI trained its ChatGPT tool using millions of people’s stolen data. The suit, filed Wednesday (June 28) in U.S. District Court in San Francisco by the Clarkson law firm, accuses the multi-billion dollar artificial intelligence (AI) company of carrying out a strategy to “secretly harvest massive amounts of personal data from the internet.”
This data, the suit alleges, included private information, conversations, medical records and information about children, all collected without owners’ knowledge or permission.
“Without this unprecedented theft of private and copyrighted information belonging to real people,” the suit says, OpenAI and ChatGPT “would not be the multi-billion dollar business they are today.”
The lawsuit asks the court for a temporary freeze on commercial use of OpenAI’s products. Also named in the suit is Microsoft, which has invested more than $10 billion in OpenAI. PYMNTS has contacted both companies for comment but has not yet gotten a reply.
OpenAI released ChatGPT to the public in late 2022, and the tool quickly exploded in popularity thanks to its ability to provide human-sounding responses to prompts. Since then, companies around the world have begun incorporating generative AI into countless products, leading to — as the lawsuit puts it — an “AI arms race.”
As PYMNTS wrote earlier this month, OpenAI’s “original mission was to build safe AI technology for the benefit of humanity.”
The company changed its organizational structure in 2019 to allow it to raise billions of dollars, primarily from Microsoft. The firm now generates revenue by charging a subscription for access to ChatGPT and other tools and by licensing its large language models (LLMs) to businesses.
The change from non-profit to tech giant was noted in the lawsuit, which argues the company “abandoned its original goals and principles, electing instead to pursue profit at the expense of privacy, security, and ethics.”
The suit also devotes considerable space to the potential dangers of AI, citing a belief among experts that the technology could “act against human interests and values, exploit human beings without regard for their well-being or consent, and/or even decide to eliminate the human species as a threat to its goals.”
That danger has been acknowledged by none other than Sam Altman, CEO of OpenAI.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” he testified in a recent Senate hearing. “We want to work with the government to prevent that from happening.”