The heads of Meta and OpenAI have shown support for government artificial intelligence (AI) regulations.
Meta CEO Mark Zuckerberg and Sam Altman, chief executive of OpenAI, voiced their support for government AI regulation following discussions with European Commissioner Thierry Breton, Bloomberg News reported Friday (June 23).
Breton said he and Zuckerberg were “aligned” on the European Union’s (EU) AI regulations, with the two agreeing on the EU’s risk-based approach and on measures like watermarking.
Altman, meanwhile, said he looks forward to working with the EU on AI regulations. Bloomberg noted that the discussions were part of Breton’s tour of tech companies. Following his meetings, Breton said Meta seemed prepared to meet Europe’s new AI rules, though the company will undergo a stress test of its systems in July.
Breton met earlier this year with Google CEO Sundar Pichai, who likewise agreed on the need for voluntary rules around AI.
Earlier this month, the European Parliament approved a draft law known as the AI Act, considered the world’s first comprehensive set of AI rules. The final law is expected to be approved early next year, if not by the end of 2023.
The EU’s proposed legislation would limit some uses of the technology and classify AI systems according to four levels of risk, from minimal to unacceptable. This approach focuses on the applications that present the greatest risk of human harm, similar to the drug approval process.
AI systems in the high-risk sectors, which include critical infrastructure, education, human resources, public order and migration management, will face strict requirements such as transparency and accuracy in data usage.
Companies that violate the regulations could face fines of up to €30 million ($33 million) or 6% of their annual global revenue.
Last week brought reports that OpenAI successfully lobbied for changes to the act to reduce the regulatory burdens the company would have faced.
For example, the company reportedly argued successfully that its general-purpose AI systems should not be included in the AI Act’s high-risk category.