By PYMNTS
While American companies lead the artificial intelligence (AI) development charge, U.S. policymakers risk falling behind.
The dangers inherent in abuses of AI technology — including discriminatory outcomes, algorithmic bias, disinformation and fraud, among others — make it imperative that governments move to regulate the technology appropriately, and fast.
China this week (May 10) concluded the consultation period for its second round of generative AI regulation. The proposed framework builds on rules agreed to in 2022 that were meant to regulate deepfakes.
The bulk of the breakthroughs in generative AI technology have happened within the U.S., but China leads America in consumer adoption of the technology, and market observers believe leaders in Beijing are hoping that faster-paced AI regulation will drive even further uptake.
Microsoft’s China-focused AI chatbot, Xiaoice, has a user base that is almost double the size of the American population.
The European Union (EU) has also made advances in establishing rules to oversee both AI’s impact and its development, reaching a provisional political deal on an Artificial Intelligence rulebook scheduled for a deciding vote Thursday (May 11).
The U.K. Competition and Markets Authority (CMA) said last week (May 4) that it would examine the underlying systems and foundational models behind AI tools, including large language models (LLMs), in what observers see as an early warning to the emergent sector.
Per the government statement, the CMA will review how the innovative development and deployment of AI can be supported against five key principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The CMA will publish its full findings in September 2023.
Meanwhile, the U.S. is just getting started.