With more than 100 million monthly users, OpenAI’s ChatGPT is the fastest-growing consumer application in history.
Now, governments around the world are attempting to keep up with the rapid pace of innovation in artificial intelligence (AI), as a growing suite of headline-grabbing, efficiency-driving next-generation tools increasingly relies on AI models for its technical foundation.
Senate Majority Leader Chuck Schumer on Thursday (April 13) unveiled a new framework of rules designed to chart a path for the United States to regulate and shape the emerging AI industry.
“Today, I’m launching a major new first-of-its-kind effort on AI and American innovation leadership,” Schumer tweeted Thursday.
As is common with era-defining tech innovations, the speed to market of AI products and the scale of their impact have so far outstripped many governments’ readiness to regulate.
The U.S. currently has almost no effective regulation of the technology, and Schumer is positioning his framework as a critical way for America to take a global leadership role as AI becomes increasingly integrated into daily life.
Rapid Advances in AI Are a Wake-Up Call
President Joe Biden’s administration issued a formal request for comment Tuesday (April 11) meant to help shape specific policy recommendations around AI. That same day, China’s internet regulator released its own set of detailed measures to keep AI in check, including mandates that would ensure accuracy and privacy, prevent discrimination and guarantee protection of intellectual property rights.
“The Chinese Communist Party’s release this week of their own approach to regulating AI is a wake-up call to the nation and urgent action is required for the U.S. to stay ahead of China and shape and leverage this powerful technology,” according to a Thursday press release announcing Schumer’s framework. “Leader Schumer believes that it is imperative for the United States to lead and shape the rules governing such a transformative technology and not permit China to lead on innovation or write the rules of the road.”
Because the pace of innovation in AI development is so rapid, time is of the essence in crafting any legislation.
Observers believe any attempt at effective regulation will be a challenge for policymakers who already find themselves on the back foot.
“Things are doubling every few weeks, two months,” Patrick Murphy, founder and CEO at construction technology company Togal.AI, told PYMNTS in a conversation this month that touched on modern advances in generative AI’s commercial applications, adding that Moore’s law has been completely “blown away.”
As majority leader, Schumer exercises control over the Senate’s schedule and is positioned to quickly bring any potential AI legislation to the floor.
Building a Flexible and Resilient Policy Framework
Schumer’s framework, crafted with the input of industry experts and academics, is meant to address the potential risks of AI as the technology relates to society, the economy and U.S. national security.
The policy architecture is designed to be inherently flexible, able to adapt as AI technology continues to advance and to allow for innovation without undermining U.S. leadership in the technology’s development.
“The Age of AI is here, and here to stay,” said Schumer in the release. “…But there is much more work to do, and we must move quickly.”
Potential regulations from Congress would be focused on four guardrails laid out in Schumer’s framework, each geared toward ensuring responsible AI by enhancing security, accountability and transparency.
The four guardrails are Who, Where, How and Protect, according to the release. Who requires identifying who trained the algorithm and who its intended audience is. Where requires disclosure of an AI model’s data sources. How requires an explanation of how an AI arrives at its responses. Protect requires that AI models operate within transparent and ethical boundaries.
The guardrails generally align with what many AI experts have called for, although some industry experts and observers have advocated for regulations that go much further.
By enacting guardrails around the provenance of the data used to train large language models (LLMs) and other AI systems, and by requiring that synthetic content, whether text, images or even voice, be clearly flagged as machine-generated along with its source, governments and regulators can protect consumer privacy without hampering private-sector innovation and growth.