Generative artificial intelligence (AI) tools like ChatGPT could be too good to be true — literally.
A recent consumer alert from the Federal Trade Commission (FTC) warns that scammers are using AI to clone voices as part of high-tech phone scams, while the U.S. Copyright Office has its hands full as generative AI image systems scrape the web for pictures to feed their models, whether those images are rights-free or not.
This comes as President Joe Biden met Tuesday (April 4) with a council of science and technology advisers to discuss the risks, as well as the opportunities, that recent and rapid advances in AI may present for both individual users and national security.
The meeting indicates the level of attention the administration is paying to AI, which burst onto the scene last fall with headline-grabbing capabilities that captured both the public imagination and the attention of businesses increasingly looking to do more with less and eke out efficiencies in today’s challenging economic environment.
Per a report by Reuters, the president used the meeting to reiterate a call for Congress to pass bipartisan privacy legislation that puts limits on the personal data technology companies collect, bans advertising targeted at children, and prioritizes health and safety in product development.
When asked whether he considered AI to be dangerous, the president replied, “It remains to be seen. Could be.”
The speed at which AI is radically transforming global economies has regulators scrambling to keep pace and contain the technology.
As ChatGPT’s popularity has exploded, global concern has mounted over the AI industry’s lack of regulation, and growing lawmaker scrutiny of the technology has spotlighted an emerging tug of war between regulation and innovation.
Just last week, Italy became the first Western nation to ban ChatGPT, the Microsoft-backed OpenAI chatbot, after the country’s Data Protection Authority announced a probe into the tool’s alleged breach of General Data Protection Regulation (GDPR) privacy rules, as well as its age-verification practices.
The Italian regulator claimed that there was “an absence of any legal basis” justifying the massive collection and storage of personal data used to “train” the chatbot, and issued a temporary ban on its use in the country.
ChatGPT is already blocked in a number of countries, including China, Iran, North Korea and Russia.