The US Federal Trade Commission (FTC) issued a report to Congress last week, on June 16, warning against relying on artificial intelligence (AI) to combat online harms and urging policymakers to exercise “great caution” before treating AI as a policy solution. The Commission voted 4-1 to issue the report, with Republican Commissioner Noah Joshua Phillips issuing a dissenting statement.
“Our report emphasizes that nobody should treat AI as the solution to the spread of harmful online content,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology — which can be both helpful and dangerous — will take these problems off our hands.”
In 2021, Congress directed the FTC to draft a report examining ways in which AI “may be used to identify, remove or take any other appropriate action necessary to address” several specified “online harms.” The harms that Congress was particularly concerned about included online fraud, impersonation scams, fake reviews and accounts, bots, media manipulation, illegal drug sales and other illegal activities, sexual exploitation, hate crimes, online harassment and cyberstalking, and misinformation campaigns aimed at influencing elections. Some of these concerns fall outside the FTC’s jurisdiction, as the agency pointed out in the report.
The 82-page report takes a decidedly negative view of using AI to police online harms, some of which, such as bots and manipulated media, are themselves products of AI. The report says that “it is crucial to understand that these tools remain largely rudimentary, have substantial limitations, and may never be appropriate in some cases as an alternative to human judgment.” The FTC’s key conclusion is that governments, platforms and others must exercise great caution in either mandating the use of, or over-relying on, these tools, even for the important purpose of reducing harm.
The FTC seems to imply that if AI is not the answer, and the sheer scale of online harm makes human oversight infeasible, then other approaches, regulatory or otherwise, must be found to address the spread of these harms.
For the FTC, the main problem with these tools is that the datasets behind them are often not robust or accurate enough to avoid false positives or false negatives. Sometimes the systems are trained on biased or unrepresentative data, the agency argues; other times the algorithm itself has design flaws that produce mistaken outcomes. And the limitations of these tools can go well beyond inaccurate results, causing other harms such as discrimination, censorship or invasive commercial surveillance, according to the agency.
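To make the false positive and false negative problem concrete, here is a minimal, purely hypothetical sketch (not from the FTC report; the keyword blocklist, the sample posts and the group labels are all invented) of how a naive moderation rule can flag benign posts from one community far more often than another when its rules, or the data behind them, do not reflect how that community actually talks:

```python
import re

# Purely illustrative: the blocklist, posts, and group labels below are
# invented for this sketch and do not come from the FTC report.

# (post text, group, is_actually_harmful)
POSTS = [
    ("I will destroy you in chess tonight", "gamers", False),
    ("destroy all their servers now", "gamers", True),
    ("kill the lights before the stream", "gamers", False),
    ("this recipe is killer, try it", "cooks", False),
    ("that cake is a bomb, so good", "cooks", False),
    ("send a bomb threat to the venue", "cooks", True),
]

BLOCKLIST = {"destroy", "bomb", "kill"}  # naive stand-in for a trained model

def predict_harmful(text: str) -> bool:
    """Flag a post if any of its tokens appears on the blocklist."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return bool(tokens & BLOCKLIST)

def error_rates_by_group(posts):
    """Return per-group false positive and false negative rates."""
    counts = {}  # group -> [false_pos, false_neg, actual_pos, actual_neg]
    for text, group, harmful in posts:
        fp, fn, pos, neg = counts.setdefault(group, [0, 0, 0, 0])
        flagged = predict_harmful(text)
        if harmful:
            pos += 1
            fn += int(not flagged)   # harmful post the rule missed
        else:
            neg += 1
            fp += int(flagged)       # benign post the rule flagged
        counts[group] = [fp, fn, pos, neg]
    return {
        g: {"FPR": fp / neg if neg else 0.0, "FNR": fn / pos if pos else 0.0}
        for g, (fp, fn, pos, neg) in counts.items()
    }

if __name__ == "__main__":
    for group, rates in error_rates_by_group(POSTS).items():
        print(f"{group}: FPR={rates['FPR']:.2f}  FNR={rates['FNR']:.2f}")
```

Running the sketch shows the “gamers” group’s benign posts flagged at twice the rate of the “cooks” group, simply because its everyday slang overlaps with the blocklist. That is a toy version of the representativeness problem the report describes: error rates that look acceptable in aggregate can conceal systematically worse outcomes for particular communities.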
These are some of the reasons why the FTC recommends great caution before policymakers or regulators mandate the use of these tools to address harmful content online. Rather than relying solely on laws or regulations as a solution, the FTC advocates a different direction: increased transparency and accountability for the companies developing this AI technology.
“Seeing and allowing for research behind platforms’ opaque screens may be crucial for determining the best courses for further public and private action. It is hard to craft the right solutions when key aspects of the problems are obscured from view,” said the FTC in its report.
Given that major tech platforms are already using AI tools to address online harms, lawmakers should consider focusing on legal frameworks that ensure those tools do not cause additional harm, the FTC said.