By: Jason Pilkington (Truth on the Market)
The European Commission this week published its proposed Artificial Intelligence Regulation, setting out new rules for “artificial intelligence systems” used within the European Union. The regulation—the commission’s attempt to limit pernicious uses of AI without discouraging its adoption in beneficial cases—casts a wide net in defining AI to include essentially any software developed using machine learning. As a result, a host of software may fall under the regulation’s purview.
The regulation categorizes AI systems by the kind and extent of risk they may pose to health, safety, and fundamental rights, and would:
- Prohibit “unacceptable risk” AI systems outright;
- Place strict restrictions on “high-risk” AI systems;
- Place minor restrictions on “limited-risk” AI systems;
- Create voluntary “codes of conduct” for “minimal-risk” AI systems;
- Establish a regulatory sandbox regime for AI systems;
- Set up a European Artificial Intelligence Board to oversee regulatory implementation; and
- Set fines for noncompliance at up to 30 million euros, or 6% of worldwide turnover, whichever is greater.