The European Union has published its draft legislation on AI regulation, which is set to become the key framework for AI providers and distributors operating in the EU.
In the legislation, the EU sorts AI systems into three risk categories: unacceptable risk, high risk, and limited or minimal risk. For the most part, AI systems in the limited or minimal risk category will be able to operate as they did previously; the legislation specifically targets AI systems that could put EU citizens' security or privacy at risk.
“Artificial Intelligence is a fantastic opportunity for Europe and citizens deserve technologies they can trust,” said President of the European Commission, Ursula Gertrud von der Leyen. “Today we present new rules for trustworthy AI. They set high standards based on the different levels of risk.”
AI systems of limited or minimal risk include chatbots, spam filters, video and computer games, and inventory management systems, along with most other non-personal AI systems already deployed in the world.
High-risk AI systems include most artificial intelligence deployed with real-world effects, such as consumer credit scoring, recruitment, and safety-critical infrastructure. While these systems are not banned, the legislation imposes more stringent requirements and oversight on them, along with steeper fines for providers that fail to properly secure data.
The EU intends to review the high-risk list annually, either adding new AI systems to it or downgrading systems that were high-risk but have since become normalized in society or no longer carry the same level of risk.