The lawmakers spearheading the work on the AI Act have floated the idea of an AI Office to streamline enforcement and resolve competency disputes in cross-border cases.
Last week, the European Parliament’s co-rapporteurs Brando Benifei and Dragoș Tudorache circulated a new batch of compromise amendments, seen by EURACTIV.
The new compromises focus on the governance of the AI Act, a draft EU regulation that would impose rules on artificial intelligence proportionate to its risk of causing harm. The original proposal put competent national authorities in the driving seat, with an AI Board providing coordination.
The MEPs have taken a much more centralised direction, proposing to replace the AI Board with an AI Office, which would be little short of a new EU agency, with its own legal personality, funding and staff. The body would be independent but accountable to the European Parliament and the Council.
While there seems to be a consensus among policymakers on the need to introduce centralised elements to ensure effective enforcement, the idea of an AI agency has in the past faced pushback from conservative MEPs on budgetary grounds.
Mandate
The main task of the AI Office would be to provide a centralised body for the regulatory tasks under the AI Act by supporting, advising and coordinating the work of the competent national authorities and the European Commission, notably on cross-border cases.
In particular, in the event of serious disagreements between different authorities, the AI Office would have to issue a binding decision on enforcement competencies within three months, to ensure that the AI rulebook is applied consistently across the bloc.
Additional support would take the form of technical expertise for the national authorities, including via training programmes and the provision of information on matters related to non-compliance with the regulation. The EU body could also propose amendments to the definition of AI under the regulation.
The Office would issue opinions and recommendations on matters such as technical standards and regulatory sandboxes, the isolated environments in which new systems can be tested. An annual report would evaluate the implementation of the AI Act and its impact on economic operators, and include recommendations on the list of high-risk categories, prohibited practices and codes of conduct.
Twice a year, the AI Office would have to organise stakeholder consultations with representatives from business and civil society to assess the AI Act’s implementation status and identify any regulatory loopholes and emerging technological trends.