In recent years, regulatory pressure on tech companies to identify and mitigate the adverse impacts of AI systems has grown steadily. In 2022, we can expect this pressure to grow even further as transnational, national, federal, and local AI regulation kicks in. Many of these regulatory frameworks target both the design and the use of AI systems, often with a sector focus. AI practitioners and regulators alike need new approaches that allow them to respond effectively to these regulations and to enforce them competently. In this contribution, we map out a Practice-Based Compliance Framework (“PCF”) for identifying existing principles and practices that are already aligned with regulatory goals and can therefore serve as anchor points for compliance and enforcement initiatives.

By Mona Sloane & Emanuel Moss[1]


I. INTRODUCTION

In recent years, regulatory pressure on tech companies to identify and mitigate the harm AI systems can cause has grown steadily. Facial recognition leading to wrongful arrests,[2] cover-ups of research[3] into the psychological toll social media inflicts on teenagers, wildly disparate error rates[4] from AI products for members of different racial groups, and a seemingly endless succession of privacy breaches[5] have ensured this pressure is well-earned. In 2022, we can expect this pressure to grow even further, with transnational, national, federal, and local AI regulation being proposed at an a

...