On Wednesday, the EU passed the Artificial Intelligence Act, a draft law that establishes restrictions on the use of artificial intelligence (AI). As one of the first legislative moves by a governing body to regulate AI, the act may serve as a model for policymakers across the globe.
“[T]he EU is committed to strive for a balanced approach,” the proposal reads. While it acknowledges that AI can be leveraged to provide a bounty of societal benefits, it also explains that the fast-moving nature of the technology introduces several risks. According to the proposal, the AI Act’s suggested framework aims to:
Ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values; ensure legal certainty to facilitate investment and innovation in AI; enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; [and] facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
To do so, the proposed act categorizes AI applications by risk. Applications that pose an unacceptable risk will be prohibited, including those that violate fundamental rights, employ manipulative or exploitative techniques, or engage in social scoring. High-risk applications, such as resume-scanning tools and other technologies that may introduce undue bias, will be subject to mandatory requirements and an ex-ante conformity assessment. Applications that pose a low or minimal risk, however, will still be permitted without limitations. The proposed bill is accompanied by annexes that further clarify the types of applications that fall within each risk category.
The AI Act comes at a time when nations across the world are grappling with how to govern these rapidly evolving technologies. In recent months, China has passed similar legislation, Canada has launched an investigation into the AI chatbot ChatGPT, and Italy has effectively banned it. G-7 leaders have also collectively recognized the urgent need for international standards to regulate AI technology.