At the end of April, members of the European Parliament reached a provisional political deal on the Artificial Intelligence (AI) Act, a piece of EU legislation that would closely regulate the use of AI-powered tools in both the private and public sectors.
The act has generated considerable buzz over the past few months; now that a provisional deal has been reached, a key committee vote is scheduled for May 11, with a plenary vote expected about a month later. Media outlets have dubbed the AI Act a piece of “landmark” legislation and the “world’s first Artificial Intelligence rulebook.” Proponents have even likened it to the General Data Protection Regulation, citing its potential to serve as a worldwide standard for how governments regulate the use of AI.
“In light of the speed of technological change and possible challenges, the EU is committed to strive for a balanced approach,” the act reads. “It is in the Union interest to preserve the EU’s technological leadership and to ensure that Europeans can benefit from new technologies developed and functioning according to Union values, fundamental rights and principles.”
Essentially, the AI Act regulates any company that uses AI in its services or products, in addition to dictating the ways in which AI can be used by certain government agencies, such as law enforcement. It also deems some forms of AI to be “unacceptable,” effectively banning them if the act were to go into effect. However, proponents of the act emphasize that any form of AI not deemed to be either unacceptable or high-risk will be “largely left unregulated.”
As an example of “unacceptable” AI, proponents note that AI-powered social scoring conducted by a government (like that done in China, for example) would be banned. Meanwhile, tools such as AI-powered resume scanners would be deemed “high-risk,” restricting the way they can be used in the job application process.