The EU has recently published its plans to regulate Artificial Intelligence (AI), going so far as to propose a ban on some of the most concerning applications of the technology. Under the proposed legislation, the use of AI for real-time facial-recognition surveillance in public spaces and for manipulating human behaviour will be banned, and governments will not be able to operate social-credit systems like those seen in China. Given how dominant the ‘Brussels effect’ (EU laws becoming de facto international standards) has been in the tech industry, many non-EU countries may soon fall into step with similar regulation.
While only the most dangerous applications will be banned outright, many others will fall into the category of “high-risk” usage, depending on who uses them. The social-credit and facial-recognition use cases mentioned above fall under this classification when used by private companies. Other high-risk technologies include predictive policing and biometric categorisation by race, gender and sexual orientation.
Under the proposed plans, all companies developing high-risk software will have to undergo a conformity assessment, carried out either by a government body or, if the company so elects, by the company itself in accordance with the published guidelines. This requirement could prove burdensome, and we will have to wait to see how these assessments are performed in practice. Breaches of the rules may result in fines of up to €30 million or 6% of global revenues, whichever is higher.
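To make the “whichever is higher” rule concrete, here is a minimal sketch in Python; the function name and the revenue figures are purely illustrative and not taken from the proposal:

```python
def max_fine(global_revenue_eur: float) -> float:
    """Upper bound on a fine under the proposed rules:
    the higher of a flat EUR 30 million or 6% of global revenue."""
    return max(30_000_000, 0.06 * global_revenue_eur)

# Hypothetical examples:
print(max_fine(200_000_000))    # 30,000,000  -- the flat cap dominates for smaller firms
print(max_fine(2_000_000_000))  # 120,000,000 -- the 6% share dominates for larger firms
```

In other words, the flat €30 million figure acts as a floor, so smaller firms cannot escape with a trivially small percentage-based penalty, while the 6% share scales the maximum fine up for the largest companies.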
Even if an AI system escapes the high-risk classification, it may still be subject to transparency obligations if it falls into the “limited-risk” category. This includes technology that interacts with humans or that generates or manipulates media, such as “deep fakes”. Consumers would have to be given fair warning that such AI is being used, similar to GDPR notifications, though crucially without the option to opt out: if you are not comfortable with the AI, the company will not have to provide an alternative version of its service. Finally, any AI that poses only minimal risk, such as spam filters, will be exempt from regulation under the proposed law.
Legal frameworks often harm smaller firms the most, so the guidelines also include the idea of creating “sandboxes” for start-ups: environments where new technologies can be tried out without the fear of hefty fines. This would help to position Europe as a centre of AI development and keep software companies from straying to less regulated waters.
These plans have sparked anger from both tech firms and data-protection institutions, with the former arguing that the legislation goes too far and the latter that it does not go far enough. “There’s a real question-mark over whether the regulatory framework is robust enough,” according to Sarah Chander of European Digital Rights. Many have cited the self-assessment option and the bill’s broad phrasing as weak aspects in need of improvement.
On the other hand, Benjamin Mueller of the Amazon- and Apple-backed lobby group Center for Data Innovation believes that the rules will “limit the areas in which AI can realistically be used”. Indeed, one could argue that this is partly the point of the legislation. All of it remains subject to change, as the law will have to pass through the EU’s legislative bodies and be approved by individual member states before coming into force. That process will alter details of the framework and will take at least two years to produce the final version. While we cannot know whether the legislation will end up stronger or weaker than it is now, we can be certain that this is a step in the right direction for the EU.