Only have 1 minute? Here are 3 takeaways from the piece:
- The European Parliament voted in favor of the EU’s AI Act, marking a significant step towards setting a European rulebook on AI.
- The rulebook vows to prohibit AI systems with an unacceptable level of risk to people’s safety, such as AI-powered facial recognition.
- While researchers are praising the proposed act, startups are voicing concerns over its enforcement and the actual benefits it would bring for the industry.
The vote passed with 499 votes in favor, 28 against and 93 abstentions, with negotiations on the final form of the law set as the next step. According to the co-rapporteurs of the proposal, the EU is making history with the vote and has paved the way for a dialogue that will eventually engage the rest of the world as well.
“We have made history today – we have set the way for the dialogue that we will need to start having with the rest of the world on how we can build responsible AI for our globe and the systemic risks that this can entail. But also thinking of everyday citizens, consumers, businesses, and institutions that need to be supported in having an intake of AI so that they can get the best out of the AI. Also, they can be sure that they can trust that the institutions have built a system of safeguards that can identify the real risks,” said co-rapporteur Brando Benifei from the Progressive Alliance of Socialists and Democrats group.
Bans on intrusive and discriminatory AI use
Facial recognition proved to be the most contentious issue of the vote, as MEPs argued about the extent to which biometric surveillance should be limited. The MEPs also rejected a proposal by the EPP group arguing that the risks posed by real-time biometrics in public spaces could be outweighed by its benefits during “extraordinary circumstances”.
“We have won in the Parliament to maintain a clear safeguard to avoid any risk of mass surveillance, and at the same time to maintain the possibility with the no real-time biometric identification to pursue criminals and any risks that we have in society,” Benifei added.
As a result, the list of bans will now include intrusive and discriminatory uses of AI such as:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Predictive policing systems (based on profiling, location or past criminal behavior);
- Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions;
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy).
For co-rapporteur Dragos Tudorache from the Renew Europe group in the EP, one of the most important aspects of the text is its clear protections against such contentious uses of AI.
“There are protections about how seriously we have taken the prohibitions of use of artificial intelligence in the text, how seriously we have looked at the high risk applications and the mechanisms to also make sure that we’re not limiting unnecessarily how high risk applications would be forced into compliance,” Tudorache said.
The rulebook also states that generative AI systems based on models such as OpenAI’s ChatGPT or Google’s Bard would have to comply with transparency requirements. Content would have to be disclosed as AI-generated, helping distinguish so-called deepfake images from real ones and ensuring safeguards against generating illegal content.
“We are also serving the agenda of promoting innovation, not hindering creativity, and deployment and development of AI in Europe, which is something that is an objective, just as important as the one to protect our citizens,” the Romanian MEP stated.
A positive step in the right direction or a concern in the making?
The EU’s AI Act now stands as the first real global effort to regulate AI, and sends a positive signal that puts safety and security concerns first, researchers claim.
“It is quite incredible how by now we have not figured out the necessity of such a framework. Another positive thing is that it is not only addressing AI made in the EU, but rather it is also addressing the outsourced parts of the AI systems, which is a huge thing. It could be said that it goes even beyond the borders of the EU. The need of course is justified with EU’s fundamental rights protection and safety and security in the Union,” Italy-based AI researcher Viktor Miloshevski tells The Recursive.
However, there are still challenges ahead, especially when it comes to its enforcement, Miloshevski adds.
“The keywords surrounding the whole AI Act experience are prohibited artificial intelligence practices, classification of AI systems as high-risk, standards, conformity assessment, certificates, registration, and governance. A quite big one is also the keyword “enforcement”. And here is where things could become a bit tricky. At this stage of design on the Union level, the proposal establishes a European Artificial Intelligence Board (the ‘Board’), composed of representatives from the Member States and the Commission. In my opinion, the enforcement of the act would be quite challenging in the first couple of years with a growing trend of further adaptation,” the AI researcher argues.
For Marko Porobija, managing director at Croatia-based law firm Porobija & Spoljaric, the proposed regulation prioritizes safety and ethical considerations, something which is essential to ensure that the “benefits of AI are shared fairly and that potential negative consequences are minimized.”
“I appreciate the focus on transparency and accountability, as this will help build trust in AI systems and ensure that they are developed and used responsibly. Overall, I think that the EU Artificial Intelligence Act is a positive development that will promote the responsible use of AI and protect the well-being of people,” Porobija tells The Recursive.
However, there are also those who have concerns over the act. According to David Menger, CEO of Prague-based conversational AI startup Wingbot.ai, in its current form, the AI Act is “a patchwork law”.
“The only thing that makes sense is the ban on facial recognition in public places. The law completely overlooks the truly serious issues, like AI being used for fraud, deep fakes and scams, and privacy and copyright in terms of training data. Many things, like laws for fraud and data copyright, are barely enforceable. It’s just another legal hurdle for companies, not a solution for real issues. Could really spoil AI startups in the EU,” Menger tells The Recursive.
Depending on the law’s final form, Menger is also concerned about what the regulation would bring for the end users of the various AI products and services out there.
“I am worried that the time and money we’d like to spend building better products will have to go towards compliance with the law, and our users won’t see any real benefit,” he adds.
For Raluca Apostol, co-founder and Chief Product Officer at people intelligence platform Nestor, guidance on privacy and security risks existed in this area even before the rise of generative AI tools.
“In Europe, under the GDPR law, compliance policies and the secure handling of personal data are stipulated very clearly, with provisions in place even before 2018. While the GDPR does encompass certain aspects that are relevant to generative AI, such as data protection, consent, and transparency, there are additional considerations specific to generative AI that may require additional regulations or guidelines. Still, there is a framework,” Apostol tells The Recursive.
However, there are indeed many challenges and concerns that need to be addressed.
“These are real concerns that need to be addressed further by comprehensive regulations and standards. Unfortunately, what we have to balance here is a double-edged sword. On one hand, as a company you don’t want to see your sensitive data go into untrusty systems. On the other hand, blocking your employees’ access to AI tools can leave you behind your competitors. So I don’t think there’s an easy answer to these challenges nor a magical solution to solve these problems anytime soon,” she points out.
What comes next for the AI Act?
The European Parliament now needs to initiate further negotiations on the final text in three-way talks with the Commission and EU member countries.
According to Tudorache, this will be a process that needs to include everybody and one that will take a coordinated effort.
“Apart from writing rules, our governments and us here at the European level as well, we have to engage in a very serious and coordinated effort to explain and to accompany our citizens in this transformation. That means that we have to invest massively in education and reskilling our current workforce to be able to integrate AI into their work because otherwise, we would have forces in our societies that will resist and will look at this transformation with a bad eye – and that’s something that we do not want,” Tudorache concluded.