Only have 1 minute? Here are 3 key takeaways:
- EU institutions reached a provisional agreement on the EU AI Act on December 9, aimed at ensuring the safety of AI systems in the EU.
- The legislation employs a risk-based approach, categorizing AI systems based on potential impact.
- Three tech lawyers from Romania, Croatia, and Switzerland explained for The Recursive what the Act means for AI startups in CEE.
After extended debates on the EU AI Act, the European Parliament, the Council of the European Union, and the European Commission agreed on a deal that is meant to make AI safer.
This flagship legislative initiative is set to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values.
According to the European Council’s official press release, the provisional agreement introduces important measures to tackle challenges posed by AI. It outlines rules for high-impact general-purpose AI models that could pose systemic risks and regulates high-risk AI systems. The agreement expands the list of prohibitions while permitting the use of remote biometric identification by law enforcement in public spaces. Notably, it emphasizes stronger rights protection by requiring those deploying high-risk AI systems to conduct a fundamental rights impact assessment before putting an AI system into use.
Dragos Tudorache, the Romanian Chair of the Special Committee on Artificial Intelligence, assures that “(The EU AI Act) offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities. It protects our SMEs, and it strengthens our capacity to innovate and to lead in the field of AI. And it protects vulnerable sectors of our economy.”
In Central and Eastern Europe, as in other regions, there are varying opinions regarding the EU AI Act. The Act’s emphasis on safety and ethics aims to ensure widespread AI benefits while minimizing negative impacts. However, there are concerns about the Act’s limitations and the practical challenges of implementing computationally demanding requirements. Additionally, there is a valid concern that the Act may place European companies at a competitive disadvantage compared to counterparts in less-regulated regions unless similar measures are adopted globally.
However, the Act as it stands is still a draft; its text may change over the next two years, before it is expected to enter into force.
The Recursive talked with three AI lawyers on the topic, the latest developments, and the Act’s applicability for AI startups in CEE.
Andrei Hancu, Tech Lawyer from Romania
General Counsel at SeedBlink, specialized in tech-related legal matters, Corporate/M&A, Capital Markets, E-Payments and Data Privacy
“What I really like about the EU AI Act is the risk-based approach. The lawmakers are making sure that not all AI systems are put in the same basket (and strongly regulated), by providing several risk levels with associated rules ranging from no obligations in case of minimal risk to prohibition in case of unacceptable risk. I know that “prohibition” sounds harsh and such should not be on the agenda of any lawmaker, but some use-cases of AI technology really fall under “unacceptable risk”, such as cognitive behavioral manipulation or social scoring.
It took longer for the authorities to reach an agreement because there’s a lot of pressure from EU companies who are worried that the AI act may jeopardize their competitiveness. There was an open letter to the European Commission signed by more than 150 executives, so the difficulty of reaching an agreement is understandable.
It’s a classic case of money vs. principles: European lawmakers want a regulated workspace with strong rules around data quality, transparency, accountability, and human oversight, while EU tech companies want to be free to compete with non-EU companies, without any additional burden.
Not least, even big tech companies outside of the EU are lobbying against what they see as “overregulation that stifles innovation” because the EU market is an important one and they don’t want to face another “GDPR” which is a big pain for US big tech such as Meta and Alphabet.
Most startups in CEE will not be impacted, since they will most likely fall under minimal risk. The impact will be felt by those developing foundation models, but I don’t know of anyone doing this in the CEE.”
Nicoleta Cherciu, AI Lawyer from Romania
Managing Partner at Cherciu&Co, experienced in technology & investments
“Having a deal on the AI Act is a political win for the EU. Once finally adopted, the AI Act will become an EU-wide law, requiring developers of AI systems to implement safeguards for the protection of individuals’ fundamental rights, while prohibiting, at the same time, certain AI use cases. Thus, we can also think of it as an act that promotes AI governance and safety by design and by default, not only in the EU, but globally as well.”
Marco Fehr, AI Lawyer from Switzerland
Founder & AI Lawyer at Fehr Legal, with expertise in the usage and implementation of AI
“In my opinion, the EU AI Act cannot be judged in a vacuum. The Act will form part of an already existing regulatory framework covering various aspects of the digital economy in the EU. I am referring here to the General Data Protection Regulation (GDPR), the Digital Markets Act (DMA), and the Digital Services Act (DSA). The EU AI Act adds more pages to the EU’s “digital rulebook”. AI systems have the potential to become one of, if not the most powerful technology in existence. For this reason, some form of regulation seems unavoidable and the EU is taking on a pioneering role here, similar to data protection with the GDPR.
What I object to is the lack of nuances within the Act and the associated legal uncertainties. The Act follows a risk-based approach. This means that a distinction is made between “prohibited”, “high-risk”, “limited-risk” and “minimal-risk” AI systems. The risk qualification depends, in a nutshell, on the industry in which an AI system operates and how an AI system is used. However, the distinction between the individual risk categories is blurred and the relevant terminology requires more precise interpretation.
Regrettably, startups and SMEs are not spared from the Act either. This will slow down the innovation process in the EU and exacerbate the already existing problem of innovative companies and founders moving to the USA and other jurisdictions with less restrictive laws. Furthermore, in the global battle for resources, especially capital, which is extremely scarce in the current funding climate, EU AI startups and scale-ups will have an additional competitive disadvantage compared to non-EU AI companies.
Startups providing or using sophisticated AI systems as part of their core business are advised to check whether they will fall under the material and territorial scope of the EU AI Act. Startups within the Act’s scope must promptly determine the risk categories, as defined by the Act, that apply to the AI systems they are developing or utilizing. In case these systems fall under the “high-risk” category, the next steps would be to get familiar with the extensive rulebook and different requirements that will be enforced in the future. Future compliance with the Act ideally begins during the planning and design phase of an AI system; if this phase is complete, the best time to start is now.
Additionally, startups might need to consider the pros and cons of relocating to a different jurisdiction or entering non-EU markets before launching their AI-based products and services in the EU. AI founders who are planning to establish their companies soon must carefully evaluate the feasibility of adhering to the EU AI Act’s requirements and how these regulations might affect investor interest in their startups.
For a well-informed decision, startup board members, executives, and founders must involve a legal and compliance professional right from the start in these discussions and consult with potential investors before finalizing their decision.”
Marko Porobija, Tech Lawyer from Croatia
Managing Partner, Porobija & Špoljarić, focused on LegalTech Application, Commercial Law, M&A, Investment & Financing
“I believe the AI Act will be behind the times from the moment it is published. Although I do salute the regulation of fundamental issues, the wording of the Act is being created at the pace of politics, and AI is being developed at the pace of technology. It is clear which is more advanced.
It took longer for the authorities to reach an agreement because most of the lawmakers needed to have the fundamentals explained to them from scratch. And if you don’t understand it, you usually fear it. I sincerely believe that they have the Union’s residents’ best interests in mind when they approach the matter through risk aversion.
“For AI startups, if they have gone far into the development cycle, they will need a good audit of their system by someone who understands both tech and law, specifically the new AI Act. The idea will be to institute compliance-by-design into their AI, and to avoid rework as much as possible. Once the Act enters into force, they should avoid finding themselves on the wrong side of the mandatory operational requirements.”