Strong AI governance is needed to mitigate the risks posed by these technologies, including job displacement, bias, lack of explainability, and data security. Yet the European Union's upcoming AI Act, which aims to regulate AI applications based on their societal risk, has prompted mixed responses from stakeholders in CEE and the rest of Europe alike regarding its potential negative impact on innovation and market competitiveness.
As part of our research for The Recursive report on the State of AI in CEE, we talked to many industry experts, founders, and policymakers and gathered different perspectives on what this regulation should look like. One common trend emerged: the call for AI legislation that doesn't hinder innovation. For the full picture, you can download the report from the previous link, but today we present a part of our discussion with Cezara Panait, Head of Government Affairs & Public Policy at Google Romania and Moldova.
Cezara is a human rights lawyer and digital policy professional with an extensive understanding of legal, policy, and regulatory issues. She has received international awards for her work in developing policy proposals on AI ethics, online content moderation, and data protection during policy hackathons. Cezara’s ambition is to facilitate the creation of an open and transparent framework for debate, involving all parties in the decision-making process, to bolster democratic discourse and policy development.
The Recursive: From your perspective, how should policymakers properly regulate AI, given how fast it is evolving?
Cezara Panait: Generally, technology evolves faster than regulation, which is why I believe that any rules we design for AI now should be future-proof and flexible enough to adjust to the latest technological advancements. Thus, regulatory requirements need to be sufficiently broad, flexible, and adaptable.
We welcome and encourage efforts by policymakers around the world to develop proportional, risk-based regulations that promote reliable, robust, and trustworthy AI applications, while still enabling innovation and the promise of AI for societal benefit.
From your standpoint, where is the balance between enabling innovation and reducing risks when it comes to regulating general-purpose AI?
Today a new generation of more capable and versatile AI systems has emerged, and the nomenclature has evolved accordingly – we now talk of “general purpose AI” (GPAI) and “foundation models”, with “generative AI” as a thematic subset. But all remain essentially multipurpose AI systems, and most will seldom, if ever, be used in high-risk settings.
Certain multipurpose models may need added precautions, and we welcome efforts to clarify how GPAI, foundation models and generative AI should be treated within the context of the AI Act. However, it’s vital to keep a sense of proportionality on any general restrictions and avoid being overly broad in scope or overly prescriptive in ways that could limit development of tools for societally beneficial applications. In practice, this will require a clear focus on high-risk applications. It is important to mention that generative AI is not “high risk” in and of itself — it would only become so if used in a specific context deemed high risk by the AI Act.
The regulation of GPAI should focus only on the most capable foundation models when they are deployed for high-risk uses, and requirements for generative AI should be proportionate and apply to those best placed to implement them.
How can AI startups in CEE make their position heard and be part of the dialogue?
I believe that CEE startups could amplify their voice and presence by creating local coalitions to represent their perspectives and by joining other regional initiatives that support their vision. The regulatory impact on startups will definitely be addressed by decision-makers, and it is highly recommended that startups, SMEs, and other stakeholders raise their positions and clarify the concrete effects on their work and businesses.