
ETH Zurich, INSAIT, and LatticeFlow Launch First-Ever Compliance Evaluation Framework for Generative AI Under the EU AI Act

Left to right: Martin Vechev (INSAIT) & Petar Tsankov (LatticeFlow AI)

In a nutshell

  • ETH Zurich, INSAIT, and LatticeFlow AI announce the release of COMPL-AI, the first evaluation framework for Generative AI models under the EU AI Act.
  • It includes the first technical interpretation of the EU AI Act, mapping regulatory requirements to technical ones, together with a free and open-source framework to evaluate Large Language Models (LLMs).
  • The launch also features the first compliance-centered evaluation of public foundation models from organizations such as OpenAI, Meta, Google, Anthropic, and Alibaba against the EU AI Act technical interpretation.

First technical interpretation of the EU AI Act

The European AI Act (AIA) is one of the most important pieces of regulation for the AI ecosystem, and many expect it to shape regulation worldwide (the so-called Brussels effect). However, the Act outlines high-level regulatory requirements without providing detailed technical guidelines for companies to follow. To bridge this gap, the European Commission has launched a consultation on the Code of Practice for providers of general-purpose Artificial Intelligence (GPAI) models, aimed at supervising the implementation and enforcement of the AI Act’s regulations for GPAI.

Today, ETH Zurich, INSAIT, and LatticeFlow AI have introduced COMPL-AI, the first evaluation framework for Generative AI models under the EU AI Act. The framework provides a technical interpretation of the Act, translating its regulatory requirements into concrete technical criteria.
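
For illustration, here is a minimal sketch in Python of what such a mapping from high-level regulatory principles to measurable technical checks could look like. All principle and benchmark names below are hypothetical placeholders, not the actual COMPL-AI mapping.

# Hypothetical sketch: mapping high-level EU AI Act principles to
# measurable technical criteria. All names are illustrative
# placeholders, not the actual COMPL-AI mapping.
AIA_TECHNICAL_MAPPING = {
    "robustness_and_predictability": [
        "consistency_under_paraphrasing",  # answers stay stable when prompts are reworded
        "calibration",                     # stated confidence tracks actual accuracy
    ],
    "cybersecurity": [
        "prompt_injection_resistance",
        "jailbreak_resistance",
    ],
    "fairness_and_non_discrimination": [
        "demographic_bias_score",
    ],
    "harmful_content": [
        "toxicity_generation_rate",
    ],
}

def technical_criteria(principle: str) -> list[str]:
    """Return the technical checks that operationalize one regulatory principle."""
    return AIA_TECHNICAL_MAPPING.get(principle, [])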

Thomas Regnier, the European Commission’s spokesperson for digital economy, research, and innovation, commented on the release: “The European Commission welcomes this study and AI model evaluation platform as a first step in translating the EU AI Act into technical requirements, helping AI model providers implement the AI Act.”

AI researchers and practitioners invited to collaborate on COMPL-AI framework

“We invite AI researchers, developers, and regulators to join us in advancing this evolving project,” said Prof. Martin Vechev, Full Professor at ETH Zurich and Founder & Scientific Director of INSAIT in Sofia, Bulgaria. The Institute for Computer Science, Artificial Intelligence and Technology (INSAIT) was established two years ago in partnership with two of the world’s leading technology universities, ETH Zurich and EPFL Lausanne, and is structured as a special unit of Sofia University “St. Kliment Ohridski”.

Read more: A dream come true: How Bulgaria got its MIT

“We encourage other research groups and practitioners to contribute by refining the AI Act mapping, adding new benchmarks, and expanding this open-source framework. The methodology can also be extended to evaluate AI models against future regulatory acts beyond the EU AI Act, making it a valuable tool for organizations working across different jurisdictions,” shared Vechev, a tech entrepreneur, ETH Zurich professor, and award-winning researcher whose work focuses on building secure and fair artificial intelligence.

The COMPL-AI release can also benefit the GPAI working groups, which can use the technical interpretation document as a starting point for their efforts.

An open-source framework for evaluating LLMs on regulations

In addition to the technical interpretation, COMPL-AI includes a free and open-source framework built upon 27 state-of-the-art benchmarks that can be used to evaluate LLMs against these technical requirements.
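
As a rough illustration of how such a benchmark suite might roll individual results up into per-principle scores, consider the following Python sketch. The scores, names, and aggregation rule are invented for illustration; the actual framework may weight and combine benchmarks differently.

from statistics import mean

# Hypothetical benchmark results, normalized to the range [0, 1].
# In a real run these would come from executing the benchmark suite
# against a model; the numbers here are invented for illustration.
PRINCIPLE_TO_BENCHMARKS = {
    "cybersecurity": ["prompt_injection_resistance", "jailbreak_resistance"],
    "harmful_content": ["toxicity_generation_rate"],
}
BENCHMARK_SCORES = {
    "prompt_injection_resistance": 0.48,
    "jailbreak_resistance": 0.55,
    "toxicity_generation_rate": 0.93,
}

def principle_score(principle: str) -> float:
    """Average the benchmark scores behind one regulatory principle.
    A plain mean is the simplest aggregation choice; a real framework
    may weight benchmarks or apply pass/fail thresholds instead."""
    return mean(BENCHMARK_SCORES[b] for b in PRINCIPLE_TO_BENCHMARKS[principle])

for principle in PRINCIPLE_TO_BENCHMARKS:
    print(f"{principle}: {principle_score(principle):.2f}")

Normalizing every benchmark to a common scale is what makes category-level statements, such as a model scoring around 50% on cybersecurity benchmarks, comparable across models.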

The launch further features the first compliance-centered evaluation of public foundation models from major organizations, including OpenAI, Meta, Google, Anthropic, and Alibaba. This is the first time these models have been comprehensively assessed against an actionable interpretation of the EU AI Act.

The evaluation reveals key gaps: several high-performing models fall short of regulatory requirements, with many scoring only around 50% on cybersecurity and fairness benchmarks. On the positive side, most models performed well on the harmful content and toxicity requirements, showing that companies have already optimized their models in these areas.

“With this framework, any company — whether working with public, custom, or private models — can now evaluate their AI systems against the EU AI Act technical interpretation. Our vision is to enable organizations to ensure that their AI systems are not only high-performing but also fully aligned with regulatory requirements such as the EU AI Act,” said Dr. Petar Tsankov, CEO and Co-Founder at LatticeFlow AI.


Tsankov is a researcher and lecturer at ETH Zurich, as well as co-founder and CEO of LatticeFlow AI, a company helping machine learning teams build and deploy trustworthy AI.



Teodora Atanasova is a News Editor at The Recursive. She covers everything around funding rounds, exits, startups expanding to international markets, big tech opening R&D centers in CEE, and partnerships meaningful for the ecosystem.