The EU AI Act is the world's first major regulation focused entirely on artificial intelligence. It aims to ensure that AI systems used in Europe are safe, fair, and respectful of people's rights. But as the rules take shape, many companies, especially startups and small businesses, are wondering: what is the current status, and what do we need to do to prepare?
Here is what we know about the implementation timeline from the European Commission. The AI Act officially entered into force on 1 August 2024, but it is being rolled out in phases:
- From 2 February 2025, the first provisions started to apply, including the bans on certain unacceptable AI practices and the obligation to promote AI literacy.
- The governance framework and the rules for general-purpose AI models kick in on 2 August 2025.
- The bulk of the law, including the main requirements for high-risk AI systems, applies from 2 August 2026.
- For high-risk AI systems embedded in regulated products such as medical devices or cars, the deadline extends to 2 August 2027.
A new report by Dr. Robert Kilian, Linda Jäck, and Dominik Ebel offers an inside look at how the AI Act is being implemented, with a focus on one of its most important (and complicated) parts: technical standards. Their conclusions draw on in-depth qualitative interviews with 23 leading European AI companies, including Mistral and Helsing.
The AI Act sets out strict rules for “high-risk” AI systems—like those used in healthcare, hiring, finance, or critical infrastructure. But instead of giving step-by-step instructions, the law stays quite general. That’s where technical standards come in.
These standards are like instruction manuals that explain how companies can meet the law's requirements. A company that follows them benefits from a presumption of conformity with the corresponding obligations. This makes them essential for product development, legal certainty, and market access in the EU.
Where are we now?
The European Commission asked the European standardization bodies CEN and CENELEC (through their joint technical committee JTC 21) to draft around 35 technical standards that will support the AI Act. These cover areas like risk management, data quality and bias, transparency for users, human oversight, cybersecurity, and accuracy.
But most of these standards are still being written. Originally, they were supposed to be ready by April 2025, but delays have pushed that deadline to August 2025—and even that looks optimistic.
Once finished, the standards will need to be reviewed by the European Commission and published in the Official Journal of the EU, probably in early 2026. That gives companies just a few months to implement them before the AI Act starts applying to high-risk systems in August 2026.
Challenges ahead
The report identifies several key challenges for companies preparing for compliance. First, the timeline is tight. Having only six to eight months to digest, implement, and validate dozens of new technical standards is a major concern, particularly for smaller teams without legal or compliance departments.
Second, there’s the issue of cost. Buying access to all relevant standards can easily run into the thousands of euros, which can be a serious burden for startups and SMEs. Even knowing which standards apply to your AI system can be a challenge in itself.
Another major concern is access to the standards. Until recently, many harmonized standards were not publicly available, even though companies were expected to comply with them. A recent ruling from the EU Court of Justice in the Malamud case addressed this by confirming that such standards must be freely accessible. While this is a positive development, it has triggered pushback from international standardization bodies that rely on selling access to these documents.
Finally, there’s the issue of who gets a seat at the table. The standardization process has been dominated by large tech and consulting firms—many based outside the EU. Meanwhile, smaller European players, academic institutions, and civil society groups have struggled to participate due to limited time, funding, or expertise. This imbalance could result in standards that reflect the priorities of big industry players rather than the broader AI ecosystem.
What can be done?
The authors of the report suggest several ways to improve the process:
- Give companies more time to comply.
- Make the standards freely available and easier to understand.
- Provide financial and technical help, especially for startups and SMEs.
- Ensure a wider range of voices are included in writing the standards.
- Develop digital tools (like “smart standards”) to help automate compliance.
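To make that last suggestion more concrete, here is a minimal sketch of what a "smart standards" workflow could look like: compliance requirements captured in machine-readable form and checked automatically against a system's metadata. The requirement IDs, metadata fields, and checks below are purely hypothetical and are not taken from any actual harmonized standard.

```python
# Minimal sketch of the "smart standards" idea: requirements expressed in
# machine-readable form, with automated checks run against a system's metadata.
# All requirement IDs and fields are hypothetical, not real clause numbers.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Requirement:
    req_id: str                      # hypothetical identifier
    description: str
    check: Callable[[dict], bool]    # automated check against system metadata


# Illustrative requirements loosely inspired by the AI Act's themes
REQUIREMENTS = [
    Requirement(
        "RM-01",
        "A documented risk-management process exists",
        lambda meta: bool(meta.get("risk_management_doc")),
    ),
    Requirement(
        "DATA-02",
        "Training data governance includes a bias assessment",
        lambda meta: meta.get("bias_assessment_done", False),
    ),
    Requirement(
        "HO-03",
        "Human oversight measures are described",
        lambda meta: bool(meta.get("human_oversight_measures")),
    ),
]


def run_checks(metadata: dict) -> None:
    """Print a simple pass/fail report for each machine-readable requirement."""
    for req in REQUIREMENTS:
        status = "PASS" if req.check(metadata) else "FAIL"
        print(f"[{status}] {req.req_id}: {req.description}")


if __name__ == "__main__":
    # Example metadata for a hypothetical high-risk AI system
    system_metadata = {
        "risk_management_doc": "docs/risk_register.md",
        "bias_assessment_done": True,
        "human_oversight_measures": "",
    }
    run_checks(system_metadata)
```

Tooling like this would only ever be as good as the underlying standards, but it could lower the compliance burden for exactly the smaller teams the report is worried about.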
If you’re building or deploying AI systems in the EU, it’s worth reviewing whether your use cases fall under the “high-risk” category outlined in the AI Act. Keeping track of the evolving standardization process is also important, as these documents will form the backbone of future compliance.
Lastly, it’s a good time to start thinking about your internal processes: How transparent are your models? What kind of data governance do you have in place? Are your systems tested for bias, accuracy, and cybersecurity?