The EU AI Act’s Replayability Test: If You Can’t Reconstruct a Decision, You Can’t Ship

TheRecursive.com
https://therecursive.com/author/petrmalyukov/

Petr Malyukov is the Co-founder & CEO of dTelecom, a decentralised real-time communications (dRTC) infrastructure. Petr is an IT entrepreneur with over 17 years of experience across telecommunications, blockchain, Web3, and artificial intelligence. He has built and scaled global technology companies and leads dTelecom, a grant-backed infrastructure company supported by Google, Solana Foundation, ElevenLabs, and peaq.

Today, the narrative around high-risk AI systems is taking a new turn. The EU AI Act is set to come into full force on 2 August 2026, and by then companies will be obliged to back their AI decisions with evidence. Superficial explanations won’t work anymore. Even if Brussels ends up delaying parts of the high-risk regime to December 2027, that delay won’t reverse the dynamic.

So what does this mean in practice? I think companies are now at a crossroads.

Either they build their AI operations so they can prove what the system does in a real case — down to the model version, the data used, the checks applied, and when a human could step in — or they look for markets where this level of proof isn’t required yet. There’s no comfortable middle ground. Let’s dig into the details and see whether this is as dire as it may seem.

Replayability is now a product feature

Back in the day, many EU firms could embed an AI feature, watch how it worked, then explain the logic behind it if something went wrong. They’d show a short deck, point to a model card, mention fairness tests, fix a few things, and move on.

Why did it feel so simple? Because most AI decisions were still easy to undo, the impact was minimal or absent entirely, and there was no strict rulebook forcing teams to keep a full, case-by-case record of what the system did.

This model started showing its weaknesses quickly. In 2020, the UK’s exam-grading algorithm was scrapped after a backlash: results were downgraded in ways many students couldn’t contest case by case, and the government reverted to teacher-assessed grades. And that’s just one example.

After a few more high-stakes “black box” moments, regulators started to tighten the screws. Yes, these systems did bring benefits, like speed and scale, but many were released too early, with weak controls and with little to no explanation of the outcomes in real cases.

And that’s exactly what the EU AI Act is set to fix. For high-risk AI, that means a switch from ‘trust our intent’ to ‘show your work.’ I’d call it replayability: the ability to replay a decision after the fact and show how it was reached, with evidence.

But what if firms refuse to build for that replayability? Then they run into enforcement and buying walls.

The cost of “we can’t explain this”

From where I stand, refusing to build replayability in Europe once the EU AI Act is in force is a path to market exit. In high-risk AI, the market will punish firms that can’t show what their systems did — first through incidents, then through enforcement, and finally through blocked revenue.

When a high-risk AI system fails in a serious way, your firm is expected to provide a detailed report to the authorities under Article 73 of the Act. So if a company doesn’t have a clear trail of why the failure happened, the team wastes time arguing over guesses. Meanwhile, customers want answers, partners require a fix, and the regulator wants more than a dry “we’re investigating.”

Then comes the part most businesses fear: losses, penalties, fines. Under Article 99, penalty ceilings reach up to €35 million or 7% of global turnover for some breaches, and up to €15 million or 3% for others. Large enterprises may pull through that, but many small firms won’t: for them, a serious compliance finding could freeze sales, trigger contract terminations, and force months of remediation work just to stay afloat.

As for revenue, it’s the final choke point. In high-risk use, sales depend on due diligence, audits, and incident-reporting clauses. If you can’t show what happened, that’s a dead end: you lose the deal, and that’s what “lost revenue” means in practice.

The compliance path that gets easier over time

It may look like the way out is binary: either comply or exit. Even so, there’s a third path, where you don’t comply in the sense of one big project, but build proof into the product in small, repeatable moves and make it part of daily work.

  • Start with a proof spine. Make sure the system can answer basic questions fast: which model version ran, what inputs were used, what checks were applied, what output was produced. If you can’t produce that evidence for at least one real decision, nothing else matters.
  • Treat incident response as a product feature. Decide in advance who can pause the system, when you roll back, what triggers a review, and what evidence gets pulled first. High-risk AI isn’t judged in calm times; it’s judged on the worst day, when something breaks and everyone wants answers.
  • Design for “where it runs.” Data locality and privacy pressures will push more AI toward on-device, on-prem, or regional setups, and the complexity only multiplies. The way out is consistency: applying the same access rules, policies, and logs across environments.
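To make the “proof spine” idea concrete, here is a minimal sketch in Python of what a per-decision evidence record could look like. All names (`DecisionLog`, the field names, the example model version) are illustrative assumptions, not a reference to any specific product or framework; a real system would persist these records to tamper-evident, access-controlled storage.

```python
import hashlib
import json
import time
import uuid

class DecisionLog:
    """Append-only record of AI decisions, with enough context to
    replay each one after the fact. Illustrative sketch only."""

    def __init__(self):
        self._records = {}

    def record(self, model_version, inputs, checks, output):
        """Store one decision together with its evidence."""
        decision_id = str(uuid.uuid4())
        rec = {
            "decision_id": decision_id,
            "timestamp": time.time(),
            "model_version": model_version,  # which model version ran
            "inputs": inputs,                # what data was used
            "checks": checks,                # which checks were applied
            "output": output,                # what the system produced
        }
        # A content hash makes later tampering detectable.
        rec["hash"] = hashlib.sha256(
            json.dumps(rec, sort_keys=True).encode()
        ).hexdigest()
        self._records[decision_id] = rec
        return decision_id

    def replay(self, decision_id):
        """Answer 'what happened in this case?' for one decision."""
        return self._records[decision_id]


log = DecisionLog()
did = log.record(
    model_version="credit-scorer-2.3.1",          # hypothetical
    inputs={"applicant_id": "A-1042", "income": 41000},
    checks=["schema_valid", "fairness_threshold_ok"],
    output={"decision": "approve", "score": 0.81},
)
evidence = log.replay(did)
print(evidence["model_version"])
```

The point of the sketch is the shape of the record, not the storage: if every decision carries its model version, inputs, checks, and output, the basic questions a regulator or customer asks can be answered from one lookup instead of an investigation.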
Overall, I don’t see the EU AI Act as a predicament.

So, yes, the bar is rising, but running from it isn’t a way out. The only workable (and, I think, wise) path is to ship systems that can explain themselves in real cases, because in Europe, that’s what high-risk AI is soon to mean in practice.
