
The EU AI Act’s Cybersecurity Gamble: Hackers Don’t Need Permission

TheRecursive.com
https://therecursive.com/author/romaneloshvili/

Roman Eloshvili is the founder and CEO of XData Group, a B2B software development company. As a serial entrepreneur, he has developed a keen eye for trends and opportunities in internet banking. He embarked on his journey in finance over 20 years ago, and with XData Group he is on a mission to revolutionize the banking landscape.

As AI development advances, its use in cybersecurity is becoming inevitable – it can help detect and prevent cyber threats in unprecedented ways.

But, of course, there is the other side: bad actors can also use AI to develop more sophisticated attack methods, empowering their illicit activities. And criminals generally don’t bother adhering to any constraints on how to utilize this tech.

As the EU forges ahead with the AI Act, many people in the industry find themselves wondering: will this regulation actually help make Europe more secure? Or will it become an obstacle, dropping new challenges on businesses that are trying to leverage artificial intelligence to protect themselves?

Here’s my take on this topic.

The AI Act’s cybersecurity measures

The EU AI Act is the first major regulatory framework to set clear AI development and deployment rules. Among its many provisions, the AI Act directly addresses cybersecurity risks by introducing measures to ensure AI systems are secure and used responsibly.

It does so by introducing a risk-based classification of AI applications, with each class carrying different compliance requirements. Naturally, higher-risk systems – the ones that could negatively affect people’s health and safety – are subject to stricter security and transparency demands.
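To make the tiered structure concrete, here is a rough sketch in Python of how such a risk classification might be modeled. The four tier names follow the Act’s broad categories, but the obligations listed are illustrative shorthand, not quotations from the regulation:

```python
# Illustrative sketch only: the AI Act's risk tiers mapped to
# example obligations. Obligation names are simplified paraphrases.
RISK_TIERS = {
    "unacceptable": {"allowed": False, "obligations": ["prohibited outright"]},
    "high": {
        "allowed": True,
        "obligations": [
            "conformity assessment",
            "security and robustness testing",
            "transparency and documentation",
            "human oversight",
        ],
    },
    "limited": {"allowed": True, "obligations": ["transparency disclosures"]},
    "minimal": {"allowed": True, "obligations": []},
}

def obligations_for(tier: str) -> list:
    """Return the compliance obligations attached to a risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {tier}")
    return RISK_TIERS[tier]["obligations"]
```

The point of the structure is the asymmetry: nearly all of the compliance weight sits on the "high" tier, which is exactly where security tooling tends to land.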

Additionally, AI systems must undergo regular mandatory security testing to identify vulnerabilities and reduce the chances of them being exploited by cybercriminals. At the same time, the Act establishes stronger transparency and reporting obligations. These are solid first steps in bringing structure to this industry and legitimizing it.

But when it comes to cybersecurity, this approach has its share of complications and downsides.

Requiring AI systems to undergo so many checks and certifications means that, in practice, the release of security updates slows down considerably. If each modification to AI-based security measures needs a lengthy approval process, attackers get plenty of time to exploit known weaknesses while the target businesses are tied up in red tape and left vulnerable.


The issue of transparency is also a double-edged sword, depending on how you look at it. The AI Act requires that developers disclose technical details about their AI systems to government bodies so as to ensure accountability. A valid point, admittedly, but this introduces another critical vulnerability: if this kind of information gets leaked, it could fall into the hands of bad actors, effectively handing them a map of how to exploit AI systems. This cuts against a basic principle of operational security: the fewer parties who hold sensitive implementation details, the smaller the attack surface.

Compliance as the source of vulnerability?

There’s another layer of risk that we need to take a harder look at: the compliance-first mindset.

The stricter regulation becomes, the more security teams will focus on building systems that tick legal checkboxes rather than counter real-world threats. There is a very high chance of this resulting in AI systems that are technically compliant but operationally brittle.

Systems built for compliance will inevitably share patterns, and once malicious actors get their hands on the knowledge of those patterns, it will be that much easier for them to engineer exploits around them. End result? Similarly built systems are left equally defenceless.


Furthermore, since the Act requires human oversight of AI decisions, there’s a possible avenue for exploitation via social engineering. Attackers can target the human reviewers themselves, who, over time, may start approving decisions made by AI systems automatically. This is especially true for high-volume environments like transaction monitoring – we are already seeing signs of this in banking compliance, where oversight fatigue can easily lead to lapses in judgment.

Another example of security roadblocks inadvertently caused by the AI Act would be biometric tech. While restrictions on facial recognition are meant to protect citizens’ privacy, they also limit law enforcement’s ability to track and apprehend criminals using advanced surveillance methods.

And this also affects dual-use technologies – systems developed for both civilian and military applications. While military AI systems are formally excluded from the Act, the surrounding ecosystem that contributes to their development is now significantly constrained. That puts a damper on the development of next-gen defense tools that could have broad civilian benefits.


The challenges businesses will face

Going back to the business side of things, we have to accept that the AI Act presents real compliance hurdles. For SMEs in particular, this can be a tall order, as they often lack the resources that large corporations can devote to compliance.

Security testing, compliance audits, legal consultations – all of these require substantial investments. This risks a scenario in which many companies are forced to scale back AI adoption, hindering the sector’s advancement. It’s all too likely that some will leave the EU altogether, choosing to build their operations in other, friendlier jurisdictions.

Cybersecurity-wise, such a rollback would be very dangerous. I don’t think it really needs saying, but criminals obviously don’t care about compliance – they are free to innovate with AI at whatever speed they wish, quickly outstripping legitimate businesses.

The way I see it, it won’t be long before discovering and exploiting a vulnerability takes a matter of hours, if not minutes. Meanwhile, the defending parties would be stuck re-certifying their systems for days or weeks before security updates can go live.

Social engineering is also poised to become more dangerous than ever before. With the power of AI on their side, attackers could mine employee data from public profiles, then craft targeted phishing messages or even generate real-time deepfake phone calls to exploit the human side of security systems. These aren’t hypothetical scenarios — deepfakes are already being increasingly weaponized.

How can businesses integrate the AI Act’s guidelines without losing ground?

So, as we can see, there are plenty of challenges ahead. And yet, despite its imperfections, the AI Act isn’t something businesses can just ignore. So what can be done, then? The way I see it, compliance needs to grow smarter, not harder. A more proactive approach would be to build AI systems with regulations in mind from day one rather than retrofitting later.


That includes leveraging AI-based tools to automate compliance monitoring and engaging with regulatory bodies on a regular basis to stay informed. It also makes sense to participate in industry-wide events, sharing best practices and emerging trends in cybersecurity and compliance in general.
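As a hypothetical illustration of what automating compliance monitoring could look like in its simplest form, the sketch below checks deployed AI systems against the artifacts their risk tier would require. All field and artifact names here are invented for the example, not taken from the Act:

```python
# Hypothetical sketch: flag deployed AI systems that are missing
# compliance artifacts expected for their risk tier. All names
# (field keys, artifact labels) are illustrative assumptions.
REQUIRED_ARTIFACTS = {
    "high": {
        "technical_documentation",
        "security_test_report",
        "human_oversight_plan",
    },
    "limited": {"transparency_notice"},
    "minimal": set(),
}

def missing_artifacts(system: dict) -> set:
    """Return the required artifacts a system record is missing."""
    tier = system.get("risk_tier", "minimal")
    required = REQUIRED_ARTIFACTS.get(tier, set())
    return required - set(system.get("artifacts", []))

# Example: a high-risk system that has not filed its security test report.
system = {
    "name": "transaction-monitor",
    "risk_tier": "high",
    "artifacts": ["technical_documentation", "human_oversight_plan"],
}
```

A check like this would run continuously rather than at audit time, which is the sense in which tooling can make compliance "smarter, not harder."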

Ultimately, the AI Act aims to bring order and responsibility to AI development. But when it comes to cybersecurity, it also introduces serious friction and risk. If the goal is to keep Europe secure, then regulation must evolve just as quickly as the technology it seeks to govern.

Because right now, the defenders are playing catch-up.
