This July, hackers breached Tea, a popular social app designed to help women vet the safety of men they’ve matched with on dating apps. It was nothing short of disastrous: hackers accessed and subsequently leaked some 72,000 images submitted by women as part of the sign-up process. Tea had claimed these verification images would be deleted immediately after being uploaded and confirmed. Instead, photos of women holding ID cards next to their faces proliferated across the internet. It was the kind of honeypot break-in a malicious actor could only dream of. And it all could have been avoided.
We’re living in a digital world where cybercrime is rampant and getting worse. Between January and June of this year, a record-breaking 217,000-plus fraud risk cases were filed to the UK’s National Fraud Database, with major spikes in public-sector and gambling-related identity fraud. Increasingly sophisticated AI deepfake technology is no doubt partly to blame. A recent TRM Labs report counted a 456% rise in generative AI-enabled scams between May 2024 and April 2025, and in one Feedzai survey of financial professionals, only 8% said they had never seen generative AI used by criminals.
Downstream of these developments is souring public sentiment. The average person’s data appears to be more at risk than ever. A company says it won’t store your identity-verification selfie, and a few months later it’s floating around the internet. You join a betting site for the championship game, and suddenly someone is inside an account that’s connected to your bank. The natural reaction is to avoid uploading any sensitive information at all, which may explain some of the backlash against new online safety laws in the UK.
This new reality also lays bare a bigger issue. Poorly secured data honeypots are an obvious problem, but those honeypots shouldn’t exist in the first place, and they wouldn’t if a better online verification system were widely available.
New rules, new digital verification tech
We cannot do away with digital verification, particularly as AI deepfakes make it easier than ever to fabricate an ID card or a selfie. Being able to prove we are who we say we are is critical to keeping a huge number of online services operating safely and effectively. People can’t simply opt out and continue to use the internet normally. Still, they’re correct in thinking that face scans and full photos of their passport or driving license carry a huge privacy risk.
Better technology can enable people to prove their identity without letting the internet regress into a Wild West of deepfakes and constant privacy breaches. Verifiable credentials stored in digital identity wallets are among the most practical ways to implement selective disclosure. And because electronic IDs with zero-knowledge proof capabilities are more secure than many existing KYC methods, they can close the security gap that ever-more-convincing AI deepfakes have opened.
Zero-knowledge proofs (ZKPs) of identity allow individuals to share a single identity-verifying fact with a website without any sensitive or irrelevant information changing hands. A site that needs to check your age doesn’t need your full home address, but it will receive one anyway if the verification process involves uploading a photo of a plastic driving license. With a zero-knowledge proof, the site never needs the birthdate or any other personal detail. The user’s eID holds the exact date of birth, and thus ‘knows’ whether the user is over 18. Using a ZKP, the eID sends the website verifiable proof that the statement ‘this user is 18 or older’ is true. It never discloses the actual day, month, or year of birth; the site learns only that the user was born on or before this date 18 years ago.
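To make that concrete, the core mechanic can be sketched in a few lines of code. Below is a minimal, illustrative Python sketch of a Schnorr proof of knowledge made non-interactive with the Fiat-Shamir heuristic: the prover (think of the eID wallet) convinces a verifier that it knows a secret value without ever transmitting it. Real eID schemes prove richer statements, such as ‘born at least 18 years ago’, using range proofs over issuer-signed credentials, and use elliptic-curve groups rather than the modular group shown here; the function names and flow are invented for illustration.

```python
import hashlib
import secrets

# Toy sketch of a Schnorr zero-knowledge proof of knowledge, made
# non-interactive via Fiat-Shamir. Illustrative only: real eID schemes
# prove statements like "born >= 18 years ago" with range proofs over
# issuer-signed credentials, typically on elliptic curves.

# 2048-bit safe-prime group (RFC 3526, group 14). g = 2 generates the
# subgroup of prime order q = (p - 1) // 2.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
    "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
    "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
    "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
    "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
    "15728E5A8AACAA68FFFFFFFFFFFFFFFF",
    16,
)
Q = (P - 1) // 2
G = 2

def hash_to_challenge(*values: int) -> int:
    """Fiat-Shamir: derive the challenge by hashing the transcript."""
    h = hashlib.sha256()
    for v in values:
        h.update(v.to_bytes((v.bit_length() + 7) // 8 or 1, "big"))
    return int.from_bytes(h.digest(), "big") % Q

def prove(secret: int) -> tuple[int, int, int]:
    """Prover (the wallet): show knowledge of `secret` without revealing it."""
    public = pow(G, secret, P)          # X = g^x mod p, the public value
    r = secrets.randbelow(Q - 1) + 1    # fresh random nonce
    commitment = pow(G, r, P)           # R = g^r mod p
    c = hash_to_challenge(G, public, commitment)
    s = (r + c * secret) % Q            # response binds nonce and secret
    return public, commitment, s

def verify(public: int, commitment: int, s: int) -> bool:
    """Verifier (the website): accept iff g^s == R * X^c mod p."""
    c = hash_to_challenge(G, public, commitment)
    return pow(G, s, P) == (commitment * pow(public, c, P)) % P

x = secrets.randbelow(Q - 1) + 1        # the wallet's secret
assert verify(*prove(x))                # verifier learns nothing about x
```

The verifier checks a single equation, g^s = R · X^c mod p, and learns nothing about the secret beyond the fact that the prover knows it. Swap ‘knows a secret key’ for ‘holds a government-signed birthdate satisfying an age predicate’ and you have the shape of an eID age check.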
Just as this can be done with age, it can be done with sex, home country, marital status, and so on. This isn’t some futuristic vision. This zero-knowledge proof technology exists, and we ought to be enabling and encouraging its use.
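One way to picture multi-attribute disclosure is the salted-hash pattern used by emerging credential formats such as SD-JWT. Strictly speaking this is selective disclosure rather than a full zero-knowledge proof, but it shows how a wallet can reveal nationality while keeping birthdate and marital status sealed. The Python sketch below is another hedged illustration: the claim names and flow are invented for the example, and signing uses Ed25519 from the pyca/cryptography package.

```python
import hashlib
import json
import secrets

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def digest(salt: bytes, name: str, value: str) -> str:
    """Salted hash binding one claim into the credential."""
    return hashlib.sha256(salt + name.encode() + value.encode()).hexdigest()

# --- Issuer (e.g. a government eID authority) --------------------------
claims = {"birthdate": "1990-04-02", "nationality": "PT",
          "marital_status": "single"}
salts = {name: secrets.token_bytes(16) for name in claims}
# Sign only the sorted digests, so the credential leaks neither the
# claim values nor their original order.
digests = sorted(digest(salts[n], n, v) for n, v in claims.items())
issuer_key = Ed25519PrivateKey.generate()
credential = json.dumps(digests).encode()
signature = issuer_key.sign(credential)
issuer_public = issuer_key.public_key()

# --- Holder: disclose nationality only, withhold everything else -------
disclosed = {"nationality": (salts["nationality"], claims["nationality"])}

# --- Verifier: check the disclosed claim against the signed digests ----
try:
    issuer_public.verify(signature, credential)
except InvalidSignature:
    raise SystemExit("credential rejected")

signed_digests = set(json.loads(credential))
for name, (salt, value) in disclosed.items():
    assert digest(salt, name, value) in signed_digests
    print(f"verified {name} = {value}")   # birthdate never leaves the wallet
```

The issuer’s signature vouches for every claim, yet the verifier can check only the claims the holder chooses to open; the rest remain opaque hashes.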
These eIDs also need to be issued and backed by governments, with full guarantees of decentralization and privacy-first infrastructure. Government-issued credentials are essential because they guarantee trust and data quality. Private wallets can complement them, but only if users have genuine choice over which to use.
Zero-knowledge proofs to all the people
Digital privacy’s public image is not in a particularly good place. Besides better implementation and adoption of ZKP-enabled eIDs, we also need a well-considered program of public communication on this technology. People won’t use eIDs if they think they are just the latest verification technology bound to put their data at risk.
Central to this communication is the basic premise that zero-knowledge proofs dramatically minimize disclosure. They have the potential to enable a truly privacy-first internet and, by extension, a genuinely safer internet. But all of this depends on coordinated effort from a range of actors.
Governments need newfound political will directed toward widespread, interoperable eID issuance. Issuance must be accompanied by a comprehensive public-facing campaign around eIDs’ safety and ease of use. Regulated businesses need to be shown how they stand to benefit and to begin accepting eIDs for digital verification; because that acceptance is itself public-facing, it will naturally encourage public participation.
Coordinating this confluence of political will, public awareness, and business-side technological change is a difficult maneuver. That’s no excuse. The enormous surge in AI fraud and the worsening public perception of data safety demand action. The innovation part is already done, after all.