Why client-side scanning isn’t the answer

Law enforcement agencies generally don’t like end-to-end encryption because it blocks them from accessing the private communications of individual citizens. After decades of trying to make tech companies add a “backdoor” to encryption, they’re now shifting strategies and focusing on other technological solutions that they claim would allow for scanning in end-to-end encrypted environments without breaking encryption.

One of the technologies they refer to is called “client-side scanning”. However, client-side scanning does not preserve privacy in the way its proponents claim. In fact, it could enable new and powerful forms of mass surveillance and censorship. Client-side scanning would put free speech and everyone’s security at great risk while delivering little real benefit to law enforcement.

Client-side scanning doesn’t technically break encryption because it doesn’t need to: By the time your data is encrypted, authorities have already seen it. 

Why authorities came up with client-side scanning

To understand client-side scanning, you need to understand end-to-end encryption. 

Privacy-focused tech companies like Proton use end-to-end encryption to make your data inaccessible to anybody but you. Not even Proton can see your information because it’s encrypted before ever leaving your device. Only you or the person you’re talking to can decrypt the data. This prevents Proton’s servers from accessing your data, but it also blocks out hackers and governments. 

End-to-end encryption technology was developed in part by the US government to increase security online. It enables the internet as we know it, used for everything from online banking to protecting dissidents from oppressive regimes. But law enforcement agencies claim that it’s preventing them from doing their jobs. 

The FBI and other agencies have called this “going dark” to suggest that they’re blind to criminal activity because of encryption. In the past, they have advocated requiring tech companies to purposely create a vulnerability in their encryption — a so-called backdoor for law enforcement. But as Proton and other privacy advocates have pointed out, there is no such thing as a backdoor that only lets the good guys in. If there is a key that opens the private communications of millions of people, hackers will steal it.

Policymakers in most countries have generally sided with the privacy advocates, and there is no backdoor law.

So lately, authorities that want more access have begun to focus on client-side scanning as a new silver bullet, an alternative they claim protects user privacy without undermining end-to-end encryption. But if you know how client-side scanning works, it’s easy to understand why it’s potentially worse than a backdoor.

What is client-side scanning, and how does it work?

The term client-side scanning refers to several technical methods to analyze the contents of a person’s messages on their device. This can include images, videos, and text messages. Typically, the content is checked against a database of prohibited content and flagged for moderation if there’s a match. 

The European Commission published a legislative proposal in May 2022 that would require companies to proactively search for child sexual abuse material on their services. While tech companies would be given some latitude in how to comply with this requirement, law enforcement officials often present client-side scanning as the best available technology.

The problem with that, as the Electronic Frontier Foundation pointed out, is that the most privacy-preserving methods of client-side scanning are almost impossible to implement.

So the most likely solution would be local hash matching, in which the digital fingerprints of your messages are compared against the digital fingerprints of prohibited content stored in a database on your device. In theory, the database would contain the hashes for child sexual abuse material or materials related to terrorism, but in practice the database could contain anything. And you would have no way of knowing.
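The matching step described above can be sketched in a few lines of Python. This is a hypothetical illustration only: it uses exact SHA-256 fingerprints for simplicity, whereas real deployments would use perceptual hashes so that resized or re-encoded copies of an image still match. The database contents and function names here are invented for the example.

```python
import hashlib

# Hypothetical on-device database of fingerprints of prohibited content.
# In a real deployment, the user has no way to inspect what it contains.
PROHIBITED_HASHES = {
    "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8",
}

def fingerprint(data: bytes) -> str:
    """Compute the digital fingerprint (hash) of a message or file."""
    return hashlib.sha256(data).hexdigest()

def scan_before_encryption(data: bytes) -> bool:
    """Runs on the user's device *before* end-to-end encryption is applied.

    Returns True if the content matches the local database, i.e. it
    would be flagged and potentially reported to a third party.
    """
    return fingerprint(data) in PROHIBITED_HASHES
```

The key point is where this function runs: on your device, before encryption. Whoever controls the contents of `PROHIBITED_HASHES` decides what gets flagged, and nothing in the mechanism itself limits it to child abuse material.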

That means whenever you download an app, even if it’s end-to-end encrypted like Signal or WhatsApp, you would be downloading a toolkit designed to inspect all your pictures and text messages and potentially report that data to the developer or the government. 

This is why client-side scanning is even worse than a backdoor. Law enforcement wants to use backdoors in encryption to scan the content shared with others, but client-side scanning would allow them to look at the content you store on your device, whether you share it or not.

Ways client-side scanning can go wrong

Client-side scanning for the purpose of monitoring people’s communications is tailor-made for abuse. Here are just a few of the ways it can go wrong:

Increased attack surface for hackers

Widespread adoption of client-side scanning would open up new opportunities for hackers to monitor people’s communications. And because the attacks would most likely take place on individual users’ devices, app developers would be less able to intervene to protect their users. 

In 2021, a group of well-known computer scientists released a paper describing the technical failings of client-side scanning as a law enforcement solution. One of their key findings is that client-side scanning destroys trust: “The software provider, the infrastructure operator, and the targeting curator must all be trusted. If any of them — or their key employees — misbehave, or are corrupted, hacked or coerced, the security of the system may fail. We can never know when the system is working correctly and when it is not.”

Government censorship and persecution

If regulators begin requiring client-side scanning for the world’s tech companies, nothing technically prevents it from being used to monitor any type of content. Though it may initially be used to look for child abuse or terrorism, a successor government could use it to target broader categories of content, raising a greater risk of false positives and attacks on freedom.

It’s easy to see how some governments could use this technology to target political opponents, journalists, and anyone else they deem objectionable. It would be left to private app developers to either comply with such orders or shut down their service. 

Either way, as soon as client-side scanning is deployed, users who rely on private and secure apps would have no way to trust their communications are protected.

False positives

Even in the best-case scenario, in which responsible governments and well-meaning developers implement a narrow form of client-side scanning, the technology is simply not sophisticated enough to identify only the targeted material. There are already cases of false positives that have ruined people’s lives, such as the father who sent a picture of his son to their doctor and was flagged as a predator and reported to the police.

In a letter opposing the European proposal, over 100 privacy advocacy organizations expressed specific concerns about client-side scanning. Child-abuse survivors describing their trauma to a trusted adult could be flagged for monitoring and reported. Anyone who sends an intimate picture could have that picture mistakenly flagged and examined by a third party.


Despite the way client-side scanning is optimistically portrayed as a magical solution, it’s not a privacy-preserving alternative to encryption backdoors. And in many ways, it’s worse. Authorities would gain the technical ability to scan everyone’s unencrypted data at all times. Using any chat app or social media platform would mean installing spyware on your device.

Meanwhile, the flood of false positives would likely overwhelm app developers and law enforcement agencies alike. Suddenly tasked with combing through people’s private pictures and messages, they would divert resources away from actually preventing criminal activity.

We believe there are many ways to prevent abuse without breaking encryption or resorting to mass surveillance. In fact, mass surveillance is notoriously ineffective at preventing crime. And we have previously written about how we address abuse on our platform.

Law enforcement agencies should be empowered to stop real criminals using proven methods without destroying security and freedom for everyone.
