
Law enforcement agencies generally don’t like end-to-end encryption because it blocks them from accessing the private communications of individual citizens. After decades of trying to make tech companies add a “backdoor” to encryption, they’re now shifting strategies and focusing on other technological solutions that they claim would allow for scanning in end-to-end encrypted environments without breaking encryption.

One of the technologies they point to is called “client-side scanning”. However, client-side scanning does not preserve privacy in the way its proponents claim. In fact, it could enable new and powerful forms of mass surveillance and censorship. Client-side scanning would put free speech and everyone’s security at great risk while delivering little real benefit to law enforcement.

Client-side scanning doesn’t technically break encryption because it doesn’t need to: By the time your data is encrypted, authorities have already seen it. 

Why authorities came up with client-side scanning

To understand client-side scanning, you need to understand end-to-end encryption. 

Privacy-focused tech companies like Proton use end-to-end encryption to make your data inaccessible to anybody but you. Not even Proton can see your information because it’s encrypted before ever leaving your device. Only you or the person you’re talking to can decrypt the data. This prevents Proton’s servers from accessing your data, but it also blocks out hackers and governments. 

End-to-end encryption technology was developed in part by the US government to increase security online. It underpins the internet as we know it and is used for everything from online banking to protecting dissidents from oppressive regimes. But law enforcement agencies claim that it’s preventing them from doing their jobs. 

The FBI and other agencies have called this “going dark” to suggest that they’re blind to criminal activity because of encryption. In the past, they have advocated requiring tech companies to purposely create a vulnerability in their encryption — a so-called backdoor for law enforcement. But as Proton and other privacy advocates have pointed out, there is no such thing as a backdoor that only lets the good guys in. If there is a key that opens the private communications of millions of people, hackers will steal it.

Policymakers in most countries have generally sided with the privacy advocates, and there is no backdoor law.

So lately, authorities that want more access have begun to focus on client-side scanning as a new silver bullet, an alternative they say will protect user privacy without undermining end-to-end encryption. But if you know how client-side scanning works, it’s easy to understand why it’s potentially worse than a backdoor.

What is client-side scanning, and how does it work?

The term client-side scanning refers to several technical methods to analyze the contents of a person’s messages on their device. This can include images, videos, and text messages. Typically, the content is checked against a database of prohibited content and flagged for moderation if there’s a match. 

The European Commission published a legislative proposal in May 2022 that would require companies to actively search for child sexual abuse material on their services. While tech companies would be given some latitude in how to comply with this requirement, law enforcement officials often present client-side scanning as the best technology available. 

The problem with that, as the Electronic Frontier Foundation pointed out, is that the most privacy-preserving methods of client-side scanning are almost impossible to implement. 

So the most likely solution would be local hash matching, in which the digital fingerprints of your messages are compared against the digital fingerprints of prohibited content stored in a database on your device. In theory, the database would contain the hashes for child sexual abuse material or materials related to terrorism, but in practice the database could contain anything. And you would have no way of knowing.
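To make the local hash matching described above concrete, here is a minimal sketch in Python. It is an illustration only: it uses an exact cryptographic hash (SHA-256) for simplicity, whereas real deployments use proprietary perceptual hashes, and the function and blocklist names are hypothetical. The key point it demonstrates is *where* the check happens: on the client, before encryption.

```python
import hashlib

# Hypothetical on-device database of prohibited fingerprints.
# In a real deployment, the user has no way to inspect what
# content these opaque hashes actually represent.
BLOCKLIST = {
    hashlib.sha256(b"example prohibited content").hexdigest(),
}

def scan_before_encrypt(message: bytes) -> bool:
    """Return True if the message matches the local blocklist.

    This runs on the client *before* any encryption, which is why
    client-side scanning doesn't technically "break" encryption:
    the content is inspected while it is still plaintext.
    """
    fingerprint = hashlib.sha256(message).hexdigest()
    return fingerprint in BLOCKLIST

# Every outgoing message is fingerprinted and checked first;
# only after this step would the app encrypt and send it.
print(scan_before_encrypt(b"example prohibited content"))  # True
print(scan_before_encrypt(b"an ordinary holiday photo"))   # False
```

Because the match happens against opaque fingerprints shipped to your device, whoever curates the blocklist controls what gets flagged, and nothing in the mechanism restricts it to any particular category of content.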

That means whenever you download an app, even if it’s end-to-end encrypted like Signal or WhatsApp, you would be downloading a toolkit designed to inspect all your pictures and text messages and potentially report that data to the developer or the government. 

This is why client-side scanning is even worse than a backdoor. Law enforcement wants to use backdoors in encryption to scan the content shared with others, but client-side scanning would allow them to look at the content you store on your device, whether you share it or not.

Ways client-side scanning can go wrong

Client-side scanning for the purpose of monitoring people’s communications is tailor-made for abuse. Here are just a few of the ways it can go wrong:

Increased attack surface for hackers

Widespread adoption of client-side scanning would open up new opportunities for hackers to monitor people’s communications. And because the attacks would most likely take place on individual users’ devices, app developers would be less able to intervene to protect their users. 

In 2021, a group of well-known computer scientists released a paper(new window) describing the technical failings of client-side scanning as a law enforcement solution. One of their key findings is that client-side scanning destroys trust: “The software provider, the infrastructure operator, and the targeting curator must all be trusted. If any of them — or their key employees — misbehave, or are corrupted, hacked or coerced, the security of the system may fail. We can never know when the system is working correctly and when it is not.”

Government censorship and persecution

If regulators begin requiring client-side scanning from the world’s tech companies, nothing technically prevents it from being used to monitor any type of content. Though it may initially be used to look for child abuse or terrorism, a future government could expand it to broader categories of content, increasing both the risk of false positives and the threat to freedom. 

It’s easy to see how some governments could use this technology to target political opponents, journalists, and anyone else they deem objectionable. It would be left to private app developers to either comply with such orders or shut down their service. 

Either way, as soon as client-side scanning is deployed, users who rely on private and secure apps would have no way to trust their communications are protected.

False positives

Even in the best-case scenario, in which responsible governments and well-meaning developers implement a narrow form of client-side scanning, the technology is simply not sophisticated enough to identify only the targeted material. There are already cases of false positives that have ruined people’s lives, such as the father who sent a picture of his son to their doctor and was flagged as a predator and reported to the police.
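One reason false positives are inherent is that scanning systems can’t use exact hashes (a one-pixel edit would defeat them), so they rely on fuzzy *perceptual* hashes that treat similar-looking content as identical. The toy example below, assuming a deliberately simplified “average hash” over tiny grayscale pixel grids, shows why: the hash is designed so that near-duplicates collide, which also means genuinely different content can collide. Real systems such as PhotoDNA or NeuralHash are far more complex, but the trade-off is the same.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel
    is brighter than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

# Three tiny "images" as flat grayscale pixel lists (0-255).
original        = [10, 200, 30, 220, 15, 210, 25, 230]
slightly_edited = [12, 198, 33, 219, 14, 215, 22, 228]  # minor edits
unrelated       = [200, 10, 220, 30, 210, 15, 230, 25]  # inverted layout

# A lightly edited image still matches — the intended behavior...
print(average_hash(original) == average_hash(slightly_edited))  # True

# ...but the matching is approximate by design, so the boundary
# between "match" and "no match" is fuzzy, not exact.
print(average_hash(original) == average_hash(unrelated))        # False
```

Because matching is approximate by design, some innocent content will inevitably land on the wrong side of the boundary, and every such collision means a stranger reviewing someone’s private pictures.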

In a letter opposing the European proposal, over 100 privacy advocacy organizations expressed specific concerns about client-side scanning. Child-abuse survivors describing their trauma to a trusted adult could be flagged for monitoring and reported. Anyone who sends an intimate picture could have that picture mistakenly flagged and examined by a third party. 

Conclusion

Despite the way client-side scanning is optimistically portrayed as a magical solution, it’s not a privacy-preserving alternative to encryption backdoors. And in many ways, it’s worse. Authorities would gain the technical ability to scan everyone’s unencrypted data at all times. Using any chat app or social media platform would mean installing spyware on your device.

Meanwhile, the flood of false positives would likely overwhelm app developers and law enforcement agencies alike. Suddenly tasked with combing through people’s private pictures and messages, they would have to divert resources away from actually preventing criminal activity.

We believe there are many ways to prevent abuse without breaking encryption or using mass surveillance. In fact, mass surveillance is notoriously ineffective at preventing crime. And we have previously written about how we address abuse on our platform.

Law enforcement agencies should be empowered to stop real criminals using proven methods without destroying security and freedom for everyone.

