The pace of change on the internet seems to be accelerating. AI has supercharged the sense of whiplash, with technological breakthroughs hitting the market as quickly as companies can produce them.
This volatility makes predicting trends a tricky business. But as a privacy tech company, anticipating trends is our job. For each of the past few years, we’ve published our best guesses about where the digital ship might be headed. It helps us develop new products that keep you in control of your data, and it helps you prepare for what might come next.
How our 2025 predictions turned out
At the beginning of last year, we predicted the rise of DIY surveillance, a flood of low-quality information, weaponized AI, reduced regulatory oversight, and a growing adoption of privacy tech.
Read our 2025 predictions here.
We scored pretty well:
- Mass surveillance made anyone a spy: This prediction came true, such as when a white hat hacker discovered unencrypted satellite communications in October. Or take the Waze app, which doubles as a massive citizen surveillance tool. But the big story of the year was Flock Safety cameras, which made a splash in the US, where thousands of cities started using them to monitor the streets. A YouTuber showed they have security vulnerabilities anybody could exploit, and dozens of cameras were found to be broadcasting livestreams anybody could watch and download. When surveillance tech is mass produced, everyone becomes vulnerable to data breaches.
- Bad information flooded the internet: Once again, we got this one right. In fact, “AI slop” was chosen as the 2025 word of the year. Research has found AI “workslop” is hurting business productivity. Vibe coding is producing a proliferation of apps that don’t work. AI-assisted scholarly articles are wordy, low quality, and rapidly multiplying. Restoring the information ecosystem is going to be a key challenge for years to come.
- Hacks went AI-powered: We predicted AI would be put to nefarious use in malware. This has accelerated faster than we expected. Anthropic announced that it had detected the first ever AI-planned and executed cyberattack, likely run by a Chinese state-sponsored group. Phishing-as-a-service, which leverages AI, reached a peak in June 2025. It’s no surprise that governments are also directly investing in AI tools for cyberwarfare: The US military is investing millions in companies developing such weapons.
- Regulations were put on hold: The governments of the world were distracted last year by wars, trade disputes, and economic instability. But they were also keen to manage their domestic industries with a light touch, cognizant of a deregulatory trend in the US. With AI in particular, the US took a dramatic step toward blocking legal guardrails on Big Tech by banning state AI regulation without offering a federal alternative. The one exception is the EU’s Chat Control proposal, which has gained, lost, and regained momentum over the years. However, this law would regulate tech in the wrong direction, making apps less secure and private.
- Millions more people adopted privacy tech: This prediction we can measure directly through our user growth, and indeed we gained users at a faster rate last year than in 2024. The pace of people switching to Proton’s ecosystem from those of Google, Apple, and Microsoft indicates greater awareness of the risks of sharing your personal data with ad-powered platforms that have a poor track record on privacy. Proton VPN signups surged throughout the year whenever an app was blocked or an ISP censored a website.
Our predictions for 2026
The year ahead will be critical for the future of the internet. AI acceleration and political unrest are converging, with potentially explosive results.
The EU will keep pushing to break encryption
While EU governments seem to have backed away from an outright ban on encryption, the controversial Chat Control legislation is now in the final stages of negotiations. After years of political deadlock, the EU is pushing toward a final deal by June 2026. Dangerous attempts to break encryption using a technology called client-side scanning seem to be off the table for now, but we need to remain vigilant and make sure they don’t come back.
The current debate centers on so-called voluntary scanning, a temporary rule set to expire in April 2026 that gives tech platforms the right to scan private messages for illegal material. We predict the EU will move to make this voluntary system permanent, while creating legal pressure that makes scanning private messages effectively unavoidable for companies.
While the situation is moving in a better direction than expected on the Chat Control front, the EU has not given up on finding ways to break encryption. The ProtectEU strategy released last year includes several concerning proposals, such as a “Technology Roadmap on encryption” intended to give law enforcement the means to break encryption. The EU is also planning to publish a proposal on new data retention rules this year.
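To make the stakes concrete: client-side scanning means inspecting messages on the device before they are encrypted. Here is a minimal Python sketch of that flow. Every name and hash in it is hypothetical, and real proposals use perceptual hashing of images rather than the exact SHA-256 matching shown here:

```python
import hashlib

# Hypothetical blocklist of content hashes. In real proposals this would
# be a database of perceptual hashes of known illegal material,
# distributed by a central authority.
BLOCKLIST_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def send_message(plaintext: bytes, encrypt, transmit, report) -> None:
    """Illustrative client-side scanning flow.

    The scan runs on the user's device *before* encryption, so messages
    are still encrypted in transit -- but the guarantee that only the
    sender and recipient can act on the content is gone.
    """
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST_HASHES:
        # The match is reported to a third party without the user's
        # consent. This reporting step is what undermines E2EE.
        report(digest)
        return
    transmit(encrypt(plaintext))
```

Even though the wire traffic stays encrypted, the endpoint is no longer trustworthy, which is why security researchers treat client-side scanning as a break of end-to-end encryption rather than a complement to it.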
More age verification laws
While framed as safety measures, age verification laws fundamentally change how everyone accesses the internet, expanding digital surveillance and creating data security hazards.
In the UK, the Online Safety Act set a precedent on July 25, 2025. Since then, websites hosting adult content have been legally required to implement age verification, forcing users to share sensitive biometric or financial data to access large parts of the web. Some US states have also passed age verification laws, and there’s a federal bill that could do the same for app stores. Australia subsequently implemented a national ban on social media for children under 16, bringing identity checks to more types of content. And now France is considering doing the same.
While addressing real social problems, age verification laws create data security risks. Identity checks produce massive, state-mandated databases of personal identity data held by third-party companies, creating new targets for hackers and the potential for misuse. In October 2025, Discord leaked just such a database of government IDs. We expect more age verification laws to pass in 2026, likely accompanied by more data breaches.
More efforts to block VPNs in democratic countries
VPNs have long been the enemy of those looking to control narratives, and while democracies rarely ban them outright, they are using legal pressure to make them harder to use.
The UK is again at the forefront of this trend. A new bill under discussion could soon force VPN providers to implement age verification and bar access for minors, a first for a democratic country.
Italy launched its Piracy Shield system last year, ostensibly designed to block illegal sports streams. The law requires VPN and DNS providers to comply with blocking orders within 30 minutes. There is no judicial review before a block occurs, and the system has already caused significant collateral damage, once accidentally taking down legitimate services like Google Drive for millions of users.
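To see how that collateral damage happens, consider a toy sketch of a resolver honoring an IP-level blocking order. The domains and addresses below are invented for illustration (RFC 5737 example ranges); real orders have targeted addresses that many unrelated services share:

```python
# Toy model of a DNS resolver complying with a Piracy Shield-style
# blocking order. Orders can list bare IP addresses as well as domains.
BLOCKED = {"illegal-stream.example", "203.0.113.7"}

DNS_RECORDS = {
    # CDNs and shared hosting routinely put many unrelated services
    # behind a single IP address, which is how over-blocking happens.
    "illegal-stream.example": "203.0.113.7",
    "legitimate-service.example": "203.0.113.7",
}

def resolve(domain: str) -> str | None:
    """Return an IP address, or None (an NXDOMAIN-style refusal) if the
    domain or the address it points to appears in a blocking order."""
    ip = DNS_RECORDS.get(domain)
    if domain in BLOCKED or ip in BLOCKED:
        return None
    return ip

# The pirate stream goes dark -- and so does everything sharing its IP.
print(resolve("legitimate-service.example"))  # None: collateral damage
```

With a 30-minute compliance window and no judicial review, there is little opportunity to notice that a blocked address also serves legitimate traffic before it disappears.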
Brazil is on the bandwagon too, issuing massive daily fines to individuals using a VPN to access blocked social media platforms. These soft blocks attempt to turn privacy providers into enforcement arms of the state. We predict that in 2026, more democratic nations will move toward these invisible firewalls, forcing users to choose between local regulations and their right to basic digital privacy.
An AI agent will go terribly wrong
AI is here, there, and everywhere, and people are increasingly giving robots permission to make decisions without any human involvement. For example, Google’s Vertex AI Agent Builder lets companies create AI bots that can connect to multiple systems, automate workflows, and complete tasks all on their own.
But, unlike traditional software, AI does not follow predictable logic paths. Programmers have dubbed this the Black Box Problem: We can see what goes in and what comes out, but we don’t always know exactly how or why AI makes the decisions it does. When an AI system makes a mistake, it’s often difficult to determine what went wrong or what data influenced the decision. Agents have already gone rogue on a small scale, such as when one confessed to making “a catastrophic error in judgment” after deleting an entire database without asking.
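Incidents like that one are why irreversible actions need a human in the loop. Here is a minimal sketch of such a gate; every tool name is hypothetical, and real agent frameworks would implement this differently:

```python
# Hypothetical registry of an agent's tools, with destructive,
# hard-to-undo actions flagged for mandatory human sign-off.
DESTRUCTIVE_TOOLS = {"drop_database", "delete_files", "transfer_funds"}

def run_tool(name: str, args: dict) -> str:
    # Stand-in for the agent's real tool dispatch.
    return f"Executed {name} with {args}"

def execute_tool(name: str, args: dict, confirm) -> str:
    """Gate irreversible actions behind a human decision. `confirm` is
    any callable that asks a person and returns True or False."""
    if name in DESTRUCTIVE_TOOLS and not confirm(name, args):
        return f"Refused: '{name}' requires human approval"
    return run_tool(name, args)

# Here the reviewer declines, so the destructive call is blocked;
# without this gate, the agent would simply run it.
print(execute_tool("drop_database", {"db": "production"},
                   confirm=lambda name, args: False))
```

The point of the sketch is the missing step, not the code: a fully autonomous agent effectively approves its own actions, which is exactly how a database gets deleted without anyone asking.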
As we delegate more operational tasks to automated systems, small errors will cascade into larger failures. A major public example is surely imminent, but whether it’s a financial flash crash or a mass data deletion, there’s a good chance we won’t even understand why it happened.
The real risk, though, is the gradual loss of human control. As more decisions are delegated to systems that cannot be meaningfully audited, organizations slowly lose the ability to govern their own digital environments.
Prediction markets in everything
Prediction markets are essentially a form of online gambling in which people can bet on pretty much anything. Companies like Polymarket and Kalshi let you take a stake in everything from snowfall totals to Rotten Tomatoes scores to whether countries will go to war.
In 2026, we expect prediction markets to become a problem. Insiders will use their secret knowledge of government or corporate activities to cheat the markets (this has already happened). Users will take on debt to cover their losses, potentially leading to a consumer debt crisis.
And a less often discussed risk is to users’ privacy: To participate in markets, people have to link their financial accounts, crypto wallets, or government IDs, creating a highly specific data trail. Anyone watching will know exactly what you believe will happen and how much you are willing to bet on it.
People and businesses will ditch US platforms
Since the beginning of the internet, US tech platforms have essentially been the internet, no matter where you live in the world. We expect that to start changing this year, as significantly more people and especially businesses move away from the household-name platforms. The security and sovereignty risks of storing data on US servers have sharply increased in a short time.
Why so suddenly? Though it’s been around since 2018, the US CLOUD Act is one big reason. It allows American authorities to demand data from any US-based company, regardless of where in the world that data is physically stored. That puts it in direct conflict with local privacy laws like the GDPR, and it also becomes a problem if your country finds itself at odds with the US: Your data could become a bargaining chip.
Businesses are realizing that if their data is stored with a US provider, it is never truly under their control. Our research has also found that people worry their data will be used as raw material for model training.
We believe all this will accelerate a shift toward digital sovereignty. At Proton, we’re already seeing it begin, as organizations look for encrypted alternatives that protect their data with end-to-end encryption in a politically neutral jurisdiction.