The pace of change on the internet seems to be accelerating. AI has supercharged the sense of whiplash, with technological breakthroughs hitting the market as quickly as companies can produce them.

This volatility makes predicting trends a tricky business. But as a privacy tech company, anticipating trends is our job. For each of the past few years, we’ve published our best guesses about where the digital ship might be headed. It helps us develop new products that keep you in control of your data, and it helps you prepare for what might come next.

How our 2025 predictions turned out

At the beginning of last year, we predicted the rise of DIY surveillance, a flood of low-quality information, weaponized AI, reduced regulatory oversight, and a growing adoption of privacy tech.

Read our 2025 predictions here.

We scored pretty well.

Our predictions for 2026

The year ahead will be critical for the future of the internet. AI acceleration and political unrest are converging, with potentially explosive results.

The EU will keep pushing to break encryption

While EU governments seem to have backed away from an outright ban on encryption, the controversial Chat Control legislation is now in the final stages of negotiations. After years of political deadlock, the EU is now pushing toward a final deal by June 2026. Dangerous attempts to break encryption using a technology called client-side scanning seem to be off the table for now, but we need to remain vigilant and make sure they don’t come back.
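To see why client-side scanning is considered a break in end-to-end encryption even though the encryption itself stays intact, here’s a deliberately simplified sketch in Python (the hash list, the reporting hook, and the stand-in encryption function are all invented for illustration): the message is checked on your device before encryption, so confidentiality is lost before the math ever starts.

```python
import hashlib

# Hypothetical fingerprint list of known illegal material. Real proposals
# use perceptual hashes; SHA-256 stands in here to keep the sketch simple.
KNOWN_HASHES = {hashlib.sha256(b"example flagged content").hexdigest()}

def send_message(plaintext: bytes, encrypt, report):
    """Sketch of a messaging client with client-side scanning bolted on."""
    # The scan runs on the device, BEFORE encryption ever happens...
    fingerprint = hashlib.sha256(plaintext).hexdigest()
    if fingerprint in KNOWN_HASHES:
        # ...so matches are reported to a third party in the clear,
        # no matter how strong the encryption applied afterward is.
        report(fingerprint)
    return encrypt(plaintext)

# Example: even a perfect encrypt function can't help once the scan has run.
send_message(b"hello", encrypt=lambda m: m[::-1], report=print)
```

The encryption is never weakened, and that’s exactly the point: whoever controls the hash list controls what gets flagged, and nothing technically limits that list to its original purpose.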

The current debate centers on so-called voluntary scanning, a temporary rule set to expire in April 2026 that gives tech platforms the right to scan private messages for illegal material. We predict the EU will move to make this voluntary system permanent while creating legal pressure that makes scanning private messages effectively unavoidable for companies.

While the situation on the Chat Control front is moving in a better direction than expected, the EU has not given up on finding ways to break encryption. The ProtectEU strategy released last year includes several concerning proposals, such as a “Technology Roadmap on encryption” intended to give police the means to break encryption. The EU is also planning to publish a proposal on new data retention rules this year.

More age verification laws

While framed as safety measures, age verification laws fundamentally change how everyone accesses the internet, expanding digital surveillance and creating data security hazards.

In the UK, the Online Safety Act set a precedent on July 25, 2025. Since then, websites hosting adult content have been legally required to implement age verification, forcing users to share sensitive biometric or financial data to access large parts of the web. Some US states have also passed age verification laws, and there’s a federal bill that could do the same for app stores. Australia subsequently implemented a national ban on social media for children under 16, bringing identity checks to more types of content. And now France is considering doing the same.

While addressing real social problems, age verification laws create data security risks. The byproduct of identity checks is massive, state-mandated databases of personal identity data held by third-party companies, creating new targets for hackers and new potential for misuse. In October 2025, Discord leaked just such a database of government IDs. We expect more age verification laws to pass in 2026 — and probably some more accompanying data breaches.

More efforts to block VPNs in democratic countries

VPNs have long been the enemy of those looking to control narratives, and while democracies rarely ban them outright, they are using legal pressure to make them harder to use.

The UK is again at the forefront of this trend. A new bill under discussion could soon force VPN providers to implement age verification and bar access for minors — a first for a democratic country.

Italy launched its Piracy Shield system last year, which is supposedly designed to block illegal sports streams. Part of the new law requires VPN and DNS providers to comply with blocking orders within 30 minutes. There is no judicial review before a block occurs, and the system has already caused significant collateral damage, once accidentally taking down legitimate services like Google Drive for millions of users.
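That collateral damage isn’t bad luck; it follows from how IP-level blocking interacts with shared hosting. Here’s a minimal sketch, using invented domains and a documentation-range IP address: on a CDN, unrelated websites often resolve to the same address, so an order against one site can silently block them all.

```python
# Hypothetical DNS records: on shared hosting and CDNs, unrelated
# domains frequently resolve to the same edge IP address.
DNS_RECORDS = {
    "pirate-stream.example": "203.0.113.7",
    "file-storage.example": "203.0.113.7",   # legitimate, same CDN edge
    "news-site.example": "203.0.113.7",      # legitimate, same CDN edge
}

# A blocking order names the single IP behind the pirate stream.
BLOCKED_IPS = {"203.0.113.7"}

def is_blocked(domain: str) -> bool:
    """Enforce the order the way a compliant resolver might:
    refuse to answer for any domain that resolves to a blocked IP."""
    return DNS_RECORDS.get(domain) in BLOCKED_IPS

# All three domains are now unreachable, not just the intended target.
print([domain for domain in DNS_RECORDS if is_blocked(domain)])
```

With a 30-minute compliance window and no judicial review, there’s little time for anyone to notice the overlap before legitimate services go dark.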

Brazil is on the bandwagon too, issuing massive daily fines for individuals using a VPN to access blocked social media platforms. These soft blocks attempt to turn privacy providers into enforcement arms of the state. We predict that in 2026, more democratic nations will move toward these invisible firewalls, forcing users to choose between local regulations and their right to basic digital privacy.

An AI agent will go terribly wrong

AI is here, there, and everywhere, and people are increasingly giving robots permission to make decisions without any human involvement. For example, Google’s Vertex AI Agent Builder lets companies create AI bots that can connect to multiple systems, automate workflows, and complete tasks all on their own.

But, unlike traditional software, AI does not follow predictable logic paths. Programmers have dubbed this the Black Box Problem: We can see what goes in and what comes out, but we don’t always know exactly how or why AI makes the decisions it does. And when AI makes a mistake, it’s often difficult to trace which data influenced the decision. Agents have already gone rogue on a small scale, such as when one of them confessed to making “a catastrophic error in judgment” and deleting an entire database without asking.
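Incidents like that are why a common safeguard is to keep a human in the loop for anything irreversible. Here’s a minimal sketch (the tool registry and tool names are invented for illustration, not any real agent framework) in which destructive actions require explicit human approval instead of running on the agent’s say-so.

```python
# Hypothetical tool registry for an autonomous agent. Tools that
# can't be undone are flagged so they can be gated.
TOOLS = {
    "query_records": {"fn": lambda arg: f"results for {arg!r}", "destructive": False},
    "drop_database": {"fn": lambda arg: f"deleted {arg!r}",     "destructive": True},
}

def run_tool(name: str, arg: str) -> str:
    """Execute a tool call, requiring human sign-off for destructive ones."""
    tool = TOOLS[name]
    if tool["destructive"]:
        # The agent may request the action but can't perform it unilaterally:
        # a person must approve anything irreversible.
        answer = input(f"Agent wants to run {name}({arg!r}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "denied: human approval required"
    return tool["fn"](arg)
```

A gate like this doesn’t open the black box, but it keeps the final call on irreversible actions with a person rather than with a system whose reasoning can’t be audited.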

As we delegate more operational tasks to automated systems, small errors will cascade into larger failures. A major public example is surely imminent. But whether it’s a financial flash crash or a mass data deletion, there’s a good chance we won’t even understand why it happened.

The real risk, though, is the gradual loss of human control. As more decisions are delegated to systems that cannot be meaningfully audited, organizations slowly lose the ability to govern their own digital environments. 

Prediction markets in everything

Prediction markets are essentially a form of online gambling in which people can bet on pretty much anything. Companies like Polymarket and Kalshi let you take a stake in everything from snowfall totals to Rotten Tomatoes scores to whether countries will go to war.

In 2026, we predict that prediction markets will become a problem. Insiders will use their secret knowledge of government or corporate activities to cheat the markets (this has already happened). Users will take on debt to cover their losses, potentially leading to a consumer debt crisis.

And a less often discussed risk is to users’ privacy: To participate in markets, people have to link their financial accounts, crypto wallets, or government IDs, creating a highly specific data trail. Anyone watching will know exactly what you believe will happen and how much you are willing to bet on it.

People and businesses will ditch US platforms

Since the beginning of the internet, US tech platforms have essentially been the internet, no matter where you live in the world. We expect that to start changing this year, as significantly more people and especially businesses move away from the household-name platforms. The security and sovereignty risks of storing data on US servers have sharply increased in a short time.

Why so suddenly? Though it’s been around since 2018, the US CLOUD Act is one big reason. It allows American authorities to demand data from any US-based company, regardless of where in the world that data is physically stored. That puts it at odds with local privacy laws like the GDPR, but it’s also a problem if your country comes into conflict with the US. Your data could become a bargaining chip.

Businesses are realizing that if their data is stored with a US provider, it is never truly under their control. Our research has also found that people worry their data will be used as raw material for model training.

We believe all this will accelerate a shift toward digital sovereignty. At Proton, we’re already seeing it begin, as organizations look for encrypted alternatives that protect their data with end-to-end encryption in a politically neutral jurisdiction.