Big Tech companies built trillion-dollar empires by turning our personal lives into a commodity. Artificial intelligence is supercharging their surveillance-capitalism business model by integrating into the tools we use every day — often without giving us clear explanations or the ability to opt out.
Powerful large language models (LLMs) collect even more detailed data and infer more about you than was previously imaginable. AI is being integrated into a wide range of platforms(nouvelle fenêtre), from search to photo storage and even your device’s operating system, making it more difficult to keep track of, much less consent to, this data collection. This in turn allows companies like Google, Meta, and Microsoft to build more comprehensive personal profiles, profit more off the data, and exert more control over the information and narratives you see.
These companies can share or sell your private information with data brokers and other third parties. Governments can tap into this data through subpoenas(nouvelle fenêtre) or even more surreptitious forms of surveillance. And personal data can be exposed in a data breach, as has happened over(nouvelle fenêtre) and over(nouvelle fenêtre) again with services like ChatGPT.
The result is rapid and large-scale data collection, with even fewer controls on companies to obtain consent or consider the consequences of the technology. Data protection laws surrounding AI are notoriously far behind the industry(nouvelle fenêtre).
We urgently need a private alternative, which is why we built a new confidential AI assistant, Lumo(nouvelle fenêtre), that doesn’t keep any record of your conversations. Your saved chats are available only to you, protected by zero-access encryption. When it comes to your data, you have a choice — but more people need to understand the risks of the status quo.
AI makes Big Tech even more powerful
AI enables Google and Meta to scale and entrench their existing monopolies, especially in advertising and personal data exploitation, but at a much faster and larger scale.
Meta, for example, is deeply integrating AI to revitalize its ad business, scanning your posts and even adding Meta AI(nouvelle fenêtre) to its end-to-end encrypted chat products like WhatsApp in an apparent attempt to siphon more personal data from the platform. Google has gone AI-first with its advertising business(nouvelle fenêtre), so every ad is optimized to manipulate you personally. And Google’s Gemini is now deeply integrated into its entire ecosystem, giving its AI a free pass to collect data from Android phones and Google apps.
AI is similar to search, but far more intimate and deeply rooted in your life. While search tracking uses simple queries and clicks to predict your interests, AI uses natural language processing and even image recognition to predict the brands, messages, emotions, and imagery that will resonate most deeply with you. Some people are even becoming emotionally invested(nouvelle fenêtre) in AI chatbots because of how closely they can imitate human connection — something that never happened with search engines.
One ad agency boasts(nouvelle fenêtre) that AI-powered advertising is a “seismic shift” that gives clients the power to “gauge sentiment and preferences” and even “process visual data, such as images and videos, to identify brand logos and product usage, enabling context-specific ad targeting.” Imagine if beauty products could target your child at the moment they express anxieties to a friend, or if politicians could deploy AI-optimized ads to prey on voters’ private fears.
Big Tech is already more powerful than governments — imagine what they could accomplish with unbridled AI.
Your chats are leaking
Your privacy is at stake, but so is your safety. AI creates a massive, centralized pool of intimate information that is increasingly vulnerable to being exposed in a variety of ways.
This has already happened:
- In July 2025, reporters found over 100,000 conversations in ChatGPT(nouvelle fenêtre) were indexed by Google and made searchable. Users clicking the “share” button to send a conversation to friends or colleagues almost certainly didn’t realize their private conversations would be visible to everyone on the internet.
- In 2024, researchers used a prompt injection in Slack AI(nouvelle fenêtre) to reveal content shared in private channels.
- In 2023, a different team of researchers extracted actual software credentials(nouvelle fenêtre) (known as “secrets”) from GitHub’s Copilot AI assistant, which is trained on billions of lines of code. Some of that training code included credentials, and the researchers convinced the AI to spit out over 200 of them.
- In January 2025, after the splashy launch of AI startup DeepSeek, a research team found a public database(nouvelle fenêtre) that included a huge volume of chat logs, secrets, and other sensitive data.
- In August 2025, contractors working for Meta AI said they read personal conversations(nouvelle fenêtre) in which people shared sensitive data with the chatbot.
Whenever personal data is stored without end-to-end or zero-access encryption on a company’s servers, that data is vulnerable to being leaked. Chat logs are an incredibly attractive target for hackers — and governments. A US court has already ordered OpenAI to retain all user chat logs(nouvelle fenêtre) (which the company is currently fighting). That data would then be available to the government upon demand. The US already has a variety of methods to secretly spy on users, from warrantless wiretaps to data requests that Big Tech companies are required to obey.
The more data AI systems absorb, the more they risk revealing, whether through accidental leaks, biased outputs, or government pressure.
It’s not just data – it’s influence
This probably goes without saying: The companies and political operatives targeting you with ads aren’t looking out for your best interests.
AI takes that manipulation to a whole new level. Unlike mass media, which you can choose to tune out, AI is woven into your everyday life — answering your questions and offering in-app suggestions you didn’t even want. It’s not just reactive; it’s proactive. And when it’s built by companies or exploited by governments with specific agendas, that makes it incredibly powerful and dangerous.
We’re already seeing what that looks like. In China, DeepSeek has been shown to avoid or erase politically sensitive topics. Ask about the Tiananmen Square protests and you’ll hit a wall(nouvelle fenêtre) — not because the AI doesn’t know, but because its jurisdiction means it’s not allowed to tell you.
This isn’t a problem unique to China. Elon Musk’s Grok chatbot raised eyebrows for taking clear political stances(nouvelle fenêtre) and offering wildly different outputs depending on how its internal “dial” was set. It’s also been found to consult Musk’s own opinions(nouvelle fenêtre) on a topic before it shares its response. This shows how AI guidance can be shaped to push certain views, subtly or not, which will become increasingly problematic as we rely more on AI for education and information without checking primary sources.
So if AI is shaping what you see, what you think, and how you feel, who decides what version of the truth you get?
AI built for people, not profit
If we want an AI that is not shaped by Big Tech or authoritarian politics and that protects your data from hackers and leaks, we need to build something different. That means supporting private, independent AI tools without ulterior motives or hidden agendas.
Lumo is a step in that direction. It’s built in Europe, without Silicon Valley investments or foreign surveillance, and overseen by the nonprofit Proton Foundation, which is required by its charter to further the privacy of our community. Since our company was founded by scientists who met at CERN in 2014, we have been exclusively funded by our users, not investors or advertisers. This ensures our values and mission remain aligned with the people we serve.
We designed Lumo to be private by default, meaning it doesn’t keep chat logs, and your chats are stored with zero-access encryption, so only you can see them. Lumo is also built on open-source models, and your data is never used to train the AI.
Lumo is just the beginning. If we want a future where AI serves people — not profit, not power — we must demand it. By choosing private alternatives today, you help shape an internet that’s more transparent, more democratic, and more respectful of your rights. The tools are here. The choice is yours.