OpenAI has released ChatGPT Atlas, a new browser that builds ChatGPT directly into every interaction people have on the web. The company describes it as a step toward a digital assistant that moves through websites with you, interprets what you see, and completes tasks you request.
Currently available only on Mac, Atlas promises extraordinary convenience. Yet it also introduces new privacy and security risks with unknown consequences. OpenAI has not yet answered several key questions about how the product protects user data.
We already know enough, though, to examine its main benefits and the serious privacy risks that come with them.
How ChatGPT Atlas works
Atlas introduces two core systems that change how browsing functions. The first, browser memories, records which sites you visit and how you interact with them, aiming to make ChatGPT’s answers more personal. The second, agent mode, allows the AI to open pages, fill forms, or carry out tasks inside the browser window.
OpenAI says these features are optional. You can disable them, erase their data, or browse privately. The company also says your browsing content is excluded from model training unless you opt in.
For all its talk of reinvention, however, ChatGPT Atlas still runs on Chromium(new window) — the same open-source engine behind Chrome and Edge.
That framework relies on user settings and habits. Ample research shows that most people do not change their default settings(new window). And earlier AI browsers show how fragile such controls can be under real-world conditions.
AI browsers’ security risks: What researchers have uncovered so far
One of the first AI browsers was Perplexity’s Comet, released in July 2025. A vulnerability in the browser’s AI system, first reported by Time(new window), revealed how AI browsing could open new attack vectors. Researchers at LayerX discovered a vulnerability called CometJacking(new window), which allowed malicious links to hide instructions inside URLs. When clicked, Comet’s AI interpreted those prompts as real commands.
Tests showed the browser could pull data from Gmail and calendars, download malicious files, and in some cases even attempt purchases on scam websites. iTnews later detailed(new window) similar findings from Guardio researchers, who described Comet as an overeager assistant — quick to act and slow to question suspicious instructions. Kaspersky’s analysis(new window) went further, warning that integrating AI directly into a browser gives malicious web content a direct channel to manipulate it.
Why Atlas is risky by design
Search has always been surveillance. AI search makes it intimate surveillance. Atlas makes it total surveillance.
Traditional engines like Google capture isolated questions — a medical symptom, a recipe, a legal query. Conversational AI turns those fragments into stories. It asks you for clarification, encourages follow-ups, and records context. Over time, these exchanges create detailed portraits of your private life, building a narrative about your intent, vulnerabilities, and decision-making patterns.
This is already risky when confined to the ChatGPT apps. With Atlas, that same mechanism lives inside your browser, offering OpenAI total surveillance of every interaction you have online.
OpenAI’s own documentation(new window) confirms that Atlas can view the pages you visit, remember their content through browser memories, and act on your behalf through agent mode. Each layer increases visibility. Atlas doesn’t just register your queries; it observes what you read, how long you stay, and what you do next.
The result is a single, comprehensive record of intent and behavior. Even when OpenAI says this content isn’t used for training by default, it’s still processed and analyzed for personalization. Through inference, Atlas can connect ordinary actions to build revealing narratives — like linking searches for anxiety symptoms with therapist directories and medication research to form a picture of a person’s mental health.
Privacy controls exist but demand constant vigilance. Users can toggle visibility or delete memories, yet most will forget to manage those settings. You can get your data, or delete it. But the models have already been trained on what you did.
Atlas extends surveillance beyond what Google achieved by combining search and browser data. OpenAI has merged AI conversation, web interactions (including those outside the search engine), and personal data harvesting into a single interface that understands context and acts on it.
Privacy and data exposure concerns
TechCrunch reports(new window) that Atlas keeps a record of browsing activity to personalize answers. Kaspersky warns(new window) that an AI integrated at this level has full visibility into web traffic and files on the device. That visibility can include private material such as subscriptions, work documents, or financial data.
AI browsers mark a shift from passive data collection to continuous behavioral mapping. Every page visited, every prompt written, every delegated task becomes another signal in a feedback loop designed to predict and influence behavior.
OpenAI points to user settings as safeguards: privacy toggles, data deletion, and incognito browsing. But these are surface controls. Once an AI connects the dots, removing one piece of data does not erase the story it’s already built. Atlas may forget discrete entries; the inferences, however, remain.
This model unites the web’s two most powerful data-collection engines — the search index and the browser — and overlays them with AI capable of reasoning about what it observes.
The hook is that it’s helpful. A tool that organizes grocery lists also maps financial behavior. A tool that helps you research therapy also infers your emotional state. What appears as personalization is data extraction with empathy as its mask.
Earlier surveillance capitalism relied on user apathy: people being too lazy to update their privacy settings. Atlas depends on engagement: It’s so smart and convenient, you can’t help but trust it.
Why Atlas isn’t ready for sensitive use
Atlas is a bold step toward hands-free browsing, but it isn’t built for trust. The same design choices that make it powerful also make it unsafe. Security researchers and testers have reached a consistent conclusion about AI browsers: they’re remarkable demonstrations, but unreliable for daily life.
If you try Atlas, treat it like a test environment. Keep banking, work, and personal accounts elsewhere. Don’t assume its safeguards will withstand real-world threats.
OpenAI will likely improve Atlas’s security, but today, using an AI browser means granting the company direct visibility into your online behavior and hoping that access remains protected. The risk is structural, not a bug. These companies built surveillance into their software on purpose.
There is another approach, and it’s already being used by millions of people.
Lumo, Proton’s private AI assistant(new window), is built to prove that intelligence and privacy can coexist. It operates under a strict no-logs policy. Chat history is protected with zero-access encryption, meaning not even Proton can read it. Conversations are never used for training. Both the code and models are open source, allowing anyone to verify what happens under the hood. Users own their data outright. And because Lumo is funded by the community, not advertisers, there’s no commercial incentive to exploit personal information.
That’s the difference between surveillance AI and privacy AI. One is built to collect data; the other is built to protect it.