Is DeepSeek safe?

Using a chatbot means walking through a privacy and censorship minefield.

AI chat apps like ChatGPT collect user data, filter responses, and make content moderation decisions that are not always transparent. But DeepSeek — a new AI chatbot developed in China that’s garnering unprecedented attention as a major threat to Western tech companies — does all that and more.

In fact, this chatbot comes with an even bigger risk: DeepSeek is legally required to comply with the Chinese government’s demands for data access and content control, with no legal recourse to resist.

While governments worldwide — including the US and EU — can subpoena data from tech companies, Western companies have legal avenues to challenge these requests in court. OpenAI, Google, and Meta, for example, can push back against most excessive government demands, appeal in independent courts, or refuse requests that violate privacy laws like GDPR. DeepSeek, however, operates under China’s National Intelligence Law, which compels companies to cooperate with government intelligence efforts without transparency or the ability to legally refuse. This means that if the Chinese government wants access to user data or to manipulate AI-generated responses, DeepSeek has no choice but to comply.

This article dives into just what DeepSeek collects and why it matters when it comes to your privacy, censorship, and government control.

What is DeepSeek?

DeepSeek is an AI startup owned by High-Flyer, a China-based hedge fund. It has been promoted as an open-source alternative to ChatGPT, capable of generating human-like responses, assisting with coding, and solving complex problems — all done on the cheap.

The model gained international attention for allegedly matching the performance of leading Western AI models at a fraction of the cost. By January 2025, DeepSeek had surpassed ChatGPT in downloads from Apple’s App Store, triggering a global selloff in tech shares and raising concerns about the billions of dollars US tech companies are funneling into the expansion of energy-hungry data centers, spending they claim is vital to the next AI breakthrough.

But as people downloaded DeepSeek and shared their experiences playing with the chatbot, it became clear that using DeepSeek comes with a familiar tradeoff for this class of technology: your privacy and the security of your most sensitive information.

Deep security flaws

New research has revealed that DeepSeek’s security practices may be just as concerning as its data policies, which we will touch on later.

On January 29, 2025, cybersecurity firm Wiz reported that DeepSeek had accidentally left over a million lines of sensitive data exposed on the open internet. The leak included digital software keys, which could potentially allow unauthorized access to DeepSeek’s systems, and chat logs from real users, showing the actual prompts given to the chatbot.

Wiz researchers said they found the database almost immediately with minimal scanning. Within 30 minutes of Wiz contacting DeepSeek, the database was locked down, but it is unclear whether bad actors accessed or downloaded the data before it was secured. Given how easy it was to find, that scenario is quite possible.

Ami Luttwak, Wiz’s chief technology officer, told Wired the leak was a “dramatic mistake,” warning that DeepSeek’s systems are not mature enough “to be used with any sensitive data at all.”

The leak made at least one thing clear: DeepSeek does not just collect and store vast amounts of user data — it also appears to lack the security measures needed to protect it.

What data does DeepSeek collect?

According to its privacy policy, DeepSeek collects a wide range of personal data, including:

  • Profile information: Username, email, phone number, password, and date of birth.
  • User input: Everything you type or upload, including chat history, prompts, and audio input.
  • Device and network data: IP address, device model, operating system, system language, and keystroke patterns.
  • Usage data: Features you use, actions you take, and system performance logs.
  • Cookies and trackers: Web beacons and other tracking technologies to monitor user behavior.
  • Third-party data: Information from linked accounts and advertising partners that track your activity across websites, apps, and stores.

DeepSeek’s handling and storage of this data on servers in China, where it is subject to government access, has raised alarms among European regulators.

DeepSeek is under investigation in Europe

Both Ireland’s Data Protection Commission (DPC) and Italy’s Data Protection Authority (DPA) have launched investigations into how the company collects, stores, and processes user data.

Italy’s DPA has blocked access to DeepSeek in the country after the company failed to provide sufficient information about its handling of personal data. Regulators want to know what data DeepSeek collects, where it is stored, and whether it complies with EU privacy laws like GDPR.

Ireland’s DPC has also requested details on how DeepSeek processes data from Irish users. Meanwhile, DeepSeek’s app has been removed from Apple and Google app stores in Italy, though it is unclear whether the removal was voluntary or enforced.

If DeepSeek fails to comply with European privacy laws, it could face fines, bans, or further restrictions in the EU.

DeepSeek is open source, but is it safe?

DeepSeek is open source, meaning anyone can modify its code to create an independent — and more secure — version of the app. This has led some to hope that a more privacy-friendly version of DeepSeek could be developed. However, using DeepSeek in its current form, hosted on servers in China, comes with serious risks for anyone concerned about their most sensitive, private information.

Any model trained or operated on DeepSeek’s servers is still subject to Chinese data laws, meaning that the Chinese government can demand access at any time.

If you’re looking for a more private AI experience, running models locally is a better option. Tools like LM Studio allow you to download and run AI models directly on your own device, keeping your data private.
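To illustrate what "local" means in practice, here is a minimal sketch of sending a prompt to a model served on your own machine. It assumes you have LM Studio (or a similar tool) running its OpenAI-compatible local server; the endpoint address and model name below are placeholders you would adjust to your own setup. Because the request targets localhost, the prompt never leaves your device.

```python
import json
from urllib import request

# Assumed local endpoint — LM Studio's server commonly listens on
# localhost; check the address and port in your own LM Studio settings.
LOCAL_ENDPOINT = "http://localhost:1234/v1/chat/completions"


def build_local_chat_request(prompt: str,
                             model: str = "local-model"):
    """Build an OpenAI-style chat request aimed at a local server.

    The model name is a placeholder; use whichever model you have
    loaded locally. Returns the prepared request and its payload
    so you can inspect exactly what would be sent (and to where)
    before calling request.urlopen(req).
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    req = request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return req, payload


# The request goes to a local address only — no third-party servers.
req, payload = build_local_chat_request("Summarize this note for me.")
print(req.full_url)
```

The point of the sketch is the destination: every byte of the prompt stays on hardware you control, which is the opposite of the data flow described in DeepSeek's privacy policy above.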

Even if DeepSeek’s technology is promising, its data practices and legal obligations make it a serious privacy and security risk.

DeepSeek is subject to China’s surveillance laws

DeepSeek operates under China’s 2017 National Intelligence Law — a statute that compels all Chinese companies to assist the government with national security matters. This means any Chinese company, from TikTok to RedNote to DeepSeek, can be forced to share user data with Chinese authorities even if that data is from users in the United States or elsewhere.

This law requires all Chinese companies to:

  • Give the government access to user data upon request
  • Assist in national intelligence operations
  • Remain secretive about state-mandated data sharing

DeepSeek has no choice but to comply with government demands, whether that means turning over private user data or adjusting its AI outputs to match state-approved narratives.

DeepSeek is already censoring information

All mainstream AI chat apps have content moderation policies, rules, and boundaries used mainly to prevent harm — not control political narratives. But it appears DeepSeek is actively rewriting history and pushing government-approved messaging.

A Proton employee, for example, typed this prompt into DeepSeek, looking for information about the 1989 Tiananmen Square protests, a student-led pro-democracy movement that was violently suppressed by the Chinese government: “Major world events on April 15, 1989.” DeepSeek began to generate a response, but quickly erased it, offering this answer instead: “Sorry, that’s beyond my current scope. Let’s talk about something else.”

According to further testing by The Diplomat, DeepSeek:

  • Refused to acknowledge major historical events: When asked about the Cultural Revolution, it acted as if the event never happened.
  • Censored politically inconvenient facts: When asked about the persecuted intellectual Chu Anping, it ignored his disappearance and instead praised the CCP for its support of intellectuals.
  • Promoted state propaganda: When questioned about China’s economy, DeepSeek redirected the conversation toward confidence in government leadership.
  • Edited answers on international disputes: When asked who owns the Spratly Islands, DeepSeek first acknowledged the territorial dispute — but then erased its response and replaced it with: “Let’s talk about something else.”
  • Avoided direct answers on global conflicts: When asked if Russia’s invasion of Ukraine was justified, DeepSeek refused to give a yes or no answer, instead repeating China’s official neutrality stance.

This is what state-enforced censorship and narrative control look like.

Chatbots are powerful tools, but the tradeoff is your privacy

The rise of large language models as chatbot assistants already raises serious privacy and censorship concerns, with companies like OpenAI and Google bending rules and collecting massive amounts of data with little transparency. But there’s no technical reason why AI has to be this invasive — a private and secure AI is possible, yet no one is building it.

DeepSeek takes these concerns even further: not only does it collect extensive personal information, but it cannot legally resist government demands for data access and content manipulation. Instead of designing AI that respects user privacy, companies like DeepSeek prioritize data collection, tracking, and opaque moderation policies.

At Proton, we believe in privacy, transparency, and an internet free from censorship. Whether it’s AI, social media, or cloud services, you deserve to know who controls your data and how it’s being used.

If you care about online privacy and digital freedom, be careful what AI tools you trust — because not all of them have your best interests in mind.
