The US administration is finalizing its 2026 National Defense Authorization Act (NDAA), which must pass by the end of this year. One provision of the bill would impose a 10-year moratorium on state-level regulation of AI. This would effectively halt any state’s ability to control how AI tools collect data or generate responses for their residents.

The same moratorium was rejected in July 2025 as part of the One Big Beautiful Bill Act. But this time the administration is ready with a backup plan: if the moratorium doesn’t pass as part of the NDAA, it has already drafted an executive order that would try to force states to scrap any AI regulations of their own.

At Proton, we believe regulation is an effective tool to protect people, and a federal standard is the right way to go. In a fiercely divided political climate, a single federal law would streamline AI regulation, preventing fragmented protections across states and making the rules much clearer. But erasing state laws before proposing any federal alternative would be worse than fragmentation: Big Tech would be able to operate without guardrails, putting everyone’s safety and privacy at risk.

The administration’s plan to accelerate AI

In its AI Action Plan, the administration laid out its strategy to accelerate AI innovation, build American AI infrastructure, and lead in international AI diplomacy and security.

To push this agenda forward, the federal government has signaled it is determined to prevent state-level regulation. Adding the moratorium to the NDAA is a clever way to push it through Congress, because the defense spending bill must pass. If the moratorium fails to make it into the final bill, however, President Trump has prepared an executive order as plan B.

The draft stipulates that:

  • American AI companies must be free to innovate without regulation, meaning state legislatures must not create a regulatory patchwork that would impede AI growth.
  • Within 30 days of the date of the order, the attorney general will create an AI Litigation Task Force whose sole purpose would be to challenge state AI laws.
  • Within 90 days of the date of the order, the secretary of commerce will publish an evaluation of existing state laws that “require models to alter their truthful outputs, or […] compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution”.
  • Within 90 days of the date of the order, the secretary of commerce will issue a policy notice stating the conditions under which states may be eligible for remaining funding under the Broadband Equity Access and Deployment (BEAD) program.
  • Any state whose AI laws conflict with the evaluation described above will have its discretionary grant programs and non-deployment funds reviewed.

The executive order and the AI Action Plan suggest a clear agenda: to unleash American dominance in the AI industry, the administration wants AI regulated by a single federal law. But given the government’s incentive to help AI companies outpace their competitors — and Congress’s inability to agree on almost anything — pushing for a single regulation could amount to no regulation at all.

We need an AI industry that isn’t afraid to be regulated

In the short time since AI tools emerged, they have changed how we live and work. They’re part of our workdays and our personal lives, helping us write emails, sound out ideas, research topics we’re interested in, and make plans. ChatGPT alone reached more than 100 million users in its first few months.

After the upset caused by DeepSeek, accelerating AI development has become a pressing priority for both the industry and the US government. Sam Altman, CEO of OpenAI, has warned against regulations that could slow down the US in the AI arms race with China.

American businesses, like other businesses around the globe, have the right to innovate in their sectors and seek to outdo their competitors. But they don’t have the right to go unregulated. Enhancing America’s global AI dominance by keeping the sector under-regulated isn’t in anyone’s interest.

Big Tech has shown us time and time again that flouting data regulations and selling personal data is its fastest route to profit. A profit-driven approach to AI means businesses will prioritize revenue or geopolitical advantage, even against the interests of the people using their tools. In recent years, we’ve seen the costs of insufficient regulation.

Without adequate regulation, AI companies will be free to pursue whatever goals they deem necessary for growth, and as we’ve seen, that growth will likely come at the expense of the people using their tools.

Choose private AI tools

If this executive order comes into force, consumers will become more vulnerable to bad practices in the AI space. Without the right protections in place, AI tools become a digital surveillance apparatus. They can leak information about our personal lives, leak sensitive business documents, and encourage us to rely on them emotionally. If American AI tools go effectively unregulated, they risk becoming unsafe for citizens and businesses alike.

In the absence of sufficient government regulation, the best way to protect ourselves and encourage safer practices in the industry is through our tech choices. We must prioritize AI tools that serve people first: tools that don’t keep conversation logs or train their models on user conversations. Even if Big Tech isn’t sufficiently regulated, there are secure alternatives to its tools.

After our community expressed an interest in private AI alternatives, our engineers began exploring technology to make this possible. In July this year we introduced Lumo, an AI assistant designed to help anyone around the globe break away from US-centric tools. Instead of hiding from regulation and scrutiny, we published our security model and codebase so anyone can verify how Lumo operates. It keeps no logs of conversations, allowing you to trust that you’re speaking confidentially. Zero-access encryption protects every conversation and ensures that advertisers and governments can’t access them. Chat histories can be deleted at any time, because we apply GDPR principles to all our users, both inside and outside the EU, as a responsible data practice.
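For readers curious what zero-access encryption means in practice, here’s a minimal conceptual sketch in Python using the `cryptography` library. It illustrates the general technique, not Lumo’s actual implementation: each message is encrypted on your device with a key the service never receives, so the server only ever stores ciphertext.

```python
# Conceptual sketch of zero-access encryption (illustrative only, not Lumo's code).
# The key is generated and kept on the user's device; the server never sees it.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

user_key = AESGCM.generate_key(bit_length=256)  # never leaves the device

def encrypt_message(plaintext: str, key: bytes) -> bytes:
    """Encrypt one chat message on the client before it is sent anywhere."""
    nonce = os.urandom(12)  # a unique nonce per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext.encode(), None)
    return nonce + ciphertext  # only this opaque blob reaches the server

def decrypt_message(blob: bytes, key: bytes) -> str:
    """Decrypt a stored message; only the key holder can do this."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

# Whatever the server stores (or an eavesdropper intercepts) is unreadable:
blob = encrypt_message("Draft of my business plan", user_key)
assert decrypt_message(blob, user_key) == "Draft of my business plan"
```

Because the key stays on the device, a provider built this way cannot hand conversations to advertisers or governments even if compelled: it simply never holds readable data.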

Instead of flouting regulations, Proton actively advocates for stringent privacy laws and against surveillance both online and offline. Our goal has always been to create an internet where privacy comes first — that means creating alternatives to Big Tech tools like AI, and helping to reduce the European market’s reliance on US tech. We’ll always advocate for people over profits.