The AI “alignment problem” posits that as AI systems become more intelligent, it will become more difficult to align their goals with those of their developers. An AI system could, in theory, “go rogue”, devising undesirable and unpredictable strategies to achieve an objective.

The paperclip maximizer is a thought experiment that explores this concept: If a superintelligent AI were tasked with the single purpose of creating paperclips, it might commandeer the world’s resources to accomplish its goal at all costs, turning everything into paperclips. Eventually it might even decide that humans trying to stop it were an obstacle and eliminate us: a clear misalignment between AI and human interests.

Though obviously outlandish, the paperclip problem has become a pop culture touch point for AI. In real life, constraints and rules prevent systems from going beyond their intended purposes. But many Big Tech founders and CEOs use this hypothetical to create buzz around their large language models (LLMs). Do chatbots really have the power to make humanity extinct? No. But it makes them sound impressive and, more importantly, it conceals the real AI alignment problem that’s harming us right now: Big Tech LLMs are invading our privacy, stealing our data, and devaluing our information economy. It’s both a sales strategy and misdirection, designed to keep us looking into the future instead of looking at the control we’re ceding in the present.

Are we on track for a dystopian future?

When founders make broad, speculative statements about what their products “could” do, it’s primarily a way to drum up business opportunities and create value for their shareholders. LLMs and generative AI are business products. It’s in the interests of their owners that you believe their tool is revolutionary.

Media outlets are keen to propel sensational stories, such as the potential for AI tools to help users create bioweapons or lie to developers, as though generative AI has a mind of its own (it doesn’t). It’s easy to get clicks with headlines about extreme or unfounded opinions, but such views aren’t an accurate portrayal of the AI technology that exists today.

Instead, LLMs are simply word prediction machines. They are computer programs that digest massive amounts of text to “learn” the patterns of human language and then build probabilistic responses to user inputs. They don’t have thoughts of their own, and they don’t even understand language. In fact, they must first convert words into numbers before they can process information. The limitations of LLMs have been well documented by Gary Marcus and many other researchers.
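To make that concrete, here’s a deliberately tiny, hypothetical sketch in Python of the two steps described above: converting words into numbers, then predicting the next word from patterns counted in training text. Real LLMs do this with neural networks over billions of parameters, but the principle is the same: numbers in, probability-weighted guesses out, no understanding required.

```python
# Toy illustration (not any vendor's code) of word prediction:
# map words to numbers, then predict the next word from observed frequencies.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Step 1: convert words to numbers (a minimal "tokenizer")
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]
print(ids)  # [0, 1, 2, 3, 0, 4, 0, 1, 5, 0, 6]

# Step 2: count which token tends to follow which (a crude "language model")
following = defaultdict(Counter)
for current, nxt in zip(ids, ids[1:]):
    following[current][nxt] += 1

# "Generate" the most probable word after "the" -- pure pattern matching.
inverse_vocab = {i: w for w, i in vocab.items()}
next_id, _ = following[vocab["the"]].most_common(1)[0]
print(inverse_vocab[next_id])  # "cat"
```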

New tools for surveillance capitalism

LLMs won’t cause the apocalypse, but they’re quite good at something else: collecting extremely nuanced information about you. That’s why LLMs are a data gold mine for businesses and an attractive mark for criminals.

LLMs power chatbots such as ChatGPT, Copilot, Claude, and Grok, which function like search engines. Ostensibly, you can use these chatbots for anything you like: They can help you create grocery lists, organize your calendar, write emails, or write code. They’re your personal assistant, your teacher, your confidante, or whatever else you ask them to be.

Hundreds of millions of people globally are already sharing their questions, ideas, thoughts, and deepest secrets with chatbots. They’re encouraged to do so by the companies running them: Sam Altman, CEO of OpenAI, noted recently that Gen Z “don’t really make life decisions without asking ChatGPT what they should do.” Calling out and encouraging this level of trust in and reliance on a for-profit and unreliable chatbot is deeply irresponsible. Big Tech wants us mediating our lives and thoughts through their services because then they get a say in how we spend our money, what we do, and what we think.

The businesses behind LLMs also sell their products as productivity solutions for governments and businesses, promising to save users time and effort while introducing new opportunities for surveillance. In the UK, the Ministry of Justice’s AI system “predicts” the reoffending risk of prisoners. Police officers are using chatbots to write up crime reports. Combining state power with biased algorithms is dangerous and reinforces existing prejudices.

Exploiting your data for training purposes

LLMs run on AI models trained on enormous datasets, and improving them requires more and more data. Unfortunately, this data comes from you: your chats, your photos, your web searches. Companies won’t always ask your permission or make it clear what they’re collecting from you.

For example, Meta AI wants to scan photos in your library that you haven’t even uploaded to its platforms so it can analyze your facial data. Many users were disturbed by the announcement that Microsoft’s Copilot AI would take screenshots of their devices every few minutes. Your photos, your messages to your friends and loved ones, and your most personal thoughts are valuable data points that these companies want.

Looking at OpenAI’s privacy policy, we see that ChatGPT will collect and store:

  • Identifiers, such as your name, contact details, IP address, and other device identifiers
  • Commercial information, such as your transaction history
  • Network activity information, such as the content you submit and how you interact with ChatGPT
  • The general location from which you access ChatGPT

This information can be disclosed to governments, vendors, affiliates, service providers, and other third parties.

Essentially, an LLM gets to know you. Much like a search engine, it comes to understand your habits, your preferences, your interests, and every aspect of you that it can glean from your behavior. Big Tech will always prioritize finding new ways to acquire your personal data because it’s profitable. And they’re hoping you won’t notice.

Just think of the trends that have swept social media in recent years: custom Studio Ghibli-style animations that aped the distinctive style of Hayao Miyazaki against his will, and custom AI-generated packaged dolls inspired by classic toys such as Barbie that offered visual insights into people’s personal tastes. These trends may have felt like harmless fun, but the AI tools used to create the images were able to harvest photos and information about people that their parent companies can use for advertising purposes and potentially sell to third parties. These are just two of many examples of Big Tech quietly ingesting your personal data in return for participation in a fleeting trend.

There’s a better solution for LLMs

After reading about all these risks, you might come to think that LLMs are inherently dangerous. But that’s not the case at all. Big Tech companies built their AI systems to be invasive and data-hungry on purpose. It’s possible to build AI that protects people’s privacy and keeps data secure by default. We know because we did it with Lumo, our privacy-first AI assistant.

Here’s how Lumo solves the real AI alignment problem:

  • Lumo doesn’t keep logs of your conversations. Every chat is deleted from our servers as soon as the model is done processing your query and response.
  • Chat histories are stored with zero-access encryption. Your data is locked with a key only you hold, so Proton never has access to it and can never share or accidentally leak it (see the sketch after this list).
  • We don’t train the models with your chats. Using conversations to train AI models places your data at risk of resurfacing in future outputs. Conversations with Lumo are yours alone.
  • Lumo is open source and uses only open source models. Because our code base is public, anybody can verify that our apps do exactly what we claim.
  • We’re based in a privacy-respecting jurisdiction. Unlike US-based Big Tech AIs that are subject to invasive surveillance laws, Lumo is based in Europe and protected by strong privacy laws.
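For readers curious what “zero-access” means in practice, here’s a minimal, hypothetical sketch, not Lumo’s actual implementation, using Python’s widely used cryptography package: the chat history is encrypted on your device with a key only you hold, so the server only ever stores ciphertext it cannot read.

```python
# Conceptual sketch of zero-access storage (illustrative only):
# encrypt on the client with a key the server never receives.
from cryptography.fernet import Fernet

# The key is generated and kept on the client (e.g. derived from the
# user's account secret); it is never transmitted to the server.
client_key = Fernet.generate_key()
cipher = Fernet(client_key)

chat_history = b"User: draft an email to my doctor about..."

# What actually leaves the device and is stored server-side:
ciphertext = cipher.encrypt(chat_history)

# The server (or anyone who copies its disks) sees only opaque bytes.
print(ciphertext[:40])

# Only the client, holding the key, can read the history back.
assert cipher.decrypt(ciphertext) == chat_history
```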

At every level, Lumo is designed to provide the same utility as other LLMs without the risky externalities. Big Tech could build their AIs the same way. They just choose not to because your data is their currency.

Is AI alignment really a threat?

The truth is simply less exciting than Big Tech CEOs would like us to believe. An AI is less likely to empower people to create bioweapons or expend the world’s resources making paperclips than it is to simply follow the directive Big Tech already follows: stealing and exploiting private data.

Don’t believe the hype about AI alignment. It’s not the threat that any of us should be focusing on. Surveillance, exploitation of individuals and smaller businesses, and a compromised information ecosystem are the pressing concerns we should actually be paying attention to when it comes to AI.