Meta is reportedly preparing to capture US-based employee behavior — including mouse movements, clicks, keystrokes, and screen snapshots — to help train its AI systems to navigate software the way humans do.
Dubbed the Model Capability Initiative (MCI), the program is part of Meta’s broader push to build AI agents that can autonomously perform computer-based tasks, from navigating dropdown menus to using keyboard shortcuts, according to Reuters.
Meta said it will protect sensitive information but didn’t clarify what data qualifies as sensitive material, how that protection would work, or whether it would extend to third-party information that employees may handle on the job.
The move comes as Meta prepares to cut 10% of its workforce starting May 20, with more layoffs reportedly expected later this year.
The new AI gold rush is behavioral data
AI companies have already burned through huge amounts of public internet data, and Meta’s MCI is an example of these companies going deeper in their pursuit of the latest AI training fodder: behavioral data.
Behavioral data refers to the digital traces people leave behind as they move through systems: the clicks, keystrokes, pauses, corrections, shortcuts, and navigation patterns that show how a task actually gets done. It is valuable to companies because it captures not just the output of work, but the process behind it — something that AI systems currently struggle to process and replicate.
Microsoft Recall follows the same logic by taking snapshots of what a person does on their computer, though Microsoft presents it as a memory feature for finding things you’ve seen, not an AI training pipeline. Either way, it shows how comfortable Big Tech has become with turning highly detailed behavioral traces into something systems can record and learn from. In workplace settings, features framed as optional can become hard to refuse when employers control company policy and shape the power dynamics around consent.
In Meta’s case, MCI seems like another building block in a broader push to capture more intimate and revealing forms of personal data. The company is already using all Meta AI interactions across Facebook, Instagram, WhatsApp, and the rest of its ecosystem for product improvement, AI training, and targeted ads in places that don’t have strong privacy protections like the GDPR.
Tracking employees can make work worse
Another problem with Meta tracking employees for AI training is that it can make work worse. Like click tracking software, it treats keystrokes and mouse movements as meaningful signals. But even with more advanced AI layered on top, those are still poor stand-ins for actual performance, particularly in knowledge work, where critical thinking, planning, connecting ideas across functions, and solving problems are often invisible from the outside.
Once workers know those signals are being captured — especially if they suspect they could one day be replaced by the very AI agents they are helping refine — they gain a perverse incentive to optimize for looking busy, or even to deliberately distort their behavior, instead of doing meaningful work. Surveillance and mistrust become the workplace norm.
A privacy-first infrastructure matters
Platforms built around extraction make every interaction start to look like data waiting to be monetized, optimized, or fed back into another system. When AI is involved, that system may be used to study you, imitate you, and eventually displace you.
Privacy-first services matter because the less data a company can access, the less room it has for this kind of mission creep. Strong protections like end-to-end encryption help limit what a platform can see in the first place, while open-source code adds transparency by allowing independent scrutiny of how those systems actually work. Together, they help protect the trust of both employees and customers.