Meta is installing monitoring software on the work computers of U.S.-based employees and contractors to capture keystrokes, mouse clicks, cursor movements, and screenshots, with the aim of using the data to train artificial intelligence models. The program, called the Model Capability Initiative, was revealed in an internal note shared by a researcher on the company's Super Intelligence Labs channel, according to Reuters.
The software will work with work-related applications and websites such as Gmail, GChat, and Meta's internal AI assistant for employees, Metamate. Phones used for work are not included in the monitoring.
Meta's CTO Andrew Bosworth confirmed that employees using work laptops do not have the option to opt out of monitoring.
Data Collected from Meta's Employee Computers
Meta explains that the monitoring system focuses on understanding how employees interact with their computers, capturing actions such as selecting options from drop-down menus and using keyboard shortcuts. The company states that this data will be used to help train AI models for tasks they cannot yet perform independently.
"Everyone at Meta can contribute to improving our models by doing their daily work," the note stated. Bosworth said the long-term goal is to develop AI agents that supervise and improve employees' work.
In a statement to CNET, Meta confirmed that the tool collects input data from specific applications and provides real examples of human-computer interaction to AI models. The company emphasized that the data will not be used in performance evaluations, will not be accessible to managers, and that measures are in place to protect sensitive information.
Reactions from Employees and Privacy Experts
Business Insider described employees as divided over the program, with some strongly opposed. When one employee asked on the internal communication platform how to opt out of the monitoring, Bosworth responded that there is no opt-out option on work laptops. Staff reacted to the announcement with shocked, crying, and angry emojis.
Eric Null, director of the Privacy and Data Project at the Center for Democracy and Technology, characterized the plan as one of the most aggressive forms of surveillance in the workplace. He also noted that it could cause real harm to disabled individuals and has the potential to reinforce structural biases when used for AI training.
Why is Meta Expanding This Monitoring Now?
The monitoring program is being introduced as Meta prepares to lay off approximately 8,000 employees, about 10% of its 79,000-person workforce. The company is investing over $135 billion in AI development this year and recently launched its first proprietary AI model, Muse Spark, created by the Super Intelligence Labs.
Meta did not provide details on data retention periods, what qualifies as sensitive content under protection measures, or how the collected interaction data will be used to train its models.