Elon Musk’s Department of Government Efficiency (DOGE) is reportedly using artificial intelligence to monitor the communications of U.S. federal workers, with sources saying the AI is looking for anti-Trump sentiment. The reports have raised concerns about data security and transparency.
Sources say DOGE has relied on the encrypted Signal messaging app for these activities, which critics warn makes it difficult to ensure data security, record-keeping, and transparency. Insiders also claim that Musk’s team at DOGE has avoided standard government vetting processes and operated in secrecy, prompting increased scrutiny and calls for greater oversight. Reporting from San Francisco and Washington has helped uncover these practices.
Journalists Joseph Tanfani, Valerie Volcovici, and Humeyra Pamuk played key roles in revealing these developments; Jason Szep edited their work. The impact of DOGE’s activities on federal operations and employee privacy is expected to be heavily debated in the coming months.
Federal workers are increasingly worried about being monitored by artificial intelligence, reportedly as part of an effort led by the Department of Government Efficiency. Emails and reports suggest that meetings and communications in several agencies might be under surveillance. At the Department of Veterans Affairs (VA), a senior official warned employees by email that virtual meetings were being secretly recorded.
The message told staff to be careful about sharing their opinions. At the State Department, new monitoring software was reportedly installed on computers, leading some employees to use white noise machines or even run breakroom sinks to mask sensitive conversations.
Staffers at water management organizations tied to the Environmental Protection Agency (EPA) were warned by supervisors that AI tools might be monitoring their meetings and phone calls. Many federal employees have shared their fears. They describe a culture of anxiety and suspicion.
“It’s like being in a horror film where you know something out there wants to get you, but you never know when or how,” said an employee from the US Department of Housing and Urban Development. More than two dozen federal employees have described these concerns, and emails from agency officials and screenshots support their accounts.
The workers described how the atmosphere in federal offices has changed. Many fear job losses after waves of layoffs and ongoing legal uncertainties. The US government has historically been open with federal employees about monitoring capabilities.
However, current concerns are heightened by the potential misuse of AI to track discussions on loyalty and controversial subjects like diversity, equity, and inclusion (DEI). The Trump administration has denied many of these claims.
AI surveillance concerns prompt scrutiny
An EPA spokesperson denied recording meetings but did not address the use of AI tools. A State Department spokesperson dismissed the idea of monitoring employees for loyalty as “ridiculous.”
A White House spokesperson called the reports “fake news” and reiterated the administration’s commitment to cutting waste, fraud, and abuse. Despite official denials, internal emails and meetings show widespread concern over AI surveillance.
At the Association of Clean Water Administrators (ACWA), staffers were told that their interactions with EPA staff might be monitored. Similar warnings came from the VA, where researchers have been placed on administrative leave amid accusations about distributing VA materials. At the now-defunct US Agency for International Development (USAID), employees found out that their private communications were being monitored after Trump’s inauguration.
This led many to abandon official channels in favor of more secure communication methods like Signal and WhatsApp. In February, IT staff at the State Department shared concerns about new monitoring software that tracked keystrokes, causing employees to act as if they were “always on a hot mic.” It has produced surreal scenes, such as employees running sinks to drown out their conversations.
A State Department spokesperson said employees have always been made aware of their privacy expectations and that monitoring aims to protect national security. However, trust within the workforce has steadily eroded.
Many doubt the motives behind these extensive monitoring efforts. At a Department of Veterans Affairs town hall in New England, officials emphasized the lack of privacy federal employees should expect. “There shouldn’t be any expectation of privacy,” an official told employees.
As AI’s role in government oversight grows, the tension between transparency, privacy, and security continues to increase, leaving many federal workers uncertain about the future and the extent of surveillance they are under.

Ireland’s privacy regulator has opened an investigation into how the social media platform X used Europeans’ personal data to train its artificial intelligence model, Grok.
The move targets the platform owned by tech billionaire Elon Musk and is likely to heighten tensions between the EU and U.S. over technology regulation. The investigation by Ireland’s Data Protection Commission (DPC) examines whether personal data from “publicly-accessible posts” on X were processed to train Grok.
Grok is a group of AI models developed by Musk’s startup xAI. It powers applications like the AI chatbot available on the X platform. This is not the first time regulators have looked at how Grok uses EU data.
The Irish regulator previously raised concerns about the use of EU citizens’ data to train AI models. The DPC’s new investigation will determine if X has been following the EU’s General Data Protection Regulation (GDPR). It will assess whether data was processed lawfully and in line with transparency rules.
X did not immediately respond to a request for comment. The ongoing probe highlights the growing scrutiny of data privacy and AI practices in Europe. The EU continues to enforce strict regulations to protect personal data and maintain transparency in data processing activities.
Cameron is a highly regarded contributor in the rapidly evolving fields of artificial intelligence (AI) and machine learning. His articles delve into the theoretical underpinnings of AI, the practical applications of machine learning across industries, ethical considerations of autonomous systems, and the societal impacts of these disruptive technologies.