OpenAI has banned multiple accounts linked to China after identifying their involvement in “coordinated influence operations.” The accounts used OpenAI’s models to generate and spread political narratives, automate disinformation campaigns, and assist in developing surveillance tools targeting Western organizations. OpenAI’s monitoring systems flagged abnormal usage patterns in which certain accounts produced systematic, large-scale messaging rather than organic interactions. The content appeared tailored for political influence efforts, aiming to manipulate discussions across multiple platforms.
The only thing new here is that it’s done by AI https://t.co/GxowvLvsyX
— Victor Shih (@vshih2) February 22, 2025
A security analyst noted that OpenAI’s findings indicate a more automated and structured approach to AI-generated influence campaigns than previously observed. OpenAI confirmed in a statement: “Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of how they use our models.”
OUT TODAY: new threat report from @OpenAI’s investigators, with disruptions of:
Surveillance;
Covert influence ops;
Deceptive employment scheme;
Cyber activity;
Scams https://t.co/Dhxp4PHU6h
— Ben Nimmo (@benimmo) February 21, 2025
While OpenAI’s bans highlight AI’s role in digital influence operations, a report from Google underscores the increasing use of AI in hacking, phishing, and data theft. The report found that state-backed cybercriminals automate attacks using AI-generated phishing emails, deepfake-based social engineering, and AI-enhanced malware development.
The crackdown comes amid growing concerns over DeepSeek R1, an AI model developed by the Chinese startup DeepSeek.
OpenAI published a February 2025 update on disrupting malicious uses of their models pic.twitter.com/94fSlaY6EG
— Tibor Blaho (@btibor91) February 21, 2025
OpenAI’s influence operation crackdown
Research into DeepSeek R1 has suggested that its dataset systematically omits politically sensitive topics, reinforcing concerns about AI being used for information control.
As concerns over DeepSeek AI mount, U.S. lawmakers have introduced legislation to ban its use, citing potential threats to national security. The proposed legislation aims to bar DeepSeek AI from government agencies, critical infrastructure, and research institutions amid fears of data exposure and AI-driven disinformation. The rapid escalation of AI’s role in global security, cyber warfare, and political influence underscores the need for stronger AI governance frameworks.
OpenAI’s decision to block China-linked accounts signals that AI companies are more active in preventing misuse, but private-sector interventions alone are insufficient. Experts predict that upcoming AI regulations will focus on three key areas: increased transparency in AI training datasets, enhanced cybersecurity protocols, and stricter international cooperation on AI governance. However, the challenge remains in ensuring these regulations do not stifle innovation while preventing AI’s use for disinformation, espionage, and cyber threats.
Image Credits: Photo by Emiliano Vittoriosi on Unsplash
Noah Nguyen is a multi-talented developer who brings a unique perspective to his craft. Initially a creative writing professor, he turned to development work for the ability to work remotely. He now lives in Seattle, spending his time hiking and drinking craft beer with his fiancée.