Anthropic, the artificial intelligence company behind the Claude chatbot, has uncovered alarming new trends in the misuse of its AI models. In a detailed report published on Wednesday, the company described cases in which Claude was exploited for malicious purposes, including social media manipulation, credential scraping, and recruitment fraud. Among the most concerning findings was an “influence-as-a-service” operation that used Claude to generate content for more than a hundred bots on social media platforms such as X (formerly Twitter).
These bots engaged with human users by liking, commenting, and sharing posts based on politically motivated personas. The campaign spanned multiple countries and languages, targeting European, Iranian, UAE, and Kenyan interests. What set this operation apart was its focus on sustained engagement rather than short-term virality.
By leveraging seemingly organic interactions, the bots aimed to draw users into echo chambers, making it increasingly difficult to distinguish genuine social media activity from orchestrated campaigns. Anthropic suspects state affiliation in these operations but could not confirm it. In another instance, a “sophisticated actor” used Claude to scrape leaked credentials, potentially gaining access to security cameras.
AI misuse trends and security risks
This case highlights how generative AI can empower actors who would otherwise lack the technical expertise to mount such attacks. Anthropic noted that it could not confirm whether the scraped credentials were actually used in a successful breach, but stressed the heightened risk posed by such capabilities.
The report also uncovered a social engineering scheme involving recruitment fraud in Eastern Europe. Actors used Claude to polish the language of their scam communications, making them appear more professional and native-sounding. This “language sanitation” enabled the fraudsters to pose convincingly as hiring managers.
Anthropic emphasized the role of its intelligence program in identifying and mitigating such misuse. The company reiterated its commitment to safeguarding against the risks posed by advanced AI systems, stressing the need for continuous vigilance and adaptive security measures. These findings underscore the importance of ongoing efforts to understand and address the evolving misuse of AI technologies like Claude.
As AI continues to advance, it is crucial for companies and researchers to remain proactive in identifying and mitigating potential threats.
Image Credits: Photo by Solen Feyissa on Unsplash
Johannah Lopez is a versatile professional who seamlessly navigates two worlds. By day, she excels as a SaaS freelance writer, crafting informative and persuasive content for tech companies. By night, she showcases her vibrant personality and customer service skills as a part-time bartender. Johannah's ability to blend her writing expertise with her social finesse makes her a well-rounded and engaging storyteller in any setting.