Artificial intelligence (AI) company Anthropic has disclosed that unknown threat actors used its Claude chatbot for an “influence-as-a-service” operation to engage with authentic accounts on Facebook and X.
The sophisticated activity, described as financially motivated, used the AI tool to orchestrate 100 distinct personas across the two social media platforms, creating a network of “politically aligned accounts” that engaged with tens of thousands of authentic accounts.
The now-disrupted operation, Anthropic researchers noted, prioritized persistence and longevity over virality and sought to amplify moderate political perspectives that supported or undermined European, Iranian, United Arab Emirates (UAE), and Kenyan interests.
This included promoting the UAE as a superior business environment while criticizing European regulatory frameworks, pushing energy security narratives to European audiences, and cultural identity narratives to Iranian audiences.
The efforts also promoted narratives supporting Albanian figures and criticizing opposition figures in an unspecified European country, as well as advocating development initiatives and political figures in Kenya. These influence operations are consistent with state-affiliated campaigns, although exactly who was behind them remains unknown, the company added.
“What is especially novel is that this operation used Claude not just to generate content, but also to decide when social media bot accounts would comment, like, or re-share posts from authentic social media users,” the company said.
“Claude was used as an orchestrator deciding what actions social media bot accounts should take based on politically motivated personas.”
Beyond serving as a tactical engagement decision-maker, the chatbot was also used to generate appropriately politically aligned replies in each persona’s voice and native language, as well as to create prompts for two popular image generation tools.
The operation is believed to be the work of a commercial service catering to different clients across various countries. At least four distinct campaigns were identified using this programmatic framework.
“The operation implemented a highly structured, JSON-based approach to persona management, allowing it to maintain continuity across platforms and establish consistent engagement patterns that mimic authentic human behavior,” researchers Ken Lebedev, Alex Moix, and Jacob Klein said.
“By using this programmatic framework, operators could efficiently standardize and scale their efforts, and enable systematic tracking and updating of persona attributes, interaction history, and narrative themes across multiple accounts simultaneously.”
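To make that description concrete, the following is a minimal, hypothetical sketch of what such a JSON persona record might look like. Every field name and value is an illustrative assumption; none are taken from Anthropic's report.

```python
import json

# Hypothetical example of a JSON-based persona record of the kind described
# above; all field names and values here are illustrative assumptions.
persona = {
    "persona_id": "persona-014",
    "native_language": "en",
    "political_alignment": "moderate, pro-energy-security",
    "platform_accounts": {"facebook": "example_handle", "x": "example_handle"},
    "narrative_themes": ["energy security", "regulatory criticism"],
    "interaction_history": [
        {"action": "comment", "target": "example-post-id", "timestamp": "2025-03-01T12:00:00Z"}
    ],
}

# Serializing the record is what would keep persona state consistent across platforms.
print(json.dumps(persona, indent=2))
```

A structured record along these lines is what would let operators track and update persona attributes, interaction history, and narrative themes across many accounts at once, as the researchers describe.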
Another interesting aspect of the campaign was that it “strategically” instructed the automated accounts to respond with humor and sarcasm to accusations from other accounts that they might be bots.
Anthropic said the operation highlights the need for new frameworks to evaluate influence operations that revolve around relationship building and community integration. It also warned that similar malicious activities could become widespread in the years to come as AI lowers the barrier to entry for running such campaigns.
Elsewhere, the company noted that it banned a sophisticated threat actor who used its models to scrape leaked passwords and usernames associated with security cameras, and to devise methods to brute-force internet-facing targets using the stolen credentials.
The threat actor also used Claude to process posts from stealer logs published on Telegram, build scripts to scrape target URLs from websites, and improve their own systems with better search functionality.
Two other cases of misuse spotted by Anthropic in March 2025 are listed below:
- A recruitment fraud campaign that used Claude to enhance the content of scam messages aimed at job seekers
- A novice actor who used Claude to enhance their technical capabilities and develop advanced malware beyond their skill level, with the ability to scan the dark web and generate undetectable malicious payloads that can evade security controls and maintain long-term, persistent access to compromised systems
“This case illustrates how AI can potentially flatten the learning curve for malicious actors, allowing individuals with limited technical knowledge to develop sophisticated tools and potentially accelerate their progression from low-level activities to more serious cybercriminal endeavors,” Anthropic said.