OpenAI said on Wednesday that since the beginning of the year, it has disrupted more than 20 operations and deceptive networks around the world that attempted to use its platform for malicious purposes.
These activities included debugging malware, writing articles for websites, generating biographies for social media accounts, and creating AI-generated profile pictures for fake accounts on X.
“Threat actors continue to evolve and experiment with our models, but we have seen no evidence that this has led to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” the artificial intelligence (AI) company said.
It also said it cracked down on activity that generated social media content related to elections in the US, Rwanda, and, to a lesser extent, India and the European Union, noting that none of these networks attracted viral engagement or built sustained audiences.
This included an effort by an Israeli for-profit company called STOIC (also known as Zero Zeno) to generate social media commentary about the Indian elections, as previously disclosed by Meta and OpenAI this past May.
Some of the cyber operations highlighted by OpenAI are as follows –
- SweetSpecter, a suspected China-based adversary that used OpenAI’s services for LLM-informed reconnaissance, vulnerability research, scripting support, anomaly detection evasion, and development. It was also observed making unsuccessful phishing attempts against OpenAI employees to deliver the SugarGh0st RAT.
- Cyber Av3ngers, a group associated with Iran’s Islamic Revolutionary Guard Corps (IRGC) that used the models to conduct research into programmable logic controllers.
- Storm-0817, an Iranian threat actor that used the models to debug Android malware capable of collecting sensitive information, build tooling to scrape Instagram profiles via Selenium, and translate LinkedIn profiles into Farsi.
Elsewhere, the company said it had taken steps to block several clusters, including influence operations codenamed A2Z and Stop News, which comprised accounts that created English- and French-language content for subsequent publication on a number of websites and social media accounts across different platforms.
“(Stop News) was unusually prolific in its use of imagery,” researchers Ben Nimmo and Michael Flossman said. “Many of its web articles and tweets were accompanied by images created using DALL·E. These images were often cartoonish and used bright color palettes or dramatic tones to attract attention.”
Two other networks, which OpenAI identified as Bet Bot and Corrupt Comment, were found to be using its API to generate conversations with users on X and send them links to gambling sites, and to manufacture comments that were then posted on X, respectively.
The disclosure comes nearly two months after OpenAI banned a set of accounts linked to a covert Iranian influence operation called Storm-2035, which used ChatGPT to create content focused on, among other things, the upcoming US presidential election.
“Threat actors were most likely to use our models to perform tasks in a specific, intermediate phase of their activity: after they had acquired basic tools such as internet access, email addresses, and social media accounts, but before they deployed ‘finished’ products such as social media posts or malware across the internet through a range of distribution channels,” Nimmo and Flossman wrote.
Cybersecurity firm Sophos said in a report published last week that generative artificial intelligence could be used to spread tailored disinformation through microtargeted emails.
This involves abusing AI models to create political campaign websites, AI-generated personas across the political spectrum, and email messages that specifically target recipients based on the campaign points, providing a new level of automation that makes it possible to spread misinformation at scale.
“This means that a user could generate anything from benign campaign material to deliberate disinformation and malicious threats with little reconfiguration,” researchers Ben Gelman and Adarsh Kyadige said.
“Any real political movement or candidate could be associated with supporting a policy, even if they don’t agree with it. Deliberate misinformation like this can cause people to align with a candidate they don’t support, or to disagree with one they thought they liked.”