The US Department of Justice said it seized two Internet domains and searched nearly 1,000 social media accounts that Russian threat actors allegedly used to covertly spread pro-Kremlin disinformation at home and abroad on a large scale.
“The social media bot farm used elements of artificial intelligence to create fictitious social media profiles, often purporting to belong to individuals in the United States, which the operators then used to promote messages in support of Russian government goals,” the Justice Department said.
The botnet, which comprises 968 accounts on X, is believed to be part of an elaborate scheme developed by an employee of the Kremlin-sponsored Russian state media outlet RT (formerly Russia Today) and aided by an officer of Russia's Federal Security Service (FSB), who created and led an unnamed private intelligence organization.
Development of the bot farm began in April 2022, when the individuals involved purchased Internet infrastructure while anonymizing their identities and locations. According to the Justice Department, the organization's purpose was to advance Russian interests by spreading disinformation through fictitious online personas representing various nationalities.
The fake social media accounts were registered using private email servers that relied on two domains – mlrtr(.)com and otanmail(.)com – purchased from the domain registrar Namecheap. X has since suspended the bot accounts for violating its terms of service.
The information operation, which targeted the US, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel, was carried out using an artificial intelligence-powered software package called Meliorator, which facilitated the "massive" creation and management of the social media bot farm.
“Using this tool, RT affiliates spread disinformation in and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel,” law enforcement agencies in Canada, the Netherlands, and the United States said.
Meliorator includes an administrator panel called Brigadir and a backend seeding tool called Taras, which is used to manage the authentic-looking accounts, whose profile photos and biographical information were generated using an open-source program called Faker.
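To illustrate the kind of profile fabrication the article attributes to the open-source Faker program, here is a hypothetical, dependency-free sketch of the concept. It is not Meliorator code and does not use the actual Faker package; all names, fields, and word lists are invented for illustration only.

```python
import random

# Invented sample pools -- a real tool like Faker ships locale-aware
# providers for names, cities, and biographical text.
FIRST_NAMES = ["Alex", "Maria", "John", "Elena", "Sam"]
LAST_NAMES = ["Smith", "Miller", "Novak", "Brown", "Jones"]
CITIES = ["Austin, TX", "Warsaw", "Berlin", "Madrid", "Amsterdam"]
INTERESTS = ["politics", "history", "sports", "gardening", "travel"]

def make_persona(rng: random.Random) -> dict:
    """Assemble a fictitious profile: display name, handle, location, bio."""
    first = rng.choice(FIRST_NAMES)
    last = rng.choice(LAST_NAMES)
    return {
        "name": f"{first} {last}",
        "username": f"{first.lower()}_{last.lower()}{rng.randint(10, 99)}",
        "location": rng.choice(CITIES),
        "bio": f"Love {rng.choice(INTERESTS)} and {rng.choice(INTERESTS)}.",
    }

persona = make_persona(random.Random(42))
print(persona)
```

The point of the sketch is only that plausible-looking profile data can be generated at scale from simple templated pools, which is why such accounts are hard to distinguish from real ones at a glance.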
Each of these accounts had a distinct identity, or “soul,” based on one of three bot archetypes: those promoting political ideologies favorable to the Russian government, those amplifying messaging already shared by other bots, and those perpetuating disinformation spread by both bot and non-bot accounts.
While the software package was identified only on X, further analysis revealed the intent of the threat actors to expand its functionality to include other social media platforms.
Additionally, the software bypassed X’s user-verification safeguards by automatically copying one-time passcodes sent to the registered email addresses and by assigning proxy IP addresses to the AI-generated personas based on their presumed locations.
“Bot accounts make obvious attempts to avoid bans for terms-of-service violations and to avoid being noticed as bots by blending into the larger social media environment,” the agencies noted. “Much like authentic accounts, these bots follow genuine accounts that reflect the political leanings and interests listed in their bios.”
“Farming is a beloved pastime for millions of Russians,” RT told Bloomberg in response to the allegations, without directly denying them.
The development marks the first time the US has publicly attributed a foreign government's use of artificial intelligence in a foreign influence operation. No criminal charges have been announced in the case, but the investigation remains ongoing.
Doppelganger lives on
In recent months, Google, Meta, and OpenAI have warned that Russian disinformation operations, including those orchestrated by a network called Doppelganger, have repeatedly used their platforms to spread pro-Russian propaganda.
“The campaign is still active, as is the network and server infrastructure responsible for distributing the content,” Qurium and EU DisinfoLab said in a new report released Thursday.
“Surprisingly, Doppelganger does not operate from a hidden data center in a Vladivostok fortress or from a remote military bat cave, but from newly established Russian ISPs operating inside the largest data centers in Europe. Doppelganger works in close association with cybercriminals and affiliate ad networks.”
At the heart of the operation is a network of bulletproof hosting providers, including Aeza, Evil Empire, GIR, and SECURITY, which have also hosted command-and-control domains for various malware families such as Stealc, Amadey, Agent Tesla, Glupteba, Raccoon Stealer, RisePro, RedLine Stealer, RevengeRAT, Lumma, Meduza, and Mystic.
Moreover, NewsGuard, which provides a range of tools to counter misinformation, recently found that popular AI chatbots tend to repeat “fabricated stories from government websites masquerading as local news outlets in one-third of their responses.”
Influence operations by Iran and China
It also comes as the US Office of the Director of National Intelligence (ODNI) said that Iran “is becoming increasingly aggressive in its foreign influence efforts, seeking to sow discord and undermine confidence in our democratic institutions.”
The agency also noted that Iranian actors continue to refine their cyber and influence activities, using social media platforms and issuing threats, and that they are amplifying protests in the United States over the war in Gaza by posing as activists online.
Google, for its part, said it blocked more than 10,000 instances of DRAGONBRIDGE (aka Spamouflage Dragon) activity – the name given to a spammy but persistent China-linked influence network – across YouTube and Blogger in the first quarter of 2024. The content promoted narratives portraying the US in a negative light, as well as material related to the Taiwan elections and the Israel-Hamas war aimed at Chinese speakers.
By comparison, the tech giant disrupted at least 50,000 such instances in 2022 and another 65,000 in 2023. In total, it has prevented more than 175,000 instances over the network’s lifetime.
“Despite their continued prolific content production and the scale of their operations, DRAGONBRIDGE achieves little to no organic engagement from real viewers,” said Zach Butler, a researcher at Google’s Threat Analysis Group (TAG). “In the cases where DRAGONBRIDGE content has attracted attention, it has been almost entirely inauthentic, coming from other DRAGONBRIDGE accounts rather than genuine users.”