Microsoft has revealed that it is pursuing legal action against a “foreign-based threat actor group” for operating hacking-as-a-service infrastructure designed to deliberately bypass the safety controls of its generative artificial intelligence (AI) services and produce offensive and harmful content.
The tech giant’s Digital Crimes Unit (DCU) said it observed the threat actors “developing sophisticated software that uses exposed customer credentials taken from public websites” and “seeking to identify and unlawfully access accounts with certain generative artificial intelligence services and purposely alter the capabilities of those services.”
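To illustrate the kind of credential exposure being exploited, below is a minimal Python sketch of how leaked keys can be spotted in public text, as defenders routinely do when auditing their own repositories. The key format, regex, and sample snippet are assumptions for demonstration, not details from Microsoft’s filing.

```python
import re

# Hypothetical illustration: Azure service keys have historically been
# 32-character hexadecimal strings. This pattern and the sample text are
# assumptions for demonstration purposes only.
AZURE_KEY_PATTERN = re.compile(r"\b[0-9a-f]{32}\b", re.IGNORECASE)

def find_exposed_keys(text: str) -> list[str]:
    """Return candidate API keys found in a blob of public text."""
    return AZURE_KEY_PATTERN.findall(text)

if __name__ == "__main__":
    leaked_config = 'AZURE_OPENAI_KEY = "0123456789abcdef0123456789abcdef"  # accidentally committed'
    for key in find_exposed_keys(leaked_config):
        # Print only a fragment so the scan itself never re-exposes the secret.
        print(f"possible exposed key: {key[:4]}...{key[-4:]}")
```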
The adversaries then used these services, such as Azure OpenAI Service, and monetized access by selling it to other malicious actors, providing them with detailed instructions on how to use these custom tools to generate harmful content. Microsoft said it discovered the activity in July 2024.
The Windows maker said it has since revoked the group’s access, implemented new countermeasures, and strengthened its safeguards to prevent such activity from occurring in the future. It also said it obtained a court order to seize a website (“aitism(.)net”) that was central to the group’s criminal operation.
The popularity of AI tools like OpenAI’s ChatGPT has also led to threat actors abusing them for malicious purposes, ranging from the production of prohibited content to the development of malware. Microsoft and OpenAI have repeatedly disclosed that nation-state groups from China, Iran, North Korea, and Russia use their services for intelligence gathering, translation, and disinformation campaigns.
Court documents show that at least three unknown individuals are behind the operation, leveraging stolen Azure API keys and Entra ID authentication information to break into Microsoft’s systems and create harmful images using DALL-E in violation of its Acceptable Use Policy. Seven other parties are believed to have used the services and tools the group provided for similar purposes.
How the API keys were harvested is currently unknown, but Microsoft said the defendants engaged in the “systematic theft of API keys” from multiple customers, including several US companies, some of which are located in Pennsylvania and New Jersey.
“Using stolen Microsoft API keys belonging to US Microsoft customers, the defendants created a hacking-as-a-service scheme accessible through infrastructure, such as the ‘rentry.org/de3u’ and ‘aitism.net’ domains, specifically designed to abuse the infrastructure and software provided by Microsoft Azure,” the company said in a statement.
According to a now-removed GitHub repository, de3u has been described as a “DALL-E 3 frontend with reverse proxy support.” The GitHub account in question was created on November 8, 2023.
The threat actors reportedly took steps to “cover their tracks, including attempting to remove some Rentry.org pages, the GitHub repository for the de3u tool, and portions of the reverse proxy infrastructure” after the seizure of “aitism(.)net.”
Microsoft noted that the threat actors used de3u and a bespoke reverse proxy service known as the “oai reverse proxy” to make Azure OpenAI Service API calls using the stolen API keys and unlawfully generate thousands of harmful images via text prompts. It is unclear what kind of offensive images were created.
The oai reverse proxy service is designed to funnel communications from de3u users’ computers through a Cloudflare tunnel into the Azure OpenAI Service and relay the responses back to the users’ devices.
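Microsoft has not published the actors’ code, but the forwarding pattern it describes matches a generic credential-injecting reverse proxy. The following is a minimal, hypothetical sketch using only the Python standard library; the endpoint, path handling, and key are placeholders, not the actual “oai reverse proxy” implementation.

```python
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://example-resource.openai.azure.com"  # hypothetical Azure OpenAI resource
API_KEY = "REDACTED-KEY"  # injected server-side, so end users never see the credential

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # Forward the client's request upstream, attaching the api-key header
        # on the server side. Clients only ever talk to the proxy.
        req = urllib.request.Request(
            UPSTREAM + self.path,
            data=body,
            headers={"Content-Type": "application/json", "api-key": API_KEY},
            method="POST",
        )
        try:
            with urllib.request.urlopen(req) as resp:
                payload = resp.read()
                self.send_response(resp.status)
        except urllib.error.HTTPError as err:
            payload = err.read()
            self.send_response(err.code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)  # relay the upstream response back to the client

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```

Running such a proxy behind a Cloudflare tunnel, as Microsoft describes, would additionally mask the origin server’s IP address from outside observers.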
“The de3u software allows users to issue Microsoft API calls to generate images using the DALL-E model through a simple user interface that leverages the Azure APIs to access the Azure OpenAI Service,” Redmond explained.
“The defendants’ de3u application communicates with Azure computers using undocumented Microsoft network APIs to send requests designed to mimic legitimate Azure OpenAI Service API requests. These requests are authenticated using stolen API keys and other authenticating information.”
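For context, the documented way to call Azure OpenAI’s image-generation API uses an api-key header against a deployment endpoint, which is roughly the request shape the complaint says de3u’s traffic mimicked. A hedged sketch follows; the resource name, deployment name, and API version are placeholders, so consult the current Azure OpenAI documentation for exact values.

```python
import json
import urllib.request

# Placeholder resource and deployment names; a real call targets the
# customer's own Azure OpenAI resource and DALL-E deployment.
ENDPOINT = "https://example-resource.openai.azure.com"
URL = f"{ENDPOINT}/openai/deployments/dalle3/images/generations?api-version=2024-02-01"

req = urllib.request.Request(
    URL,
    data=json.dumps({"prompt": "a watercolor lighthouse", "n": 1, "size": "1024x1024"}).encode(),
    headers={"Content-Type": "application/json", "api-key": "YOUR-AZURE-OPENAI-KEY"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["data"][0]["url"])  # URL of the generated image
```

Because the api-key header is the sole bearer credential in this flow, anyone holding a stolen key can issue requests indistinguishable from the legitimate customer’s.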
It should be noted that the use of unauthorized proxy services to access LLM services was highlighted by Sysdig in May 2024 in connection with an LLMjacking attack campaign targeting AI offerings from Anthropic, AWS Bedrock, Google Cloud Vertex AI, Microsoft Azure, Mistral, and OpenAI using stolen cloud credentials and selling the access to other actors.
“Defendants conducted the affairs of the Azure Abuse Enterprise through a coordinated and continuous pattern of unlawful activity to achieve their common unlawful goals,” Microsoft said.
“The defendants’ pattern of illegal activity is not limited to attacks on Microsoft. Evidence uncovered by Microsoft to date suggests that the Azure Abuse Enterprise has targeted and victimized other AI service providers.”