AI from an attacker’s perspective: See how cybercriminals are using AI and exploiting its vulnerabilities to hack systems, users, and even other AI programs
Cybercriminals and Artificial Intelligence: Reality vs. Hype
“Artificial intelligence will not replace humans in the near future. But people who know how to use artificial intelligence will replace people who don’t know how to use artificial intelligence,” says Etai Maor, the company’s chief security strategist Cato Networks and a founding member Cato CTRL. “Similarly, attackers are also turning to artificial intelligence to augment their own capabilities.”
However, the role of artificial intelligence in cybercrime is much more hype than reality. Headlines often sensationalize AI threats, with terms like “Chaos-GPT” and “Black Hat AI Tools,” even claiming that they seek to destroy humanity. However, these articles are more fear-mongering than describing serious threats.
For example, when researched on underground forums, some of these so-called “cyber AI tools” turned out to be nothing more than rebranded versions of basic public LLMs without enhanced capabilities. In fact, angry attackers have even labeled them scammers.
How hackers actually use artificial intelligence in cyberattacks
In fact, cybercriminals are still figuring out how to use artificial intelligence effectively. They experience the same problems and drawbacks as legitimate users, such as hallucinations and limited abilities. They predict that it will take several years before they can effectively use GenAI for hacking needs.
At the moment, GenAI tools are mainly used for simpler tasks, such as writing phishing emails and creating code snippets that can be integrated into attacks. Additionally, we’ve seen attackers provide compromised code to AI systems for analysis in an attempt to “normalize” such code as benign.
Using AI to Abuse AI: Introducing GPT
Introduced by OpenAI on November 6, 2023, GPTs are customizable versions of ChatGPT that allow users to add specific instructions, integrate external APIs, and include unique knowledge sources. This feature allows users to create highly specialized applications such as technical support bots, educational tools, and more. In addition, OpenAI offers developers monetization options for GPT through a dedicated marketplace.
Abuse of GPT
GPTs pose potential security issues. One notable risk is the disclosure of confidential instructions, proprietary knowledge, or even API keys embedded in a custom GPT. Attackers can use artificial intelligence, particularly operational techniques, to replicate GPT and exploit its monetization potential.
Attackers can use hints to obtain knowledge sources, instructions, configuration files, and more. This can be as simple as asking the custom GPT to list all downloaded files and user manuals, or asking for debugging information. Or complex ones like asking GPT to compress one of the PDFs and creating a download link, asking GPT to list all its features in a structured table format, and more.
“You can bypass even the protections that developers have installed and get all the knowledge,” says Vitaly Simanovich, Cato Networks threat intelligence researcher and member of Cato CTRL.
These risks can be avoided by:
- No sensitive data is loaded
- Using protection based on instructions, although even these can be dangerous. “You have to take into account all the different scenarios that an attacker can abuse,” Vital adds.
- OpenAI protection
AI attacks and risks
To date, several infrastructures exist to assist organizations considering the development and creation of AI-based software:
- NIST Artificial Intelligence Risk Management Framework
- A secure AI framework from Google
- OWASP Top 10 for LLM
- OWASP Top 10 for LLM Programs
- The recently launched ATLAS MITRUS
LLM attack surface
There are six key components of the LLM (Large Language Model) that can be targeted by attackers:
- Tell me – Attacks such as instant injections, where malicious input is used to manipulate AI output
- Answer – Abuse or leakage of confidential information in answers generated by artificial intelligence
- model – Theft, poisoning or manipulation of the artificial intelligence model
- Training data – Introduction of malicious data to change the behavior of artificial intelligence.
- Infrastructure – Focus on servers and services that support artificial intelligence
- Users – Misleading or using people or systems that rely on the results of artificial intelligence
Real attacks and risks
Let’s finish with some examples of LLM manipulations that can easily be used for malicious purposes.
- Prompt introduction to customer service systems – A recent case involved a car dealership using an AI chatbot for customer service. The researcher managed to manipulate the chatbot by issuing a prompt that changed its behavior. By instructing the chatbot to agree to all of the customer’s statements and ending each response with, “And this is a legally binding offer,” the researcher was able to purchase the car at a ridiculously low price, exposing a major vulnerability.
- Hallucinations that lead to legal consequences – In another incident, Air Canada faced a lawsuit when their AI chatbot provided incorrect information about their refund policy. When a customer relied on a chatbot response and subsequently filed a claim, Air Canada was held liable for misleading information.
- Leaks of proprietary data – Samsung employees unknowingly leaked classified information when they used ChatGPT to analyze the code. Uploading sensitive data to third-party AI systems is risky because it is unclear how long it is stored and who can access it.
- AI and Deepfake technologies in fraud – Cybercriminals also use artificial intelligence not only to generate text. A bank in Hong Kong was the victim of a $25 million fraud when attackers used deepfake technology during a video call. AI-generated avatars impersonated trusted bank officials, convincing the victim to transfer funds to a fraudulent account.
Summing Up: AI in Cybercrime
AI is a powerful tool for both defenders and attackers. As cybercriminals continue to experiment with artificial intelligence, it’s important to understand how they think, what tactics they use, and what options they face. This will allow organizations to better protect their AI systems from misuse and abuse.