Close Menu
Indo Guard OnlineIndo Guard Online
  • Home
  • Cyber Security
  • Risk Management
  • Travel
  • Security News
  • Tech
  • More
    • Data Privacy
    • Data Protection
    • Global Security
What's Hot

Password “B” in Sitecore XP Sparks Sparks Erriss RCE when deploying businesses

June 17, 2025

Are you forgotten accounts of advertising services that leave you risk?

June 17, 2025

New Flodrix Botnet Option Operates Langflow Ai Server RCE BUG to launch DDOS ATTACKS

June 17, 2025
Facebook X (Twitter) Instagram
Facebook X (Twitter) Instagram YouTube
Indo Guard OnlineIndo Guard Online
Subscribe
  • Home
  • Cyber Security
  • Risk Management
  • Travel
  • Security News
  • Tech
  • More
    • Data Privacy
    • Data Protection
    • Global Security
Indo Guard OnlineIndo Guard Online
Home » From misuse to abuse: AI risks and attacks
Global Security

From misuse to abuse: AI risks and attacks

AdminBy AdminOctober 16, 2024No Comments6 Mins Read
AI Risks and Attacks
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link


October 16, 2024Hacker newsArtificial Intelligence / Cybercrime

Risks and attacks of artificial intelligence

AI from an attacker’s perspective: See how cybercriminals are using AI and exploiting its vulnerabilities to hack systems, users, and even other AI programs

Cybercriminals and Artificial Intelligence: Reality vs. Hype

“Artificial intelligence will not replace humans in the near future. But people who know how to use artificial intelligence will replace people who don’t know how to use artificial intelligence,” says Etai Maor, the company’s chief security strategist Cato Networks and a founding member Cato CTRL. “Similarly, attackers are also turning to artificial intelligence to augment their own capabilities.”

However, the role of artificial intelligence in cybercrime is much more hype than reality. Headlines often sensationalize AI threats, with terms like “Chaos-GPT” and “Black Hat AI Tools,” even claiming that they seek to destroy humanity. However, these articles are more fear-mongering than describing serious threats.

Risks and attacks of artificial intelligence

For example, when researched on underground forums, some of these so-called “cyber AI tools” turned out to be nothing more than rebranded versions of basic public LLMs without enhanced capabilities. In fact, angry attackers have even labeled them scammers.

How hackers actually use artificial intelligence in cyberattacks

In fact, cybercriminals are still figuring out how to use artificial intelligence effectively. They experience the same problems and drawbacks as legitimate users, such as hallucinations and limited abilities. They predict that it will take several years before they can effectively use GenAI for hacking needs.

Risks and attacks of artificial intelligence
Risks and attacks of artificial intelligence

At the moment, GenAI tools are mainly used for simpler tasks, such as writing phishing emails and creating code snippets that can be integrated into attacks. Additionally, we’ve seen attackers provide compromised code to AI systems for analysis in an attempt to “normalize” such code as benign.

Using AI to Abuse AI: Introducing GPT

Introduced by OpenAI on November 6, 2023, GPTs are customizable versions of ChatGPT that allow users to add specific instructions, integrate external APIs, and include unique knowledge sources. This feature allows users to create highly specialized applications such as technical support bots, educational tools, and more. In addition, OpenAI offers developers monetization options for GPT through a dedicated marketplace.

Abuse of GPT

GPTs pose potential security issues. One notable risk is the disclosure of confidential instructions, proprietary knowledge, or even API keys embedded in a custom GPT. Attackers can use artificial intelligence, particularly operational techniques, to replicate GPT and exploit its monetization potential.

Attackers can use hints to obtain knowledge sources, instructions, configuration files, and more. This can be as simple as asking the custom GPT to list all downloaded files and user manuals, or asking for debugging information. Or complex ones like asking GPT to compress one of the PDFs and creating a download link, asking GPT to list all its features in a structured table format, and more.

“You can bypass even the protections that developers have installed and get all the knowledge,” says Vitaly Simanovich, Cato Networks threat intelligence researcher and member of Cato CTRL.

These risks can be avoided by:

  • No sensitive data is loaded
  • Using protection based on instructions, although even these can be dangerous. “You have to take into account all the different scenarios that an attacker can abuse,” Vital adds.
  • OpenAI protection

AI attacks and risks

To date, several infrastructures exist to assist organizations considering the development and creation of AI-based software:

  • NIST Artificial Intelligence Risk Management Framework
  • A secure AI framework from Google
  • OWASP Top 10 for LLM
  • OWASP Top 10 for LLM Programs
  • The recently launched ATLAS MITRUS

LLM attack surface

There are six key components of the LLM (Large Language Model) that can be targeted by attackers:

  1. Tell me – Attacks such as instant injections, where malicious input is used to manipulate AI output
  2. Answer – Abuse or leakage of confidential information in answers generated by artificial intelligence
  3. model – Theft, poisoning or manipulation of the artificial intelligence model
  4. Training data – Introduction of malicious data to change the behavior of artificial intelligence.
  5. Infrastructure – Focus on servers and services that support artificial intelligence
  6. Users – Misleading or using people or systems that rely on the results of artificial intelligence

Real attacks and risks

Let’s finish with some examples of LLM manipulations that can easily be used for malicious purposes.

  • Prompt introduction to customer service systems – A recent case involved a car dealership using an AI chatbot for customer service. The researcher managed to manipulate the chatbot by issuing a prompt that changed its behavior. By instructing the chatbot to agree to all of the customer’s statements and ending each response with, “And this is a legally binding offer,” the researcher was able to purchase the car at a ridiculously low price, exposing a major vulnerability.
  • Risks and attacks of artificial intelligence
  • Hallucinations that lead to legal consequences – In another incident, Air Canada faced a lawsuit when their AI chatbot provided incorrect information about their refund policy. When a customer relied on a chatbot response and subsequently filed a claim, Air Canada was held liable for misleading information.
  • Leaks of proprietary data – Samsung employees unknowingly leaked classified information when they used ChatGPT to analyze the code. Uploading sensitive data to third-party AI systems is risky because it is unclear how long it is stored and who can access it.
  • AI and Deepfake technologies in fraud – Cybercriminals also use artificial intelligence not only to generate text. A bank in Hong Kong was the victim of a $25 million fraud when attackers used deepfake technology during a video call. AI-generated avatars impersonated trusted bank officials, convincing the victim to transfer funds to a fraudulent account.

Summing Up: AI in Cybercrime

AI is a powerful tool for both defenders and attackers. As cybercriminals continue to experiment with artificial intelligence, it’s important to understand how they think, what tactics they use, and what options they face. This will allow organizations to better protect their AI systems from misuse and abuse.

Watch the entire workshop here.

Did you find this article interesting? This article is from one of our respected partners. Follow us Twitter  and LinkedIn to read more exclusive content we publish.





Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Admin
  • Website

Related Posts

Password “B” in Sitecore XP Sparks Sparks Erriss RCE when deploying businesses

June 17, 2025

Are you forgotten accounts of advertising services that leave you risk?

June 17, 2025

New Flodrix Botnet Option Operates Langflow Ai Server RCE BUG to launch DDOS ATTACKS

June 17, 2025

Lack of the TP-Link Cve-2023-33538 router under active operation, CISA releases an immediate warning

June 17, 2025

Meta begins showing advertisements on WhatsApp after 6 years delay with the 2018 announcement

June 17, 2025

The United States seizes $ 7.74 million with a crystallian -related IT workers of North Korea

June 16, 2025
Add A Comment
Leave A Reply Cancel Reply

Loading poll ...
Coming Soon
Do You Like Our Website
: {{ tsp_total }}

Subscribe to Updates

Get the latest security news from Indoguardonline.com

Latest Posts

Password “B” in Sitecore XP Sparks Sparks Erriss RCE when deploying businesses

June 17, 2025

Are you forgotten accounts of advertising services that leave you risk?

June 17, 2025

New Flodrix Botnet Option Operates Langflow Ai Server RCE BUG to launch DDOS ATTACKS

June 17, 2025

Lack of the TP-Link Cve-2023-33538 router under active operation, CISA releases an immediate warning

June 17, 2025

Meta begins showing advertisements on WhatsApp after 6 years delay with the 2018 announcement

June 17, 2025

The United States seizes $ 7.74 million with a crystallian -related IT workers of North Korea

June 16, 2025

Anubis Ransomware encrypts files and napkins, making recovery impossible even after payment

June 16, 2025

Turning Cybersecurity Practice into Mrr Machine

June 16, 2025
About Us
About Us

Provide a constantly updating feed of the latest security news and developments specific to Indonesia.

Facebook X (Twitter) Pinterest YouTube WhatsApp
Our Picks

Password “B” in Sitecore XP Sparks Sparks Erriss RCE when deploying businesses

June 17, 2025

Are you forgotten accounts of advertising services that leave you risk?

June 17, 2025

New Flodrix Botnet Option Operates Langflow Ai Server RCE BUG to launch DDOS ATTACKS

June 17, 2025
Most Popular

In Indonesia, crippling immigration ransomware breach sparks privacy crisis

July 6, 2024

Why Indonesia’s Data Breach Crisis Calls for Better Security

July 6, 2024

Indonesia’s plan to integrate 27,000 govt apps in one platform welcomed but data security concerns linger

July 6, 2024
© 2025 indoguardonline.com
  • Home
  • About us
  • Contact us
  • Privacy Policy

Type above and press Enter to search. Press Esc to cancel.