The fundamentals of social engineering attacks – manipulating people – haven't changed much over the years. It's the vectors – how these techniques are deployed – that keep evolving. And, like most industries these days, AI is accelerating that evolution.
This article explores how these changes affect business, and how cybersecurity leaders can respond.
Impersonation attacks: using a trusted identity
Traditional cyber defenses have long struggled against social engineering, "the cause of most data breaches," according to Thomson Reuters. The next generation of AI-powered cyberattackers can now launch these attacks with unprecedented speed, scale and realism.
Old way: Silicone masks
By impersonating France's government minister Jean-Yves Le Drian, two fraudsters were able to extract more than €55 million from multiple victims. During video calls, one of them would wear a silicone mask of the minister. To add a layer of plausibility, they also sat in a recreation of his ministerial office, complete with photos of then-President François Hollande.
More than 150 prominent figures were reportedly contacted and asked for money to fund ransom payments or counter-terrorism operations. The biggest single transfer was €47 million, made when the target was urged to act on behalf of two journalists being held hostage in Syria.
New way: Video deepfakes
Many of those requests for money failed. After all, silicone masks can't fully replicate the look and movement of skin on a real face. AI video technology offers a new way to carry out this form of attack.
We saw this last year in Hong Kong, where attackers created a video deepfake of a company's chief financial officer to carry out a $25 million scam. They then invited a colleague to a video conference call. It was there that the deepfake CFO convinced the employee to make the multimillion-dollar transfer to the fraudsters' account.
Live calls: Voice phishing
Voice phishing, often known as vishing, uses live audio to build on the power of traditional phishing, where people are persuaded to give up information that compromises their organization.
Old way: Fraudulent phone calls
The attacker impersonates someone, perhaps an authoritative figure or a person from another trusted background, and phones the target.
They add a sense of urgency to the conversation, requesting a payment to avoid negative consequences, such as a suspended account or a missed bill. Victims lost an average of $1,400 to this form of attack in 2022.
New way: Voice cloning
Traditional defensive advice includes telling people not to click links that arrive with such requests, and to call the person back on an official phone number. It's the zero trust principle: never trust, always verify. Of course, when the voice belongs to someone the person knows, it's natural for that trust to bypass any verification steps.
That's a big problem in the age of AI, with attackers now using voice cloning technology that often needs only a few seconds of captured audio. One mother received a call from someone who had cloned her daughter's voice, claiming she had been kidnapped and that the attackers wanted a $50,000 ransom.
Email phishing
Most people with an email address have been a lottery winner. At least, they've received an email telling them they've won millions – perhaps with a reference to a king or prince who needs help releasing the funds, in return for an upfront fee.
Old way: Spray and pray
Over time, these phishing attempts have become far less effective, for several reasons. They're sent out in bulk with little personalization and plenty of grammatical errors, and people have grown more alert to "419 scams" and their requests to use specific money transfer services. Other versions, such as fake login pages for banks, can often be blocked by web protection tools and spam filters, and by teaching people to check URLs carefully.
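That URL check can also be automated. Below is a minimal sketch (the domain names are hypothetical) of the kind of allowlist comparison a mail filter or browser extension might perform, matching a link's actual host rather than whatever text appears in the message:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the organization actually uses.
TRUSTED_DOMAINS = {"example-bank.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the link's host is a trusted domain
    or a subdomain of one.

    This catches a common trick like "example-bank.com.evil.net",
    where the familiar name is just a subdomain of the attacker's
    real domain.
    """
    host = (urlparse(url).hostname or "").lower()
    return any(
        host == domain or host.endswith("." + domain)
        for domain in TRUSTED_DOMAINS
    )

print(is_trusted_link("https://login.example-bank.com/reset"))   # True
print(is_trusted_link("http://example-bank.com.evil.net/reset")) # False
```

A production filter would go further – consulting the Public Suffix List, threat-intelligence feeds and lookalike (homoglyph) detection – but the principle is the same: verify the registrable domain, not the familiar-looking text.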
Yet phishing remains the largest form of cybercrime. The FBI's 2023 Internet Crime Report found that phishing/spoofing was the source of 298,878 complaints. To put that in context, the second-largest category (personal data breach) registered 55,851 complaints.
New way: Realistic conversations at scale
AI lets threat actors produce word-perfect messages using LLMs, instead of relying on crude translations. They can also use AI to scale campaigns up to many more recipients while personalizing each one, enabling a more targeted form of spear phishing.
Moreover, they can use these tools in multiple languages, opening the door to a wider range of regions where targets may be less aware of traditional phishing techniques and what to check for. The Harvard Business Review warns that "the entire phishing process can be automated using LLMs, which reduces the costs of phishing attacks by more than 95% while achieving equal or greater success rates."
Rethinking threats means rethinking protection
Cybersecurity has always been an arms race between defense and attack. But AI has added a new dimension. Now, targets have no way of knowing what's real and what's fake when an attacker tries to manipulate their:
- Trust, by impersonating a colleague and asking an employee to bypass security protocols for sensitive information
- Respect for authority, by pretending to be the employee's CFO and ordering them to complete an urgent financial transaction
- Fear, by creating a sense of urgency and panic so that the employee doesn't stop to think about whether the person they're talking to is genuine
These are essential parts of human nature and instinct, developed over thousands of years. Naturally, they can't evolve at the same pace as the methods of malicious actors or the progress of AI. Traditional forms of awareness training, with online courses and Q&A sessions, simply aren't built for this AI-powered reality.
That's why part of the answer – especially while technical protections are still catching up – is to give your workforce first-hand experience of simulated social engineering attacks.
Because your employees may not remember what you told them about cyber defense when an attack happens, but they will remember how it made them feel. So when a real attack comes, they'll know how to respond.