“Peace is the virtue of civilization. War is its crime. But often in the mines of war the sharpest tools of peace are forged.” – Victor Hugo.
In 1971, an alarming message began to appear on several computers that made up the ARPANET, the precursor to what we now know as the Internet. The message, “I’m the Creeper: Catch me if you can,” was produced by a program called Creeper, developed by the programmer Bob Thomas while he was working at BBN Technologies. Although Thomas’s intentions were not malicious, Creeper is widely regarded as a forerunner of what we now call a computer virus.
The appearance of Creeper on the ARPANET also marked the beginning of antivirus software. Though it has never been confirmed, it is believed that Ray Tomlinson, best known as the inventor of email, developed Reaper, a program designed to remove Creeper from infected machines. The creation of this tool, built to defensively pursue and delete malware from a computer, is often cited as the starting point of the field of cybersecurity. It reflects an early recognition of the potential power of a cyber attack and the need for defensive measures.
The need for cybersecurity should not come as much of a surprise, since the cyber realm is ultimately an abstraction of the natural world. Just as we progressed from fighting with sticks and stones to swords and spears, and then to bombs and planes, cyber warfare has evolved as well. It began with the rudimentary Creeper virus, a cheeky harbinger of digital doom. The emergence of digital weapons necessitated antivirus solutions such as Reaper, and as attacks grew more sophisticated, so did the defenses. Fast forward to the age of cyberattacks, and digital battlefields have taken shape. Firewalls have replaced massive city walls, load balancers act as generals directing resources so that no single point is overwhelmed, and intrusion detection and prevention systems stand in for sentries in watchtowers. That is not to say these systems are perfect; there is always the existential fear that a globally deployed, ostensibly benign rootkit, which we politely call an EDR solution, might contain a null pointer dereference that acts as a Trojan horse capable of bricking tens of millions of Windows devices.
Setting aside such catastrophic, if accidental, scenarios, the question remains: what comes next? Enter offensive AI, the most dangerous cyber weapon to date. In 2023, Foster Nethercott published a white paper through the SANS Technology Institute detailing how threat actors with minimal technical skills could abuse ChatGPT to create novel malware capable of evading traditional security controls. Numerous other papers have explored the use of generative AI to create advanced worms such as Morris II and polymorphic malware such as BlackMamba.
A seemingly paradoxical answer to these growing threats is the further research and development of more sophisticated offensive AI. Plato’s observation that “necessity is the mother of invention” aptly describes today’s cybersecurity landscape, where new AI-driven threats drive the innovation of more sophisticated security controls. While developing more capable offensive AI tools and techniques is hardly morally praiseworthy, it continues to emerge as an unavoidable necessity: to defend effectively against these threats, we must understand them, and that requires building and studying them.
The rationale for this approach rests on one simple truth: you cannot defend against a threat you do not understand, and without developing and researching these new threats, we cannot hope to understand them. The unfortunate reality is that malicious actors are already using offensive AI to innovate and deploy new threats. Denying this would be naive and dangerous. For this reason, the future of cybersecurity lies in the continued development of offensive AI.
If you want to learn more about offensive AI and gain hands-on experience applying it in penetration testing, I invite you to attend my upcoming workshop at SANS Network Security 2024, Offensive AI for Social Engineering and Deep Fake Development, on September 7 in Las Vegas. The workshop is a great introduction to my new course SEC535: Offensive Artificial Intelligence – Attack Tools and Techniques, which will be released in early 2025. The event as a whole is also a great opportunity to meet some of the leading experts in artificial intelligence and learn how it is shaping the future of cybersecurity. You can find event details and a full list of bonus events here.
Note: This article was written by Foster Nethercott, a US Marine and Afghanistan veteran with nearly a decade of experience in cybersecurity. Foster owns the security consulting firm Fortisec and is an author for the SANS Technology Institute, where he is currently developing the new course SEC535: Offensive Artificial Intelligence.