Details have emerged about a patched vulnerability in Microsoft 365 Copilot that could allow the theft of sensitive user information using a technique called ASCII smuggling.
“ASCII smuggling is a new technique that uses special Unicode characters that represent ASCII but are not actually visible in the user interface,” security researcher Johann Rehberger said.
“This means that an attacker can get the (large language model) user rendering of invisible data and embed it in clickable hyperlinks. This technique basically prepares data for hijacking!”
The entire attack combines a number of attack techniques to turn them into a robust chain of exploits. This involves the following steps –
- Trigger quick injection through malicious content hidden in a document shared in a chat
- Using a quick payload injection to instruct the Copilot to search for more emails and documents
- Using ASCII Smuggling to trick a user into clicking a link to output valuable data to a third-party server
The end result of the attack is that sensitive data present in emails, including multi-factor authentication (MFA) codes, can be transmitted to a server controlled by an adversary. After a responsible disclosure in January 2024, Microsoft resolved these issues.
The development takes place in connection with proof-of-concept (PoC) attacks. demonstrated against Microsoft’s Copilot system for manipulating responses, stealing private data and evading security tools, again highlighting the need to monitor risks in artificial intelligence (AI) tools.
Methods, detailed a Anti-aircraft gunshipallow attackers to perform search-augmented generation (ANUCHA) poisoning and indirect instant rooting, leading to remote code execution attacks that can fully control Microsoft Copilot and other AI applications. In a hypothetical attack scenario, an external hacker with code execution capabilities could trick Copilot into providing phishing pages to users.
Perhaps one of the newest attacks is the ability to turn artificial intelligence into a phishing machine. Red-teaming technique, duplicate LOLCopilotallows an attacker with access to a victim’s email account to send phishing messages that mimic the style of compromised users.
Microsoft too admitted that publicly exposed Copilot bots built with Microsoft Copilot Studio and lacking authentication protections could be a way for threat actors to extract sensitive information, provided they previously knew the Copilot name or URL.
“Enterprises must assess their risk tolerance and exposure to prevent data leakage from Copilots (formerly Power Virtual Agents) and include Data loss prevention and other security measures to control the creation and publication of copilots,” Rehberger said.