Reprompt attack enabled one-click data theft from Microsoft Copilot

Tombstone icon

Varonis researchers disclosed the Reprompt attack, a chained prompt injection technique that exfiltrated sensitive data from Microsoft Copilot Personal with a single click on a legitimate Copilot URL. The attack exploited the "q" URL parameter to inject instructions, bypassed data-leak guardrails by asking Copilot to repeat actions twice (safeguards only applied to initial requests), and used Copilot's Markdown rendering to silently send stolen data to an attacker-controlled server. No plugins or further user interaction were required, and the attacker maintained control even after the chat was closed. Microsoft patched the issue in its January 2026 security updates.

Incident Details

Severity:Facepalm
Company:Microsoft
Perpetrator:AI assistant
Incident Date:
Blast Radius:Microsoft Copilot Personal users exposed to profile data, conversation history, and file summary exfiltration via a single malicious link
Advertisement

The Attack

Varonis Threat Labs discovered a multistage attack against Microsoft Copilot Personal that required exactly one click from the victim. No malware installation, no compromised websites, no plugins. A single click on a legitimate Copilot URL was enough to silently exfiltrate the target's name, location, conversation history, and file summaries to an attacker-controlled server.

The researchers called it Reprompt. The name came from the attack's core technique: repeating a prompt to bypass safeguards that only applied to the first execution. Microsoft patched the vulnerability in its January 2026 security updates after Varonis disclosed the findings.

How the URL Injection Worked

Microsoft Copilot accepts a q parameter in its URLs. This parameter specifies an initial prompt that Copilot processes when the URL is opened - it's the mechanism that Copilot (and most other LLM interfaces) uses to let URLs contain pre-loaded queries. A normal use case might be a link that opens Copilot with a question already typed in.

The attackers exploited this by appending a long series of detailed instructions to the q parameter of an otherwise legitimate Copilot link. When a victim clicked the link - which appeared to be a normal copilot.microsoft.com URL - Copilot opened and immediately processed the attacker's injected instructions as if the user had typed them.

The URL could be delivered through email, embedded in a document, or placed on any web page. Because the domain was copilot.microsoft.com, standard security tools would not flag it as malicious. The phishing detection that organizations rely on to catch suspicious links was blind to this attack vector - the link was genuinely pointing to Microsoft's own service.

The Repeat Bypass

Microsoft had built data-leak safeguards into Copilot specifically to prevent the kind of exfiltration that Reprompt achieved. When Copilot received an instruction to access and transmit sensitive user data, the safeguards would block it.

The problem was that the safeguards only applied to the initial request. When the same instruction was repeated - when the prompt asked Copilot to perform the action a second time - the safeguards didn't activate again. The data-leak protections were a one-shot gate that could be walked through by simply asking twice.

This is the kind of vulnerability that suggests the safeguards were tested with single-turn interactions but not with the adversarial pattern of "ask, get blocked, ask again." A security control that stops working when the attacker retries is not much of a security control.

The Exfiltration Mechanism

Once past the safeguards, the injected prompt instructed Copilot to gather the target's personal data: profile information, conversation history, file summaries, and other accessible content. Copilot complied and assembled the data.

The exfiltration itself used Copilot's Markdown rendering capability. The injected instructions told Copilot to render a Markdown element - specifically an image tag or link - with the stolen data encoded in the URL pointing to an attacker-controlled server. When Copilot rendered the Markdown, the victim's browser made a request to the attacker's server, carrying the stolen data in the URL parameters.

This technique is known in prompt injection research as "image-tag exfiltration" or "Markdown injection." It exploits the fact that LLM interfaces that render Markdown will cause the browser to make outbound HTTP requests for any URLs referenced in the rendered output. If the URL contains encoded data, that data is transmitted to whatever server controls the URL's domain.

Persistence Without Interaction

One of the more concerning aspects of Reprompt was its persistence. Ars Technica reported that the attack continued to operate even after the user closed the Copilot chat. Once the victim clicked the link, no further interaction was needed. The injected instructions had already been processed, the data gathering was underway, and the exfiltration would complete regardless of whether the user stuck around.

As Varonis noted: "Even if the user just clicks on the link and immediately closes the tab of Copilot chat, the exploit still works." This meant the attack was effectively fire-and-forget from the attacker's perspective. Send the link, wait for the click, receive the data. The victim might not even notice Copilot opened briefly before they closed the tab.

The Attack Chain

Putting the stages together: an attacker crafts a Copilot URL with malicious instructions in the q parameter. The victim clicks the link, perhaps in an email that appears to be a legitimate Microsoft communication. Copilot opens and processes the injected instructions. The safeguards block the initial data access attempt. The instructions repeat the request. The safeguards don't trigger again. Copilot accesses the victim's personal data, conversation history, and file summaries. Copilot renders a Markdown element with the stolen data encoded in a URL. The victim's browser sends the data to the attacker's server. The attacker now has the victim's name, location, calendar events, and chat history.

Each individual step exploited a known weakness in LLM security: URL-based prompt injection, inadequate repeat-request handling, and Markdown-based data exfiltration. The innovation was chaining them into a single-click attack that required no plugins, no compromised infrastructure, and no ongoing interaction from the victim.

The Broader Pattern

Reprompt was not the first data exfiltration vulnerability found in Microsoft's AI products. SecurityWeek noted the related "EchoLeak" attack, which had previously demonstrated similar data theft from Microsoft 365 Copilot. The pattern of prompt injection attacks against Copilot products was becoming consistent enough to suggest structural weaknesses in how Microsoft was implementing LLM safety.

The q parameter attack surface was particularly troubling. Any system that allows external input to be injected directly into an LLM prompt through URL parameters is creating a prompt injection vector by design. The parameter exists to make Copilot accessible through links, but accessibility and exploitability are closely related when the input goes directly to an AI that can access user data.

TechRadar reported that Microsoft's fix made "prompt injection attacks via URLs no longer exploitable," which implies the q parameter was either removed, restricted, or subjected to stronger sanitization. The specific technical details of the patch weren't disclosed.

What Was Exposed

The data accessible through Reprompt included profile information (name, location), conversation history from previous Copilot sessions, and summaries of files the user had interacted with through Copilot. For users who had been using Copilot regularly, the conversation history alone could contain sensitive information about work projects, personal queries, and anything else they'd discussed with the AI assistant.

The attack didn't require the victim to have any particular configuration or to have done anything unusual. Any Microsoft Copilot Personal user who clicked the link was vulnerable. The attack worked on the default product configuration with no special conditions required.

Microsoft's January 2026 patch addressed the vulnerability, but the timeline between discovery and fix meant there was a window during which the technique could have been exploited in the wild. Varonis did not report evidence of active exploitation, but the technique was simple enough that independent discovery by malicious actors during the exposure window was plausible.

Discussion