GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts
May 23, 2025
Artificial Intelligence / Vulnerability
 Cybersecurity researchers have discovered an indirect prompt injection flaw in GitLab's artificial intelligence (AI) assistant Duo that could have allowed attackers to steal source code and inject untrusted HTML into its responses, which could then be used to direct victims to malicious websites.  GitLab Duo  is an artificial intelligence (AI)-powered coding assistant  that enables users to write, review, and edit code. Built using Anthropic's Claude models, the service was first launched in June 2023.  But as Legit Security found , GitLab Duo Chat has been susceptible to an indirect prompt injection flaw that permits attackers to "steal source code from private projects, manipulate code suggestions shown to other users, and even exfiltrate confidential, undisclosed zero-day vulnerabilities."  Prompt injection refers to a class of vulnerabilities  common in AI systems that enable threat actors to weaponize large language models (LLMs) to manipulate responses  to user...