-->
#1 Trusted Cybersecurity News Platform
Followed by 5.40+ million
The Hacker News Logo
Subscribe – Get Latest News

Search results for prompt injection vulnerability | Breaking Cybersecurity News | The Hacker News

Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Nov 05, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI's ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users' memories and chat histories without their knowledge. The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI's GPT-4o and GPT-5 models. OpenAI has since addressed some of them .  These issues expose the AI system to indirect prompt injection attacks , allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News. The identified shortcomings are listed below - Indirect prompt injection vulnerability via trusted sites in Browsing Context, which involves asking ChatGPT to summarize the contents of web pages with malicious instructions added...
Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Jan 15, 2026 Prompt Injection / Enterprise Security
Cybersecurity researchers have disclosed details of a new attack method dubbed Reprompt that could allow bad actors to exfiltrate sensitive data from artificial intelligence (AI) chatbots like Microsoft Copilot in a single click, while bypassing enterprise security controls entirely. "Only a single click on a legitimate Microsoft link is required to compromise victims," Varonis security researcher Dolev Taler said in a report published Wednesday. "No plugins, no user interaction with Copilot." "The attacker maintains control even when the Copilot chat is closed, allowing the victim's session to be silently exfiltrated with no interaction beyond that first click." Following responsible disclosure, Microsoft has addressed the security issue. The attack does not affect enterprise customers using Microsoft 365 Copilot. At a high level, Reprompt employs three techniques to achieve a data‑exfiltration chain - Using the "q" URL parameter in...
Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection

Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection

Sep 25, 2025 Vulnerability / AI Security
Cybersecurity researchers have disclosed a critical flaw impacting Salesforce Agentforce , a platform for building artificial intelligence (AI) agents, that could allow attackers to potentially exfiltrate sensitive data from its customer relationship management (CRM) tool by means of an indirect prompt injection. The vulnerability has been codenamed ForcedLeak (CVSS score: 9.4) by Noma Security, which discovered and reported the problem on July 28, 2025. It impacts any organization using Salesforce Agentforce with the Web-to-Lead functionality enabled. "This vulnerability demonstrates how AI agents present a fundamentally different and expanded attack surface compared to traditional prompt-response systems," Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News. One of the most severe threats facing generative artificial intelligence (GenAI) systems today is indirect prompt injection , which occurs when malicious instructions are ins...
cyber security

2026 Annual Threat Report: A Defender's Playbook From the Front Lines

websiteSentinelOneEnterprise Security / Cybersecurity
Learn how modern attackers bypass MFA, exploit gaps, weaponize automation, run 8-phase intrusions, and more.
cyber security

Anthropic Won't Release Mythos. But Claude Is Already in Your Salesforce

websiteRecoSaaS Security /AI Security
The real enterprise AI risk isn't the model they locked away. It's the one already inside.
Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection

Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection

Dec 26, 2025 AI Security / DevSecOps
A critical security flaw has been disclosed in LangChain Core that could be exploited by an attacker to steal sensitive secrets and even influence large language model (LLM) responses through prompt injection. LangChain Core (i.e., langchain-core ) is a core Python package that's part of the LangChain ecosystem, providing the core interfaces and model-agnostic abstractions for building applications powered by LLMs. The vulnerability, tracked as CVE-2025-68664, carries a CVSS score of 9.3 out of 10.0. Security researcher Yarden Porat has been credited with reporting the vulnerability on December 4, 2025. It has been codenamed LangGrinch . "A serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions," the project maintainers said in an advisory. "The functions do not escape dictionaries with 'lc' keys when serializing free-form dictionaries." "The 'lc' key is used internally by LangChain to mark ser...
Researcher Uncovers 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

Researcher Uncovers 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

Dec 06, 2025 AI Security / Vulnerability
Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs) that combine prompt injection primitives with legitimate features to achieve data exfiltration and remote code execution. The security shortcomings have been collectively named IDEsaster by security researcher Ari Marzouk (MaccariTA), who discovered them over the last six months. They affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline, among others. Of these, 24 have been assigned CVE identifiers. "I think the fact that multiple universal attack chains affected each and every AI IDE tested is the most surprising finding of this research," Marzouk told The Hacker News. "All AI IDEs (and coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model. They treat their features as inherently safe because they’ve...
Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Jun 27, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a high-severity security flaw in the Vanna.AI library that could be exploited to achieve remote code execution vulnerability via prompt injection techniques. The vulnerability, tracked as CVE-2024-5565 (CVSS score: 8.1), relates to a case of prompt injection in the "ask" function that could be exploited to trick the library into executing arbitrary commands, supply chain security firm JFrog said . Vanna is a Python-based machine learning library that allows users to chat with their SQL database to glean insights by "just asking questions" (aka prompts) that are translated into an equivalent SQL query using a large language model (LLM). The rapid rollout of generative artificial intelligence (AI) models in recent years has brought to the fore the risks of exploitation by malicious actors, who can weaponize the tools by providing adversarial inputs that bypass the safety mechanisms built into them. One such prominent clas...
Google Patches Antigravity IDE Flaw Enabling Prompt Injection Code Execution

Google Patches Antigravity IDE Flaw Enabling Prompt Injection Code Execution

Apr 21, 2026 Vulnerability / Artificial Intelligence
Cybersecurity researchers have discovered a vulnerability in Google's agentic integrated development environment (IDE), Antigravity, that could be exploited to achieve code execution. The flaw, since patched, combines Antigravity's permitted file-creation capabilities with an insufficient input sanitization in Antigravity's native file-searching tool, find_by_name, to bypass the program's Strict Mode , a restrictive security configuration that limits network access, prevents out-of-workspace writes, and ensures all commands are being run within a sandbox context . "By injecting the -X (exec-batch) flag through the Pattern parameter [in the find_by_name tool], an attacker can force fd to execute arbitrary binaries against workspace files," Pillar Security researcher Dan Lisichkin said in an analysis. "Combined with Antigravity's ability to create files as a permitted action, this enables a full attack chain: stage a malicious script, then trigger ...
Cursor AI Code Editor Flaw Enables Silent Code Execution via Malicious Repositories

Cursor AI Code Editor Flaw Enables Silent Code Execution via Malicious Repositories

Sep 12, 2025 AI Security / Vulnerability
A security weakness has been disclosed in the artificial intelligence (AI)-powered code editor Cursor that could trigger code execution when a maliciously crafted repository is opened using the program. The issue stems from the fact that an out-of-the-box security setting is disabled by default, opening the door for attackers to run arbitrary code on users' computers with their privileges. "Cursor ships with Workspace Trust disabled by default, so VS Code-style tasks configured with runOptions.runOn: 'folderOpen' auto-execute the moment a developer browses a project," Oasis Security said in an analysis. "A malicious .vscode/tasks.json turns a casual 'open folder' into silent code execution in the user's context." Cursor is an AI-powered fork of Visual Studio Code, which supports a feature called Workspace Trust to allow developers to safely browse and edit code regardless of where it came from or who wrote it. With this option disab...
Cursor AI Code Editor Fixed Flaw Allowing Attackers to Run Commands via Prompt Injection

Cursor AI Code Editor Fixed Flaw Allowing Attackers to Run Commands via Prompt Injection

Aug 01, 2025 Vulnerability / DevOps
Cybersecurity researchers have disclosed a now-patched, high-severity security flaw in Cursor, a popular artificial intelligence (AI) code editor, that could result in remote code execution (RCE). The vulnerability, tracked as CVE-2025-54135 (CVSS score: 8.6), has been addressed in version 1.3 released on July 29, 2025. It has been codenamed CurXecute by Aim Labs, which previously disclosed EchoLeak . "Cursor runs with developer‑level privileges, and when paired with an MCP server that fetches untrusted external data, that data can redirect the agent's control flow and exploit those privileges," the Aim Labs Team said in a report shared with The Hacker News. "By feeding poisoned data to the agent via MCP, an attacker can gain full remote code execution under the user privileges, and achieve any number of things, including opportunities for ransomware, data theft, AI manipulation and hallucinations, etc." In other words, the remote code execution can trigg...
GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts

GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts

May 23, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have discovered an indirect prompt injection flaw in GitLab's artificial intelligence (AI) assistant Duo that could have allowed attackers to steal source code and inject untrusted HTML into its responses, which could then be used to direct victims to malicious websites. GitLab Duo is an artificial intelligence (AI)-powered coding assistant that enables users to write, review, and edit code. Built using Anthropic's Claude models, the service was first launched in June 2023. But as Legit Security found , GitLab Duo Chat has been susceptible to an indirect prompt injection flaw that permits attackers to "steal source code from private projects, manipulate code suggestions shown to other users, and even exfiltrate confidential, undisclosed zero-day vulnerabilities." Prompt injection refers to a class of vulnerabilities common in AI systems that enable threat actors to weaponize large language models (LLMs) to manipulate responses to user...
Microsoft Issues Security Fixes for 56 Flaws, Including Active Exploit and Two Zero-Days

Microsoft Issues Security Fixes for 56 Flaws, Including Active Exploit and Two Zero-Days

Dec 10, 2025 Patch Tuesday / Vulnerability
Microsoft closed out 2025 with patches for 56 security flaws in various products across the Windows platform, including one vulnerability that has been actively exploited in the wild. Of the 56 flaws, three are rated Critical, and 53 are rated Important in severity. Two other defects are listed as publicly known at the time of the release. These include 29 privilege escalation, 18 remote code execution, four information disclosure, three denial-of-service, and two spoofing vulnerabilities. In total, Microsoft has addressed a total of 1,275 CVEs in 2025, according to data compiled by Fortra. Tenable's Satnam Narang said 2025 also marks the second consecutive year where the Windows maker has patched over 1,000 CVEs. It's the third time it has done so since Patch Tuesday's inception. The update is in addition to 17 shortcomings the tech giant patched in its Chromium-based Edge browser since the release of the November 2025 Patch Tuesday update . This also consists of a s...
Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites

Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites

Jan 19, 2026 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed details of a security flaw that leverages indirect prompt injection targeting Google Gemini as a way to bypass authorization guardrails and use Google Calendar as a data extraction mechanism. The vulnerability, Miggo Security's Head of Research, Liad Eliyahu, said, made it possible to circumvent Google Calendar's privacy controls by hiding a dormant malicious payload within a standard calendar invite. "This bypass enabled unauthorized access to private meeting data and the creation of deceptive calendar events without any direct user interaction," Eliyahu said in a report shared with The Hacker News. The starting point of the attack chain is a new calendar event that's crafted by the threat actor and sent to a target. The invite's description embeds a natural language prompt that's designed to do their bidding, resulting in a prompt injection. The attack gets activated when a user asks Gemini a completely inno...
Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation

Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation

Nov 11, 2024 Machine Learning / Vulnerability
Cybersecurity researchers have uncovered nearly two dozen security flaws spanning 15 different machine learning (ML) related open-source projects. These comprise vulnerabilities discovered both on the server- and client-side, software supply chain security firm JFrog said in an analysis published last week. The server-side weaknesses "allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipelines," it said . The vulnerabilities, discovered in Weave, ZenML, Deep Lake, Vanna.AI, and Mage AI, have been broken down into broader sub-categories that allow for remotely hijacking model registries, ML database frameworks, and taking over ML Pipelines. A brief description of the identified flaws is below - CVE-2024-7340 (CVSS score: 8.8) - A directory traversal vulnerability in the Weave ML toolkit that allows for reading files across the whole filesystem, effectively allowing a low-privileged authenticated user to es...
Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

Jun 23, 2025 Artificial Intelligence / AI Security
Google has revealed the various safety measures that are being incorporated into its generative artificial intelligence (AI) systems to mitigate emerging attack vectors like indirect prompt injections and improve the overall security posture for agentic AI systems. "Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources," Google's GenAI security team said . These external sources can take the form of email messages, documents, or even calendar invites that trick the AI systems into exfiltrating sensitive data or performing other malicious actions. The tech giant said it has implemented what it described as a "layered" defense strategy that is designed to increase the difficulty, expense, and complexity required to pull off an attack against its systems. These efforts span model hardening, introducing purpose-built mac...
OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability

OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability

Mar 30, 2026 Vulnerability / Enterprise Security
A previously unknown vulnerability in OpenAI ChatGPT allowed sensitive conversation data to be exfiltrated without user knowledge or consent, according to new findings from Check Point. "A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content," the cybersecurity company said in a report published today. "A backdoored GPT could abuse the same weakness to obtain access to user data without the user's awareness or consent." Following responsible disclosure, OpenAI addressed the issue on February 20, 2026. There is no evidence that the issue was ever exploited in a malicious context. While ChatGPT is built with various guardrails to prevent unauthorized data sharing or generate direct outbound network requests , the newly discovered vulnerability bypasses these safeguards entirely by exploiting a side channel originating from the Linux runtime ...
Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution via Image Metadata

Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution via Image Metadata

Feb 03, 2026 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed details of a now-patched security flaw impacting Ask Gordon , an artificial intelligence (AI) assistant built into Docker Desktop and the Docker Command-Line Interface (CLI), that could be exploited to execute code and exfiltrate sensitive data. The critical vulnerability has been codenamed DockerDash by cybersecurity company Noma Labs. It was addressed by Docker with the release of version 4.50.0 in November 2025. "In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP [Model Context Protocol] Gateway, which then executes it through MCP tools," Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News. "Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture." ...
Wiz Uncovers Critical Access Bypass Flaw in AI-Powered Vibe Coding Platform Base44

Wiz Uncovers Critical Access Bypass Flaw in AI-Powered Vibe Coding Platform Base44

Jul 29, 2025 LLM Security / Vulnerability
Cybersecurity researchers have disclosed a now-patched critical security flaw in a popular vibe coding platform called Base44 that could allow unauthorized access to private applications built by its users. "The vulnerability we discovered was remarkably simple to exploit -- by providing only a non-secret 'app_id' value to undocumented registration and email verification endpoints, an attacker could have created a verified account for private applications on their platform," cloud security firm Wiz said in a report shared with The Hacker News. A net result of this issue is that it bypasses all authentication controls, including Single Sign-On (SSO) protections, granting full access to all the private applications and data contained within them. Following responsible disclosure on July 9, 2025, an official fix was rolled out by Wix, which owns Base44, within 24 hours. There is no evidence that the issue was ever maliciously exploited in the wild. While vibe codin...
Expert Insights Articles Videos
Cybersecurity Resources