#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News

Search results for prompt injection attack | Breaking Cybersecurity News | The Hacker News

Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Nov 05, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI's ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users' memories and chat histories without their knowledge. The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI's GPT-4o and GPT-5 models. OpenAI has since addressed some of them .  These issues expose the AI system to indirect prompt injection attacks , allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News. The identified shortcomings are listed below - Indirect prompt injection vulnerability via trusted sites in Browsing Context, which involves asking ChatGPT to summarize the contents of web pages with malicious instructions added...
Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Dec 09, 2024 Artificial Intelligenc / Vulnerability
Details have emerged about a now-patched security flaw in the DeepSeek artificial intelligence (AI) chatbot that, if successfully exploited, could permit a bad actor to take control of a victim's account by means of a prompt injection attack . Security researcher Johann Rehberger, who has chronicled many a prompt injection attack targeting various AI tools, found that providing the input "Print the xss cheat sheet in a bullet list. just payloads" in the DeepSeek chat triggered the execution of JavaScript code as part of the generated response – a classic case of cross-site scripting (XSS). XSS attacks can have serious consequences as they lead to the execution of unauthorized code in the context of the victim's web browser. An attacker could take advantage of such flaws to hijack a user's session and gain access to cookies and other data associated with the chat.deepseek[.]com domain, thereby leading to an account takeover. "After some experimenting,...
Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Jan 15, 2026 Prompt Injection / Enterprise Security
Cybersecurity researchers have disclosed details of a new attack method dubbed Reprompt that could allow bad actors to exfiltrate sensitive data from artificial intelligence (AI) chatbots like Microsoft Copilot in a single click, while bypassing enterprise security controls entirely. "Only a single click on a legitimate Microsoft link is required to compromise victims," Varonis security researcher Dolev Taler said in a report published Wednesday. "No plugins, no user interaction with Copilot." "The attacker maintains control even when the Copilot chat is closed, allowing the victim's session to be silently exfiltrated with no interaction beyond that first click." Following responsible disclosure, Microsoft has addressed the security issue. The attack does not affect enterprise customers using Microsoft 365 Copilot. At a high level, Reprompt employs three techniques to achieve a data‑exfiltration chain - Using the "q" URL parameter in...
cyber security

Operationalize Incident Response: Scale Tabletop Exercises with AEV

websiteFiligranIncident Response / Exposure Validation
Learn how to standardize, automate, and scale IR tabletop drills for compliance and team readiness.
cyber security

The Cyber Event of the Year Returns: SANS 2026

websiteSANS InstituteCybersecurity Training / Certification
50+ courses, NetWars, AI Keynote, and a full week of action. Join SANS in Orlando.
Researcher Uncovers 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

Researcher Uncovers 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

Dec 06, 2025 AI Security / Vulnerability
Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs) that combine prompt injection primitives with legitimate features to achieve data exfiltration and remote code execution. The security shortcomings have been collectively named IDEsaster by security researcher Ari Marzouk (MaccariTA), who discovered them over the last six months. They affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline, among others. Of these, 24 have been assigned CVE identifiers. "I think the fact that multiple universal attack chains affected each and every AI IDE tested is the most surprising finding of this research," Marzouk told The Hacker News. "All AI IDEs (and coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model. They treat their features as inherently safe because they've...
Cursor AI Code Editor Fixed Flaw Allowing Attackers to Run Commands via Prompt Injection

Cursor AI Code Editor Fixed Flaw Allowing Attackers to Run Commands via Prompt Injection

Aug 01, 2025 Vulnerability / DevOps
Cybersecurity researchers have disclosed a now-patched, high-severity security flaw in Cursor, a popular artificial intelligence (AI) code editor, that could result in remote code execution (RCE). The vulnerability, tracked as CVE-2025-54135 (CVSS score: 8.6), has been addressed in version 1.3 released on July 29, 2025. It has been codenamed CurXecute by Aim Labs, which previously disclosed EchoLeak . "Cursor runs with developer‑level privileges, and when paired with an MCP server that fetches untrusted external data, that data can redirect the agent's control flow and exploit those privileges," the Aim Labs Team said in a report shared with The Hacker News. "By feeding poisoned data to the agent via MCP, an attacker can gain full remote code execution under the user privileges, and achieve any number of things, including opportunities for ransomware, data theft, AI manipulation and hallucinations, etc." In other words, the remote code execution can trigg...
ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands

ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands

Oct 27, 2025 AI Security / Vulnerability
The newly released OpenAI ChatGPT Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit. "The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to the agent," NeuralTrust said in a report published Friday. "We've identified a prompt injection technique that disguises malicious instructions to look like a URL, but that Atlas treats as high-trust 'user intent' text, enabling harmful actions." Last week, OpenAI launched Atlas as a web browser with built-in ChatGPT capabilities to assist users with web page summarization, inline text editing, and agentic functions. In the attack outlined by the artificial intelligence (AI) security company, an attacker can take advantage of the browser's lack of strict boundaries between trusted user input and untru...
Why React Didn't Kill XSS: The New JavaScript Injection Playbook

Why React Didn't Kill XSS: The New JavaScript Injection Playbook

Jul 29, 2025 AI Security /Software Engineering
React conquered XSS? Think again. That's the reality facing JavaScript developers in 2025, where attackers have quietly evolved their injection techniques to exploit everything from prototype pollution to AI-generated code, bypassing the very frameworks designed to keep applications secure. Full 47-page guide with framework-specific defenses (PDF, free). JavaScript conquered the web, but with that victory came new battlefields. While developers embraced React, Vue, and Angular, attackers evolved their tactics, exploiting AI prompt injection, supply chain compromises, and prototype pollution in ways traditional security measures can't catch. A Wake-up Call: The Polyfill.io Attack In June 2024, a single JavaScript injection attack compromised over 100,000 websites in the biggest JavaScript injection attack of the year. The Polyfill.io supply chain attack , where a Chinese company acquired a trusted JavaScript library and weaponized it to inject malicious code, affected major pl...
Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection

Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection

Sep 25, 2025 Vulnerability / AI Security
Cybersecurity researchers have disclosed a critical flaw impacting Salesforce Agentforce , a platform for building artificial intelligence (AI) agents, that could allow attackers to potentially exfiltrate sensitive data from its customer relationship management (CRM) tool by means of an indirect prompt injection. The vulnerability has been codenamed ForcedLeak (CVSS score: 9.4) by Noma Security, which discovered and reported the problem on July 28, 2025. It impacts any organization using Salesforce Agentforce with the Web-to-Lead functionality enabled. "This vulnerability demonstrates how AI agents present a fundamentally different and expanded attack surface compared to traditional prompt-response systems," Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News. One of the most severe threats facing generative artificial intelligence (GenAI) systems today is indirect prompt injection , which occurs when malicious instructions are ins...
Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

Sep 30, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed three now-patched security vulnerabilities impacting Google's Gemini artificial intelligence (AI) assistant that, if successfully exploited, could have exposed users to major privacy risks and data theft. "They made Gemini vulnerable to search-injection attacks on its Search Personalization Model; log-to-prompt injection attacks against Gemini Cloud Assist; and exfiltration of the user's saved information and location data via the Gemini Browsing Tool," Tenable security researcher Liv Matan said in a report shared with The Hacker News. The vulnerabilities have been collectively codenamed the Gemini Trifecta by the cybersecurity company. They reside in three distinct components of the Gemini suite - A prompt injection flaw in Gemini Cloud Assist that could allow attackers to exploit cloud-based services and compromise cloud resources by taking advantage of the fact that the tool is capable of summarizing logs pulled dir...
Traditional Security Frameworks Leave Organizations Exposed to AI-Specific Attack Vectors

Traditional Security Frameworks Leave Organizations Exposed to AI-Specific Attack Vectors

Dec 29, 2025 Cloud Security / Artificial Intelligence
In December 2024, the popular Ultralytics AI library was compromised, installing malicious code that hijacked system resources for cryptocurrency mining. In August 2025 , malicious Nx packages leaked 2,349 GitHub, cloud, and AI credentials. Throughout 2024, ChatGPT vulnerabilities allowed unauthorized extraction of user data from AI memory. The result: 23.77 million secrets were leaked through AI systems in 2024 alone, a 25% increase from the previous year. Here's what these incidents have in common: The compromised organizations had comprehensive security programs. They passed audits. They met compliance requirements. Their security frameworks simply weren't built for AI threats. Traditional security frameworks have served organizations well for decades. But AI systems operate fundamentally differently from the applications these frameworks were designed to protect. And the attacks against them don't fit into existing control categories. Security teams followed the f...
Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Jun 27, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a high-severity security flaw in the Vanna.AI library that could be exploited to achieve remote code execution vulnerability via prompt injection techniques. The vulnerability, tracked as CVE-2024-5565 (CVSS score: 8.1), relates to a case of prompt injection in the "ask" function that could be exploited to trick the library into executing arbitrary commands, supply chain security firm JFrog said . Vanna is a Python-based machine learning library that allows users to chat with their SQL database to glean insights by "just asking questions" (aka prompts) that are translated into an equivalent SQL query using a large language model (LLM). The rapid rollout of generative artificial intelligence (AI) models in recent years has brought to the fore the risks of exploitation by malicious actors, who can weaponize the tools by providing adversarial inputs that bypass the safety mechanisms built into them. One such prominent clas...
Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

Jun 23, 2025 Artificial Intelligence / AI Security
Google has revealed the various safety measures that are being incorporated into its generative artificial intelligence (AI) systems to mitigate emerging attack vectors like indirect prompt injections and improve the overall security posture for agentic AI systems. "Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources," Google's GenAI security team said . These external sources can take the form of email messages, documents, or even calendar invites that trick the AI systems into exfiltrating sensitive data or performing other malicious actions. The tech giant said it has implemented what it described as a "layered" defense strategy that is designed to increase the difficulty, expense, and complexity required to pull off an attack against its systems. These efforts span model hardening, introducing purpose-built mac...
Researchers Uncover GPT-5 Jailbreak and Zero-Click AI Agent Attacks Exposing Cloud and IoT Systems

Researchers Uncover GPT-5 Jailbreak and Zero-Click AI Agent Attacks Exposing Cloud and IoT Systems

Aug 09, 2025 Generative AI / Vulnerability
Cybersecurity researchers have uncovered a jailbreak technique to bypass ethical guardrails erected by OpenAI in its latest large language model (LLM) GPT-5 and produce illicit instructions. Generative artificial intelligence (AI) security platform NeuralTrust said it combined a known technique called Echo Chamber with narrative-driven steering to trick the model into producing undesirable responses. "We use Echo Chamber to seed and reinforce a subtly poisonous conversational context, then guide the model with low-salience storytelling that avoids explicit intent signaling," security researcher Martí Jordà said . "This combination nudges the model toward the objective while minimizing triggerable refusal cues." Echo Chamber is a jailbreak approach that was detailed by the company back in June 2025 as a way to deceive an LLM into generating responses to prohibited topics using indirect references, semantic steering, and multi-step inference. In recent weeks, the...
New Reports Uncover Jailbreaks, Unsafe Code, and Data Theft Risks in Leading AI Systems

New Reports Uncover Jailbreaks, Unsafe Code, and Data Theft Risks in Leading AI Systems

Apr 29, 2025 Vulnerability / Artificial Intelligence
Various generative artificial intelligence (GenAI) services have been found vulnerable to two types of jailbreak attacks that make it possible to produce illicit or dangerous content. The first of the two techniques, codenamed Inception, instructs an AI tool to imagine a fictitious scenario, which can then be adapted into a second scenario within the first one where there exists no safety guardrails . "Continued prompting to the AI within the second scenarios context can result in bypass of safety guardrails and allow the generation of malicious content," the CERT Coordination Center (CERT/CC) said in an advisory released last week. The second jailbreak is realized by prompting the AI for information on how not to reply to a specific request.  "The AI can then be further prompted with requests to respond as normal, and the attacker can then pivot back and forth between illicit questions that bypass safety guardrails and normal prompts," CERT/CC added. Success...
ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

Nov 19, 2025 AI Security / SaaS Security
Malicious actors can exploit default configurations in ServiceNow's Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks. The second-order prompt injection, according to AppOmni, makes use of Now Assist's agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive corporate data, modify records, and escalate privileges. "This discovery is alarming because it isn't a bug in the AI; it's expected behavior as defined by certain default configuration options," said Aaron Costello, chief of SaaS Security Research at AppOmni. "When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems. These settings are easy to overlook." The attack is made possible because of agent discovery and agent-to-a...
Zero-Click Agentic Browser Attack Can Delete Entire Google Drive Using Crafted Emails

Zero-Click Agentic Browser Attack Can Delete Entire Google Drive Using Crafted Emails

Dec 05, 2025 Email Security / Threat Research
A new agentic browser attack targeting Perplexity's Comet browser that's capable of turning a seemingly innocuous email into a destructive action that wipes a user's entire Google Drive contents, findings from Straiker STAR Labs show. The zero-click Google Drive Wiper technique hinges on connecting the browser to services like Gmail and Google Drive to automate routine tasks by granting them access to read emails, as well as browse files and folders, and perform actions like moving, renaming, or deleting content. For instance, a prompt issued by a benign user might look like this: "Please check my email and complete all my recent organization tasks." This will cause the browser agent to search the inbox for relevant messages and perform the necessary actions. "This behavior reflects excessive agency in LLM-powered assistants where the LLM performs actions that go far beyond the user's explicit request," security researcher Amanda Rousseau said in ...
Expert Insights Articles Videos
Cybersecurity Resources