#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Salesforce Security Handbook

Search results for Prompt Injection | Breaking Cybersecurity News | The Hacker News

Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Nov 05, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI's ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users' memories and chat histories without their knowledge. The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI's GPT-4o and GPT-5 models. OpenAI has since addressed some of them .  These issues expose the AI system to indirect prompt injection attacks , allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News. The identified shortcomings are listed below - Indirect prompt injection vulnerability via trusted sites in Browsing Context, which involves asking ChatGPT to summarize the contents of web pages with malicious instructions added...
Researchers Uncover 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

Researchers Uncover 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

Dec 06, 2025 AI Security / Vulnerability
Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs) that combine prompt injection primitives with legitimate features to achieve data exfiltration and remote code execution. The security shortcomings have been collectively named IDEsaster by security researcher Ari Marzouk (MaccariTA). They affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline, among others. Of these, 24 have been assigned CVE identifiers. "I think the fact that multiple universal attack chains affected each and every AI IDE tested is the most surprising finding of this research," Marzouk told The Hacker News. "All AI IDEs (and coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model. They treat their features as inherently safe because they've been there for years. However, once you add A...
Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Dec 09, 2024 Artificial Intelligenc / Vulnerability
Details have emerged about a now-patched security flaw in the DeepSeek artificial intelligence (AI) chatbot that, if successfully exploited, could permit a bad actor to take control of a victim's account by means of a prompt injection attack . Security researcher Johann Rehberger, who has chronicled many a prompt injection attack targeting various AI tools, found that providing the input "Print the xss cheat sheet in a bullet list. just payloads" in the DeepSeek chat triggered the execution of JavaScript code as part of the generated response – a classic case of cross-site scripting (XSS). XSS attacks can have serious consequences as they lead to the execution of unauthorized code in the context of the victim's web browser. An attacker could take advantage of such flaws to hijack a user's session and gain access to cookies and other data associated with the chat.deepseek[.]com domain, thereby leading to an account takeover. "After some experimenting,...
cyber security

The 2026 CISO Budget Benchmark

websiteWizEnterprise Security / Cloud Security
See how 300+ CISOs are planning 2026 budgets: top trends in AI, cloud, staffing, and tool consolidation shaping next year's security priorities.
cyber security

2025 Cloud Security Survey Report

websiteSentinelOneCloud Security / Identity Protection
Learn from 400+ security leaders and practitioners to get the latest insights and trends on cloud security
Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Jun 27, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a high-severity security flaw in the Vanna.AI library that could be exploited to achieve remote code execution vulnerability via prompt injection techniques. The vulnerability, tracked as CVE-2024-5565 (CVSS score: 8.1), relates to a case of prompt injection in the "ask" function that could be exploited to trick the library into executing arbitrary commands, supply chain security firm JFrog said . Vanna is a Python-based machine learning library that allows users to chat with their SQL database to glean insights by "just asking questions" (aka prompts) that are translated into an equivalent SQL query using a large language model (LLM). The rapid rollout of generative artificial intelligence (AI) models in recent years has brought to the fore the risks of exploitation by malicious actors, who can weaponize the tools by providing adversarial inputs that bypass the safety mechanisms built into them. One such prominent clas...
Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection

Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection

Sep 25, 2025 Vulnerability / AI Security
Cybersecurity researchers have disclosed a critical flaw impacting Salesforce Agentforce , a platform for building artificial intelligence (AI) agents, that could allow attackers to potentially exfiltrate sensitive data from its customer relationship management (CRM) tool by means of an indirect prompt injection. The vulnerability has been codenamed ForcedLeak (CVSS score: 9.4) by Noma Security, which discovered and reported the problem on July 28, 2025. It impacts any organization using Salesforce Agentforce with the Web-to-Lead functionality enabled. "This vulnerability demonstrates how AI agents present a fundamentally different and expanded attack surface compared to traditional prompt-response systems," Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News. One of the most severe threats facing generative artificial intelligence (GenAI) systems today is indirect prompt injection , which occurs when malicious instructions are ins...
Cursor AI Code Editor Fixed Flaw Allowing Attackers to Run Commands via Prompt Injection

Cursor AI Code Editor Fixed Flaw Allowing Attackers to Run Commands via Prompt Injection

Aug 01, 2025 Vulnerability / DevOps
Cybersecurity researchers have disclosed a now-patched, high-severity security flaw in Cursor, a popular artificial intelligence (AI) code editor, that could result in remote code execution (RCE). The vulnerability, tracked as CVE-2025-54135 (CVSS score: 8.6), has been addressed in version 1.3 released on July 29, 2025. It has been codenamed CurXecute by Aim Labs, which previously disclosed EchoLeak . "Cursor runs with developer‑level privileges, and when paired with an MCP server that fetches untrusted external data, that data can redirect the agent's control flow and exploit those privileges," the Aim Labs Team said in a report shared with The Hacker News. "By feeding poisoned data to the agent via MCP, an attacker can gain full remote code execution under the user privileges, and achieve any number of things, including opportunities for ransomware, data theft, AI manipulation and hallucinations, etc." In other words, the remote code execution can trigg...
Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

Jun 23, 2025 Artificial Intelligence / AI Security
Google has revealed the various safety measures that are being incorporated into its generative artificial intelligence (AI) systems to mitigate emerging attack vectors like indirect prompt injections and improve the overall security posture for agentic AI systems. "Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources," Google's GenAI security team said . These external sources can take the form of email messages, documents, or even calendar invites that trick the AI systems into exfiltrating sensitive data or performing other malicious actions. The tech giant said it has implemented what it described as a "layered" defense strategy that is designed to increase the difficulty, expense, and complexity required to pull off an attack against its systems. These efforts span model hardening, introducing purpose-built mac...
Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

Sep 30, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed three now-patched security vulnerabilities impacting Google's Gemini artificial intelligence (AI) assistant that, if successfully exploited, could have exposed users to major privacy risks and data theft. "They made Gemini vulnerable to search-injection attacks on its Search Personalization Model; log-to-prompt injection attacks against Gemini Cloud Assist; and exfiltration of the user's saved information and location data via the Gemini Browsing Tool," Tenable security researcher Liv Matan said in a report shared with The Hacker News. The vulnerabilities have been collectively codenamed the Gemini Trifecta by the cybersecurity company. They reside in three distinct components of the Gemini suite - A prompt injection flaw in Gemini Cloud Assist that could allow attackers to exploit cloud-based services and compromise cloud resources by taking advantage of the fact that the tool is capable of summarizing logs pulled dir...
ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands

ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands

Oct 27, 2025 AI Security / Vulnerability
The newly released OpenAI ChatGPT Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit. "The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to the agent," NeuralTrust said in a report published Friday. "We've identified a prompt injection technique that disguises malicious instructions to look like a URL, but that Atlas treats as high-trust 'user intent' text, enabling harmful actions." Last week, OpenAI launched Atlas as a web browser with built-in ChatGPT capabilities to assist users with web page summarization, inline text editing, and agentic functions. In the attack outlined by the artificial intelligence (AI) security company, an attacker can take advantage of the browser's lack of strict boundaries between trusted user input and untru...
Microsoft Expands Sentinel Into Agentic Security Platform With Unified Data Lake

Microsoft Expands Sentinel Into Agentic Security Platform With Unified Data Lake

Sep 30, 2025 Artificial Intelligence / Threat Detection
Microsoft on Tuesday unveiled the expansion of its Sentinel Security Incidents and Event Management solution (SIEM) as a unified agentic platform with the general availability of the Sentinel data lake. In addition, the tech giant said it's also releasing a public preview of Sentinel Graph and Sentinel Model Context Protocol ( MCP ) server to turn telemetry into a security graph and allow AI agents access an organization's security context in a standardized manner. "With graph-based context, semantic access, and agentic orchestration, Sentinel gives defenders a single platform to ingest signals, correlate across domains, and empower AI agents built in Security Copilot, VS Code using GitHub Copilot, or other developer platforms," Vasu Jakkal, corporate vice president at Microsoft Security, said in a post shared with The Hacker News. Microsoft released Sentinel data lake in public preview earlier this July as a purpose-built, cloud-native tool to ingest, manage...
ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

Nov 19, 2025 AI Security / SaaS Security
Malicious actors can exploit default configurations in ServiceNow's Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks. The second-order prompt injection, according to AppOmni, makes use of Now Assist's agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive corporate data, modify records, and escalate privileges. "This discovery is alarming because it isn't a bug in the AI; it's expected behavior as defined by certain default configuration options," said Aaron Costello, chief of SaaS Security Research at AppOmni. "When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems. These settings are easy to overlook." The attack is made possible because of agent discovery and agent-to-a...
Cursor AI Code Editor Flaw Enables Silent Code Execution via Malicious Repositories

Cursor AI Code Editor Flaw Enables Silent Code Execution via Malicious Repositories

Sep 12, 2025 AI Security / Vulnerability
A security weakness has been disclosed in the artificial intelligence (AI)-powered code editor Cursor that could trigger code execution when a maliciously crafted repository is opened using the program. The issue stems from the fact that an out-of-the-box security setting is disabled by default, opening the door for attackers to run arbitrary code on users' computers with their privileges. "Cursor ships with Workspace Trust disabled by default, so VS Code-style tasks configured with runOptions.runOn: 'folderOpen' auto-execute the moment a developer browses a project," Oasis Security said in an analysis. "A malicious .vscode/tasks.json turns a casual 'open folder' into silent code execution in the user's context." Cursor is an AI-powered fork of Visual Studio Code, which supports a feature called Workspace Trust to allow developers to safely browse and edit code regardless of where it came from or who wrote it. With this option disab...
Wiz Uncovers Critical Access Bypass Flaw in AI-Powered Vibe Coding Platform Base44

Wiz Uncovers Critical Access Bypass Flaw in AI-Powered Vibe Coding Platform Base44

Jul 29, 2025 LLM Security / Vulnerability
Cybersecurity researchers have disclosed a now-patched critical security flaw in a popular vibe coding platform called Base44 that could allow unauthorized access to private applications built by its users. "The vulnerability we discovered was remarkably simple to exploit -- by providing only a non-secret 'app_id' value to undocumented registration and email verification endpoints, an attacker could have created a verified account for private applications on their platform," cloud security firm Wiz said in a report shared with The Hacker News. A net result of this issue is that it bypasses all authentication controls, including Single Sign-On (SSO) protections, granting full access to all the private applications and data contained within them. Following responsible disclosure on July 9, 2025, an official fix was rolled out by Wix, which owns Base44, within 24 hours. There is no evidence that the issue was ever maliciously exploited in the wild. While vibe codin...
GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts

GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts

May 23, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have discovered an indirect prompt injection flaw in GitLab's artificial intelligence (AI) assistant Duo that could have allowed attackers to steal source code and inject untrusted HTML into its responses, which could then be used to direct victims to malicious websites. GitLab Duo is an artificial intelligence (AI)-powered coding assistant that enables users to write, review, and edit code. Built using Anthropic's Claude models, the service was first launched in June 2023. But as Legit Security found , GitLab Duo Chat has been susceptible to an indirect prompt injection flaw that permits attackers to "steal source code from private projects, manipulate code suggestions shown to other users, and even exfiltrate confidential, undisclosed zero-day vulnerabilities." Prompt injection refers to a class of vulnerabilities common in AI systems that enable threat actors to weaponize large language models (LLMs) to manipulate responses to user...
c
Expert Insights Articles Videos
Cybersecurity Resources