#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News

Prompt Injection | Breaking Cybersecurity News | The Hacker News

Category — Prompt Injection
Three Flaws in Anthropic MCP Git Server Enable File Access and Code Execution

Three Flaws in Anthropic MCP Git Server Enable File Access and Code Execution

Jan 20, 2026 Vulnerability / Artificial Intelligence
A set of three security vulnerabilities has been disclosed in mcp-server-git , the official Git Model Context Protocol ( MCP ) server maintained by Anthropic, that could be exploited to read or delete arbitrary files and execute code under certain conditions. "These flaws can be exploited through prompt injection, meaning an attacker who can influence what an AI assistant reads (a malicious README, a poisoned issue description, a compromised webpage) can weaponize these vulnerabilities without any direct access to the victim's system," Cyata researcher Yarden Porat said in a report shared with The Hacker News. Mcp-server-git is a Python package and an MCP server that provides a set of built-in tools to read, search, and manipulate Git repositories programmatically via large language models (LLMs). The security issues, which have been addressed in versions 2025.9.25 and 2025.12.18 following responsible disclosure in June 2025, are listed below - CVE-2025-68143 (CV...
Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites

Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites

Jan 19, 2026 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed details of a security flaw that leverages indirect prompt injection targeting Google Gemini as a way to bypass authorization guardrails and use Google Calendar as a data extraction mechanism. The vulnerability, Miggo Security's Head of Research, Liad Eliyahu, said, made it possible to circumvent Google Calendar's privacy controls by hiding a dormant malicious payload within a standard calendar invite. "This bypass enabled unauthorized access to private meeting data and the creation of deceptive calendar events without any direct user interaction," Eliyahu said in a report shared with The Hacker News. The starting point of the attack chain is a new calendar event that's crafted by the threat actor and sent to a target. The invite's description embeds a natural language prompt that's designed to do their bidding, resulting in a prompt injection. The attack gets activated when a user asks Gemini a completely inno...
Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Jan 15, 2026 Prompt Injection / Enterprise Security
Cybersecurity researchers have disclosed details of a new attack method dubbed Reprompt that could allow bad actors to exfiltrate sensitive data from artificial intelligence (AI) chatbots like Microsoft Copilot in a single click, while bypassing enterprise security controls entirely. "Only a single click on a legitimate Microsoft link is required to compromise victims," Varonis security researcher Dolev Taler said in a report published Wednesday. "No plugins, no user interaction with Copilot." "The attacker maintains control even when the Copilot chat is closed, allowing the victim's session to be silently exfiltrated with no interaction beyond that first click." Following responsible disclosure, Microsoft has addressed the security issue. The attack does not affect enterprise customers using Microsoft 365 Copilot. At a high level, Reprompt employs three techniques to achieve a data‑exfiltration chain - Using the "q" URL parameter in...
cyber security

Secured Images 101

websiteWizDevOps / AppSec
Secure your container ecosystem with this easy-to-read digital poster that breaks down everything you need to know about container image security. Perfect for engineering, platform, DevOps, AppSec, and cloud security teams.
cyber security

When Zoom Phishes You: Unmasking a Novel TOAD Attack Hidden in Legitimate Infrastructure

websiteProphet SecurityArtificial Intelligence / SOC
Prophet AI uncovers a Telephone-Oriented Attack Delivery (TOAD) campaign weaponizing Zoom's own authentication infrastructure.
Model Security Is the Wrong Frame – The Real Risk Is Workflow Security

Model Security Is the Wrong Frame – The Real Risk Is Workflow Security

Jan 15, 2026 Data Security / Artificial Intelligence
As AI copilots and assistants become embedded in daily work, security teams are still focused on protecting the models themselves. But recent incidents suggest the bigger risk lies elsewhere: in the workflows that surround those models. Two Chrome extensions posing as AI helpers were recently caught stealing ChatGPT and DeepSeek chat data from over 900,000 users. Separately, researchers demonstrated how prompt injections hidden in code repositories could trick IBM's AI coding assistant into executing malware on a developer's machine. Neither attack broke the AI algorithms themselves.  They exploited the context in which the AI operates. That's the pattern worth paying attention to. When AI systems are embedded in real business processes, summarizing documents, drafting emails, and pulling data from internal tools, securing the model alone isn't enough. The workflow itself becomes the target. AI Models Are Becoming Workflow Engines To understand why this matters,...
Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection

Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection

Dec 26, 2025 AI Security / DevSecOps
A critical security flaw has been disclosed in LangChain Core that could be exploited by an attacker to steal sensitive secrets and even influence large language model (LLM) responses through prompt injection. LangChain Core (i.e., langchain-core ) is a core Python package that's part of the LangChain ecosystem, providing the core interfaces and model-agnostic abstractions for building applications powered by LLMs. The vulnerability, tracked as CVE-2025-68664, carries a CVSS score of 9.3 out of 10.0. Security researcher Yarden Porat has been credited with reporting the vulnerability on December 4, 2025. It has been codenamed LangGrinch . "A serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions," the project maintainers said in an advisory. "The functions do not escape dictionaries with 'lc' keys when serializing free-form dictionaries." "The 'lc' key is used internally by LangChain to mark ser...
Researcher Uncovers 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

Researcher Uncovers 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

Dec 06, 2025 AI Security / Vulnerability
Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs) that combine prompt injection primitives with legitimate features to achieve data exfiltration and remote code execution. The security shortcomings have been collectively named IDEsaster by security researcher Ari Marzouk (MaccariTA), who discovered them over the last six months. They affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline, among others. Of these, 24 have been assigned CVE identifiers. "I think the fact that multiple universal attack chains affected each and every AI IDE tested is the most surprising finding of this research," Marzouk told The Hacker News. "All AI IDEs (and coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model. They treat their features as inherently safe because they've...
Zero-Click Agentic Browser Attack Can Delete Entire Google Drive Using Crafted Emails

Zero-Click Agentic Browser Attack Can Delete Entire Google Drive Using Crafted Emails

Dec 05, 2025 Email Security / Threat Research
A new agentic browser attack targeting Perplexity's Comet browser that's capable of turning a seemingly innocuous email into a destructive action that wipes a user's entire Google Drive contents, findings from Straiker STAR Labs show. The zero-click Google Drive Wiper technique hinges on connecting the browser to services like Gmail and Google Drive to automate routine tasks by granting them access to read emails, as well as browse files and folders, and perform actions like moving, renaming, or deleting content. For instance, a prompt issued by a benign user might look like this: "Please check my email and complete all my recent organization tasks." This will cause the browser agent to search the inbox for relevant messages and perform the necessary actions. "This behavior reflects excessive agency in LLM-powered assistants where the LLM performs actions that go far beyond the user's explicit request," security researcher Amanda Rousseau said in ...
Webinar: The "Agentic" Trojan Horse: Why the New AI Browsers War is a Nightmare for Security Teams

Webinar: The "Agentic" Trojan Horse: Why the New AI Browsers War is a Nightmare for Security Teams

Dec 01, 2025 Artificial Intelligence / Enterprise Security
The AI browser wars are coming to a desktop near you, and you need to start worrying about their security challenges. For the last two decades, whether you used Chrome, Edge, or Firefox, the fundamental paradigm remained the same: a passive window through which a human user viewed and interacted with the internet. That era is over. We are currently witnessing a shift that renders the old OS-centric browser debates irrelevant. The new battleground is agentic AI browsers, and for security professionals, it represents a terrifying inversion of the traditional threat landscape. A new webinar dives into the issue of AI browsers , their risks, and how security teams can deal with them. Even today, the browser is the main interface for AI consumption; it is where most users access AI assistants such as ChatGPT or Gemini, use AI-enabled SaaS applications, and engage AI agents. AI providers were the first to recognize this, which is why we've seen a spate of new 'agentic' AI browsers bein...
ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

Nov 19, 2025 AI Security / SaaS Security
Malicious actors can exploit default configurations in ServiceNow's Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks. The second-order prompt injection, according to AppOmni, makes use of Now Assist's agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive corporate data, modify records, and escalate privileges. "This discovery is alarming because it isn't a bug in the AI; it's expected behavior as defined by certain default configuration options," said Aaron Costello, chief of SaaS Security Research at AppOmni. "When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems. These settings are easy to overlook." The attack is made possible because of agent discovery and agent-to-a...
Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Nov 05, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI's ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users' memories and chat histories without their knowledge. The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI's GPT-4o and GPT-5 models. OpenAI has since addressed some of them .  These issues expose the AI system to indirect prompt injection attacks , allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News. The identified shortcomings are listed below - Indirect prompt injection vulnerability via trusted sites in Browsing Context, which involves asking ChatGPT to summarize the contents of web pages with malicious instructions added...
ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands

ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands

Oct 27, 2025 AI Security / Vulnerability
The newly released OpenAI ChatGPT Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit. "The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to the agent," NeuralTrust said in a report published Friday. "We've identified a prompt injection technique that disguises malicious instructions to look like a URL, but that Atlas treats as high-trust 'user intent' text, enabling harmful actions." Last week, OpenAI launched Atlas as a web browser with built-in ChatGPT capabilities to assist users with web page summarization, inline text editing, and agentic functions. In the attack outlined by the artificial intelligence (AI) security company, an attacker can take advantage of the browser's lack of strict boundaries between trusted user input and untru...
CometJacking: One Click Can Turn Perplexity’s Comet AI Browser Into a Data Thief

CometJacking: One Click Can Turn Perplexity's Comet AI Browser Into a Data Thief

Oct 04, 2025 Agentic AI / Enterprise Security
Cybersecurity researchers have disclosed details of a new attack called CometJacking targeting Perplexity's agentic AI browser Comet by embedding malicious prompts within a seemingly innocuous link to siphon sensitive data, including from connected services, like email and calendar. The sneaky prompt injection attack plays out in the form of a malicious link that, when clicked, triggers the unexpected behavior unbeknownst to the victims. "CometJacking shows how a single, weaponized URL can quietly flip an AI browser from a trusted co-pilot to an insider threat," Michelle Levy, Head of Security Research at LayerX, said in a statement shared with The Hacker News. "This isn't just about stealing data; it's about hijacking the agent that already has the keys. Our research proves that trivial obfuscation can bypass data exfiltration checks and pull email, calendar, and connector data off-box in one click. AI-native browsers need security-by-design for agent...
ThreatsDay Bulletin: CarPlay Exploit, BYOVD Tactics, SQL C2 Attacks, iCloud Backdoor Demand & More

ThreatsDay Bulletin: CarPlay Exploit, BYOVD Tactics, SQL C2 Attacks, iCloud Backdoor Demand & More

Oct 02, 2025 Threat Intelligence / Cyber Attacks
From unpatched cars to hijacked clouds, this week's Threatsday headlines remind us of one thing — no corner of technology is safe. Attackers are scanning firewalls for critical flaws, bending vulnerable SQL servers into powerful command centers, and even finding ways to poison Chrome's settings to sneak in malicious extensions. On the defense side, AI is stepping up to block ransomware in real time, but privacy fights over data access and surveillance are heating up just as fast. It's a week that shows how wide the battlefield has become — from the apps on our phones to the cars we drive. Don't keep this knowledge to yourself: share this bulletin to protect others, and add The Hacker News to your Google News list so you never miss the updates that could make the difference. Claude Now Finds Your Bugs Anthropic Touts Safety Protections Built Into Claude Sonnet 4.6 Anthropic said it has rolled out a number of safety and security improve...
Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

Sep 30, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed three now-patched security vulnerabilities impacting Google's Gemini artificial intelligence (AI) assistant that, if successfully exploited, could have exposed users to major privacy risks and data theft. "They made Gemini vulnerable to search-injection attacks on its Search Personalization Model; log-to-prompt injection attacks against Gemini Cloud Assist; and exfiltration of the user's saved information and location data via the Gemini Browsing Tool," Tenable security researcher Liv Matan said in a report shared with The Hacker News. The vulnerabilities have been collectively codenamed the Gemini Trifecta by the cybersecurity company. They reside in three distinct components of the Gemini suite - A prompt injection flaw in Gemini Cloud Assist that could allow attackers to exploit cloud-based services and compromise cloud resources by taking advantage of the fact that the tool is capable of summarizing logs pulled dir...
Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection

Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection

Sep 25, 2025 Vulnerability / AI Security
Cybersecurity researchers have disclosed a critical flaw impacting Salesforce Agentforce , a platform for building artificial intelligence (AI) agents, that could allow attackers to potentially exfiltrate sensitive data from its customer relationship management (CRM) tool by means of an indirect prompt injection. The vulnerability has been codenamed ForcedLeak (CVSS score: 9.4) by Noma Security, which discovered and reported the problem on July 28, 2025. It impacts any organization using Salesforce Agentforce with the Web-to-Lead functionality enabled. "This vulnerability demonstrates how AI agents present a fundamentally different and expanded attack surface compared to traditional prompt-response systems," Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News. One of the most severe threats facing generative artificial intelligence (GenAI) systems today is indirect prompt injection , which occurs when malicious instructions are ins...
ShadowLeak Zero-Click Flaw Leaks Gmail Data via OpenAI ChatGPT Deep Research Agent

ShadowLeak Zero-Click Flaw Leaks Gmail Data via OpenAI ChatGPT Deep Research Agent

Sep 20, 2025 Artificial Intelligence / Cloud Security
Cybersecurity researchers have disclosed a zero-click flaw in OpenAI ChatGPT's Deep Research agent that could allow an attacker to leak sensitive Gmail inbox data with a single crafted email without any user action. The new class of attack has been codenamed ShadowLeak by Radware. Following responsible disclosure on June 18, 2025, the issue was addressed by OpenAI in early August. "The attack utilizes an indirect prompt injection that can be hidden in email HTML (tiny fonts, white-on-white text, layout tricks) so the user never notices the commands, but the agent still reads and obeys them," security researchers Zvika Babo, Gabi Nakibly, and Maor Uziel said . "Unlike prior research that relied on client-side image rendering to trigger the leak, this attack leaks data directly from OpenAI's cloud infrastructure, making it invisible to local or enterprise defenses." Launched by OpenAI in February 2025, Deep Research is an agentic capability built into ...
Cursor AI Code Editor Flaw Enables Silent Code Execution via Malicious Repositories

Cursor AI Code Editor Flaw Enables Silent Code Execution via Malicious Repositories

Sep 12, 2025 AI Security / Vulnerability
A security weakness has been disclosed in the artificial intelligence (AI)-powered code editor Cursor that could trigger code execution when a maliciously crafted repository is opened using the program. The issue stems from the fact that an out-of-the-box security setting is disabled by default, opening the door for attackers to run arbitrary code on users' computers with their privileges. "Cursor ships with Workspace Trust disabled by default, so VS Code-style tasks configured with runOptions.runOn: 'folderOpen' auto-execute the moment a developer browses a project," Oasis Security said in an analysis. "A malicious .vscode/tasks.json turns a casual 'open folder' into silent code execution in the user's context." Cursor is an AI-powered fork of Visual Studio Code, which supports a feature called Workspace Trust to allow developers to safely browse and edit code regardless of where it came from or who wrote it. With this option disab...
Someone Created the First AI-Powered Ransomware Using OpenAI's gpt-oss:20b Model

Someone Created the First AI-Powered Ransomware Using OpenAI's gpt-oss:20b Model

Aug 27, 2025 Ransomware / Artificial Intelligence
Cybersecurity company ESET has disclosed that it discovered an artificial intelligence (AI)-powered ransomware variant codenamed PromptLock . Written in Golang, the newly identified strain uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts in real-time. The open-weight language model was released by OpenAI earlier this month. "PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption," ESET said . "These Lua scripts are cross-platform compatible, functioning on Windows, Linux, and macOS." The ransomware code also embeds instructions to craft a custom note based on the "files affected," and the infected machine is a personal computer, company server, or a power distribution controller. It's currently not known who is behind the malware, but ESET told The Hacker News that PromptLoc arti...
Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts

Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts

Aug 20, 2025 Artificial Intelligence / Browser Security
Cybersecurity researchers have demonstrated a new prompt injection technique called PromptFix that tricks a generative artificial intelligence (GenAI) model into carrying out intended actions by embedding the malicious instruction inside a fake CAPTCHA check on a web page. Described by Guardio Labs an "AI-era take on the ClickFix scam," the attack technique demonstrates how AI-driven browsers, such as Perplexity's Comet , that promise to automate mundane tasks like shopping for items online or handling emails on behalf of users can be deceived into interacting with phishing landing pages or fraudulent lookalike storefronts without the human user's knowledge or intervention. "With PromptFix, the approach is different: We don't try to glitch the model into obedience," Guardio researchers Nati Tal and Shaked Chen said . "Instead, we mislead it using techniques borrowed from the human social engineering playbook – appealing directly to its core des...
Expert Insights Articles Videos
Cybersecurity Resources