#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News

Search results for "log-to-prompt injection" | Breaking Cybersecurity News | The Hacker News

Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

Sep 30, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed three now-patched security vulnerabilities impacting Google's Gemini artificial intelligence (AI) assistant that, if successfully exploited, could have exposed users to major privacy risks and data theft. "They made Gemini vulnerable to search-injection attacks on its Search Personalization Model; log-to-prompt injection attacks against Gemini Cloud Assist; and exfiltration of the user's saved information and location data via the Gemini Browsing Tool," Tenable security researcher Liv Matan said in a report shared with The Hacker News. The vulnerabilities have been collectively codenamed the Gemini Trifecta by the cybersecurity company. They reside in three distinct components of the Gemini suite - A prompt injection flaw in Gemini Cloud Assist that could allow attackers to exploit cloud-based services and compromise cloud resources by taking advantage of the fact that the tool is capable of summarizing logs pulled dir...
Researchers Demonstrate How MCP Prompt Injection Can Be Used for Both Attack and Defense

Researchers Demonstrate How MCP Prompt Injection Can Be Used for Both Attack and Defense

Apr 30, 2025 Artificial Intelligence / Email Security
As the field of artificial intelligence (AI) continues to evolve at a rapid pace, fresh research has found how techniques that render the Model Context Protocol ( MCP ) susceptible to prompt injection attacks could be used to develop security tooling or identify malicious tools, according to a new report from Tenable. MCP, launched by Anthropic in November 2024, is a framework designed to connect Large Language Models (LLMs) with external data sources and services, and make use of model-controlled tools to interact with those systems to enhance the accuracy, relevance, and utility of AI applications. It follows a client-server architecture, allowing hosts with MCP clients such as Claude Desktop or Cursor to communicate with different MCP servers, each of which exposes specific tools and capabilities. While the open standard offers a unified interface to access various data sources and even switch between LLM providers, they also come with a new set of risks, ranging from exc...
Google Adds Layered Defenses to Chrome to Block Indirect Prompt Injection Threats

Google Adds Layered Defenses to Chrome to Block Indirect Prompt Injection Threats

Dec 09, 2025 Browser Security / Artificial Intelligence
Google on Monday announced a set of new security features in Chrome, following the company's addition of agentic artificial intelligence (AI) capabilities to the web browser. To that end, the tech giant said it has implemented layered defenses to make it harder for bad actors to exploit indirect prompt injections that arise as a result of exposure to untrusted web content and inflict harm. Chief among the features is a User Alignment Critic, which uses a second model to independently evaluate the agent's actions in a manner that's isolated from malicious prompts. This approach complements Google's existing techniques, like spotlighting , which instruct the model to stick to user and system instructions rather than abiding by what's embedded in a web page. "The User Alignment Critic runs after the planning is complete to double-check each proposed action," Google said . "Its primary focus is task alignment: determining whether the proposed action s...
cyber security

Compliance-Ready Tabletop Exercises to Elevate Incident Response

websiteFiligranIncident Response / Exposure Validation
Standardize tabletop drills at scale. improving real-world team response and decision-making.
cyber security

The Cyber Event of the Year Returns: SANS 2026

websiteSANS InstituteCybersecurity Training / Certification
50+ courses, NetWars, AI Keynote, and a full week of action. Join SANS in Orlando.
⚡ Weekly Recap: Hot CVEs, npm Worm Returns, Firefox RCE, M365 Email Raid & More

⚡ Weekly Recap: Hot CVEs, npm Worm Returns, Firefox RCE, M365 Email Raid & More

Dec 01, 2025 Hacking News / Cybersecurity
Hackers aren't kicking down the door anymore. They just use the same tools we use every day — code packages, cloud accounts, email, chat, phones, and "trusted" partners — and turn them against us. One bad download can leak your keys. One weak vendor can expose many customers at once. One guest invite, one link on a phone, one bug in a common tool, and suddenly your mail, chats, repos, and servers are in play. Every story below is a reminder that your "safe" tools might be the real weak spot. ⚡ Threat of the Week Shai-Hulud Returns with More Aggression — The npm registry was targeted a second time by a self-replicating worm that went by the moniker "Sha1-Hulud: The Second Coming," affecting over 800 packages and 27,000 GitHub repositories. Like in the previous iteration, the main objective was to steal sensitive data like API keys, cloud credentials, and npm and GitHub authentication information, and facilitate deeper supply chain compromise in a worm-like fashion. Th...
⚡ Weekly Recap: IoT Exploits, Wallet Breaches, Rogue Extensions, AI Abuse & More

⚡ Weekly Recap: IoT Exploits, Wallet Breaches, Rogue Extensions, AI Abuse & More

Jan 05, 2026 Hacking News / Cybersecurity
The year opened without a reset. The same pressure carried over, and in some places it tightened. Systems people assume are boring or stable are showing up in the wrong places. Attacks moved quietly, reused familiar paths, and kept working longer than anyone wants to admit. This week's stories share one pattern. Nothing flashy. No single moment. Just steady abuse of trust — updates, extensions, logins, messages — the things people click without thinking. That's where damage starts now. This recap pulls those signals together. Not to overwhelm, but to show where attention slipped and why it matters early in the year. ⚡ Threat of the Week RondoDox Botnet Exploits React2Shell Flaw — A persistent nine-month-long campaign has targeted Internet of Things (IoT) devices and web applications to enroll them into a botnet known as RondoDox. As of December 2025, the activity has been observed leveraging the recently disclosed React2Shell (CVE-2025-55182, CVSS score: 10.0) flaw as an initial...
Critical Vulnerability in Anthropic's MCP Exposes Developer Machines to Remote Exploits

Critical Vulnerability in Anthropic's MCP Exposes Developer Machines to Remote Exploits

Jul 01, 2025 Vulnerability / AI Security
Cybersecurity researchers have discovered a critical security vulnerability in artificial intelligence (AI) company Anthropic's Model Context Protocol ( MCP ) Inspector project that could result in remote code execution (RCE) and allow an attacker to gain complete access to the hosts. The vulnerability, tracked as CVE-2025-49596 , carries a CVSS score of 9.4 out of a maximum of 10.0. "This is one of the first critical RCEs in Anthropic's MCP ecosystem, exposing a new class of browser-based attacks against AI developer tools," Oligo Security's Avi Lumelsky said in a report published last week. "With code execution on a developer's machine, attackers can steal data, install backdoors, and move laterally across networks - highlighting serious risks for AI teams, open-source projects, and enterprise adopters relying on MCP." MCP, introduced by Anthropic in November 2024, is an open protocol that standardizes the way large language model (LLM) appli...
Top-Rated Chinese AI App DeepSeek Limits Registrations Amid Cyberattacks

Top-Rated Chinese AI App DeepSeek Limits Registrations Amid Cyberattacks

Jan 28, 2025 Artificial Intelligence / Technology
DeepSeek, the Chinese AI startup that has captured much of the artificial intelligence (AI) buzz in recent days, said it's restricting registrations on the service, citing malicious attacks. "Due to large-scale malicious attacks on DeepSeek's services, we are temporarily limiting registrations to ensure continued service," the company said in an incident report page. "Existing users can log in as usual. Thanks for your understanding and support." Users attempting to sign up for an account are being displayed a similar message, stating "registration may be busy" and that they should wait and try again. "With the popularity of DeepSeek growing, it's not a big surprise that they are being targeted by malicious web traffic," Erich Kron, security awareness advocate at KnowBe4, said in a statement shared with The Hacker News. "These sorts of attacks could be a way to extort an organization by promising to stop attacks and rest...
ThreatsDay Bulletin: CarPlay Exploit, BYOVD Tactics, SQL C2 Attacks, iCloud Backdoor Demand & More

ThreatsDay Bulletin: CarPlay Exploit, BYOVD Tactics, SQL C2 Attacks, iCloud Backdoor Demand & More

Oct 02, 2025 Threat Intelligence / Cyber Attacks
From unpatched cars to hijacked clouds, this week's Threatsday headlines remind us of one thing — no corner of technology is safe. Attackers are scanning firewalls for critical flaws, bending vulnerable SQL servers into powerful command centers, and even finding ways to poison Chrome's settings to sneak in malicious extensions. On the defense side, AI is stepping up to block ransomware in real time, but privacy fights over data access and surveillance are heating up just as fast. It's a week that shows how wide the battlefield has become — from the apps on our phones to the cars we drive. Don't keep this knowledge to yourself: share this bulletin to protect others, and add The Hacker News to your Google News list so you never miss the updates that could make the difference. Claude Now Finds Your Bugs Anthropic Touts Safety Protections Built Into Claude Sonnet 4.6 Anthropic said it has rolled out a number of safety and security improve...
Offensive and Defensive AI: Let’s Chat(GPT) About It

Offensive and Defensive AI: Let's Chat(GPT) About It

Nov 07, 2023 Artificial Intelligence / Data Security
ChatGPT: Productivity tool, great for writing poems, and… a security risk?! In this article, we show how threat actors can exploit ChatGPT, but also how defenders can use it for leveling up their game. ChatGPT is the most swiftly growing consumer application to date. The extremely popular generative AI chatbot has the ability to generate human-like, coherent and contextually relevant responses. This makes it very valuable for applications like content creation, coding, education, customer support, and even personal assistance. However, ChatGPT also comes with security risks. ChatGPT can be used for data exfiltration, spreading misinformation, developing cyber attacks and writing phishing emails. On the flip side, it can help defenders who can use it for identifying vulnerabilities and learning about various defenses. In this article, we show numerous ways attackers can exploit ChatGPT and the OpenAI Playground. Just as importantly, we show ways that defenders can leverage ChatGPT t...
ThreatsDay Bulletin: $176M Crypto Fine, Hacking Formula 1, Chromium Vulns, AI Hijack & More

ThreatsDay Bulletin: $176M Crypto Fine, Hacking Formula 1, Chromium Vulns, AI Hijack & More

Oct 23, 2025 Cybersecurity / Hacking News
Criminals don't need to be clever all the time; they just follow the easiest path in: trick users, exploit stale components, or abuse trusted systems like OAuth and package registries. If your stack or habits make any of those easy, you're already a target. This week's ThreatsDay highlights show exactly how those weak points are being exploited — from overlooked misconfigurations to sophisticated new attack chains that turn ordinary tools into powerful entry points. Lumma Stealer Stumbles After Doxxing Drama Decline in Lumma Stealer Activity After Doxxing Campaign The activity of the Lumma Stealer (aka Water Kurita) information stealer has witnessed a "sudden drop" since last months after the identities of five alleged core group members were exposed as part of what's said to be an aggressive underground exposure campaign dubbed Lumma Rats since late August 2025. The targeted individuals are affiliated with the malware's development and administ...
New TokenBreak Attack Bypasses AI Moderation with Single-Character Text Changes

New TokenBreak Attack Bypasses AI Moderation with Single-Character Text Changes

Jun 12, 2025 AI Jailbreaking / Prompt Injection
Cybersecurity researchers have discovered a novel attack technique called TokenBreak that can be used to bypass a large language model's (LLM) safety and content moderation guardrails with just a single character change. "The TokenBreak attack targets a text classification model's tokenization strategy to induce false negatives, leaving end targets vulnerable to attacks that the implemented protection model was put in place to prevent," Kieran Evans, Kasimir Schulz, and Kenneth Yeung said in a report shared with The Hacker News. Tokenization is a fundamental step that LLMs use to break down raw text into their atomic units – i.e., tokens – which are common sequences of characters found in a set of text. To that end, the text input is converted into their numerical representation and fed to the model.  LLMs work by understanding the statistical relationships between these tokens, and produce the next token in a sequence of tokens. The output tokens are detokeni...
Securing AI to Benefit from AI

Securing AI to Benefit from AI

Oct 21, 2025 Artificial Intelligence / Security Operations
Artificial intelligence (AI) holds tremendous promise for improving cyber defense and making the lives of security practitioners easier. It can help teams cut through alert fatigue, spot patterns faster, and bring a level of scale that human analysts alone can't match. But realizing that potential depends on securing the systems that make it possible. Every organization experimenting with AI in security operations is, knowingly or not, expanding its attack surface. Without clear governance, strong identity controls, and visibility into how AI makes its decisions, even well-intentioned deployments can create risk faster than they reduce it. To truly benefit from AI, defenders need to approach securing it with the same rigor they apply to any other critical system. That means establishing trust in the data it learns from, accountability for the actions it takes, and oversight for the outcomes it produces. When secured correctly, AI can amplify human capability instead of replacing it t...
⚡ Weekly Recap: Fortinet Exploited, China's AI Hacks, PhaaS Empire Falls & More

⚡ Weekly Recap: Fortinet Exploited, China's AI Hacks, PhaaS Empire Falls & More

Nov 17, 2025 Cybersecurity / Hacking News
This week showed just how fast things can go wrong when no one's watching. Some attacks were silent and sneaky. Others used tools we trust every day — like AI, VPNs, or app stores — to cause damage without setting off alarms. It's not just about hacking anymore. Criminals are building systems to make money, spy, or spread malware like it's a business. And in some cases, they're using the same apps and services that businesses rely on — flipping the script without anyone noticing at first. The scary part? Some threats weren't even bugs — just clever use of features we all take for granted. And by the time people figured it out, the damage was done. Let's look at what really happened, why it matters, and what we should all be thinking about now. ⚡ Threat of the Week Silently Patched Fortinet Flaw Comes Under Attack — A vulnerability that was patched by Fortinet in FortiWeb Web Application Firewall (WAF) has been exploited in the wild since early October 2025 by threat actors to c...
⚡ Weekly Recap: VPN Exploits, Oracle's Silent Breach, ClickFix Surge and More

⚡ Weekly Recap: VPN Exploits, Oracle's Silent Breach, ClickFix Surge and More

Apr 07, 2025 Threat Intelligence / Cybersecurity
Today, every unpatched system, leaked password, and overlooked plugin is a doorway for attackers. Supply chains stretch deep into the code we trust, and malware hides not just in shady apps — but in job offers, hardware, and cloud services we rely on every day. Hackers don't need sophisticated exploits anymore. Sometimes, your credentials and a little social engineering are enough. This week, we trace how simple oversights turn into major breaches — and the silent threats most companies still underestimate. Let's dive in. ⚡ Threat of the Week UNC5221 Exploits New Ivanti Flaw to Drop Malware — The China-nexus cyber espionage group tracked as UNC5221 exploited a now-patched flaw in Ivanti Connect Secure, CVE-2025-22457 (CVSS score: 9.0), to deliver an in-memory dropper called TRAILBLAZE, a passive backdoor codenamed BRUSHFIRE, and the SPAWN malware suite. The vulnerability was originally patched by Ivanti on February 11, 2025, indicating that the threat actors studied the patch a...
Expert Insights Articles Videos
Cybersecurity Resources