-->
#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Security Service Edge

artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration

Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration

Feb 25, 2026 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed multiple security vulnerabilities in Anthropic's Claude Code, an artificial intelligence (AI)-powered coding assistant, that could result in remote code execution and theft of API credentials. "The vulnerabilities exploit various configuration mechanisms, including Hooks, Model Context Protocol (MCP) servers, and environment variables – executing arbitrary shell commands and exfiltrating Anthropic API keys when users clone and open untrusted repositories," Check Point researchers Aviv Donenfeld and Oded Vanunu said in a report shared with The Hacker News. The identified shortcomings fall under three broad categories - No CVE (CVSS score: 8.7) - A code injection vulnerability stemming from a user consent bypass when starting Claude Code in a new directory that could result in arbitrary code execution without additional confirmation via untrusted project hooks defined in .claude/settings.json. (Fixed in version 1.0.87 in Sep...
Top 5 Ways Broken Triage Increases Business Risk Instead of Reducing It

Top 5 Ways Broken Triage Increases Business Risk Instead of Reducing It

Feb 25, 2026 Malware Analysis / Threat Detection
Triage is supposed to make things simpler. In a lot of teams, it does the opposite. When you can’t reach a confident verdict early, alerts turn into repeat checks, back-and-forth, and “just escalate it” calls. That cost doesn’t stay inside the SOC; it shows up as missed SLAs, higher cost per case, and more room for real threats to slip through. So where does triage go wrong? Here are five triage issues that turn investigations into expensive guesswork, and how top teams are changing the outcome with execution evidence. 1. Decisions Made Without Real Evidence Business risk: The hardest triage failure to notice is when decisions get made before proof exists. If responders rely on partial signals (labels, hash matches, reputation), they end up approving or escalating cases without seeing what the file or link actually does.  That uncertainty fuels false positives, missed real threats, slower containment, and higher cost per case, while giving attackers more time before anyone h...
RoguePilot Flaw in GitHub Codespaces Enabled Copilot to Leak GITHUB_TOKEN

RoguePilot Flaw in GitHub Codespaces Enabled Copilot to Leak GITHUB_TOKEN

Feb 24, 2026 Artificial Intelligence / Cloud Security
A vulnerability in GitHub Codespaces could have been exploited by bad actors to seize control of repositories by injecting malicious Copilot instructions in a GitHub issue. The artificial intelligence (AI)-driven vulnerability has been codenamed RoguePilot by Orca Security. It has since been patched by Microsoft following responsible disclosure. "Attackers can craft hidden instructions inside a GitHub issue that are automatically processed by GitHub Copilot, giving them silent control of the in-codespaces AI agent," security researcher Roi Nisimi said in a report. The vulnerability has been described as a case of passive or indirect prompt injection where a malicious instruction is embedded within data or content that's processed by the large language model (LLM), causing it to produce unintended outputs or carry out arbitrary actions. The cloud security company also called it a type of AI-mediated supply chain attack that induces the LLM to automatically execute ...
cyber security

Shadow AI Is Everywhere. Here’s How You Can Find and Secure It

websiteNudge SecuritySaaS Security / Shadow AI
Learn what actually works for uncovering shadow AI apps, integrations, and data exposure—and where some methods fall short.
cyber security

OpenClaw: RCE, Leaked Tokens, and 21K Exposed Instances in 2 Weeks

websiteReco AIAttack Surface / AI Agents
The viral AI agent connects to Slack, Gmail, and Drive—and most security teams have zero visibility into it.
Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model

Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model

Feb 24, 2026 Artificial Intelligence / Anthropic
Anthropic on Monday said it identified "industrial-scale campaigns" mounted by three artificial intelligence (AI) companies, DeepSeek, Moonshot AI, and MiniMax, to illegally extract Claude's capabilities to improve their own models. The distillation attacks generated over 16 million exchanges with its large language model (LLM) through about 24,000 fraudulent accounts in violation of its terms of service and regional access restrictions. All three companies are based in China, where the use of its services is prohibited  use of its services is prohibited due to "legal, regulatory, and security risks." Distillation refers to a technique where a less capable model is trained on the outputs generated by a stronger AI system. While distillation is a legitimate way for companies to produce smaller, cheaper versions of their own frontier models, it's illegal for competitors to leverage it to acquire such capabilities from other AI companies at a fraction of t...
Wormable XMRig Campaign Uses BYOVD Exploit and Time-Based Logic Bomb

Wormable XMRig Campaign Uses BYOVD Exploit and Time-Based Logic Bomb

Feb 23, 2026 Vulnerability / Threat Intelligence
Cybersecurity researchers have disclosed details of a new cryptojacking campaign that uses pirated software bundles as lures to deploy a bespoke XMRig miner program on compromised hosts. "Analysis of the recovered dropper, persistence triggers, and mining payload reveals a sophisticated, multi-stage infection prioritizing maximum cryptocurrency mining hashrate, often destabilizing the victim system," Trellix researcher Aswath A said in a technical report published last week. "Furthermore, the malware exhibits worm-like capabilities, spreading across external storage devices, enabling lateral movement even in air-gapped environments." The entry point of the attack is the use of social engineering decoys, advertising free premium software in the form of pirated software bundles, such as installers for office productivity suites, to trick unsuspecting users into downloading malware-laced executables. The binary acts as the central nervous system of the infectio...
⚡ Weekly Recap: Double-Tap Skimmers, PromptSpy AI, 30Tbps DDoS, Docker Malware & More

⚡ Weekly Recap: Double-Tap Skimmers, PromptSpy AI, 30Tbps DDoS, Docker Malware & More

Feb 23, 2026 Cybersecurity / Hacking
Security news rarely moves in a straight line. This week, it feels more like a series of sharp turns, some happening quietly in the background, others playing out in public view. The details are different, but the pressure points are familiar. Across devices, cloud services, research labs, and even everyday apps, the line between normal behavior and hidden risk keeps getting thinner. Tools meant to protect, update, or improve systems are also becoming pathways when something goes wrong. This recap gathers the signals in one place. Quick reads, real impact, and developments that deserve a closer look before they become next week’s bigger problem. ⚡ Threat of the Week Dell RecoverPoint for VMs Zero-Day Exploited — A maximum severity security vulnerability in Dell RecoverPoint for Virtual Machines has been exploited as a zero-day by a suspected China-nexus threat cluster dubbed UNC6201 since mid-2024. The activity involves the exploitation of CVE-2026-22769 (CVSS score: 10.0), a ca...
How Exposed Endpoints Increase Risk Across LLM Infrastructure

How Exposed Endpoints Increase Risk Across LLM Infrastructure

Feb 23, 2026 Artificial Intelligence / Zero Trust
As more organizations run their own Large Language Models (LLMs), they are also deploying more internal services and Application Programming Interfaces (APIs) to support those models. Modern security risks are being introduced less from the models themselves and more from the infrastructure that serves, connects and automates the model. Each new LLM endpoint expands the attack surface, often in ways that are easy to overlook during rapid deployment, especially when endpoints are trusted implicitly. When LLM endpoints accumulate excessive permissions and long-lived credentials are exposed, they can provide far more access than intended. Organizations must prioritize endpoint privilege management because exposed endpoints have become an increasingly common attack vector for cybercriminals to access the systems, identities and secrets that power LLM workloads. What is an endpoint in modern LLM infrastructure? In modern LLM infrastructure, an endpoint is any interface where something —...
MuddyWater Targets MENA Organizations with GhostFetch, CHAR, and HTTP_VIP

MuddyWater Targets MENA Organizations with GhostFetch, CHAR, and HTTP_VIP

Feb 23, 2026 Threat Intelligence / Artificial Intelligence
The Iranian hacking group known as MuddyWater (aka Earth Vetala, Mango Sandstorm, and MUDDYCOAST) has targeted several organizations and individuals mainly located across the Middle East and North Africa (MENA) region as part of a new campaign codenamed Operation Olalampo . The activity, first observed on January 26, 2026, has resulted in the deployment of new malware families that share overlapping samples previously identified as used by the threat actor, according to a report published by Group-IB. These include downloaders like GhostFetch and HTTP_VIP, along with a Rust backdoor called CHAR and an advanced implant codenamed GhostBackDoor that's dropped by GhostFetch. "These attacks follow similar patterns and align with the killchains previously observed in MuddyWater attacks; starting with a phishing email with a Microsoft Office document attached to it that contains malicious macro code that decodes the embedded payload and drops it on the system and executes it, pro...
AI-Assisted Threat Actor Compromises 600+ FortiGate Devices in 55 Countries

AI-Assisted Threat Actor Compromises 600+ FortiGate Devices in 55 Countries

Feb 21, 2026 Threat Intelligence / Artificial Intelligence
A Russian-speaking, financially motivated threat actor has been observed taking advantage of commercial generative artificial intelligence (AI) services to compromise over 600 FortiGate devices located in 55 countries. That's according to new findings from Amazon Threat Intelligence, which said it observed the activity between January 11 and February 18, 2026. "No exploitation of FortiGate vulnerabilities was observed—instead, this campaign succeeded by exploiting exposed management ports and weak credentials with single-factor authentication, fundamental security gaps that AI helped an unsophisticated actor exploit at scale," CJ Moses, Chief Information Security Officer (CISO) of Amazon Integrated Security, said in a report. The tech giant described the threat actor as having limited technical capabilities, a constraint they overcame by relying on multiple commercial generative AI tools to implement various phases of the attack cycle, such as tool development, attac...
Anthropic Launches Claude Code Security for AI-Powered Vulnerability Scanning

Anthropic Launches Claude Code Security for AI-Powered Vulnerability Scanning

Feb 21, 2026 Artificial Intelligence / DevSecOps
Artificial intelligence (AI) company Anthropic has begun to roll out a new security feature for Claude Code that can scan a user's software codebase for vulnerabilities and suggest patches. The capability, called Claude Code Security , is currently available in a limited research preview to Enterprise and Team customers. "It scans codebases for security vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix security issues that traditional methods often miss," the company said in a Friday announcement. Anthropic said the feature aims to leverage AI as a tool to help find and resolve vulnerabilities to counter attacks where threat actors weaponize the same tools to automate vulnerability discovery.  With AI agents increasingly capable of detecting security vulnerabilities that have otherwise escaped human notice, the tech upstart said the same capabilities could be used by adversaries to uncover exploitable weakness...
EC-Council Expands AI Certification Portfolio to Strengthen U.S. AI Workforce Readiness and Security

EC-Council Expands AI Certification Portfolio to Strengthen U.S. AI Workforce Readiness and Security

Feb 21, 2026 Artificial Intelligence / Training
With $5.5 trillion in global AI risk exposure and 700,000 U.S. workers needing reskilling, four new AI certifications and Certified CISO v4 help close the gap between AI adoption and workforce readiness . EC-Council , creator of the world-renowned Certified Ethical Hacker (CEH) credential and a global leader in applied cybersecurity education, today launched its Enterprise AI Credential Suite, with four new role-based AI certifications debuting alongside Certified CISO v4 , an overhauled executive cyber leadership program. The dual launch is the largest single expansion of EC-Council’s portfolio in its 25-year history. It addresses a structural gap that no single tool, platform, or policy can solve alone: AI is scaling faster than the workforce trained to run, secure, and govern it. The launch aligns with U.S. priorities on workforce development and applied AI education outlined in Executive Order 14179, the July 2025 AI Action Plan’s workforce development pillar, and Executive Or...
Cline CLI 2.3.0 Supply Chain Attack Installed OpenClaw on Developer Systems

Cline CLI 2.3.0 Supply Chain Attack Installed OpenClaw on Developer Systems

Feb 20, 2026 Software Security / Artificial Intelligence
In yet another software supply chain attack, the open-source, artificial intelligence (AI)-powered coding assistant Cline CLI was updated to stealthily install OpenClaw , a self-hosted autonomous AI agent that has become exceedingly popular in the past few months. "On February 17, 2026, at 3:26 AM PT, an unauthorized party used a compromised npm publish token to publish an update to Cline CLI on the NPM registry: cline@2.3.0," the maintainers of the Cline package said in an advisory. "The published package contains a modified package.json with an added postinstall script: 'postinstall": "npm install -g openclaw@latest.'" As a result, this causes OpenClaw to be installed on the developer's machine when Cline version 2.3.0 is installed. Cline said no additional modifications were introduced to the package and there was no malicious behavior observed. However, it noted that the installation of OpenClaw was not authorized or intended. The s...
PromptSpy Android Malware Abuses Gemini AI to Automate Recent-Apps Persistence

PromptSpy Android Malware Abuses Gemini AI to Automate Recent-Apps Persistence

Feb 19, 2026 Malware / Mobile Security
Cybersecurity researchers have discovered what they say is the first Android malware that abuses Gemini, Google's generative artificial intelligence (AI) chatbot, as part of its execution flow and achieves persistence. The malware has been codenamed PromptSpy by ESET. The malware is equipped to capture lockscreen data, block uninstallation efforts, gather device information, take screenshots, and record screen activity as video. "Gemini is used to analyze the current screen and provide PromptSpy with step-by-step instructions on how to ensure the malicious app remains pinned in the recent apps list, thus preventing it from being easily swiped away or killed by the system," ESET researcher Lukáš Štefanko said in a report published today. "Since Android malware often relies on UI navigation, leveraging generative AI enables the threat actors to adapt to more or less any device, layout, or OS version, which can greatly expand the pool of potential victims." ...
ThreatsDay Bulletin: OpenSSL RCE, Foxit 0-Days, Copilot Leak, AI Password Flaws & 20+ Stories

ThreatsDay Bulletin: OpenSSL RCE, Foxit 0-Days, Copilot Leak, AI Password Flaws & 20+ Stories

Feb 19, 2026 Cybersecurity / Hacking News
The cyber threat space doesn’t pause, and this week makes that clear. New risks, new tactics, and new security gaps are showing up across platforms, tools, and industries — often all at the same time. Some developments are headline-level. Others sit in the background but carry long-term impact. Together, they shape how defenders need to think about exposure, response, and preparedness right now. This edition of ThreatsDay Bulletin brings those signals into one place. Scan through the roundup for quick, clear updates on what’s unfolding across the cybersecurity and hacking landscape. Privacy model hardening Google Showcases New Privacy and Security Features in Android 17 Google announced the first beta version of Android 17 , with two privacy and security enhancements: the deprecation of Cleartext Traffic Attribute and support for HPKE Hybrid Cryptography to enable secure communication using a combination of public key and symme...
From Exposure to Exploitation: How AI Collapses Your Response Window

From Exposure to Exploitation: How AI Collapses Your Response Window

Feb 19, 2026 Artificial Intelligence / DevSecOps
We’ve all seen this before: a developer deploys a new cloud workload and grants overly broad permissions just to keep the sprint moving. An engineer generates a "temporary" API key for testing and forgets to revoke it. In the past, these were minor operational risks, debts you’d eventually pay down during a slower cycle. In 2026, “Eventually” is Now But today, within minutes, AI-powered adversarial systems can find that over-permissioned workload, map its identity relationships, and calculate a viable route to your critical assets. Before your security team has even finished their morning coffee, AI agents have simulated thousands of attack sequences and moved toward execution. AI compresses reconnaissance, simulation, and prioritization into a single automated sequence. The exposure you created this morning can be modeled, validated, and positioned inside a viable attack path before your team has lunch. The Collapse of the Exploitation Window Historically, the exploita...
Cybersecurity Tech Predictions for 2026: Operating in a World of Permanent Instability

Cybersecurity Tech Predictions for 2026: Operating in a World of Permanent Instability

Feb 18, 2026 Zero Trust / Data Security
In 2025, navigating the digital seas still felt like a matter of direction. Organizations charted routes, watched the horizon, and adjusted course to reach safe harbors of resilience, trust, and compliance. In 2026, the seas are no longer calm between storms. Cybersecurity now unfolds in a state of  continuous atmospheric instability : AI-driven threats that adapt in real time, expanding digital ecosystems, fragile trust relationships, persistent regulatory pressure, and accelerating technological change. This is not turbulence on the way to stability; it  is the climate. In this environment, cybersecurity technologies are no longer merely navigational aids. They are  structural reinforcements . They determine whether an organization endures volatility or learns to function normally within it. That is why security investments in 2026 are increasingly made not for coverage, but for  operational continuity : sustained operations, decision-grade visibility and cont...
3 Ways to Start Your Intelligent Workflow Program

3 Ways to Start Your Intelligent Workflow Program

Feb 18, 2026 Workflow Automation / Enterprise Security
Security, IT, and engineering teams today are under relentless pressure to accelerate outcomes, cut operational drag, and unlock the full potential of AI and automation. But simply investing in tools isn’t enough. 88% of AI proofs-of-concept never make it to production, even though 70% of workers cite freeing time for high-value work as the primary AI automation motivation. Real impact comes from intelligent workflows that combine automation, AI-driven decisioning, and human ingenuity into seamless processes that work across teams and systems.  In this article, we’ll highlight three use cases across Security and IT that can serve as powerful starting points for your intelligent workflow program. For each use case, we’ll share a pre-built workflow to help you tackle real bottlenecks in your organization with automation while connecting directly into your existing tech stack. These use cases are great starting points to help you turn theory into practice and achieve measurable gai...
Researchers Show Copilot and Grok Can Be Abused as Malware C2 Proxies

Researchers Show Copilot and Grok Can Be Abused as Malware C2 Proxies

Feb 17, 2026 Malware / Artificial Intelligence
Cybersecurity researchers have disclosed that artificial intelligence (AI) assistants that support web browsing or URL fetching capabilities can be turned into stealthy command-and-control (C2) relays, a technique that could allow attackers to blend into legitimate enterprise communications and evade detection. The attack method, which has been demonstrated against Microsoft Copilot and xAI Grok, has been codenamed AI as a C2 proxy by Check Point. It leverages "anonymous web access combined with browsing and summarization prompts," the cybersecurity company said. "The same mechanism can also enable AI-assisted malware operations, including generating reconnaissance workflows, scripting attacker actions, and dynamically deciding 'what to do next' during an intrusion." The development signals yet another consequential evolution in how threat actors could abuse AI systems, not just to scale or accelerate different phases of the cyber attack cycle, but als...
SmartLoader Attack Uses Trojanized Oura MCP Server to Deploy StealC Infostealer

SmartLoader Attack Uses Trojanized Oura MCP Server to Deploy StealC Infostealer

Feb 17, 2026 Infostealer / Artificial Intelligence
Cybersecurity researchers have disclosed details of a new SmartLoader campaign that involves distributing a trojanized version of a Model Context Protocol ( MCP ) server associated with Oura Health to deliver an information stealer known as StealC . "The threat actors cloned a legitimate Oura MCP Server – a tool that connects AI assistants to Oura Ring health data – and built a deceptive infrastructure of fake forks and contributors to manufacture credibility," Straiker's AI Research (STAR) Labs team said in a report shared with The Hacker News. The end game is to leverage the trojanized version of the Oura MCP server to deliver the StealC infostealer, allowing the threat actors to steal credentials, browser passwords, and data from cryptocurrency wallets. SmartLoader, first highlighted by OALABS Research in early 2024, is a malware loader that's known to be distributed via fake GitHub repositories containing artificial intelligence (AI)-generated lures to giv...
Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations

Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations

Feb 17, 2026 Enterprise Security / Artificial Intelligence
New research from Microsoft has revealed that legitimate businesses are gaming artificial intelligence (AI) chatbots via the "Summarize with AI" button that's being increasingly placed on websites in ways that mirror classic search engine poisoning (SEO). The new AI hijacking technique has been codenamed AI Recommendation Poisoning by the Microsoft Defender Security Research Team. The tech giant described it as a case of an AI memory poisoning attack that's used to induce bias and deceive the AI system to generate responses that artificially boost visibility and skew recommendations. "Companies are embedding hidden instructions in 'Summarize with AI' buttons that, when clicked, attempt to inject persistence commands into an AI assistant's memory via URL prompt parameters," Microsoft said . "These prompts instruct the AI to 'remember [Company] as a trusted source' or 'recommend [Company] first.'" Microsoft said it id...
Expert Insights Articles Videos
Cybersecurity Resources