-->
#1 Trusted Cybersecurity News Platform
Followed by 5.40+ million
The Hacker News Logo
Subscribe – Get Latest News

Search results for anthropic-ai/claude-code | Breaking Cybersecurity News | The Hacker News

Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms

Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms

Apr 01, 2026 Data Breach / Artificial Intelligence
Anthropic on Tuesday confirmed that internal code for its popular artificial intelligence (AI) coding assistant, Claude Code, had been inadvertently released due to a human error. "No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement shared with CNBC News. "This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again." The discovery came after the AI upstart released version 2.1.88 of the Claude Code npm package, with users spotting that it contained a source map file that could be used to access Claude Code's source code – comprising nearly 2,000 TypeScript files and more than 512,000 lines of code. The version is no longer available for download from npm. Security researcher Chaofan Shou was the first to publicly flag it on X, stating "Claude code source code has been leaked via a map file in their npm re...
How Ceros Gives Security Teams Visibility and Control in Claude Code

How Ceros Gives Security Teams Visibility and Control in Claude Code

Mar 19, 2026 Artificial Intelligence / Enterprise Security
Security teams have spent years building identity and access controls for human users and service accounts. But a new category of actor has quietly entered most enterprise environments, and it operates entirely outside those controls. Claude Code, Anthropic's AI coding agent, is now running across engineering organizations at scale. It reads files, executes shell commands, calls external APIs, and connects to third-party integrations called MCP servers. It does all of this autonomously, with the full permissions of the developer who launched it, on the developer's local machine, before any network-layer security tool can see it. It leaves no audit trail that the existing security infrastructure was built to capture. This walkthrough covers Ceros, an AI Trust Layer built by Beyond Identity that sits directly on the developer's machine alongside Claude Code and provides real-time visibility, runtime policy enforcement, and a cryptographic audit trail of every action the a...
Anthropic Launches Claude Code Security for AI-Powered Vulnerability Scanning

Anthropic Launches Claude Code Security for AI-Powered Vulnerability Scanning

Feb 21, 2026 Artificial Intelligence / DevSecOps
Artificial intelligence (AI) company Anthropic has begun to roll out a new security feature for Claude Code that can scan a user's software codebase for vulnerabilities and suggest patches. The capability, called Claude Code Security , is currently available in a limited research preview to Enterprise and Team customers. "It scans codebases for security vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix security issues that traditional methods often miss," the company said in a Friday announcement. Anthropic said the feature aims to leverage AI as a tool to help find and resolve vulnerabilities to counter attacks where threat actors weaponize the same tools to automate vulnerability discovery.  With AI agents increasingly capable of detecting security vulnerabilities that have otherwise escaped human notice, the tech upstart said the same capabilities could be used by adversaries to uncover exploitable weakness...
cyber security

2026 Cloud Threats Report

websiteWizCloud Security / Threat Landscape
80% of cloud breaches still start with the basics - and AI is making them faster. Get insights into the patterns behind today's cloud attacks.
cyber security

Pentest Like Attackers Actually Do. SEC560 at SANSFIRE 2026

websiteSANS InstituteLive Training / Cybersecurity
From Kerberoasting to domain dominance—SEC560 covers the full kill chain. Washington, D.C., July 13.
Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign

Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign

Nov 14, 2025 Cyber Espionage / AI Security
State-sponsored threat actors from China used artificial intelligence (AI) technology developed by Anthropic to orchestrate automated cyber attacks as part of a "highly sophisticated espionage campaign" in mid-September 2025. "The attackers used AI's 'agentic' capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves," the AI upstart said . The activity is assessed to have manipulated Claude Code, Anthropic's AI coding tool, to attempt to break into about 30 global targets spanning large tech companies, financial institutions, chemical manufacturing companies, and government agencies. A subset of these intrusions succeeded. Anthropic has since banned the relevant accounts and enforced defensive mechanisms to flag such attacks. The campaign, GTG-1002, marks the first time a threat actor has leveraged AI to conduct a "large-scale cyber attack" without major human intervention an...
Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors

Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors

Aug 27, 2025 Cyber Attack / Artificial Intelligence
Anthropic on Wednesday revealed that it disrupted a sophisticated operation that weaponized its artificial intelligence (AI)-powered chatbot Claude to conduct large-scale theft and extortion of personal data in July 2025. "The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government, and religious institutions," the company said . "Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000." "The actor employed Claude Code on Kali Linux as a comprehensive attack platform, embedding operational instructions in a CLAUDE.md file that provided persistent context for every interaction." The unknown threat actor is said to have used AI to an "unprecedented degree," using Claude Code, Anthropic's agentic coding tool, to automate variou...
Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration

Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration

Feb 25, 2026 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed multiple security vulnerabilities in Anthropic's Claude Code, an artificial intelligence (AI)-powered coding assistant, that could result in remote code execution and theft of API credentials. "The vulnerabilities exploit various configuration mechanisms, including Hooks, Model Context Protocol (MCP) servers, and environment variables – executing arbitrary shell commands and exfiltrating Anthropic API keys when users clone and open untrusted repositories," Check Point researchers Aviv Donenfeld and Oded Vanunu said in a report shared with The Hacker News. The identified shortcomings fall under three broad categories - No CVE (CVSS score: 8.7) - A code injection vulnerability stemming from a user consent bypass when starting Claude Code in a new directory that could result in arbitrary code execution without additional confirmation via untrusted project hooks defined in .claude/settings.json. (Fixed in version 1.0.87 in Sep...
Cursor AI Code Editor Flaw Enables Silent Code Execution via Malicious Repositories

Cursor AI Code Editor Flaw Enables Silent Code Execution via Malicious Repositories

Sep 12, 2025 AI Security / Vulnerability
A security weakness has been disclosed in the artificial intelligence (AI)-powered code editor Cursor that could trigger code execution when a maliciously crafted repository is opened using the program. The issue stems from the fact that an out-of-the-box security setting is disabled by default, opening the door for attackers to run arbitrary code on users' computers with their privileges. "Cursor ships with Workspace Trust disabled by default, so VS Code-style tasks configured with runOptions.runOn: 'folderOpen' auto-execute the moment a developer browses a project," Oasis Security said in an analysis. "A malicious .vscode/tasks.json turns a casual 'open folder' into silent code execution in the user's context." Cursor is an AI-powered fork of Visual Studio Code, which supports a feature called Workspace Trust to allow developers to safely browse and edit code regardless of where it came from or who wrote it. With this option disab...
Anthropic's Claude Mythos Finds Thousands of Zero-Day Flaws Across Major Systems

Anthropic's Claude Mythos Finds Thousands of Zero-Day Flaws Across Major Systems

Apr 08, 2026 Artificial Intelligence / Secure Coding
Artificial Intelligence (AI) company Anthropic announced a new cybersecurity initiative called Project Glasswing  that will use a preview version of its new frontier model, Claude Mythos , to find and address security vulnerabilities. The model will be used by a small set of organizations, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, along with Anthropic, to secure critical software. The company said it's forming this initiative in response to capabilities observed in its general-purpose frontier model that demonstrate a "level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities." Because of its cybersecurity capabilities and concerns that they could be abused, Anthropic has opted not to make the model generall...
ThreatsDay Bulletin: Hybrid P2P Botnet, 13-Year-Old Apache RCE and 18 More Stories

ThreatsDay Bulletin: Hybrid P2P Botnet, 13-Year-Old Apache RCE and 18 More Stories

Apr 09, 2026 Hacking News / Cybersecurity News
Thursday. Another week, another batch of things that probably should've been caught sooner but weren't. This one's got some range — old vulnerabilities getting new life, a few "why was that even possible" moments, attackers leaning on platforms and tools you'd normally trust without thinking twice. Quiet escalations more than loud zero-days, but the kind that matter more in practice anyway. Mix of malware, infrastructure exposure, AI-adjacent weirdness, and some supply chain stuff that's... not great. Let's get into it. Resilient hybrid botnet surge Phorpiex Botnet Detailed A new variant of the botnet known as Phorpiex (aka Trik) has been observed, using a hybrid communication model that combines traditional C2 HTTP polling with a peer-to-peer (P2P) protocol over both TCP and UDP to ensure operational continuity in the face of server takedowns. The malware acts as a conduit for encrypted payloads, ma...
5 Threats That Reshaped Web Security This Year [2025]

5 Threats That Reshaped Web Security This Year [2025]

Dec 04, 2025 Web Security / Data Privacy
As 2025 draws to a close, security professionals face a sobering realization: the traditional playbook for web security has become dangerously obsolete. AI-powered attacks, evolving injection techniques, and supply chain compromises affecting hundreds of thousands of websites forced a fundamental rethink of defensive strategies. Here are the five threats that reshaped web security this year, and why the lessons learned will define digital protection for years to come. 1. Vibe Coding Natural language coding, " vibe coding " , transformed from novelty to production reality in 2025, with nearly 25% of Y Combinator startups using AI to build core codebases. One developer launched a multiplayer flight simulator in under three hours, eventually scaling it to 89,000 players and generating thousands in monthly revenue. The Result Code that functions perfectly yet contains exploitable flaws, bypassing traditional security tools. AI generates what you ask for, not what you forget...
Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites

Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites

Jan 19, 2026 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed details of a security flaw that leverages indirect prompt injection targeting Google Gemini as a way to bypass authorization guardrails and use Google Calendar as a data extraction mechanism. The vulnerability, Miggo Security's Head of Research, Liad Eliyahu, said, made it possible to circumvent Google Calendar's privacy controls by hiding a dormant malicious payload within a standard calendar invite. "This bypass enabled unauthorized access to private meeting data and the creation of deceptive calendar events without any direct user interaction," Eliyahu said in a report shared with The Hacker News. The starting point of the attack chain is a new calendar event that's crafted by the threat actor and sent to a target. The invite's description embeds a natural language prompt that's designed to do their bidding, resulting in a prompt injection. The attack gets activated when a user asks Gemini a completely inno...
Anthropic Finds 22 Firefox Vulnerabilities Using Claude Opus 4.6 AI Model

Anthropic Finds 22 Firefox Vulnerabilities Using Claude Opus 4.6 AI Model

Mar 07, 2026 Browser Security / Artificial Intelligence
Anthropic on Friday said it discovered 22 new security vulnerabilities in the Firefox web browser as part of a security partnership with Mozilla. Of these, 14 have been classified as high, seven have been classified as moderate, and one has been rated low in severity. The issues were addressed in Firefox 148 , released late last month. The vulnerabilities were identified over a two-week period in January 2026. The artificial intelligence (AI) company said the number of high-severity bugs identified by its Claude Opus 4.6 large language model (LLM) represents "almost a fifth" of all high-severity vulnerabilities that were patched in Firefox in 2025. Anthropic said the LLM detected a use-after-free bug in the browser's JavaScript after "just" 20 minutes of exploration, which was then validated by a human researcher in a virtualized environment to rule out the possibility of a false positive. "By the end of this effort, we had scanned nearly 6,000 C++ ...
Someone Created the First AI-Powered Ransomware Using OpenAI's gpt-oss:20b Model

Someone Created the First AI-Powered Ransomware Using OpenAI's gpt-oss:20b Model

Aug 27, 2025 Ransomware / Artificial Intelligence
Cybersecurity company ESET has disclosed that it discovered an artificial intelligence (AI)-powered ransomware variant codenamed PromptLock . Written in Golang, the newly identified strain uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts in real-time. The open-weight language model was released by OpenAI earlier this month. "PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption," ESET said . "These Lua scripts are cross-platform compatible, functioning on Windows, Linux, and macOS." The ransomware code also embeds instructions to craft a custom note based on the "files affected," and the infected machine is a personal computer, company server, or a power distribution controller. It's currently not known who is behind the malware, but ESET told The Hacker News that PromptLoc arti...
"I Had a Dream" and Generative AI Jailbreaks

"I Had a Dream" and Generative AI Jailbreaks

Oct 09, 2023 Artificial Intelligence /
"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS," this is a message from ChatGPT followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by  Moonlock Lab , the screenshots of ChatGPT writing code for a keylogger malware is yet another example of trivial ways to hack large language models and exploit them against their policy of use. In the case of Moonlock Lab, their malware research engineer told ChatGPT about a dream where an attacker was writing code. In the dream, he could only see the three words: "MyHotKeyHandler," "Keylogger," and "macOS." The engineer asked ChatGPT to completely recreate the malicious code and help him stop the attack. After a brief conversation, the AI finally provided the answer. "At times, the code generated isn...
New Reports Uncover Jailbreaks, Unsafe Code, and Data Theft Risks in Leading AI Systems

New Reports Uncover Jailbreaks, Unsafe Code, and Data Theft Risks in Leading AI Systems

Apr 29, 2025 Vulnerability / Artificial Intelligence
Various generative artificial intelligence (GenAI) services have been found vulnerable to two types of jailbreak attacks that make it possible to produce illicit or dangerous content. The first of the two techniques, codenamed Inception, instructs an AI tool to imagine a fictitious scenario, which can then be adapted into a second scenario within the first one where there exists no safety guardrails . "Continued prompting to the AI within the second scenarios context can result in bypass of safety guardrails and allow the generation of malicious content," the CERT Coordination Center (CERT/CC) said in an advisory released last week. The second jailbreak is realized by prompting the AI for information on how not to reply to a specific request.  "The AI can then be further prompted with requests to respond as normal, and the attacker can then pivot back and forth between illicit questions that bypass safety guardrails and normal prompts," CERT/CC added. Success...
Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Jan 15, 2026 Prompt Injection / Enterprise Security
Cybersecurity researchers have disclosed details of a new attack method dubbed Reprompt that could allow bad actors to exfiltrate sensitive data from artificial intelligence (AI) chatbots like Microsoft Copilot in a single click, while bypassing enterprise security controls entirely. "Only a single click on a legitimate Microsoft link is required to compromise victims," Varonis security researcher Dolev Taler said in a report published Wednesday. "No plugins, no user interaction with Copilot." "The attacker maintains control even when the Copilot chat is closed, allowing the victim's session to be silently exfiltrated with no interaction beyond that first click." Following responsible disclosure, Microsoft has addressed the security issue. The attack does not affect enterprise customers using Microsoft 365 Copilot. At a high level, Reprompt employs three techniques to achieve a data‑exfiltration chain - Using the "q" URL parameter in...
GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts

GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts

May 23, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have discovered an indirect prompt injection flaw in GitLab's artificial intelligence (AI) assistant Duo that could have allowed attackers to steal source code and inject untrusted HTML into its responses, which could then be used to direct victims to malicious websites. GitLab Duo is an artificial intelligence (AI)-powered coding assistant that enables users to write, review, and edit code. Built using Anthropic's Claude models, the service was first launched in June 2023. But as Legit Security found , GitLab Duo Chat has been susceptible to an indirect prompt injection flaw that permits attackers to "steal source code from private projects, manipulate code suggestions shown to other users, and even exfiltrate confidential, undisclosed zero-day vulnerabilities." Prompt injection refers to a class of vulnerabilities common in AI systems that enable threat actors to weaponize large language models (LLMs) to manipulate responses to user...
Expert Insights Articles Videos
Cybersecurity Resources