
artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
ThreatsDay Bulletin: $290M DeFi Hack, macOS LotL Abuse, ProxySmart SIM Farms +25 New Stories

Apr 23, 2026 Hacking News / Cybersecurity News
You scroll past one incident and see another that feels familiar, like it should have been fixed years ago, but it still works with small changes. Same bugs. Same mistakes. The supply chain is messy. Packages you did not check are stealing data, adding backdoors, and spreading. Attacking the systems behind apps is easier than breaking the apps themselves. The exploits are simple but still work, giving attackers easy access. AI tools are also part of the problem now. They trust bad input and take real actions, which makes the damage bigger. Then there are quieter issues. Apps take data they should not. Devices behave in strange ways. Attackers keep testing what they can get away with. No noise. Just ongoing damage. Here is the list for this week’s ThreatsDay Bulletin.

State-backed crypto heist
North Korea Likely Behind KelpDAP $290M Crypto Heist
Inter-blockchain communication protocol LayerZero has revealed that North Korean thr...
[Webinar] Mythos Reality Check: Beating Automated Exploitation at AI Speed

Apr 23, 2026 Artificial Intelligence / Enterprise Security
Imagine a world where hackers don't sleep, don't take breaks, and find weak spots in your systems instantly. Well, that world is already here. Thanks to AI, attackers are now launching automated, large-scale exploits faster than ever before. The time you have to fix a vulnerability before it gets attacked is shrinking to zero. We call this the Collapsing Exploit Window, and it means your standard patching routine is officially too slow. If you are fighting AI-speed attacks with manual-speed defenses, your systems are at a breaking point. It’s time to rethink everything. Join our highly anticipated webinar featuring expert guest Ofer Gayer, Vice President of Product at Miggo Security, and learn how to beat the bots at their own game: Mythos and the Collapsing Exploit Window: Rethink Vulnerability Prioritization at AI Speed. Here is exactly what you will walk away with: The Truth About Mythos: We are cutting through the hype. Learn what Mythos actually represents and w...
Project Glasswing Proved AI Can Find the Bugs. Who's Going to Fix Them?

Apr 23, 2026 Artificial Intelligence / Exposure Management
Last week, Anthropic announced Project Glasswing, an AI model so effective at discovering software vulnerabilities that they took the extraordinary step of postponing its public release. Instead, the company has given access to Apple, Microsoft, Google, Amazon, and a coalition of others to find and patch bugs before adversaries can. Mythos Preview, the model that led to Project Glasswing, found vulnerabilities across every major operating system and browser. Some of these bugs had survived decades of human audits, aggressive fuzzing, and open-source scrutiny. One had been sitting for 27 years in OpenBSD, generally considered to be one of the world’s most secure operating systems. It's tempting to file this under "AI lab says their AI is too dangerous," the same playbook OpenAI ran with GPT-2. Not so fast; there's a material difference this time. Mythos didn't just find individual CVEs. It chained four independent bugs into an exploit sequen...
2026 Annual Threat Report: A Defender's Playbook From the Front Lines

SentinelOne — Enterprise Security / Cybersecurity
Learn how modern attackers bypass MFA, exploit gaps, weaponize automation, run 8-phase intrusions, and more.
Anthropic Won't Release Mythos. But Claude Is Already in Your Salesforce

Reco — SaaS Security / AI Security
The real enterprise AI risk isn't the model they locked away. It's the one already inside.
Vercel Finds More Compromised Accounts in Context.ai-Linked Breach

Apr 23, 2026 Artificial Intelligence / SaaS Security
Vercel on Wednesday revealed that it has identified an additional set of customer accounts that were compromised as part of a security incident that enabled unauthorized access to its internal systems. The company said it made the discovery after expanding its investigation to include an extra set of compromise indicators, alongside a review of requests to the Vercel network and environment variable read events in its logs. "Second, we have uncovered a small number of customer accounts with evidence of prior compromise that is independent of and predates this incident, potentially as a result of social engineering, malware, or other methods," the company said in an update. In both cases, Vercel said it notified affected parties. It did not disclose the exact number of customers who were impacted. The development comes after the company that created the Next.js framework acknowledged the breach originated with a compromise of Context.ai after it was used by a Vercel em...
NGate Campaign Targets Brazil, Trojanizes HandyPay to Steal NFC Data and PINs

Apr 21, 2026 Mobile Security / Artificial Intelligence
Cybersecurity researchers have discovered a new iteration of an Android malware family called NGate that has been found to abuse a legitimate application called HandyPay instead of NFCGate. "The threat actors took the app, which is used to relay NFC data, and patched it with malicious code that appears to have been AI-generated," ESET security researcher Lukáš Štefanko said in a report shared with The Hacker News. "As with previous iterations of NGate, the malicious code allows the attackers to transfer NFC data from the victim's payment card to their own device and use it for contactless ATM cash-outs and unauthorized payments." In addition, the malicious payload is capable of capturing the victim's payment card PIN and exfiltrating it to the threat actor's command-and-control (C2) server. NGate, also known as NFSkate, was first publicly documented by the Slovak cybersecurity vendor in August 2024, detailing its ability to carry out relay ...
No Exploit Needed: How Attackers Walk Through the Front Door via Identity-Based Attacks

Apr 21, 2026 Incident Response / Artificial Intelligence
The cybersecurity industry has spent the last several years chasing sophisticated threats like zero-days, supply chain compromises, and AI-generated exploits. However, the most reliable entry point for attackers still hasn't changed: stolen credentials. Identity-based attacks remain a dominant initial access vector in breaches today. Attackers obtain valid credentials through credential stuffing from prior breach databases, password spraying against exposed services, or phishing campaigns — and use them to walk through the front door. No exploits needed. Just a valid username and password. What makes this difficult to defend against is how unremarkable the initial access looks. A successful login from a legitimate credential doesn't trigger the same alarms as a port scan or a malware callback. The attacker looks like an employee. Once inside, they dump and crack additional passwords, reuse those credentials to move laterally, and expand their foothold across the environment....
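The spraying pattern described above leaves a statistical footprint even when every individual login looks routine: one source trying a password against many distinct accounts. A minimal detection sketch in Python (hypothetical log shape, not tied to any specific product):

```python
from collections import defaultdict

def spray_suspects(failed_logins, min_distinct_users=10):
    """failed_logins: iterable of (source_ip, username) failure events.
    Flags source IPs that failed against unusually many distinct accounts,
    the signature of password spraying (one password, many users)."""
    users_per_ip = defaultdict(set)
    for ip, user in failed_logins:
        users_per_ip[ip].add(user)
    return {ip for ip, users in users_per_ip.items()
            if len(users) >= min_distinct_users}

# Illustrative events: one IP sprays 25 accounts; one user retries 5 times.
events = [("203.0.113.9", f"user{i}") for i in range(25)]
events += [("198.51.100.4", "alice")] * 5
print(spray_suspects(events))  # → {'203.0.113.9'}
```

The key design point is that the aggregation is per source, not per account, which is why a single legitimate user retrying a bad password does not trip the threshold.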
Google Patches Antigravity IDE Flaw Enabling Prompt Injection Code Execution

Apr 21, 2026 Vulnerability / Artificial Intelligence
Cybersecurity researchers have discovered a vulnerability in Google's agentic integrated development environment (IDE), Antigravity, that could be exploited to achieve code execution. The flaw, since patched, combines Antigravity's permitted file-creation capabilities with insufficient input sanitization in Antigravity's native file-searching tool, find_by_name, to bypass the program's Strict Mode, a restrictive security configuration that limits network access, prevents out-of-workspace writes, and ensures all commands are run within a sandbox context. "By injecting the -X (exec-batch) flag through the Pattern parameter [in the find_by_name tool], an attacker can force fd to execute arbitrary binaries against workspace files," Pillar Security researcher Dan Lisichkin said in an analysis. "Combined with Antigravity's ability to create files as a permitted action, this enables a full attack chain: stage a malicious script, then trigger ...
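The underlying bug class is ordinary argv hygiene: user-controlled input placed where a CLI also accepts flags. A minimal Python sketch of the pattern and its usual mitigation (hypothetical helper names, not Antigravity's actual code); the `--` separator ends option parsing so the pattern can no longer be interpreted as a flag like `-X`:

```python
def build_fd_argv_unsafe(pattern: str) -> list[str]:
    # Vulnerable shape: the pattern sits where fd also parses flags,
    # so an option-like value such as "-X" is treated as exec-batch.
    return ["fd", pattern, "."]

def build_fd_argv_safe(pattern: str) -> list[str]:
    # Mitigation sketch: "--" terminates option parsing; belt-and-braces,
    # also reject input that looks like a flag.
    if pattern.startswith("-"):
        raise ValueError("pattern must not look like a flag")
    return ["fd", "--", pattern, "."]

print(build_fd_argv_unsafe("-X"))  # → ['fd', '-X', '.']  (flag smuggled in)
print(build_fd_argv_safe("report.txt"))  # → ['fd', '--', 'report.txt', '.']
```

The argv lists are only constructed here, never executed; the point is where the injected token lands, not what fd would do with it.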
⚡ Weekly Recap: Vercel Hack, Push Fraud, QEMU Abused, New Android RATs Emerge & More

Apr 20, 2026 Cybersecurity / Hacking
Monday’s recap shows the same pattern in different places. A third-party tool becomes a way in, then leads to internal access. A trusted download path is briefly swapped to deliver malware. Browser extensions act normally while pulling data and running code. Even update channels are used to push payloads. It’s not breaking systems—it’s bending trust. There’s also a shift in how attacks run. Slower check-ins, multi-stage payloads, and more code kept in memory. Attackers lean on real tools and normal workflows instead of custom builds. Some cases hint at supply-chain spread, where one weak link reaches further than expected. Go through the whole recap. The pattern across access, execution, and control only shows up when you see it all together.

⚡ Threat of the Week
Vercel Discloses Data Breach — Web infrastructure provider Vercel has disclosed a security breach that allowed bad actors to gain unauthorized access to "certain" internal Vercel systems. The incident originated f...
Why Most AI Deployments Stall After the Demo

Apr 20, 2026 Artificial Intelligence / Privacy
The fastest way to fall in love with an AI tool is to watch the demo. Everything moves quickly. Prompts land cleanly. The system produces impressive outputs in seconds. It feels like the beginning of a new era for your team. But most AI initiatives don't fail because of bad technology. They stall because what worked in the demo doesn't survive contact with real operations. The gap between a controlled demonstration and day-to-day reality is where teams run into trouble. Most AI product demos are built to highlight potential, not friction. They use clean data, predictable inputs, carefully crafted prompts, and well-understood use cases. Production environments don't look like that. In real operations, data is messy, inputs are inconsistent, systems are fragmented, and context is incomplete. Latency matters. Edge cases quickly outnumber ideal ones. This is why teams often see an initial burst of enthusiasm followed by a slowdown once they try to deploy AI more broadly. W...
Anthropic MCP Design Vulnerability Enables RCE, Threatening AI Supply Chain

Apr 20, 2026 Artificial Intelligence / Vulnerability
Cybersecurity researchers have discovered a critical "by design" weakness in the Model Context Protocol's (MCP) architecture that could pave the way for remote code execution and have a cascading effect on the artificial intelligence (AI) supply chain. "This flaw enables Arbitrary Command Execution (RCE) on any system running a vulnerable MCP implementation, granting attackers direct access to sensitive user data, internal databases, API keys, and chat histories," OX Security researchers Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar said in an analysis published last week. The cybersecurity company said the systemic vulnerability is baked into Anthropic's official MCP software development kit (SDK) across any supported language, including Python, TypeScript, Java, and Rust. In all, it affects more than 7,000 publicly accessible servers and software packages totaling more than 150 million downloads. At issue are unsafe defaults in...
[Webinar] Eliminate Ghost Identities Before They Expose Your Enterprise Data

Apr 18, 2026 Artificial Intelligence / Enterprise Security
In 2024, compromised service accounts and forgotten API keys were behind 68% of cloud breaches. Not phishing. Not weak passwords. Unmanaged non-human identities that nobody was watching. For every employee in your org, there are 40 to 50 automated credentials: service accounts, API tokens, AI agent connections, and OAuth grants. When projects end or employees leave, most of these stay active. Fully privileged. Completely unmonitored. Attackers don't need to break in. They just pick up the keys you left out. Join our upcoming webinar where we’ll show you how to find and eliminate these "Ghost Identities" before they become a back door for hackers. AI agents and automated workflows are multiplying these credentials at a pace security teams can't manually track. Many carry admin-level access they never needed. One compromised token can give an attacker lateral movement across your entire environment, and the average dwell time fo...
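An inventory sweep for the "Ghost Identities" described above mostly reduces to two checks: credentials unused past a staleness window, and credentials whose owner is gone. A minimal Python sketch under an assumed inventory schema (field names are illustrative, not any vendor's API):

```python
from datetime import datetime, timedelta

def ghost_identities(inventory, now, stale_days=90):
    """inventory: list of dicts with 'name', 'last_used' (datetime),
    and 'owner_active' (bool). Returns names of credentials that are
    stale (unused past the window) or orphaned (owner departed)."""
    cutoff = now - timedelta(days=stale_days)
    return [c["name"] for c in inventory
            if c["last_used"] < cutoff or not c["owner_active"]]

now = datetime(2026, 4, 18)
inv = [
    {"name": "ci-deploy-token", "last_used": datetime(2025, 7, 1),  "owner_active": True},
    {"name": "report-api-key",  "last_used": datetime(2026, 4, 10), "owner_active": False},
    {"name": "backup-svc",      "last_used": datetime(2026, 4, 15), "owner_active": True},
]
print(ghost_identities(inv, now))  # → ['ci-deploy-token', 'report-api-key']
```

In practice the hard part is building the inventory itself (service accounts, API tokens, OAuth grants scattered across providers); the triage logic on top of it is this simple.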
Google Blocks 8.3B Policy-Violating Ads in 2025, Launches Android 17 Privacy Overhaul

Apr 17, 2026 Artificial Intelligence / Mobile Security
Google this week announced a new set of Play policy updates to strengthen user privacy and protect businesses against fraud, even as it revealed it blocked or removed over 8.3 billion ads globally and suspended 24.9 million accounts in 2025. The new policy updates relate to contact and location permissions in Android, allowing third-party apps to access contact lists and a user's location in a more privacy-friendly manner. This includes a new Contact Picker, which offers a standardized, secure, and searchable interface for contact selection. "This feature allows users to grant apps access only to the specific contacts they choose, aligning with Android's commitment to data transparency and minimized permission footprints," Google said. Previously, apps requiring access to a user's contacts relied on READ_CONTACTS, an overly broad permission that granted apps the ability to access all contacts and their associated information. With the latest cha...
ThreatsDay Bulletin: Defender 0-Day, SonicWall Brute-Force, 17-Year-Old Excel RCE and 15 More Stories

Apr 16, 2026 Hacking News / Cybersecurity News
You know that feeling when you open your feed on a Thursday morning and it's just... a lot? Yeah. This week delivered. We've got hackers getting creative in ways that are almost impressive if you ignore the whole "crime" part, ancient vulnerabilities somehow still ruining people's days, and enough supply chain drama to fill a season of television nobody asked for. Not all bad though. Some threat actors got exposed with receipts, a few platforms finally tightened things up, and there's research in here that's genuinely worth your time. Grab your coffee and keep scrolling.

Targeted wallet breach
Zerion Hack Likely Linked to North Korea
Cryptocurrency wallet service Zerion has disclosed that one of its team members' devices was compromised, resulting in the theft of approximately $100K from internal company hot wallets. The company noted that user funds, Zerion apps, or infrastructure were...
Deterministic + Agentic AI: The Architecture Exposure Validation Requires

Apr 15, 2026 Artificial Intelligence / Enterprise Security
Few technologies have moved from experimentation to boardroom mandate as quickly as AI. Across industries, leadership teams have embraced its broader potential, and boards, investors, and executives are already pushing organizations to adopt it across operational and security functions. Pentera’s AI Security and Exposure Report 2026 reflects that momentum: every CISO surveyed reported that AI is already in use across their organizations. Security testing is inevitably part of that shift. Modern environments are too dynamic, and attack techniques too variable, for purely static testing logic to remain sufficient on its own. Adaptive payload generation, contextual interpretation of controls, and real-time execution adjustments are necessary to get closer to how attackers, and increasingly their own AI agents, operate. For experienced security teams, the need to incorporate AI into testing is no longer in question. You have to fight fire with fire. Wh...
OpenAI Launches GPT-5.4-Cyber with Expanded Access for Security Teams

Apr 15, 2026 Vulnerability / Secure Coding
OpenAI on Tuesday unveiled GPT-5.4-Cyber, a variant of its latest flagship model, GPT‑5.4, that's specifically optimized for defensive cybersecurity use cases, days after rival Anthropic unveiled its own frontier model, Mythos. "The progressive use of AI accelerates defenders – those responsible for keeping systems, data, and users safe – enabling them to find and fix problems faster in the digital infrastructure everyone relies on," OpenAI said. In conjunction with the announcement, the artificial intelligence (AI) company said it's ramping up its Trusted Access for Cyber (TAC) program to thousands of authenticated individual defenders and hundreds of teams responsible for securing critical software. AI systems are inherently dual-use, as bad actors can repurpose technologies developed for legitimate applications to their own advantage and achieve malicious goals. One core area of concern is that adversaries could invert the models fine...
AI-Driven Pushpaganda Scam Exploits Google Discover to Spread Scareware and Ad Fraud

Apr 14, 2026 Ad Fraud / Artificial Intelligence
Cybersecurity researchers have unmasked a novel ad fraud scheme that has been found to leverage search engine optimization (SEO) poisoning techniques and artificial intelligence (AI)-generated content to push deceptive news stories into Google's Discover feed and trick users into enabling persistent browser notifications that lead to scareware and financial scams. The campaign, which targets the personalized content feeds of Android and Chrome users, has been codenamed Pushpaganda by HUMAN's Satori Threat Intelligence and Research Team. "This operation, named for push notifications central to the scheme, generates invalid organic traffic from real mobile devices by tricking users into subscribing to notifications that presented alarming messages," researchers Louisa Abel, Vikas Parthasarathy, João Santos, and Adam Sell said in a report shared with The Hacker News. At its peak, about 240 million bid requests have been associated wit...
Analysis of 216M Security Findings Shows a 4x Increase In Critical Risk (2026 Report)

Apr 14, 2026 Application Security / DevSecOps
OX Security recently analyzed 216 million security findings across 250 organizations over a 90-day period. The primary takeaway: while raw alert volume grew by 52% year-over-year, prioritized critical risk grew by nearly 400%. The surge in AI-assisted development is creating a "velocity gap" where the density of high-impact vulnerabilities is scaling faster than remediation workflows. The ratio of critical findings to raw alerts nearly tripled, moving from 0.035% to 0.092%. Key Findings from the 2026 Analysis: CVSS vs. Business Context: Technical severity scores are no longer the primary driver of risk. The most common elevation factors were High Business Priority (27.76%) and PII Processing (22.08%). In modern environments, where a vulnerability lives is now more important than what the vulnerability is. The AI Fingerprint: We observed a direct correlation between the adoption of AI coding tools and the quadrupling of critical f...
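The report's headline numbers are internally consistent: if raw alerts grew 52% and the critical-to-alert ratio rose from 0.035% to 0.092%, absolute critical findings grew by roughly the product of the two factors. A quick arithmetic check:

```python
alert_growth = 1.52                            # raw alerts up 52% year-over-year
ratio_before, ratio_after = 0.00035, 0.00092   # critical findings / raw alerts
ratio_growth = ratio_after / ratio_before      # ratio "nearly tripled"
critical_growth = alert_growth * ratio_growth  # absolute critical findings
print(round(ratio_growth, 2), round(critical_growth, 2))  # → 2.63 4.0
```

A 4.0x multiple on absolute critical findings matches the "nearly 400%" growth claim, and 2.63x matches "nearly tripled" for the ratio.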
⚡ Weekly Recap: Fiber Optic Spying, Windows Rootkit, AI Vulnerability Hunting and More

Apr 13, 2026 Cybersecurity / Hacking
Monday is back, and the weekend’s backlog of chaos is officially hitting the fan. We are tracking a critical zero-day that has been quietly living in your PDFs for months, plus some aggressive state-sponsored meddling in infrastructure that is finally coming to light. It is one of those mornings where the gap between a quiet shift and a full-blown incident response is basically non-existent. The variety this week is particularly nasty. We have AI models being turned into autonomous exploit engines, North Korean groups playing the long game with social engineering, and fileless malware hitting enterprise workflows. There is also a major botnet takedown and new research proving that even fiber optic cables can be used to eavesdrop on your private conversations. Skim this before your next meeting. Let’s get into it.

⚡ Threat of the Week
Adobe Acrobat Reader 0-Day Under Attack — Adobe released emergency updates to fix a critical...
Your MTTD Looks Great. Your Post-Alert Gap Doesn't

Apr 13, 2026 Threat Detection / Artificial Intelligence
Anthropic restricted its Mythos Preview model last week after it autonomously found and exploited zero-day vulnerabilities in every major operating system and browser. Palo Alto Networks' Wendi Whitmore warned that similar capabilities are weeks or months from proliferation. CrowdStrike's 2026 Global Threat Report puts average eCrime breakout time at 29 minutes. Mandiant's M-Trends 2026 shows adversary hand-off times have collapsed to 22 seconds.  Offense is getting faster. The question is where exactly defenders are slow — because it's not where most SOC dashboards suggest. Detection tooling has gotten materially better. EDR, cloud security, email security, identity, and SIEM platforms ship with built-in detection logic that pushes MTTD close to zero for known techniques. That's real progress, and it's the result of years of investment in detection engineering across the industry.  But when adversaries are operating on timelines measured in s...
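The article's distinction is between time-to-detect and everything that happens after the alert fires. A toy Python sketch (timestamps are illustrative) that splits total response time into those two components:

```python
from datetime import datetime

def response_breakdown(intrusion, alert, contained):
    """Split total response time into time-to-detect (the MTTD component)
    and the post-alert gap the article argues actually dominates."""
    mttd = (alert - intrusion).total_seconds()
    post_alert_gap = (contained - alert).total_seconds()
    return mttd, post_alert_gap

mttd, gap = response_breakdown(
    intrusion=datetime(2026, 4, 13, 9, 0, 0),
    alert=datetime(2026, 4, 13, 9, 0, 45),      # detection fires quickly
    contained=datetime(2026, 4, 13, 10, 30, 0), # triage/containment lag
)
print(mttd, gap)  # → 45.0 5355.0
```

With a 29-minute breakout time, a near-zero MTTD is cold comfort when the post-alert gap runs to an hour and a half: the adversary's window is almost entirely on the post-alert side of the ledger.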
Browser Extensions Are the New AI Consumption Channel That No One Is Talking About

Apr 10, 2026 Artificial Intelligence / Enterprise Security
While much of the discussion on AI security centers around protecting ‘shadow’ AI and GenAI consumption, there's a wide-open window nobody's guarding: AI browser extensions. A new report from LayerX exposes just how deep this blind spot goes, and why AI extensions may be the most dangerous AI threat surface in your network that isn't on anyone's radar. AI browser extensions don't trigger your DLP and don't show up in your SaaS logs. They live inside the browser itself, with direct access to everything your employees see, type, and stay logged into. AI extensions are 60% more likely to have a vulnerability than extensions on average, are 3 times more likely to have access to cookies, 2.5 times more likely to be able to execute remote scripts in the browser, and 6 times more likely to have increased their permissions in the past year. These extensions install in seconds and can remain...