#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Salesforce Security Handbook

artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
Record 29.7 Tbps DDoS Attack Linked to AISURU Botnet with up to 4 Million Infected Hosts

Record 29.7 Tbps DDoS Attack Linked to AISURU Botnet with up to 4 Million Infected Hosts

Dec 04, 2025 DDoS Attacks / Network Security
Cloudflare on Wednesday said it detected and mitigated the largest ever distributed denial-of-service (DDoS) attack that measured at 29.7 terabits per second (Tbps). The activity, the web infrastructure and security company said, originated from a DDoS botnet-for-hire known as AISURU , which has been linked to a number of hyper-volumetric DDoS attacks over the past year. The attack lasted for 69 seconds. It did not disclose the target of the attack. The botnet has prominently targeted telecommunication providers, gaming companies, hosting providers, and financial services. Also tackled by Cloudflare was a 14.1 Bpps DDoS attack from the same botnet. AISURU is believed to be powered by a massive network comprising an estimated 1-4 million infected hosts worldwide. "The 29.7 Tbps was a UDP carpet-bombing attack bombarding an average of 15,000 destination ports per second," Omer Yoachimik and Jorge Pacheco said . "The distributed attack randomized various packet attrib...
Discover the AI Tools Fueling the Next Cybercrime Wave — Watch the Webinar

Discover the AI Tools Fueling the Next Cybercrime Wave — Watch the Webinar

Dec 03, 2025 Cybercrime / Artificial Intelligence
Remember when phishing emails were easy to spot? Bad grammar, weird formatting, and requests from a "Prince" in a distant country? Those days are over. Today, a 16-year-old with zero coding skills and a $200 allowance can launch a campaign that rivals state-sponsored hackers. They don't need to be smart; they just need to subscribe to the right AI tool. We are witnessing the industrialization of cybercrime. The barrier to entry has collapsed, and your current email filters are looking for threats that no longer exist. Watch the Live Breakdown of AI Phishing Tools ➜ The New "Big Three" of Cybercrime Security leaders don't need another lecture on what phishing is. You need to see exactly what you are up against. This isn't science fiction—these tools are being sold on the dark web right now. In this webinar , we are going inside the "AI Phishing Factory" to deconstruct the three tools rewriting the threat landscape: WormGPT: Think of...
Chopping AI Down to Size: Turning Disruptive Technology into a Strategic Advantage

Chopping AI Down to Size: Turning Disruptive Technology into a Strategic Advantage

Dec 03, 2025 Security Operations / Artificial Intelligence
Most people know the story of Paul Bunyan. A giant lumberjack, a trusted axe, and a challenge from a machine that promised to outpace him. Paul doubled down on his old way of working, swung harder, and still lost by a quarter inch. His mistake was not losing the contest. His mistake was assuming that effort alone could outmatch a new kind of tool. Security professionals are facing a similar moment. AI is our modern steam-powered saw. It is faster in some areas, unfamiliar in others, and it challenges a lot of long-standing habits. The instinct is to protect what we know instead of learning what the new tool can actually do. But if we follow Paul's approach, we'll find ourselves on the wrong side of a shift that is already underway. The right move is to learn the tool, understand its capabilities, and leverage it for outcomes that make your job easier.  AI's Role in Daily Cybersecurity Work AI is now embedded in almost every security product we touch. Endpoint protection platfor...
cyber security

Enhance Microsoft Intune to Optimize Endpoint Management

websiteAction1Patching / Endpoint Management
Pairing Intune with a dedicated patching tool improves control and visibility for remote teams. See how.
cyber security

Default Admin Rights Are a Hacker's Dream – and Keeper is Their Nightmare

websiteKeeper SecurityPrivilege Management / Zero Trust
Eliminate standing admin rights and enable Just-in-Time access across all Windows, Linux and macOS endpoints.
Webinar: The "Agentic" Trojan Horse: Why the New AI Browsers War is a Nightmare for Security Teams

Webinar: The "Agentic" Trojan Horse: Why the New AI Browsers War is a Nightmare for Security Teams

Dec 01, 2025 Artificial Intelligence / Enterprise Security
The AI browser wars are coming to a desktop near you, and you need to start worrying about their security challenges. For the last two decades, whether you used Chrome, Edge, or Firefox, the fundamental paradigm remained the same: a passive window through which a human user viewed and interacted with the internet. That era is over. We are currently witnessing a shift that renders the old OS-centric browser debates irrelevant. The new battleground is agentic AI browsers, and for security professionals, it represents a terrifying inversion of the traditional threat landscape. A new webinar dives into the issue of AI browsers , their risks, and how security teams can deal with them. Even today, the browser is the main interface for AI consumption; it is where most users access AI assistants such as ChatGPT or Gemini, use AI-enabled SaaS applications, and engage AI agents. AI providers were the first to recognize this, which is why we've seen a spate of new 'agentic' AI browsers bein...
FBI Reports $262M in ATO Fraud as Researchers Cite Growing AI Phishing and Holiday Scams

FBI Reports $262M in ATO Fraud as Researchers Cite Growing AI Phishing and Holiday Scams

Nov 26, 2025 Online Security / Artificial Intelligence
The U.S. Federal Bureau of Investigation (FBI) has warned that cybercriminals are impersonating financial institutions with an aim to steal money or sensitive information to facilitate account takeover (ATO) fraud schemes. The activity targets individuals, businesses, and organizations of varied sizes and across sectors, the agency said, adding the fraudulent schemes have led to more than $262 million in losses since the start of the year. The FBI said it has received over 5,100 complaints. ATO fraud typically refers to attacks that enable threat actors to obtain unauthorized access to an online financial institution, payroll system, or health savings account to siphon data and funds for personal gain. The access is often obtained by approaching targets through social engineering techniques, such as texts, calls, and emails that prey on users' fears, or via bogus websites. These methods make it possible for attackers to deceive users into providing their login credentials on a...
Chinese DeepSeek-R1 AI Generates Insecure Code When Prompts Mention Tibet or Uyghurs

Chinese DeepSeek-R1 AI Generates Insecure Code When Prompts Mention Tibet or Uyghurs

Nov 24, 2025 Artificial Intelligence / Web Security
New research from CrowdStrike has revealed that DeepSeek's artificial intelligence (AI) reasoning model DeepSeek-R1 produces more security vulnerabilities in response to prompts that contain topics deemed politically sensitive by China. "We found that when DeepSeek-R1 receives prompts containing topics the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security vulnerabilities increases by up to 50%," the cybersecurity company said . The Chinese AI company previously attracted national security concerns, leading to a ban in many countries. Its open-source DeepSeek-R1 model was also found to censor topics considered sensitive by the Chinese government, refusing to answer questions about the Great Firewall of China or the political status of Taiwan, among others. In a statement released earlier this month, Taiwan's National Security Bureau warned citizens to be vigilant when using Chinese-m...
ShadowRay 2.0 Exploits Unpatched Ray Flaw to Build Self-Spreading GPU Cryptomining Botnet

ShadowRay 2.0 Exploits Unpatched Ray Flaw to Build Self-Spreading GPU Cryptomining Botnet

Nov 20, 2025 Vulnerability / Cloud Computing
Oligo Security has warned of ongoing attacks exploiting a two-year-old security flaw in the Ray open-source artificial intelligence (AI) framework to turn infected clusters with NVIDIA GPUs into a self-replicating cryptocurrency mining botnet. The activity, codenamed ShadowRay 2.0 , is an evolution of a prior wave that was observed between September 2023 and March 2024. The attack, at its core, exploits a critical missing authentication bug (CVE-2023-48022, CVSS score: 9.8) to take control of susceptible instances and hijack their computing power for illicit cryptocurrency mining using XMRig. The vulnerability has remained unpatched due to a " long-standing design decision " that's consistent with Ray's development best practices, which requires it to be run in an isolated network and act upon trusted code. The campaign involves submitting malicious jobs, with commands ranging from simple reconnaissance to complex multi-stage Bash and Python payloads, to an una...
TamperedChef Malware Spreads via Fake Software Installers in Ongoing Global Campaign

TamperedChef Malware Spreads via Fake Software Installers in Ongoing Global Campaign

Nov 20, 2025 Malvertising / Artificial Intelligence
Threat actors are leveraging bogus installers masquerading as popular software to trick users into installing malware as part of a global malvertising campaign dubbed TamperedChef . The end goal of the attacks is to establish persistence and deliver JavaScript malware that facilitates remote access and control, per a new report from Acronis Threat Research Unit (TRU). The campaign , per the Singapore-headquartered company, is still ongoing, with new artifacts being detected and associated infrastructure remaining active. "The operator(s) rely on social engineering by using everyday application names, malvertising, Search Engine Optimization (SEO), and abused digital certificates that aim to increase user trust and evade security detection," researchers Darrel Virtusio and Jozsef Gegeny said. TamperedChef is the name assigned to a long-running campaign that has leveraged seemingly legitimate installers for various utilities to distribute an information stealer malware of...
Researchers Find Serious AI Bugs Exposing Meta, Nvidia, and Microsoft Inference Frameworks

Researchers Find Serious AI Bugs Exposing Meta, Nvidia, and Microsoft Inference Frameworks

Nov 14, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have uncovered critical remote code execution vulnerabilities impacting major artificial intelligence (AI) inference engines, including those from Meta, Nvidia, Microsoft, and open-source PyTorch projects such as vLLM and SGLang. "These vulnerabilities all traced back to the same root cause: the overlooked unsafe use of ZeroMQ (ZMQ) and Python's pickle deserialization," Oligo Security researcher Avi Lumelsky said in a report published Thursday. At its core, the issue stems from what has been described as a pattern called ShadowMQ , in which the insecure deserialization logic has propagated to several projects as a result of code reuse. The root cause is a vulnerability in Meta's Llama large language model (LLM) framework ( CVE-2024-50050 , CVSS score: 6.3/9.3) that was patched by the company last October. Specifically, it involved the use of ZeroMQ's recv_pyobj() method to deserialize incoming data using Python's pickle module. ...
Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign

Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign

Nov 14, 2025 Cyber Espionage / AI Security
State-sponsored threat actors from China used artificial intelligence (AI) technology developed by Anthropic to orchestrate automated cyber attacks as part of a "highly sophisticated espionage campaign" in mid-September 2025. "The attackers used AI's 'agentic' capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves," the AI upstart said . The activity is assessed to have manipulated Claude Code, Anthropic's AI coding tool, to attempt to break into about 30 global targets spanning large tech companies, financial institutions, chemical manufacturing companies, and government agencies. A subset of these intrusions succeeded. Anthropic has since banned the relevant accounts and enforced defensive mechanisms to flag such attacks. The campaign, GTG-1002, marks the first time a threat actor has leveraged AI to conduct a "large-scale cyber attack" without major human intervention an...
Google Launches 'Private AI Compute' — Secure AI Processing with On-Device-Level Privacy

Google Launches 'Private AI Compute' — Secure AI Processing with On-Device-Level Privacy

Nov 12, 2025 Artificial Intelligence / Encryption
Google on Tuesday unveiled a new privacy-enhancing technology called Private AI Compute to process artificial intelligence (AI) queries in a secure platform in the cloud. The company said it has built Private AI Compute to "unlock the full speed and power of Gemini cloud models for AI experiences, while ensuring your personal data stays private to you and is not accessible to anyone else, not even Google." Private AI Compute has been described as a "secure, fortified space" for processing sensitive user data in a manner that's analogous to on-device processing but with extended AI capabilities. It's powered by Trillium Tensor Processing Units (TPUs) and Titanium Intelligence Enclaves (TIE), allowing the company to use its frontier models without sacrificing on security and privacy. In other words, the privacy infrastructure is designed to take advantage of the computational speed and power of the cloud while retaining the security and privacy assuran...
New Browser Security Report Reveals Emerging Threats for Enterprises

New Browser Security Report Reveals Emerging Threats for Enterprises

Nov 10, 2025 Browser Security / Enterprise Security
According to the new Browser Security Report 2025 , security leaders are discovering that most identity, SaaS, and AI-related risks converge in a single place, the user's browser. Yet traditional controls like DLP, EDR, and SSE still operate one layer too low. What's emerging isn't just a blindspot. It's a parallel threat surface: unmanaged extensions acting like supply chain implants, GenAI tools accessed through personal accounts, sensitive data copy/pasted directly into prompt fields, and sessions that bypass SSO altogether. This article unpacks the key findings from the report and what they reveal about the shifting locus of control in enterprise security. GenAI Is Now the Top Data Exfiltration Channel The rise of GenAI in enterprise workflows has created a massive governance gap. Nearly half of employees use GenAI tools, but most do so through unmanaged accounts, outside of IT visibility. Key stats from the report: 77% of employees paste data into GenAI prompts 82% of...
Microsoft Uncovers 'Whisper Leak' Attack That Identifies AI Chat Topics in Encrypted Traffic

Microsoft Uncovers 'Whisper Leak' Attack That Identifies AI Chat Topics in Encrypted Traffic

Nov 08, 2025 Network Security / Data Protection
Microsoft has disclosed details of a novel side-channel attack targeting remote language models that could enable a passive adversary with capabilities to observe network traffic to glean details about model conversation topics despite encryption protections under certain circumstances. This leakage of data exchanged between humans and streaming-mode language models could pose serious risks to the privacy of user and enterprise communications, the company noted. The attack has been codenamed Whisper Leak . "Cyber attackers in a position to observe the encrypted traffic (for example, a nation-state actor at the internet service provider layer, someone on the local network, or someone connected to the same Wi-Fi router) could use this cyber attack to infer if the user's prompt is on a specific topic," security researchers Jonathan Bar Or and Geoff McDonald, along with the Microsoft Defender Security Research Team, said . Put differently, the attack allows an attacker t...
Vibe-Coded Malicious VS Code Extension Found with Built-In Ransomware Capabilities

Vibe-Coded Malicious VS Code Extension Found with Built-In Ransomware Capabilities

Nov 07, 2025 Supply Chain Attack / Malware
Cybersecurity researchers have flagged a malicious Visual Studio Code (VS Code) extension with basic ransomware capabilities that appears to be created with the help of artificial intelligence – in other words, vibe-coded. Secure Annex researcher John Tuckner, who flagged the extension " susvsex ," said it does not attempt to hide its malicious functionality. The extension was uploaded on November 5, 2025, by a user named "suspublisher18" along with the description "Just testing" and the email address "donotsupport@example[.]com." "Automatically zips, uploads, and encrypts files from C:\Users\Public\testing (Windows) or /tmp/testing (macOS) on first launch," reads the description of the extension. As of November 6, Microsoft has stepped in to remove it from the official VS Code Extension Marketplace.  According to details shared by "suspublisher18," the extension is designed to automatically activate itself on any even...
Google Uncovers PROMPTFLUX Malware That Uses Gemini AI to Rewrite Its Code Hourly

Google Uncovers PROMPTFLUX Malware That Uses Gemini AI to Rewrite Its Code Hourly

Nov 05, 2025 Artificial Intelligence / Threat Intelligence
Google on Wednesday said it discovered an unknown threat actor using an experimental Visual Basic Script (VB Script) malware dubbed PROMPTFLUX that interacts with its Gemini artificial intelligence (AI) model API to write its own source code for improved obfuscation and evasion. "PROMPTFLUX is written in VB Script and interacts with Gemini's API to request specific VBScript obfuscation and evasion techniques to facilitate 'just-in-time' self-modification, likely to evade static signature-based detection," Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News. The novel feature is part of its "Thinking Robot" component, which periodically queries the large language model (LLM), Gemini 1.5 Flash or later in this case, to obtain new code so as to sidestep detection. This, in turn, is accomplished by using a hard-coded API key to send the query to the Gemini API endpoint. The prompt sent to the model is both highly speci...
Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Nov 05, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI's ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users' memories and chat histories without their knowledge. The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI's GPT-4o and GPT-5 models. OpenAI has since addressed some of them .  These issues expose the AI system to indirect prompt injection attacks , allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News. The identified shortcomings are listed below - Indirect prompt injection vulnerability via trusted sites in Browsing Context, which involves asking ChatGPT to summarize the contents of web pages with malicious instructions added...
Google’s AI ‘Big Sleep’ Finds 5 New Vulnerabilities in Apple’s Safari WebKit

Google's AI 'Big Sleep' Finds 5 New Vulnerabilities in Apple's Safari WebKit

Nov 04, 2025 Artificial Intelligence / Vulnerability
Google's artificial intelligence (AI)-powered cybersecurity agent called Big Sleep has been credited by Apple for discovering as many as five different security flaws in the WebKit component used in its Safari web browser that, if successfully exploited, could result in a browser crash or memory corruption. The list of vulnerabilities is as follows - CVE-2025-43429 - A buffer overflow vulnerability that may lead to an unexpected process crash when processing maliciously crafted web content (addressed through improved bounds checking) CVE-2025-43430 - An unspecified vulnerability that could result in an unexpected process crash when processing maliciously crafted web content (addressed through improved state management) CVE-2025-43431 & CVE-2025-43433 - Two unspecified vulnerabilities that may lead to memory corruption when processing maliciously crafted web content (addressed through improved memory handling) CVE-2025-43434 - A use-after-free vulnerability that may ...
OpenAI Unveils Aardvark: GPT-5 Agent That Finds and Fixes Code Flaws Automatically

OpenAI Unveils Aardvark: GPT-5 Agent That Finds and Fixes Code Flaws Automatically

Oct 31, 2025 Artificial Intelligence / Code Security
OpenAI has announced the launch of an "agentic security researcher" that's powered by its GPT-5 large language model (LLM) and is programmed to emulate a human expert capable of scanning, understanding, and patching code. Called Aardvark , the artificial intelligence (AI) company said the autonomous agent is designed to help developers and security teams flag and fix security vulnerabilities at scale. It's currently available in private beta. "Aardvark continuously analyzes source code repositories to identify vulnerabilities, assess exploitability, prioritize severity, and propose targeted patches," OpenAI noted . It works by embedding itself into the software development pipeline, monitoring commits and changes to codebases, detecting security issues and how they might be exploited, and proposing fixes to address them using LLM-based reasoning and tool-use. Powering the agent is GPT‑5 , which OpenAI introduced in August 2025. The company describes it...
Google's Built-In AI Defenses on Android Now Block 10 Billion Scam Messages a Month

Google's Built-In AI Defenses on Android Now Block 10 Billion Scam Messages a Month

Oct 30, 2025 Mobile Security / Artificial Intelligence
Google on Thursday revealed that the scam defenses built into Android safeguard users around the world from more than 10 billion suspected malicious calls and messages every month. The tech giant also said it has blocked over 100 million suspicious numbers from using Rich Communication Services (RCS), an evolution of the SMS protocol, thereby preventing scams before they could even be sent. In recent years, the company has adopted various safeguards to combat phone call scams and automatically filter known spam using on-device artificial intelligence and move them automatically to the "spam & blocked" folder in the Google Messages app for Android. Earlier this month, Google also globally rolled out safer links in Google Messages, warning users when they attempt to click on any URLs in a message flagged as spam and step them visiting the potentially harmful website, unless the message is marked as "not spam." Google said its analysis of user-submitted rep...
c
Expert Insights Articles Videos
Cybersecurity Resources