#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Salesforce Security Handbook

artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
Researchers Find Serious AI Bugs Exposing Meta, Nvidia, and Microsoft Inference Frameworks

Researchers Find Serious AI Bugs Exposing Meta, Nvidia, and Microsoft Inference Frameworks

Nov 14, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have uncovered critical remote code execution vulnerabilities impacting major artificial intelligence (AI) inference engines, including those from Meta, Nvidia, Microsoft, and open-source PyTorch projects such as vLLM and SGLang. "These vulnerabilities all traced back to the same root cause: the overlooked unsafe use of ZeroMQ (ZMQ) and Python's pickle deserialization," Oligo Security researcher Avi Lumelsky said in a report published Thursday. At its core, the issue stems from what has been described as a pattern called ShadowMQ , in which the insecure deserialization logic has propagated to several projects as a result of code reuse. The root cause is a vulnerability in Meta's Llama large language model (LLM) framework ( CVE-2024-50050 , CVSS score: 6.3/9.3) that was patched by the company last October. Specifically, it involved the use of ZeroMQ's recv_pyobj() method to deserialize incoming data using Python's pickle module. ...
Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign

Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign

Nov 14, 2025 Cyber Espionage / AI Security
State-sponsored threat actors from China used artificial intelligence (AI) technology developed by Anthropic to orchestrate automated cyber attacks as part of a "highly sophisticated espionage campaign" in mid-September 2025. "The attackers used AI's 'agentic' capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves," the AI upstart said . The activity is assessed to have manipulated Claude Code, Anthropic's AI coding tool, to attempt to break into about 30 global targets spanning large tech companies, financial institutions, chemical manufacturing companies, and government agencies. A subset of these intrusions succeeded. Anthropic has since banned the relevant accounts and enforced defensive mechanisms to flag such attacks. The campaign, GTG-1002, marks the first time a threat actor has leveraged AI to conduct a "large-scale cyber attack" without major human intervention an...
Google Launches 'Private AI Compute' — Secure AI Processing with On-Device-Level Privacy

Google Launches 'Private AI Compute' — Secure AI Processing with On-Device-Level Privacy

Nov 12, 2025 Artificial Intelligence / Encryption
Google on Tuesday unveiled a new privacy-enhancing technology called Private AI Compute to process artificial intelligence (AI) queries in a secure platform in the cloud. The company said it has built Private AI Compute to "unlock the full speed and power of Gemini cloud models for AI experiences, while ensuring your personal data stays private to you and is not accessible to anyone else, not even Google." Private AI Compute has been described as a "secure, fortified space" for processing sensitive user data in a manner that's analogous to on-device processing but with extended AI capabilities. It's powered by Trillium Tensor Processing Units (TPUs) and Titanium Intelligence Enclaves (TIE), allowing the company to use its frontier models without sacrificing on security and privacy. In other words, the privacy infrastructure is designed to take advantage of the computational speed and power of the cloud while retaining the security and privacy assuran...
cyber security

7 Security Best Practices for MCP

websiteWizMCP Security / Cloud Security
Learn what security teams are doing to secure their AI integrations without slowing innovation. This cheat sheet outlines 7 best practices you can start using today.
cyber security

2025 Gartner® MQ Report for Endpoint Protection Platforms (July 2025 Edition)

websiteSentinelOneEndpoint Protection / Unified Security
Compare leading Endpoint Protection vendors and see why SentinelOne is named a 5x Leader.
New Browser Security Report Reveals Emerging Threats for Enterprises

New Browser Security Report Reveals Emerging Threats for Enterprises

Nov 10, 2025 Browser Security / Enterprise Security
According to the new Browser Security Report 2025 , security leaders are discovering that most identity, SaaS, and AI-related risks converge in a single place, the user's browser. Yet traditional controls like DLP, EDR, and SSE still operate one layer too low. What's emerging isn't just a blindspot. It's a parallel threat surface: unmanaged extensions acting like supply chain implants, GenAI tools accessed through personal accounts, sensitive data copy/pasted directly into prompt fields, and sessions that bypass SSO altogether. This article unpacks the key findings from the report and what they reveal about the shifting locus of control in enterprise security. GenAI Is Now the Top Data Exfiltration Channel The rise of GenAI in enterprise workflows has created a massive governance gap. Nearly half of employees use GenAI tools, but most do so through unmanaged accounts, outside of IT visibility. Key stats from the report: 77% of employees paste data into GenAI prompts 82% of...
Microsoft Uncovers 'Whisper Leak' Attack That Identifies AI Chat Topics in Encrypted Traffic

Microsoft Uncovers 'Whisper Leak' Attack That Identifies AI Chat Topics in Encrypted Traffic

Nov 08, 2025 Network Security / Data Protection
Microsoft has disclosed details of a novel side-channel attack targeting remote language models that could enable a passive adversary with capabilities to observe network traffic to glean details about model conversation topics despite encryption protections under certain circumstances. This leakage of data exchanged between humans and streaming-mode language models could pose serious risks to the privacy of user and enterprise communications, the company noted. The attack has been codenamed Whisper Leak . "Cyber attackers in a position to observe the encrypted traffic (for example, a nation-state actor at the internet service provider layer, someone on the local network, or someone connected to the same Wi-Fi router) could use this cyber attack to infer if the user's prompt is on a specific topic," security researchers Jonathan Bar Or and Geoff McDonald, along with the Microsoft Defender Security Research Team, said . Put differently, the attack allows an attacker t...
Vibe-Coded Malicious VS Code Extension Found with Built-In Ransomware Capabilities

Vibe-Coded Malicious VS Code Extension Found with Built-In Ransomware Capabilities

Nov 07, 2025 Supply Chain Attack / Malware
Cybersecurity researchers have flagged a malicious Visual Studio Code (VS Code) extension with basic ransomware capabilities that appears to be created with the help of artificial intelligence – in other words, vibe-coded. Secure Annex researcher John Tuckner, who flagged the extension " susvsex ," said it does not attempt to hide its malicious functionality. The extension was uploaded on November 5, 2025, by a user named "suspublisher18" along with the description "Just testing" and the email address "donotsupport@example[.]com." "Automatically zips, uploads, and encrypts files from C:\Users\Public\testing (Windows) or /tmp/testing (macOS) on first launch," reads the description of the extension. As of November 6, Microsoft has stepped in to remove it from the official VS Code Extension Marketplace.  According to details shared by "suspublisher18," the extension is designed to automatically activate itself on any even...
Google Uncovers PROMPTFLUX Malware That Uses Gemini AI to Rewrite Its Code Hourly

Google Uncovers PROMPTFLUX Malware That Uses Gemini AI to Rewrite Its Code Hourly

Nov 05, 2025 Artificial Intelligence / Threat Intelligence
Google on Wednesday said it discovered an unknown threat actor using an experimental Visual Basic Script (VB Script) malware dubbed PROMPTFLUX that interacts with its Gemini artificial intelligence (AI) model API to write its own source code for improved obfuscation and evasion. "PROMPTFLUX is written in VB Script and interacts with Gemini's API to request specific VBScript obfuscation and evasion techniques to facilitate 'just-in-time' self-modification, likely to evade static signature-based detection," Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News. The novel feature is part of its "Thinking Robot" component, which periodically queries the large language model (LLM), Gemini 1.5 Flash or later in this case, to obtain new code so as to sidestep detection. This, in turn, is accomplished by using a hard-coded API key to send the query to the Gemini API endpoint. The prompt sent to the model is both highly speci...
Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Nov 05, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI's ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users' memories and chat histories without their knowledge. The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI's GPT-4o and GPT-5 models. OpenAI has since addressed some of them .  These issues expose the AI system to indirect prompt injection attacks , allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News. The identified shortcomings are listed below - Indirect prompt injection vulnerability via trusted sites in Browsing Context, which involves asking ChatGPT to summarize the contents of web pages with malicious instructions added...
Google’s AI ‘Big Sleep’ Finds 5 New Vulnerabilities in Apple’s Safari WebKit

Google's AI 'Big Sleep' Finds 5 New Vulnerabilities in Apple's Safari WebKit

Nov 04, 2025 Artificial Intelligence / Vulnerability
Google's artificial intelligence (AI)-powered cybersecurity agent called Big Sleep has been credited by Apple for discovering as many as five different security flaws in the WebKit component used in its Safari web browser that, if successfully exploited, could result in a browser crash or memory corruption. The list of vulnerabilities is as follows - CVE-2025-43429 - A buffer overflow vulnerability that may lead to an unexpected process crash when processing maliciously crafted web content (addressed through improved bounds checking) CVE-2025-43430 - An unspecified vulnerability that could result in an unexpected process crash when processing maliciously crafted web content (addressed through improved state management) CVE-2025-43431 & CVE-2025-43433 - Two unspecified vulnerabilities that may lead to memory corruption when processing maliciously crafted web content (addressed through improved memory handling) CVE-2025-43434 - A use-after-free vulnerability that may ...
OpenAI Unveils Aardvark: GPT-5 Agent That Finds and Fixes Code Flaws Automatically

OpenAI Unveils Aardvark: GPT-5 Agent That Finds and Fixes Code Flaws Automatically

Oct 31, 2025 Artificial Intelligence / Code Security
OpenAI has announced the launch of an "agentic security researcher" that's powered by its GPT-5 large language model (LLM) and is programmed to emulate a human expert capable of scanning, understanding, and patching code. Called Aardvark , the artificial intelligence (AI) company said the autonomous agent is designed to help developers and security teams flag and fix security vulnerabilities at scale. It's currently available in private beta. "Aardvark continuously analyzes source code repositories to identify vulnerabilities, assess exploitability, prioritize severity, and propose targeted patches," OpenAI noted . It works by embedding itself into the software development pipeline, monitoring commits and changes to codebases, detecting security issues and how they might be exploited, and proposing fixes to address them using LLM-based reasoning and tool-use. Powering the agent is GPT‑5 , which OpenAI introduced in August 2025. The company describes it...
Google's Built-In AI Defenses on Android Now Block 10 Billion Scam Messages a Month

Google's Built-In AI Defenses on Android Now Block 10 Billion Scam Messages a Month

Oct 30, 2025 Mobile Security / Artificial Intelligence
Google on Thursday revealed that the scam defenses built into Android safeguard users around the world from more than 10 billion suspected malicious calls and messages every month. The tech giant also said it has blocked over 100 million suspicious numbers from using Rich Communication Services (RCS), an evolution of the SMS protocol, thereby preventing scams before they could even be sent. In recent years, the company has adopted various safeguards to combat phone call scams and automatically filter known spam using on-device artificial intelligence and move them automatically to the "spam & blocked" folder in the Google Messages app for Android. Earlier this month, Google also globally rolled out safer links in Google Messages, warning users when they attempt to click on any URLs in a message flagged as spam and step them visiting the potentially harmful website, unless the message is marked as "not spam." Google said its analysis of user-submitted rep...
New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts

New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts

Oct 29, 2025 Machine Learning / AI Safety
Cybersecurity researchers have flagged a new security issue in agentic web browsers like OpenAI ChatGPT Atlas that exposes underlying artificial intelligence (AI) models to context poisoning attacks. In the attack devised by AI security company SPLX, a bad actor can set up websites that serve different content to browsers and AI crawlers run by ChatGPT and Perplexity. The technique has been codenamed AI-targeted cloaking . The approach is a variation of search engine cloaking, which refers to the practice of presenting one version of a web page to users and a different version to search engine crawlers with the end goal of manipulating search rankings. The only difference in this case is that attackers optimize for AI crawlers from various providers by means of a trivial user agent check that leads to content delivery manipulation. "Because these systems rely on direct retrieval, whatever content is served to them becomes ground truth in AI Overviews, summaries, or autonom...
Discover Practical AI Tactics for GRC — Join the Free Expert Webinar

Discover Practical AI Tactics for GRC — Join the Free Expert Webinar

Oct 29, 2025 Artificial Intelligence / Compliance
Artificial Intelligence (AI) is rapidly transforming Governance, Risk, and Compliance (GRC) . It's no longer a future concept—it's here, and it's already reshaping how teams operate. AI's capabilities are profound: it's speeding up audits, flagging critical risks faster, and drastically cutting down on time-consuming manual work. This leads to greater efficiency, higher accuracy, and a more proactive GRC function. However, this powerful shift introduces significant new challenges. AI brings its own set of risks, including potential bias, dangerous blind spots, and regulatory gaps that are only beginning to be addressed by governing bodies. Staying ahead of this curve—not just struggling to keep up—requires clear, practical knowledge. Don't Just Stay Afloat—Master the Change To help you navigate this complex landscape, we invite you to our free, high-impact webinar, " The Future of AI in GRC: Opportunities, Risks, and Practical Insights . " This se...
Preparing for the Digital Battlefield of 2026: Ghost Identities, Poisoned Accounts, & AI Agent Havoc

Preparing for the Digital Battlefield of 2026: Ghost Identities, Poisoned Accounts, & AI Agent Havoc

Oct 29, 2025 Artificial Intelligence / Data Breach
BeyondTrust's annual cybersecurity predictions point to a year where old defenses will fail quietly, and new attack vectors will surge. Introduction The next major breach won't be a phished password. It will be the result of a massive, unmanaged identity debt. This debt takes many forms: it's the "ghost" identity from a 2015 breach lurking in your IAM, the privilege sprawl from thousands of new AI agents bloating your attack surface , or the automated account poisoning that exploits weak identity verification in financial systems. All of these vectors—physical, digital, new, and old—are converging on one single point of failure: identity .  Based on analysis from BeyondTrust's cybersecurity experts, here are three critical identity-based threats that will define the coming year:  1. Agentic AI Emerges as the Ultimate Attack Vector By 2026, agentic AI will be connected to nearly every technology we operate, effectively becoming the new middleware for most organizations. ...
New ChatGPT Atlas Browser Exploit Lets Attackers Plant Persistent Hidden Commands

New ChatGPT Atlas Browser Exploit Lets Attackers Plant Persistent Hidden Commands

Oct 27, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have discovered a new vulnerability in OpenAI's ChatGPT Atlas web browser that could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant's memory and run arbitrary code. "This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware," LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News. The attack, at its core, leverages a cross-site request forgery ( CSRF ) flaw that could be exploited to inject malicious instructions into ChatGPT's persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user's account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes. Memory, first introduced by OpenAI in February 2024, is...
Secure AI at Scale and Speed — Learn the Framework in this Free Webinar

Secure AI at Scale and Speed — Learn the Framework in this Free Webinar

Oct 23, 2025 Artificial Intelligence / Data Protection
AI is everywhere—and your company wants in. Faster products, smarter systems, fewer bottlenecks. But if you're in security, that excitement often comes with a sinking feeling. Because while everyone else is racing ahead, you're left trying to manage a growing web of AI agents you didn't create, can't fully see, and weren't designed to control. Join our upcoming webinar and learn how to make AI security work with you, not against you . The Quiet Crisis No One Talks About Did you know most companies now have 100 AI agents for every one human employee? Even more shocking? 99% of those AI identities are completely unmanaged. No oversight. No lifecycle controls. And every one of them could be a backdoor waiting to happen. It's not your fault. Traditional tools weren't built for this new AI world. But the risks are real—and growing. Let's Change That. Together. In our free webinar, " Turning Controls into Accelerators of AI Adoption ," we'll help you flip the script. Th...
Securing AI to Benefit from AI

Securing AI to Benefit from AI

Oct 21, 2025 Artificial Intelligence / Security Operations
Artificial intelligence (AI) holds tremendous promise for improving cyber defense and making the lives of security practitioners easier. It can help teams cut through alert fatigue, spot patterns faster, and bring a level of scale that human analysts alone can't match. But realizing that potential depends on securing the systems that make it possible. Every organization experimenting with AI in security operations is, knowingly or not, expanding its attack surface. Without clear governance, strong identity controls, and visibility into how AI makes its decisions, even well-intentioned deployments can create risk faster than they reduce it. To truly benefit from AI, defenders need to approach securing it with the same rigor they apply to any other critical system. That means establishing trust in the data it learns from, accountability for the actions it takes, and oversight for the outcomes it produces. When secured correctly, AI can amplify human capability instead of replacing it t...
Identity Security: Your First and Last Line of Defense

Identity Security: Your First and Last Line of Defense

Oct 17, 2025 Artificial Intelligence / Identity Security
The danger isn't that AI agents have bad days — it's that they never do. They execute faithfully, even when what they're executing is a mistake. A single misstep in logic or access can turn flawless automation into a flawless catastrophe. This isn't some dystopian fantasy—it's Tuesday at the office now. We've entered a new phase where autonomous AI agents act with serious system privileges. They execute code, handle complex tasks, and access sensitive data with unprecedented autonomy. They don't sleep, don't ask questions, and don't always wait for permission. That's powerful. That's also risky. Because today's enterprise threats go way beyond your garden-variety phishing scams and malware. The modern security perimeter? It's all about identity management. Here's the million-dollar question every CISO should be asking: Who or what has access to your critical systems, can you secure and govern that access, and can you actually prove it? Ho...
c
Expert Insights Articles Videos
Cybersecurity Resources