#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News

artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
ThreatsDay Bulletin: Pixel Zero-Click, Redis RCE, China C2s, RAT Ads, Crypto Scams & 15+ Stories

ThreatsDay Bulletin: Pixel Zero-Click, Redis RCE, China C2s, RAT Ads, Crypto Scams & 15+ Stories

Jan 22, 2026 Cybersecurity / Hacking News
Most of this week's threats didn't rely on new tricks. They relied on familiar systems behaving exactly as designed, just in the wrong hands. Ordinary files, routine services, and trusted workflows were enough to open doors without forcing them. What stands out is how little friction attackers now need. Some activity focused on quiet reach and coverage, others on timing and reuse. The emphasis wasn't speed or spectacle, but control gained through scale, patience, and misplaced trust. The stories below trace where that trust bent, not how it broke. Each item is a small signal of a larger shift, best seen when viewed together. Spear-phishing delivers custom backdoor Operation Nomad Leopard Targets Afghanistan Government entities in Afghanistan have been at the receiving end of a spear-phishing campaign dubbed Operation Nomad Leopard that employs bogus administrative documents as decoys to distribute a backdoor named FALSECUB by means o...
Webinar: How Smart MSSPs Using AI to Boost Margins with Half the Staff

Webinar: How Smart MSSPs Using AI to Boost Margins with Half the Staff

Jan 21, 2026 Artificial Intelligence / Automation
Every managed security provider is chasing the same problem in 2026 — too many alerts, too few analysts, and clients demanding "CISO-level protection" at SMB budgets. The truth? Most MSSPs are running harder, not smarter. And it's breaking their margins. That's where the quiet revolution is happening: AI isn't just writing reports or surfacing risks — it's rebuilding how security services are delivered . The Shift Until now, MSSPs scaled by adding people. Each new client meant another analyst, another spreadsheet, another late-night ticket queue. AI automation flips that model. It handles assessments, benchmarking, and reporting in minutes — freeing your team to focus on strategy, not data entry. Early adopters are already seeing double-digit margin gains and faster onboarding cycles — without increasing headcount. Real Proof — Not Theory When Chad Robinson , CISO at Secure Cyber Defense, applied Cynomi's AI platform, his team stopped drowning in manual checklists. He didn't ju...
Chainlit AI Framework Flaws Enable Data Theft via File Read and SSRF Bugs

Chainlit AI Framework Flaws Enable Data Theft via File Read and SSRF Bugs

Jan 21, 2026 Vulnerability / Artificial Intelligence
Security vulnerabilities were uncovered in the popular open-source artificial intelligence (AI) framework Chainlit that could allow attackers to steal sensitive data, which may allow for lateral movement within a susceptible organization. Zafran Security said the high-severity flaws, collectively dubbed ChainLeak , could be abused to leak cloud environment API keys and steal sensitive files, or perform server-side request forgery (SSRF) attacks against servers hosting AI applications. Chainlit is a framework for creating conversational chatbots. According to statistics shared by the Python Software Foundation, the package has been downloaded over 220,000 times over the past week. It has attracted a total of 7.3 million downloads to date. Details of the two vulnerabilities are as follows - CVE-2026-22218 (CVSS score: 7.1) - An arbitrary file read vulnerability in the "/project/element" update flow that allows an authenticated attacker to access the contents of any ...
cyber security

2025 Cloud Security Risk Report

websiteSentinelOneCloud Security / Artificial Intelligence
Learn 5 key risks to cloud security such as cloud credential theft, lateral movements, AI services, and more.
cyber security

Most AI Risk Isn't in Models, It's in Your SaaS Stack

websiteRecoAI Security / (SaaS Security
Your models aren't the problem. The sprawl of your SaaS apps, AI and agents are. Here's where to start.
VoidLink Linux Malware Framework Built with AI Assistance Reaches 88,000 Lines of Code

VoidLink Linux Malware Framework Built with AI Assistance Reaches 88,000 Lines of Code

Jan 21, 2026 Artificial Intelligence / Cybercrime
The recently discovered sophisticated Linux malware framework known as VoidLink is assessed to have been developed by a single person with assistance from an artificial intelligence (AI) model. That's according to new findings from Check Point Research, which identified operational security blunders by malware's author that provided clues to its developmental origins. The latest insight makes VoidLink one of the first instances of an advanced malware largely generated using AI. "These materials provide clear evidence that the malware was produced predominantly through AI-driven development, reaching a first functional implant in under a week," the cybersecurity company said, adding it reached more than 88,000 lines of code by early December 2025. VoidLink, first publicly documented last week, is a feature-rich malware framework written in Zig that's specifically designed for long-term, stealthy access to Linux-based cloud environments. The malware is said...
Three Flaws in Anthropic MCP Git Server Enable File Access and Code Execution

Three Flaws in Anthropic MCP Git Server Enable File Access and Code Execution

Jan 20, 2026 Vulnerability / Artificial Intelligence
A set of three security vulnerabilities has been disclosed in mcp-server-git , the official Git Model Context Protocol ( MCP ) server maintained by Anthropic, that could be exploited to read or delete arbitrary files and execute code under certain conditions. "These flaws can be exploited through prompt injection, meaning an attacker who can influence what an AI assistant reads (a malicious README, a poisoned issue description, a compromised webpage) can weaponize these vulnerabilities without any direct access to the victim's system," Cyata researcher Yarden Porat said in a report shared with The Hacker News. Mcp-server-git is a Python package and an MCP server that provides a set of built-in tools to read, search, and manipulate Git repositories programmatically via large language models (LLMs). The security issues, which have been addressed in versions 2025.9.25 and 2025.12.18 following responsible disclosure in June 2025, are listed below - CVE-2025-68143 (CV...
Tudou Guarantee Marketplace Halts Telegram Transactions After Processing Over $12 Billion

Tudou Guarantee Marketplace Halts Telegram Transactions After Processing Over $12 Billion

Jan 20, 2026 Cryptocurrency / Artificial Intelligence
A Telegram-based guarantee marketplace known for advertising a broad range of illicit services appears to be winding down its operations, according to new findings from Elliptic. The blockchain intelligence company said Tudou Guarantee has effectively ceased transactions through its public Telegram groups following a period of significant growth. The marketplace is estimated to have processed over $12 billion in transactions, making it the third-largest illicit marketplace of all time. "Other parts of Tudou Guarantee, such as its gambling operations, continue to function, so it remains to be seen whether this represents the first stages of a full shutdown or a pivot away from fraud-related activity," the company said . Tudou Guarantee is just one of the many Telegram-based marketplaces serving cyber fraudsters, the others being HuiOne Guarantee and Xinbi Guarantee , which collectively engaged in over $35 billion in USDT transactions. Thousands of channels associated with...
Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites

Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites

Jan 19, 2026 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed details of a security flaw that leverages indirect prompt injection targeting Google Gemini as a way to bypass authorization guardrails and use Google Calendar as a data extraction mechanism. The vulnerability, Miggo Security's Head of Research, Liad Eliyahu, said, made it possible to circumvent Google Calendar's privacy controls by hiding a dormant malicious payload within a standard calendar invite. "This bypass enabled unauthorized access to private meeting data and the creation of deceptive calendar events without any direct user interaction," Eliyahu said in a report shared with The Hacker News. The starting point of the attack chain is a new calendar event that's crafted by the threat actor and sent to a target. The invite's description embeds a natural language prompt that's designed to do their bidding, resulting in a prompt injection. The attack gets activated when a user asks Gemini a completely inno...
OpenAI to Show Ads in ChatGPT for Logged-In U.S. Adults on Free and Go Plans

OpenAI to Show Ads in ChatGPT for Logged-In U.S. Adults on Free and Go Plans

Jan 17, 2026 Artificial Intelligence / Data Privacy
OpenAI on Friday said it would start showing ads in ChatGPT to logged-in adult U.S. users in both the free and ChatGPT Go tiers in the coming weeks, as the artificial intelligence (AI) company expanded access to its low-cost subscription globally. "You need to know that your data and conversations are protected and never sold to advertisers," OpenAI said . "And we need to keep a high bar and give you control over your experience, so you see truly relevant, high-quality ads—and can turn off personalization if you want." The company has positioned advertising as a way to ensure that the benefits of artificial general intelligence – a term used to describe a stage in machine learning when an AI system can reach or surpass human-level intelligence – can be made more accessible to the masses. In addition, it can be "transformative" for small businesses and emerging brands trying to compete, it added. It also emphasized that ads do not influence responses...
Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Jan 15, 2026 Prompt Injection / Enterprise Security
Cybersecurity researchers have disclosed details of a new attack method dubbed Reprompt that could allow bad actors to exfiltrate sensitive data from artificial intelligence (AI) chatbots like Microsoft Copilot in a single click, while bypassing enterprise security controls entirely. "Only a single click on a legitimate Microsoft link is required to compromise victims," Varonis security researcher Dolev Taler said in a report published Wednesday. "No plugins, no user interaction with Copilot." "The attacker maintains control even when the Copilot chat is closed, allowing the victim's session to be silently exfiltrated with no interaction beyond that first click." Following responsible disclosure, Microsoft has addressed the security issue. The attack does not affect enterprise customers using Microsoft 365 Copilot. At a high level, Reprompt employs three techniques to achieve a data‑exfiltration chain - Using the "q" URL parameter in...
ThreatsDay Bulletin: AI Voice Cloning Exploit, Wi-Fi Kill Switch, PLC Vulns, and 14 More Stories

ThreatsDay Bulletin: AI Voice Cloning Exploit, Wi-Fi Kill Switch, PLC Vulns, and 14 More Stories

Jan 15, 2026 Cybersecurity / Hacking News
The internet never stays quiet. Every week, new hacks, scams, and security problems show up somewhere. This week's stories show how fast attackers change their tricks, how small mistakes turn into big risks, and how the same old tools keep finding new ways to break in. Read on to catch up before the next wave hits. Unauthenticated RCE risk Security Flaw in Redis A high-severity security flaw has been disclosed in Redis (CVE-2025-62507, CVSS score: 8.8) that could potentially lead to remote code execution by means of a stack buffer overflow. It was fixed in version 8.3.2. JFrog's analysis of the flaw has revealed that the vulnerability is triggered when using the new Redis 8.2 XACKDEL command, which was introduced to simplify and optimize stream cleanup. Specifically, it resides in the implementation of xackdelCommand(), a function responsible for parsing and processing the list of stream IDs supplied by the user. "The core ...
Model Security Is the Wrong Frame – The Real Risk Is Workflow Security

Model Security Is the Wrong Frame – The Real Risk Is Workflow Security

Jan 15, 2026 Data Security / Artificial Intelligence
As AI copilots and assistants become embedded in daily work, security teams are still focused on protecting the models themselves. But recent incidents suggest the bigger risk lies elsewhere: in the workflows that surround those models. Two Chrome extensions posing as AI helpers were recently caught stealing ChatGPT and DeepSeek chat data from over 900,000 users. Separately, researchers demonstrated how prompt injections hidden in code repositories could trick IBM's AI coding assistant into executing malware on a developer's machine. Neither attack broke the AI algorithms themselves.  They exploited the context in which the AI operates. That's the pattern worth paying attention to. When AI systems are embedded in real business processes, summarizing documents, drafting emails, and pulling data from internal tools, securing the model alone isn't enough. The workflow itself becomes the target. AI Models Are Becoming Workflow Engines To understand why this matters,...
Microsoft Legal Action Disrupts RedVDS Cybercrime Infrastructure Used for Online Fraud

Microsoft Legal Action Disrupts RedVDS Cybercrime Infrastructure Used for Online Fraud

Jan 15, 2026 Cybercrime / Artificial Intelligence
Microsoft on Wednesday announced that it has taken a " coordinated legal action " in the U.S. and the U.K. to disrupt a cybercrime subscription service called RedVDS that has allegedly fueled millions in fraud losses. The effort, per the tech giant, is part of a broader law enforcement effort in collaboration with law enforcement authorities that has allowed it to confiscate the malicious infrastructure and take the illegal service (redvds[.]com, redvds[.]pro, and vdspanel[.]space) offline. "For as little as US $24 a month, RedVDS provides criminals with access to disposable virtual computers that make fraud cheap, scalable, and difficult to trace," said Steven Masada, assistant general counsel of Microsoft's Digital Crimes Unit. "Since March 2025, RedVDS‑enabled activity has driven roughly US $40 million in reported fraud losses in the United States alone." Crimeware-as-a-service (CaaS) offerings have increasingly become a lucrative business mod...
AI Agents Are Becoming Authorization Bypass Paths

AI Agents Are Becoming Authorization Bypass Paths

Jan 14, 2026 Artificial Intelligence / SaaS Security
Not long ago, AI agents were harmless. They wrote snippets of code. They answered questions. They helped individuals move a little faster. Then organizations got ambitious. Instead of personal copilots, companies started deploying shared organizational AI agents - agents embedded into HR, IT, engineering, customer support, and operations. Agents that don't just suggest, but act. Agents that touch real systems, change real configurations, and move real data: An HR agent who provisions and deprovisions access across IAM, SaaS apps, VPNs, and cloud platforms. A change management agent that approves requests, updates production configs, logs actions in ServiceNow, and updates Confluence. A support agent that pulls customer data from CRM, checks billing status, triggers backend fixes, and updates tickets automatically. These agents warrant deliberate control and oversight. They're now part of our operational infrastructure. And to make them useful, we made them powerful ...
[Webinar] Securing Agentic AI: From MCPs and Tool Access to Shadow API Key Sprawl

[Webinar] Securing Agentic AI: From MCPs and Tool Access to Shadow API Key Sprawl

Jan 13, 2026 Artificial Intelligence / Automation Security
AI agents are no longer just writing code. They are executing it. Tools like Copilot, Claude Code, and Codex can now build, test, and deploy software end-to-end in minutes. That speed is reshaping engineering—but it's also creating a security gap most teams don't see until something breaks. Behind every agentic workflow sits a layer few organizations are actively securing: Machine Control Protocols (MCPs) . These systems quietly decide what an AI agent can run, which tools it can call, which APIs it can access, and what infrastructure it can touch. Once that control plane is compromised or misconfigured, the agent doesn't just make mistakes—it acts with authority. Ask the teams impacted by CVE-2025-6514 . One flaw turned a trusted OAuth proxy used by more than 500,000 developers into a remote code execution path. No exotic exploit chain. No noisy breach. Just automation doing exactly what it was allowed to do—at scale. That incident made one thing clear: if an AI agent can execute...
ServiceNow Patches Critical AI Platform Flaw Allowing Unauthenticated User Impersonation

ServiceNow Patches Critical AI Platform Flaw Allowing Unauthenticated User Impersonation

Jan 13, 2026 Vulnerability / SaaS Security
ServiceNow has disclosed details of a now-patched critical security flaw impacting its ServiceNow artificial intelligence (AI) Platform that could enable an unauthenticated user to impersonate another user and perform arbitrary actions as that user. The vulnerability, tracked as CVE-2025-12420 , carries a CVSS score of 9.3 out of 10.0. It has been codenamed BodySnatcher by AppOmni. "This issue [...] could enable an unauthenticated user to impersonate another user and perform the operations that the impersonated user is entitled to perform," the company said in an advisory released Monday. The shortcoming was addressed by ServiceNow on October 30, 2025, by deploying a security update to the majority of hosted instances, with the company also sharing the patches with ServiceNow partners and self-hosted customers. The following versions include a fix for CVE-2025-12420 - Now Assist AI Agents (sn_aia) - 5.1.18 or later and 5.2.19 or later Virtual Agent API (sn_va_as_ser...
GoBruteforcer Botnet Targets Crypto Project Databases by Exploiting Weak Credentials

GoBruteforcer Botnet Targets Crypto Project Databases by Exploiting Weak Credentials

Jan 12, 2026 Cryptocurrency / Artificial Intelligence
A new wave of GoBruteforcer attacks has targeted databases of cryptocurrency and blockchain projects to co-opt them into a botnet that's capable of brute-forcing user passwords for services such as FTP, MySQL, PostgreSQL, and phpMyAdmin on Linux servers. "The current wave of campaigns is driven by two factors: the mass reuse of AI-generated server deployment examples that propagate common usernames and weak defaults, and the persistence of legacy web stacks such as XAMPP that expose FTP and admin interfaces with minimal hardening," Check Point Research said in an analysis published last week. GoBruteforcer, also called GoBrut, was first documented by Palo Alto Networks Unit 42 in March 2023, documenting its ability to target Unix-like platforms running x86, x64, and ARM architectures to deploy an Internet Relay Chat (IRC) bot and a web shell for remote access, along with fetching a brute-force module to scan for vulnerable systems and expand the botnet's reach. ...
Expert Insights Articles Videos
Cybersecurity Resources