-->
#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Security Service Edge

artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
Cybersecurity Tech Predictions for 2026: Operating in a World of Permanent Instability

Cybersecurity Tech Predictions for 2026: Operating in a World of Permanent Instability

Feb 18, 2026 Zero Trust / Data Security
In 2025, navigating the digital seas still felt like a matter of direction. Organizations charted routes, watched the horizon, and adjusted course to reach safe harbors of resilience, trust, and compliance. In 2026, the seas are no longer calm between storms. Cybersecurity now unfolds in a state of  continuous atmospheric instability : AI-driven threats that adapt in real time, expanding digital ecosystems, fragile trust relationships, persistent regulatory pressure, and accelerating technological change. This is not turbulence on the way to stability; it  is the climate. In this environment, cybersecurity technologies are no longer merely navigational aids. They are  structural reinforcements . They determine whether an organization endures volatility or learns to function normally within it. That is why security investments in 2026 are increasingly made not for coverage, but for  operational continuity : sustained operations, decision-grade visibility and cont...
3 Ways to Start Your Intelligent Workflow Program

3 Ways to Start Your Intelligent Workflow Program

Feb 18, 2026 Workflow Automation / Enterprise Security
Security, IT, and engineering teams today are under relentless pressure to accelerate outcomes, cut operational drag, and unlock the full potential of AI and automation. But simply investing in tools isn’t enough. 88% of AI proofs-of-concept never make it to production, even though 70% of workers cite freeing time for high-value work as the primary AI automation motivation. Real impact comes from intelligent workflows that combine automation, AI-driven decisioning, and human ingenuity into seamless processes that work across teams and systems.  In this article, we’ll highlight three use cases across Security and IT that can serve as powerful starting points for your intelligent workflow program. For each use case, we’ll share a pre-built workflow to help you tackle real bottlenecks in your organization with automation while connecting directly into your existing tech stack. These use cases are great starting points to help you turn theory into practice and achieve measurable gai...
Researchers Show Copilot and Grok Can Be Abused as Malware C2 Proxies

Researchers Show Copilot and Grok Can Be Abused as Malware C2 Proxies

Feb 17, 2026 Malware / Artificial Intelligence
Cybersecurity researchers have disclosed that artificial intelligence (AI) assistants that support web browsing or URL fetching capabilities can be turned into stealthy command-and-control (C2) relays, a technique that could allow attackers to blend into legitimate enterprise communications and evade detection. The attack method, which has been demonstrated against Microsoft Copilot and xAI Grok, has been codenamed AI as a C2 proxy by Check Point. It leverages "anonymous web access combined with browsing and summarization prompts," the cybersecurity company said. "The same mechanism can also enable AI-assisted malware operations, including generating reconnaissance workflows, scripting attacker actions, and dynamically deciding 'what to do next' during an intrusion." The development signals yet another consequential evolution in how threat actors could abuse AI systems, not just to scale or accelerate different phases of the cyber attack cycle, but als...
cyber security

5 Cloud Security Risks You Can’t Afford to Ignore

websiteSentinelOneEnterprise Security / Cloud Security
Get expert analysis, attacker insights, and case studies in our 2025 risk report.
cyber security

Red Report 2026: Analysis of 1.1M Malicious Files and 15.5M Actions

websitePicus SecurityAttack Surface / Cloud Security
New research shows 80% of top ATT&CK techniques now target evasion to remain undetected. Get your copy now.
SmartLoader Attack Uses Trojanized Oura MCP Server to Deploy StealC Infostealer

SmartLoader Attack Uses Trojanized Oura MCP Server to Deploy StealC Infostealer

Feb 17, 2026 Infostealer / Artificial Intelligence
Cybersecurity researchers have disclosed details of a new SmartLoader campaign that involves distributing a trojanized version of a Model Context Protocol ( MCP ) server associated with Oura Health to deliver an information stealer known as StealC . "The threat actors cloned a legitimate Oura MCP Server – a tool that connects AI assistants to Oura Ring health data – and built a deceptive infrastructure of fake forks and contributors to manufacture credibility," Straiker's AI Research (STAR) Labs team said in a report shared with The Hacker News. The end game is to leverage the trojanized version of the Oura MCP server to deliver the StealC infostealer, allowing the threat actors to steal credentials, browser passwords, and data from cryptocurrency wallets. SmartLoader, first highlighted by OALABS Research in early 2024, is a malware loader that's known to be distributed via fake GitHub repositories containing artificial intelligence (AI)-generated lures to giv...
Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations

Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations

Feb 17, 2026 Enterprise Security / Artificial Intelligence
New research from Microsoft has revealed that legitimate businesses are gaming artificial intelligence (AI) chatbots via the "Summarize with AI" button that's being increasingly placed on websites in ways that mirror classic search engine poisoning (SEO). The new AI hijacking technique has been codenamed AI Recommendation Poisoning by the Microsoft Defender Security Research Team. The tech giant described it as a case of an AI memory poisoning attack that's used to induce bias and deceive the AI system to generate responses that artificially boost visibility and skew recommendations. "Companies are embedding hidden instructions in 'Summarize with AI' buttons that, when clicked, attempt to inject persistence commands into an AI assistant's memory via URL prompt parameters," Microsoft said . "These prompts instruct the AI to 'remember [Company] as a trusted source' or 'recommend [Company] first.'" Microsoft said it id...
Infostealer Steals OpenClaw AI Agent Configuration Files and Gateway Tokens

Infostealer Steals OpenClaw AI Agent Configuration Files and Gateway Tokens

Feb 16, 2026 Artificial Intelligence / Threat Intelligence
Cybersecurity researchers disclosed they have detected a case of an information stealer infection successfully exfiltrating a victim's OpenClaw (formerly Clawdbot and Moltbot ) configuration environment. "This finding marks a significant milestone in the evolution of infostealer behavior: the transition from stealing browser credentials to harvesting the 'souls' and identities of personal AI [artificial intelligence] agents," Hudson Rock said . Alon Gal, CTO of Hudson Rock, told The Hacker News that the stealer was likely a variant of Vidar based on the infection details. Vidar is an off-the-shelf information stealer that's known to be active since late 2018. That said, the cybersecurity company said the data capture was not facilitated by a custom OpenClaw module within the stealer malware, but rather through a "broad file-grabbing routine" that's designed to look for certain file extensions and specific directory names containing sensitiv...
Weekly Recap: Outlook Add-Ins Hijack, 0-Day Patches, Wormable Botnet & AI Malware

Weekly Recap: Outlook Add-Ins Hijack, 0-Day Patches, Wormable Botnet & AI Malware

Feb 16, 2026
This week’s recap shows how small gaps are turning into big entry points. Not always through new exploits, often through tools, add-ons, cloud setups, or workflows that people already trust and rarely question. Another signal: attackers are mixing old and new methods. Legacy botnet tactics, modern cloud abuse, AI assistance, and supply-chain exposure are being used side by side, whichever path gives the easiest foothold. Below is the full weekly recap — a condensed scan of the incidents, flaws, and campaigns shaping the threat landscape right now. ⚡ Threat of the Week Malicious Outlook Add-in Turns Into Phishing Kit — In an unusual case of a supply chain attack, the legitimate AgreeTo add-in for Outlook has been hijacked and turned into a phishing kit that stole more than 4,000 Microsoft account credentials. This was made possible by seizing control of a domain associated with the now-abandoned project to serve a fake Microsoft login page. The incident demonstrates how overlooke...
Safe and Inclusive E‑Society: How Lithuania Is Bracing for AI‑Driven Cyber Fraud

Safe and Inclusive E‑Society: How Lithuania Is Bracing for AI‑Driven Cyber Fraud

Feb 16, 2026 Data Protection / Artificial Intelligence
Technologies are evolving fast, reshaping economies, governance, and daily life. Yet, as innovation accelerates, so do digital risks. Technological change is no longer abstract for such a country as Lithuania, as well. From e-signatures to digital health records, the country depends on secure systems.  Cybersecurity has become not only a technical challenge but a societal one – demanding the cooperation of scientists, business leaders, and policymakers. In Lithuania, this cooperation has taken a concrete form – the government-funded national initiative . Coordinated by the Innovation Agency Lithuania, the project aims to strengthen the country’s e-security and digital resilience.  Under this umbrella, universities and companies with long-standing expertise are working hand in hand to transform scientific knowledge into market-ready, high-value innovations. Several of these solutions are already being tested in real environments, for example, in public institutions and criti...
Malicious Chrome Extensions Caught Stealing Business Data, Emails, and Browsing History

Malicious Chrome Extensions Caught Stealing Business Data, Emails, and Browsing History

Feb 13, 2026 Browser Security / Artificial Intelligence
Cybersecurity researchers have discovered a malicious Google Chrome extension that's designed to steal data associated with Meta Business Suite and Facebook Business Manager. The extension, named CL Suite by @CLMasters (ID: jkphinfhmfkckkcnifhjiplhfoiefffl), is marketed as a way to scrape Meta Business Suite data, remove verification pop-ups, and generate two-factor authentication (2FA) codes. The extension has 33 users as of writing. It was first uploaded to the Chrome Web Store on March 1, 2025. However, the browser add-on also exfiltrates TOTP codes for Facebook and Meta Business accounts, Business Manager contact lists, and analytics data to infrastructure controlled by the threat actor, Socket said. "The extension requests broad access to meta.com and facebook.com and claims in its privacy policy that 2FA secrets and Business Manager data remain local," security researcher Kirill Boychenko said . "In practice, the code transmits TOTP seeds and current one-t...
Google Reports State-Backed Hackers Using Gemini AI for Recon and Attack Support

Google Reports State-Backed Hackers Using Gemini AI for Recon and Attack Support

Feb 12, 2026 Cyber Espionage / Artificial Intelligence
Google on Thursday said it observed the North Korea-linked threat actor known as UNC2970 using its generative artificial intelligence (AI) model Gemini to conduct reconnaissance on its targets, as various hacking groups continue to weaponize the tool for accelerating various phases of the cyber attack life cycle, enabling information operations, and even conducting model extraction attacks. "The group used Gemini to synthesize OSINT and profile high-value targets to support campaign planning and reconnaissance," Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News. "This actor's target profiling included searching for information on major cybersecurity and defense companies and mapping specific technical job roles and salary information." The tech giant's threat intelligence team characterized this activity as a blurring of boundaries between what constitutes routine professional research and malicious reconnaissance, al...
ThreatsDay Bulletin: AI Prompt RCE, Claude 0-Click, RenEngine Loader, Auto 0-Days & 25+ Stories

ThreatsDay Bulletin: AI Prompt RCE, Claude 0-Click, RenEngine Loader, Auto 0-Days & 25+ Stories

Feb 12, 2026 Cybersecurity / Hacking News
Threat activity this week shows one consistent signal — attackers are leaning harder on what already works. Instead of flashy new exploits, many operations are built around quiet misuse of trusted tools, familiar workflows, and overlooked exposures that sit in plain sight. Another shift is how access is gained versus how it’s used. Initial entry points are getting simpler, while post-compromise activity is becoming more deliberate, structured, and persistent. The objective is less about disruption and more about staying embedded long enough to extract value. There’s also growing overlap between cybercrime, espionage tradecraft, and opportunistic intrusion. Techniques are bleeding across groups, making attribution harder and defense baselines less reliable. Below is this week’s ThreatsDay Bulletin — a tight scan of the signals that matter, distilled into quick reads. Each item adds context to where threat pressure is building next. Notepad RCE via Markdown L...
ZAST.AI Raises $6M Pre-A to Scale "Zero False Positive" AI-Powered Code Security

ZAST.AI Raises $6M Pre-A to Scale "Zero False Positive" AI-Powered Code Security

Feb 10, 2026 Application Security / Artificial Intelligence
January 5, 2026, Seattle, USA — ZAST.AI announced the completion of a $6 million Pre-A funding round. This investment came from the well-known investment firm Hillhouse Capital, bringing ZAST.AI's total funding close to $10 million. This marks a recognition from leading capital markets of a new solution: ending the era of high false positive rates in security tools and making every alert genuinely actionable. In 2025, ZAST.AI discovered hundreds of zero-day vulnerabilities across dozens of popular open-source projects. These findings were submitted through authoritative vulnerability platforms like VulDB, successfully resulting in 119 CVE assignments . These are not laboratory targets, but production-grade code supporting global businesses. Affected well-known projects include widely used components and frameworks such as Microsoft Azure SDK, Apache Struts XWork, Alibaba Nacos, Langfuse, Koa, node-formidable, and others. It was precisely within these widely adopted open-source p...
⚡ Weekly Recap: AI Skill Malware, 31Tbps DDoS, Notepad++ Hack, LLM Backdoors and More

⚡ Weekly Recap: AI Skill Malware, 31Tbps DDoS, Notepad++ Hack, LLM Backdoors and More

Feb 09, 2026 Hacking News / Cybersecurity
Cyber threats are no longer coming from just malware or exploits. They’re showing up inside the tools, platforms, and ecosystems organizations use every day. As companies connect AI, cloud apps, developer tools, and communication systems, attackers are following those same paths. A clear pattern this week: attackers are abusing trust. Trusted updates, trusted marketplaces, trusted apps, even trusted AI workflows. Instead of breaking security controls head-on, they’re slipping into places that already have access. This recap brings together those signals — showing how modern attacks are blending technology abuse, ecosystem manipulation, and large-scale targeting into a single, expanding threat surface. ⚡ Threat of the Week OpenClaw announces VirusTotal Partnership — OpenClaw has announced a partnership with Google's VirusTotal malware scanning platform to scan skills that are being uploaded to ClawHub as part of a defense-in-depth approach to improve the security of the agen...
OpenClaw Integrates VirusTotal Scanning to Detect Malicious ClawHub Skills

OpenClaw Integrates VirusTotal Scanning to Detect Malicious ClawHub Skills

Feb 08, 2026 Artificial Intelligence / Vulnerability
OpenClaw (formerly Moltbot and Clawdbot) has announced that it's partnering with Google-owned VirusTotal to scan skills that are being uploaded to ClawHub, its skill marketplace, as part of broader efforts to bolster the security of the agentic ecosystem. "All skills published to ClawHub are now scanned using VirusTotal's threat intelligence, including their new Code Insight capability," OpenClaw's founder Peter Steinberger, along with Jamieson O'Reilly and Bernardo Quintero said. "This provides an additional layer of security for the OpenClaw community." The process essentially entails creating a unique SHA-256 hash for every skill and cross checking it against VirusTotal's database for a match. If it's not found, the skill bundle is uploaded to the malware scanning tool for further analysis using VirusTotal Code Insight . Skills that have a "benign" Code Insight verdict are automatically approved by ClawHub, while those marke...
Claude Opus 4.6 Finds 500+ High-Severity Flaws Across Major Open-Source Libraries

Claude Opus 4.6 Finds 500+ High-Severity Flaws Across Major Open-Source Libraries

Feb 06, 2026 Artificial Intelligence / Vulnerability
Artificial intelligence (AI) company Anthropic revealed that its latest large language model (LLM), Claude Opus 4.6, has found more than 500 previously unknown high-severity security flaws in open-source libraries, including Ghostscript , OpenSC , and CGIF . Claude Opus 4.6, which was launched Thursday, comes with improved coding skills, including code review and debugging capabilities, along with enhancements to tasks like financial analyses, research, and document creation. Stating that the model is "notably better" at discovering high-severity vulnerabilities without requiring any task-specific tooling, custom scaffolding, or specialized prompting, Anthropic said it is putting it to use to find and help fix vulnerabilities in open-source software. "Opus 4.6 reads and reasons about code the way a human researcher would—looking at past fixes to find similar bugs that weren't addressed, spotting patterns that tend to cause problems, or understanding a piece of...
ThreatsDay Bulletin: Codespaces RCE, AsyncRAT C2, BYOVD Abuse, AI Cloud Intrusions & 15+ Stories

ThreatsDay Bulletin: Codespaces RCE, AsyncRAT C2, BYOVD Abuse, AI Cloud Intrusions & 15+ Stories

Feb 05, 2026 Cybersecurity / Hacking News
This week didn’t produce one big headline. It produced many small signals — the kind that quietly shape what attacks will look like next. Researchers tracked intrusions that start in ordinary places: developer workflows, remote tools, cloud access, identity paths, and even routine user actions. Nothing looked dramatic on the surface. That’s the point. Entry is becoming less visible while impact scales later. Several findings also show how attackers are industrializing their work — shared infrastructure, repeatable playbooks, rented access, and affiliate-style ecosystems. Operations are no longer isolated campaigns. They run more like services. This edition pulls those fragments together — short, precise updates that show where techniques are maturing, where exposure is widening, and what patterns are forming behind the noise. Startup espionage expansion Operation Nomad Leopard Targets Afghanistan In a sign that the threat actor has moved beyond government targets, th...
The Buyer’s Guide to AI Usage Control

The Buyer’s Guide to AI Usage Control

Feb 05, 2026 Artificial Intelligence / SaaS Security
Today’s “AI everywhere” reality is woven into everyday workflows across the enterprise, embedded in SaaS platforms, browsers, copilots, extensions, and a rapidly expanding universe of shadow tools that appear faster than security teams can track. Yet most organizations still rely on legacy controls that operate far away from where AI interactions actually occur. The result is a widening governance gap where AI usage grows exponentially, but visibility and control do not.  With AI becoming central to productivity, enterprises face a new challenge: enabling the business to innovate while maintaining governance, compliance, and security.  A new Buyer’s Guide for AI Usage Control argues that enterprises have fundamentally misunderstood where AI risk lives. Discovering AI Usage and Eliminating ‘Shadow’ AI will also be discussed in an upcoming virtual lunch and learn .  The surprising truth is that AI security isn’t a data problem or an app problem. It’s an interaction pro...
Microsoft Develops Scanner to Detect Backdoors in Open-Weight Large Language Models

Microsoft Develops Scanner to Detect Backdoors in Open-Weight Large Language Models

Feb 04, 2026 Artificial Intelligence / Software Security
Microsoft on Wednesday said it built a lightweight scanner that it said can detect backdoors in open-weight large language models (LLMs) and improve the overall trust in artificial intelligence (AI) systems. The tech giant's AI Security team said the scanner leverages three observable signals that can be used to reliably flag the presence of backdoors while maintaining a low false positive rate. "These signatures are grounded in how trigger inputs measurably affect a model's internal behavior, providing a technically robust and operationally meaningful basis for detection," Blake Bullwinkel and Giorgio Severi said in a report shared with The Hacker News. LLMs can be susceptible to two types of tampering: model weights, which refer to learnable parameters within a machine learning model that undergird the decision-making logic and transform input data into predicted outputs, and the code itself. Another type of attack is model poisoning, which occurs when a t...
Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution via Image Metadata

Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution via Image Metadata

Feb 03, 2026 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed details of a now-patched security flaw impacting Ask Gordon , an artificial intelligence (AI) assistant built into Docker Desktop and the Docker Command-Line Interface (CLI), that could be exploited to execute code and exfiltrate sensitive data. The critical vulnerability has been codenamed DockerDash by cybersecurity company Noma Labs. It was addressed by Docker with the release of version 4.50.0 in November 2025. "In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP [Model Context Protocol] Gateway, which then executes it through MCP tools," Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News. "Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture." ...
Mozilla Adds One-Click Option to Disable Generative AI Features in Firefox

Mozilla Adds One-Click Option to Disable Generative AI Features in Firefox

Feb 03, 2026 Artificial Intelligence / Privacy
Mozilla on Monday announced a new controls section in its Firefox desktop browser settings that allows users to completely turn off generative artificial intelligence (GenAI) features. "It provides a single place to block current and future generative AI features in Firefox," Ajit Varma, head of Firefox, said . "You can also review and manage individual AI features if you choose to use them. This lets you use Firefox without AI while we continue to build AI features for those who want them." Mozilla first announced its plans to integrate AI into Firefox in November 2025, stating it's fully opt-in and that it's incorporating the technology while placing users in the driver's seat. The new feature is expected to be rolled out with Firefox 148, which is scheduled to be released on February 24, 2026. At the outset, AI controls will allow users to manage the following settings individually - Translations Alt text in PDFs (adding accessibility descrip...
Expert Insights Articles Videos
Cybersecurity Resources