
machine learning | Breaking Cybersecurity News | The Hacker News

Category — machine learning
Kerberoasting Detections: A New Approach to a Decade-Old Challenge

Jul 23, 2025 Threat Detection / Identity Security
Security experts have been talking about Kerberoasting for over a decade, yet the attack continues to evade typical defenses. Why? Because existing detections rely on brittle heuristics and static rules, which don't hold up against highly variable Kerberos traffic. They frequently generate false positives or miss "low-and-slow" attacks altogether. Is there a better, more accurate way for modern organizations to detect subtle anomalies within irregular Kerberos traffic? The BeyondTrust research team sought to answer this question by combining security research insights with advanced statistics. This article offers a high-level look at the driving forces behind our research and our process of developing and testing a new statistical framework that improves Kerberos anomaly detection accuracy and reduces false positives. An Introduction to Kerberoasting Attacks Kerberoasting attacks take advantage of the Kerberos netwo...
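As a rough illustration of the statistical direction the teaser describes (not BeyondTrust's actual framework, which the article only previews), a detector might score each account's service-ticket request volume against a population baseline rather than applying a fixed rule. All names and thresholds below are invented for illustration:

```python
from statistics import mean, stdev

def tgs_request_anomalies(counts, threshold=3.0):
    """Flag accounts whose service-ticket (TGS) request volume deviates
    strongly from the population baseline. Illustrative only: a real
    detector would model many features (encryption type, SPN diversity,
    request timing), not raw counts alone."""
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [acct for acct, c in counts.items()
            if (c - mu) / sigma > threshold]

# One account requesting far more service tickets than its peers
baseline = {f"user{i}": 4 + (i % 3) for i in range(20)}
baseline["svc_scanner"] = 90
print(tgs_request_anomalies(baseline))  # ['svc_scanner']
```

A z-score over raw counts is the simplest possible baseline; the appeal of a statistical approach over static rules is that the threshold adapts to each environment's normal traffic instead of being hardcoded.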
Assessing the Role of AI in Zero Trust

Jul 21, 2025 Artificial Intelligence / Zero Trust
By 2025, Zero Trust has evolved from a conceptual framework into an essential pillar of modern security. No longer merely theoretical, it is now a requirement that organizations must adopt. A robust, defensible architecture built on Zero Trust principles does more than satisfy baseline regulatory mandates: it underpins cyber resilience, secures third-party partnerships, and ensures uninterrupted business operations. Accordingly, more than 80% of organizations plan to implement Zero Trust strategies by 2026, according to a recent Zscaler report. In the context of Zero Trust, artificial intelligence (AI) can serve as a powerful tool for automating adaptive trust and continuous risk evaluation. In a Zero Trust architecture, access decisions must adapt continuously to changing factors such as device posture, user behavior, location, workload sensitivity, and more. This constant evaluation generates massive volumes of data, far beyond what human teams can process alo...
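The continuous-evaluation loop described above can be sketched as a toy policy function. The signal names, weights, and threshold here are invented for illustration and are not taken from the article or any product:

```python
def access_decision(signals, deny_threshold=0.6):
    """Toy continuous-evaluation gate: combine weighted risk signals
    into a score, then allow, require step-up MFA, or deny."""
    weights = {
        "unmanaged_device": 0.4,   # device posture
        "impossible_travel": 0.5,  # location anomaly
        "sensitive_workload": 0.3, # workload sensitivity
        "anomalous_behavior": 0.4, # user behavior
    }
    score = sum(w for name, w in weights.items() if signals.get(name))
    if score >= deny_threshold:
        return "deny"
    if score > 0:
        return "step_up_mfa"
    return "allow"

print(access_decision({"unmanaged_device": True}))  # step_up_mfa
print(access_decision({"unmanaged_device": True,
                       "impossible_travel": True}))  # deny
```

The point of the sketch is the shape of the decision, not the numbers: every request is re-scored with current signals, so access can tighten mid-session as posture changes. This is the evaluation volume that AI is meant to help automate.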
GPUHammer: New RowHammer Attack Variant Degrades AI Models on NVIDIA GPUs

Jul 12, 2025 AI Security / Vulnerability
NVIDIA is urging customers to enable System-level Error Correction Codes (ECC) as a defense against a variant of the RowHammer attack demonstrated against its graphics processing units (GPUs). "Risk of successful exploitation from RowHammer attacks varies based on DRAM device, platform, design specification, and system settings," the GPU maker said in an advisory released this week. Dubbed GPUHammer, the attacks mark the first RowHammer exploit demonstrated against NVIDIA GPUs (e.g., an NVIDIA A6000 GPU with GDDR6 memory), allowing malicious GPU users to tamper with other users' data by triggering bit flips in GPU memory. The most concerning consequence of this behavior, University of Toronto researchers found, is the degradation of an artificial intelligence (AI) model's accuracy from 80% to less than 1%. RowHammer is to modern DRAM what Spectre and Meltdown are to contemporary CPUs. While both are hardware-level security vulnerabilities, Row...
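NVIDIA's suggested mitigation can be applied from the command line on supported GPUs. The commands below use standard `nvidia-smi` flags; exact support varies by GPU model and driver, so treat this as a sketch rather than a universal recipe:

```shell
# Check the current ECC state of the installed GPUs
nvidia-smi --query-gpu=ecc.mode.current --format=csv

# Enable ECC on GPU 0 (requires admin rights; takes effect
# after the GPU is reset or the system is rebooted)
sudo nvidia-smi -i 0 -e 1
```

Enabling ECC costs some memory capacity and bandwidth, which is the usual trade-off weighed against RowHammer-style bit-flip resistance.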
Your AI Agents Might Be Leaking Data — Watch this Webinar to Learn How to Stop It

Jul 04, 2025 AI Security / Enterprise Security
Generative AI is changing how businesses work, learn, and innovate. But beneath the surface, something dangerous is happening. AI agents and custom GenAI workflows are creating new, hidden ways for sensitive enterprise data to leak, and most teams don't even realize it. If you're building, deploying, or managing AI systems, now is the time to ask: are your AI agents exposing confidential data without your knowledge? Most GenAI models don't intentionally leak data. But here's the problem: these agents are often plugged into corporate systems, pulling from SharePoint, Google Drive, S3 buckets, and internal tools to give smart answers. And that's where the risks begin. Without tight access controls, governance policies, and oversight, a well-meaning AI can accidentally expose sensitive information to the wrong users, or worse, to the internet. Imagine a chatbot revealing internal salary data, or an assistant surfacing unreleased product designs during a casual query. This isn't hypot...
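One concrete version of the "tight access controls" mentioned above is enforcing the caller's existing permissions before retrieved documents ever reach the model's context. A minimal sketch, assuming a hypothetical document schema with an `allowed_groups` field (not from any specific product):

```python
def filter_retrieved_docs(docs, user_groups):
    """Drop any retrieved document the requesting user is not
    entitled to see, BEFORE it is passed to the model. The schema
    here is hypothetical: each doc carries an 'allowed_groups' set."""
    return [d for d in docs if d["allowed_groups"] & user_groups]

docs = [
    {"id": "handbook", "allowed_groups": {"all_staff"}},
    {"id": "salaries", "allowed_groups": {"hr"}},  # HR-only data
]
visible = filter_retrieved_docs(docs, user_groups={"all_staff", "eng"})
print([d["id"] for d in visible])  # ['handbook']
```

The design point is where the filter sits: trimming results at retrieval time means the model never holds data the user could not have opened directly, so even a manipulated prompt cannot surface it.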
The Hidden Weaknesses in AI SOC Tools that No One Talks About

Jul 03, 2025 Security Operations / Machine Learning
If you're evaluating AI-powered SOC platforms, you've likely seen bold claims: faster triage, smarter remediation, and less noise. But under the hood, not all AI is created equal. Many solutions rely on pre-trained AI models that are hardwired for a handful of specific use cases. While that might have worked for yesterday's SOC, today's reality is different. Modern security operations teams face a sprawling, ever-changing landscape of alerts: cloud, endpoint, identity, OT, insider threats, phishing, network, DLP, and a list that keeps growing. CISOs and SOC managers are rightly skeptical: can this AI actually handle all of my alerts, or is it just another rules engine in disguise? In this post, we'll examine the divide between two types of AI SOC platforms: those built on adaptive AI, which learns to triage and respond to any alert type, and those that rely on pre-trained AI, limited to handling predefined use cases only. Understanding t...
That Network Traffic Looks Legit, But it Could be Hiding a Serious Threat

Jul 02, 2025 Network Security / Threat Detection
With nearly 80% of cyber threats now mimicking legitimate user behavior, how are top SOCs determining what is legitimate traffic and what is potentially dangerous? Where do you turn when firewalls and endpoint detection and response (EDR) fall short at detecting the most important threats to your organization? Breaches at edge devices and VPN gateways have risen from 3% to 22%, according to Verizon's latest Data Breach Investigations Report. EDR solutions are struggling to catch zero-day exploits, living-off-the-land techniques, and malware-free attacks; nearly 80% of detected threats use malware-free techniques that mimic normal user behavior, as highlighted in CrowdStrike's 2025 Global Threat Report. The stark reality is that conventional detection methods are no longer sufficient as threat actors adapt their strategies, using techniques like credential theft and DLL hijacking to avoid discovery. In response, security operations centers (SOCs) are turning to a multi-lay...
Facebook’s New AI Tool Asks to Upload Your Photos for Story Ideas, Sparking Privacy Concerns

Jun 28, 2025 Privacy / Data Protection
Facebook, the social network platform owned by Meta, is asking users to upload pictures from their phones, including photos that have not been directly uploaded to the service, so that it can suggest collages, recaps, and other ideas using artificial intelligence (AI). According to TechCrunch, which first reported the feature, users are being served a new pop-up message asking for permission to "allow cloud processing" when they attempt to create a new Story on Facebook. "To create ideas for you, we'll select media from your camera roll and upload it to our cloud on an ongoing basis, based on info like time, location or themes," the company notes in the pop-up. "Only you can see suggestions. Your media won't be used for ads targeting. We'll check it for safety and integrity purposes." Should users consent to their photos being processed in the cloud, Meta also states that they are agreeing to its AI terms, which allow it to analyze their med...
WhatsApp Adds AI-Powered Message Summaries for Faster Chat Previews

Jun 26, 2025 Artificial Intelligence / Data Protection
Popular messaging platform WhatsApp has added a new artificial intelligence (AI)-powered feature that leverages its in-house solution Meta AI to summarize unread messages in chats. The feature, called Message Summaries, is currently rolling out in the English language to users in the United States, with plans to bring it to other regions and languages later this year. It "uses Meta AI to privately and quickly summarize unread messages in a chat, so you can get an idea of what is happening, before reading the details in your unread messages," WhatsApp said in a post. Message Summaries is optional and is disabled by default. The Meta-owned service said users can also enable "Advanced Chat Privacy" to choose which chats can be shared for providing AI-related features. Most importantly, it's made possible by Private Processing, which WhatsApp launched back in April as a way to enable AI capabilities in a privacy-preserving manner. Private Processing is de...
Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

Jun 23, 2025 Artificial Intelligence / AI Security
Google has revealed the various safety measures that are being incorporated into its generative artificial intelligence (AI) systems to mitigate emerging attack vectors like indirect prompt injections and improve the overall security posture for agentic AI systems. "Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources," Google's GenAI security team said. These external sources can take the form of email messages, documents, or even calendar invites that trick the AI systems into exfiltrating sensitive data or performing other malicious actions. The tech giant said it has implemented what it described as a "layered" defense strategy that is designed to increase the difficulty, expense, and complexity required to pull off an attack against its systems. These efforts span model hardening, introducing purpose-built mac...
Secure Vibe Coding: The Complete New Guide

Jun 19, 2025 Application Security / LLM Security
DALL-E for coders? That's the promise behind vibe coding, a term describing the use of natural language to create software. While this ushers in a new era of AI-generated code, it introduces "silent killer" vulnerabilities: exploitable flaws that evade traditional security tools despite perfect test performance. A detailed analysis of secure vibe coding practices is available here. TL;DR: Vibe coding, using natural language to generate software with AI, is revolutionizing development in 2025. But while it accelerates prototyping and democratizes coding, it also introduces "silent killer" vulnerabilities: exploitable flaws that pass tests but evade traditional security tools. This article explores real-world examples of AI-generated code in production; shocking stats (40% higher secret exposure in AI-assisted repos); why LLMs omit security unless explicitly prompted; secure prompting techniques and tool comparisons (GPT-4, Claude, Cursor, etc.); reg...
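A classic instance of such a "silent killer" is string-built SQL: it passes every functional test yet is trivially injectable. The contrast below is illustrative code written for this summary, not an example taken from the guide itself:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Typical AI-generated pattern: works in tests, but any quote in
    # 'name' becomes part of the SQL statement (injection).
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver binds the value as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # every row leaks
print(find_user_safe(conn, payload))    # [] -- payload treated as data
```

Both functions return identical results for well-formed names, which is precisely why this class of flaw sails through test suites and only surfaces under adversarial input.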
Empower Users and Protect Against GenAI Data Loss

Jun 06, 2025 Artificial Intelligence / Zero Trust
When generative AI tools became widely available in late 2022, it wasn't just technologists who paid attention. Employees across all industries immediately recognized the potential of generative AI to boost productivity, streamline communication and accelerate work. Like so many waves of consumer-first IT innovation before it—file sharing, cloud storage and collaboration platforms—AI landed in the enterprise not through official channels, but through the hands of employees eager to work smarter. Faced with the risk of sensitive data being fed into public AI interfaces, many organizations responded with urgency and force: They blocked access. While understandable as an initial defensive measure, blocking public AI apps is not a long-term strategy—it's a stopgap. And in most cases, it's not even effective. Shadow AI: The Unseen Risk The Zscaler ThreatLabz team has been tracking AI and machine learning (ML) traffic across enterprises, and the numbers tell a compelling story. In 2024 ...
Cybersecurity in the AI Era: Evolve Faster Than the Threats or Get Left Behind

Apr 14, 2025 Cybersecurity / Security Training
AI is changing cybersecurity faster than many defenders realize. Attackers are already using AI to automate reconnaissance, generate sophisticated phishing lures, and exploit vulnerabilities before security teams can react. Meanwhile, defenders are overwhelmed by massive amounts of data and alerts, struggling to process information quickly enough to identify real threats. AI offers a way to level the playing field, but only if security professionals learn to apply it effectively. Organizations are beginning to integrate AI into security workflows, from digital forensics to vulnerability assessments and endpoint detection. AI allows security teams to ingest and analyze more data than ever before, transforming traditional security tools into powerful intelligence engines. AI has already demonstrated its ability to accelerate investigations and uncover unknown attack paths, but many companies are hesitant to fully embrace it. Many AI models are implemented with such velocity that they r...
The Identities Behind AI Agents: A Deep Dive Into AI & NHI

Apr 10, 2025 AI Security / Enterprise Security
AI agents have rapidly evolved from experimental technology to essential business tools. The OWASP framework explicitly recognizes that Non-Human Identities play a key role in agentic AI security. Their analysis highlights how these autonomous software entities can make decisions, chain complex actions together, and operate continuously without human intervention. They're no longer just tools, but an integral and significant part of your organization's workforce. Consider this reality: today's AI agents can analyze customer data, generate reports, manage system resources, and even deploy code, all without a human clicking a single button. This shift represents both tremendous opportunity and unprecedented risk. AI Agents are only as secure as their NHIs Here's what security leaders are not necessarily considering: AI agents don't operate in isolation. To function, they need access to data, systems, and resources. This highly privileged, often overlooked acces...