Category — machine learning

Secure Vibe Coding: The Complete New Guide

Jun 19, 2025 Application Security / LLM Security
DALL-E for coders? That's the promise behind vibe coding, a term describing the use of natural language to create software. While this ushers in a new era of AI-generated code, it introduces "silent killer" vulnerabilities: exploitable flaws that evade traditional security tools despite perfect test performance. A detailed analysis of secure vibe coding practices is available here.

TL;DR: Secure Vibe Coding
Vibe coding, using natural language to generate software with AI, is revolutionizing development in 2025. But while it accelerates prototyping and democratizes coding, it also introduces "silent killer" vulnerabilities: exploitable flaws that pass tests but evade traditional security tools. This article explores:
- Real-world examples of AI-generated code in production
- Shocking stats: 40% higher secret exposure in AI-assisted repos
- Why LLMs omit security unless explicitly prompted
- Secure prompting techniques and tool comparisons (GPT-4, Claude, Cursor, etc.)
- Reg...
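
To make the "silent killer" idea concrete, here is a minimal sketch (hypothetical names, SQLite for self-containment) of the pattern the article warns about: both functions pass the same happy-path test, but the AI-typical string-built query is trivially injectable.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Common AI-generated shape: works in every demo, passes happy-path tests,
    # but the f-string lets any caller inject SQL.
    cur = conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver handles escaping, closing the hole.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

# Both variants pass the obvious test...
assert find_user_unsafe(conn, "alice") == find_user_safe(conn, "alice")
# ...but only the unsafe one dumps the whole table for a crafted input.
print(find_user_unsafe(conn, "' OR 1=1 --"))  # [(1, 'alice'), (2, 'bob')]
print(find_user_safe(conn, "' OR 1=1 --"))    # []
```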

Empower Users and Protect Against GenAI Data Loss

Jun 06, 2025 Artificial Intelligence / Zero Trust
When generative AI tools became widely available in late 2022, it wasn't just technologists who paid attention. Employees across all industries immediately recognized the potential of generative AI to boost productivity, streamline communication and accelerate work. Like so many waves of consumer-first IT innovation before it—file sharing, cloud storage and collaboration platforms—AI landed in the enterprise not through official channels, but through the hands of employees eager to work smarter. Faced with the risk of sensitive data being fed into public AI interfaces, many organizations responded with urgency and force: They blocked access. While understandable as an initial defensive measure, blocking public AI apps is not a long-term strategy—it's a stopgap. And in most cases, it's not even effective. Shadow AI: The Unseen Risk The Zscaler ThreatLabz team has been tracking AI and machine learning (ML) traffic across enterprises, and the numbers tell a compelling story. In 2024 ...

Cybersecurity in the AI Era: Evolve Faster Than the Threats or Get Left Behind

Apr 14, 2025 Cybersecurity / Security Training
AI is changing cybersecurity faster than many defenders realize. Attackers are already using AI to automate reconnaissance, generate sophisticated phishing lures, and exploit vulnerabilities before security teams can react. Meanwhile, defenders are overwhelmed by massive amounts of data and alerts, struggling to process information quickly enough to identify real threats. AI offers a way to level the playing field, but only if security professionals learn to apply it effectively. Organizations are beginning to integrate AI into security workflows, from digital forensics to vulnerability assessments and endpoint detection. AI allows security teams to ingest and analyze more data than ever before, transforming traditional security tools into powerful intelligence engines. AI has already demonstrated its ability to accelerate investigations and uncover unknown attack paths, but many companies are hesitant to fully embrace it. Many AI models are implemented with such velocity that they r...

The Identities Behind AI Agents: A Deep Dive Into AI & NHI

Apr 10, 2025 AI Security / Enterprise Security
AI agents have rapidly evolved from experimental technology to essential business tools. The OWASP framework explicitly recognizes that Non-Human Identities play a key role in agentic AI security. Their analysis highlights how these autonomous software entities can make decisions, chain complex actions together, and operate continuously without human intervention. They're no longer just tools, but an integral and significant part of your organization's workforce. Consider this reality: Today's AI agents can analyze customer data, generate reports, manage system resources, and even deploy code, all without a human clicking a single button. This shift represents both tremendous opportunity and unprecedented risk. AI Agents are only as secure as their NHIs. Here's what security leaders are not necessarily considering: AI agents don't operate in isolation. To function, they need access to data, systems, and resources. This highly privileged, often overlooked acces...
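
The least-privilege principle behind that warning is easy to sketch. Below is a minimal, hypothetical illustration (not any particular vendor's API) of treating an agent's NHI like any other workload identity: short-lived credentials, narrow scopes, and a check on every action.

```python
import time
import secrets
from dataclasses import dataclass, field

@dataclass
class AgentToken:
    agent_id: str
    scopes: frozenset
    expires_at: float
    value: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def issue_token(agent_id: str, scopes: set, ttl_seconds: int = 900) -> AgentToken:
    # A short TTL limits the blast radius if the credential leaks.
    return AgentToken(agent_id, frozenset(scopes), time.time() + ttl_seconds)

def authorize(token: AgentToken, required_scope: str) -> None:
    # Every agent action passes through this gate; nothing is implicit.
    if time.time() >= token.expires_at:
        raise PermissionError(f"{token.agent_id}: token expired")
    if required_scope not in token.scopes:
        raise PermissionError(f"{token.agent_id}: missing scope {required_scope!r}")

# The reporting agent can read customer data but cannot deploy code.
token = issue_token("report-agent", {"customers:read", "reports:write"})
authorize(token, "customers:read")          # allowed
try:
    authorize(token, "deploy:production")   # denied: never granted
except PermissionError as e:
    print(e)
```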

12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training

Feb 28, 2025 Machine Learning / Data Privacy
A dataset used to train large language models (LLMs) has been found to contain nearly 12,000 live secrets, which allow for successful authentication. The findings once again highlight how hard-coded credentials pose a severe security risk to users and organizations alike, not to mention compounding the problem when LLMs end up suggesting insecure coding practices to their users. Truffle Security said it downloaded a December 2024 archive from Common Crawl, which maintains a free, open repository of web crawl data. The massive dataset contains over 250 billion pages spanning 18 years. The archive specifically contains 400TB of compressed web data, 90,000 WARC files (Web ARChive format), and data from 47.5 million hosts across 38.3 million registered domains. The company's analysis found that there are 219 different secret types in the Common Crawl archive, including Amazon Web Services (AWS) root keys, Slack webhooks, and Mailchimp API keys. "'Live' secrets ar...
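
The scanning side of this research is conceptually simple. A rough sketch of the pattern matching involved is below; the patterns are well-known public key formats, and real scanners such as TruffleHog additionally verify each hit against the live service, which is how "live" secrets are distinguished from stale ones.

```python
import re

# Well-known public formats for two of the secret types named in the article.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),
    "slack_webhook": re.compile(r"https://hooks\.slack\.com/services/T\w+/B\w+/\w+"),
}

def scan(text: str):
    # Yield (secret type, matched value) for every candidate in the text.
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            yield name, match.group(0)

# AWS's documented example key, standing in for a crawled page.
page = 'const config = { key: "AKIAIOSFODNN7EXAMPLE" };'
for kind, value in scan(page):
    print(kind, value)  # aws_access_key_id AKIAIOSFODNN7EXAMPLE
```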

SOC 3.0 - The Evolution of the SOC and How AI is Empowering Human Talent

Feb 26, 2025 Machine Learning / Threat Detection
Organizations today face relentless cyber attacks, with high-profile breaches hitting the headlines almost daily. Reflecting on a long journey in the security field, it's clear this isn't just a human problem—it's a math problem. There are simply too many threats and security tasks for any SOC to manually handle in a reasonable timeframe. Yet, there is a solution. Many refer to it as SOC 3.0—an AI-augmented environment that finally lets analysts do more with less and shifts security operations from a reactive posture to a proactive force. The transformative power of SOC 3.0 will be detailed later in this article, showcasing how artificial intelligence can dramatically reduce workload and risk, delivering world-class security operations that every CISO dreams of. However, to appreciate this leap forward, it's important to understand how the SOC evolved over time and why the steps leading up to 3.0 set the stage for a new era of security operations. A brief history of the SOC: For deca...

AI and Security - A New Puzzle to Figure Out

Feb 13, 2025 AI Security / Data Protection
AI is everywhere now, transforming how businesses operate and how users engage with apps, devices, and services. A lot of applications now have some artificial intelligence inside, whether supporting a chat interface, intelligently analyzing data, or matching user preferences. There's no question AI benefits users, but it also brings new security challenges, especially identity-related security challenges. Let's explore what these challenges are and what you can do to face them with Okta. Which AI? Everyone talks about AI, but this term is very general, and several technologies fall under this umbrella. For example, symbolic AI uses technologies such as logic programming, expert systems, and semantic networks. Other approaches use neural networks, Bayesian networks, and other tools. Newer generative AI uses machine learning (ML) and large language models (LLMs) as core technologies to generate content such as text, images, video, audio, etc. Many of the applications we use most often toda...

Google Confirms Android SafetyCore Enables AI-Powered On-Device Content Classification

Feb 11, 2025 Mobile Security / Machine Learning
Google has stepped in to clarify that a newly introduced Android System SafetyCore app does not perform any client-side scanning of content. "Android provides many on-device protections that safeguard users against threats like malware, messaging spam and abuse protections, and phone scam protections, while preserving user privacy and keeping users in control of their data," a spokesperson for the company told The Hacker News when reached for comment. "SafetyCore is a new Google system service for Android 9+ devices that provides the on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users are in control over SafetyCore and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature." SafetyCore (package name "com.google.android.safetycore") was first introduced by Google in October 2024, as part of a set of security measures designed to...

Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection

Feb 08, 2025 Artificial Intelligence / Supply Chain Security
Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of "broken" pickle files to evade detection. "The pickle files extracted from the mentioned PyTorch archives revealed the malicious Python content at the beginning of the file," ReversingLabs researcher Karlo Zanki said in a report shared with The Hacker News. "In both cases, the malicious payload was a typical platform-aware reverse shell that connects to a hard-coded IP address." The approach has been dubbed nullifAI, as it involves clear-cut attempts to sidestep existing safeguards put in place to identify malicious models. The Hugging Face repositories in question are glockr1/ballr7 and who-r-u0000/0000000000000000000000000000000000000. It's believed that the models are more of a proof-of-concept (PoC) than an active supply chain attack scenario. The pickle serialization format, used commonly for dis...
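
The property nullifAI abuses is easy to demonstrate safely: Python's unpickler executes opcodes as it streams them, so a payload placed at the start of a pickle runs even if the rest of the file is malformed and deserialization ultimately fails. A benign, simplified sketch:

```python
import pickle

class Payload:
    def __reduce__(self):
        # Real malware would return something like (os.system, ("...",));
        # print keeps this demonstration harmless.
        return (print, ("payload executed during unpickling",))

data = pickle.dumps(Payload())
broken = data[:-1]  # drop the trailing STOP opcode: the stream is now malformed

try:
    pickle.loads(broken)
except Exception as e:
    # Deserialization fails on the truncated stream...
    print("deserialization failed:", e)
# ...but the payload already ran, because REDUCE executed before the break.
```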

Italy Bans Chinese DeepSeek AI Over Data Privacy and Ethical Concerns

Jan 31, 2025 AI Ethics / Machine Learning
Italy's data protection watchdog has blocked Chinese artificial intelligence (AI) firm DeepSeek's service within the country, citing a lack of information on its use of users' personal data. The development comes days after the authority, the Garante, sent a series of questions to DeepSeek, asking about its data handling practices and where it obtained its training data. In particular, it wanted to know what personal data is collected by its web platform and mobile app, from which sources, for what purposes, on what legal basis, and whether it is stored in China. In a statement issued January 30, 2025, the Garante said it arrived at the decision after DeepSeek provided information that it said was "completely insufficient." The entities behind the service, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, have "declared that they do not operate in Italy and that European legislation does not apply to them," it...

New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%

Jan 03, 2025 Machine Learning / Vulnerability
Cybersecurity researchers have shed light on a new jailbreak technique that could be used to get past a large language model's (LLM) safety guardrails and produce potentially harmful or malicious responses. The multi-turn (aka many-shot) attack strategy has been codenamed Bad Likert Judge by Palo Alto Networks Unit 42 researchers Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and Danny Tsechansky. "The technique asks the target LLM to act as a judge scoring the harmfulness of a given response using the Likert scale, a rating scale measuring a respondent's agreement or disagreement with a statement," the Unit 42 team said. "It then asks the LLM to generate responses that contain examples that align with the scales. The example that has the highest Likert scale can potentially contain the harmful content." The explosion in popularity of artificial intelligence in recent years has also led to a new class of security exploits called prompt in...
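
On the defensive side, the Unit 42 write-up reports that screening model output with a content filter sharply reduces the attack success rate. A minimal sketch of that boundary check, with a placeholder blocklist standing in for a real moderation classifier:

```python
# Hypothetical names and policy: a stand-in for whatever moderation
# endpoint or local classifier a deployment actually uses.
BLOCKLIST = ("<harmful-pattern-1>", "<harmful-pattern-2>")

def content_filter(text: str) -> bool:
    """Return True if the text is safe to release."""
    return not any(term in text.lower() for term in BLOCKLIST)

def guarded_completion(model_output: str) -> str:
    # Even a successfully jailbroken generation is caught at the boundary.
    if not content_filter(model_output):
        return "[withheld by content filter]"
    return model_output
```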

AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Cases

Dec 23, 2024 Machine Learning / Threat Analysis
Cybersecurity researchers have found that it's possible to use large language models (LLMs) to generate new variants of malicious JavaScript code at scale in a manner that can better evade detection. "Although LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect," Palo Alto Networks Unit 42 researchers said in a new analysis. "Criminals can prompt LLMs to perform transformations that are much more natural-looking, which makes detecting this malware more challenging." With enough transformations over time, the approach could have the advantage of degrading the performance of malware classification systems, tricking them into believing that a piece of nefarious code is actually benign. While LLM providers have increasingly enforced security guardrails to prevent them from going off the rails and producing unintended output, bad actors have advertised tools like WormGPT...
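
The evasion works because behavior-preserving rewrites change every byte-level signature. A benign illustration, with Python toy functions standing in for the JavaScript malware the research concerns:

```python
import hashlib
import inspect

def variant_a(values):
    total = 0
    for v in values:
        total += v
    return total

def variant_b(numbers):
    # Same behavior: renamed variables, restructured logic, added dead code,
    # the same classes of transformation the research describes.
    _unused = len(numbers)
    return sum(numbers)

# Behaviorally identical...
assert variant_a([1, 2, 3]) == variant_b([1, 2, 3])
# ...yet their source-level "signatures" share nothing.
for fn in (variant_a, variant_b):
    digest = hashlib.sha256(inspect.getsource(fn).encode()).hexdigest()
    print(fn.__name__, digest[:16])
```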