

Category — machine learning
12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training

Feb 28, 2025 Machine Learning / Data Privacy
A dataset used to train large language models (LLMs) has been found to contain nearly 12,000 live secrets, meaning credentials that still authenticate successfully. The findings once again highlight how hard-coded credentials pose a severe security risk to users and organizations alike, and they compound the problem when LLMs end up suggesting insecure coding practices to their users. Truffle Security said it downloaded a December 2024 archive from Common Crawl, which maintains a free, open repository of web crawl data spanning over 250 billion pages collected across 18 years. The December 2024 archive specifically contains 400 TB of compressed web data, 90,000 WARC (Web ARChive format) files, and data from 47.5 million hosts across 38.3 million registered domains. The company's analysis found 219 different secret types in the Common Crawl archive, including Amazon Web Services (AWS) root keys, Slack webhooks, and Mailchimp API keys. "'Live' secrets ar...
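For illustration only, the sketch below shows the general shape of such a scan: it walks one Common Crawl WARC file with the warcio package (an assumption; Truffle Security's own scanning reportedly builds on its TruffleHog tooling) and flags strings matching a couple of well-known credential patterns. The file name and patterns are placeholders, and a real scanner would also verify whether each match is still live.

```python
import re

from warcio.archiveiterator import ArchiveIterator  # assumes the warcio package is installed

# Two illustrative detectors; production scanners ship hundreds of patterns
# and check candidates against the live service before calling them "live".
PATTERNS = {
    "aws_access_key_id": re.compile(rb"AKIA[0-9A-Z]{16}"),
    "slack_webhook": re.compile(rb"https://hooks\.slack\.com/services/[A-Za-z0-9/_-]+"),
}

def scan_warc(path):
    """Yield (secret_type, matched_string, source_url) for each candidate secret."""
    with open(path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            body = record.content_stream().read()
            for name, pattern in PATTERNS.items():
                for match in pattern.finditer(body):
                    yield name, match.group().decode("ascii", "replace"), url

if __name__ == "__main__":
    # "segment.warc.gz" is a placeholder for one of the roughly 90,000 WARC files.
    for secret_type, value, url in scan_warc("segment.warc.gz"):
        print(f"{secret_type}: {value} (found on {url})")
```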
SOC 3.0 - The Evolution of the SOC and How AI is Empowering Human Talent

Feb 26, 2025 Machine Learning / Threat Detection
Organizations today face relentless cyber attacks, with high-profile breaches hitting the headlines almost daily. Reflecting on a long journey in the security field, it's clear this isn't just a human problem—it's a math problem. There are simply too many threats and security tasks for any SOC to manually handle in a reasonable timeframe. Yet, there is a solution. Many refer to it as SOC 3.0—an AI-augmented environment that finally lets analysts do more with less and shifts security operations from a reactive posture to a proactive force. The transformative power of SOC 3.0 will be detailed later in this article, showcasing how artificial intelligence can dramatically reduce workload and risk, delivering world-class security operations that every CISO dreams of. However, to appreciate this leap forward, it's important to understand how the SOC evolved over time and why the steps leading up to 3.0 set the stage for a new era of security operations.
A brief history of the SOC
For deca...
Why Most Microsegmentation Projects Fail—And How Andelyn Biosciences Got It Right

Mar 14, 2025 Zero Trust / Network Security
Most microsegmentation projects fail before they even get off the ground—too complex, too slow, too disruptive. But Andelyn Biosciences proved it doesn't have to be that way.
Microsegmentation: The Missing Piece in Zero Trust Security
Security teams today are under constant pressure to defend against increasingly sophisticated cyber threats. Perimeter-based defenses alone can no longer provide sufficient protection as attackers shift their focus to lateral movement within enterprise networks. With over 70% of successful breaches involving attackers moving laterally, organizations are rethinking how they secure internal traffic. Microsegmentation has emerged as a key strategy in achieving Zero Trust security by restricting access to critical assets based on identity rather than network location. However, traditional microsegmentation approaches—often involving VLAN reconfigurations, agent deployments, or complex firewall rules—tend to be slow, operationally disrupt...
AI and Security - A New Puzzle to Figure Out

Feb 13, 2025 AI Security / Data Protection
AI is everywhere now, transforming how businesses operate and how users engage with apps, devices, and services. A lot of applications now have some Artificial Intelligence inside, whether supporting a chat interface, intelligently analyzing data or matching user preferences. No question AI benefits users, but it also brings new security challenges, especially Identity-related security challenges. Let's explore what these challenges are and what you can do to face them with Okta.
Which AI?
Everyone talks about AI, but this term is very general, and several technologies fall under this umbrella. For example, symbolic AI uses technologies such as logic programming, expert systems, and semantic networks. Other approaches use neural networks, Bayesian networks, and other tools. Newer Generative AI uses Machine Learning (ML) and Large Language Models (LLM) as core technologies to generate content such as text, images, video, audio, etc. Many of the applications we use most often toda...
Google Confirms Android SafetyCore Enables AI-Powered On-Device Content Classification

Feb 11, 2025 Mobile Security / Machine Learning
Google has stepped in to clarify that a newly introduced Android System SafetyCore app does not perform any client-side scanning of content. "Android provides many on-device protections that safeguard users against threats like malware, messaging spam and abuse protections, and phone scam protections, while preserving user privacy and keeping users in control of their data," a spokesperson for the company told The Hacker News when reached for comment. "SafetyCore is a new Google system service for Android 9+ devices that provides the on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users are in control over SafetyCore and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature." SafetyCore (package name "com.google.android.safetycore") was first introduced by Google in October 2024, as part of a set of security measures designed to...
Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection

Feb 08, 2025 Artificial Intelligence / Supply Chain Security
Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of "broken" pickle files to evade detection. "The pickle files extracted from the mentioned PyTorch archives revealed the malicious Python content at the beginning of the file," ReversingLabs researcher Karlo Zanki said in a report shared with The Hacker News. "In both cases, the malicious payload was a typical platform-aware reverse shell that connects to a hard-coded IP address." The approach has been dubbed nullifAI, as it involves clear-cut attempts to sidestep existing safeguards put in place to identify malicious models. The Hugging Face repositories are listed below -
glockr1/ballr7
who-r-u0000/0000000000000000000000000000000000000
It's believed that the models are more of a proof-of-concept (PoC) than an active supply chain attack scenario. The pickle serialization format, used commonly for dis...
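As background on why pickle-based model files are risky, the minimal sketch below shows the underlying mechanism: an object's __reduce__ method can hand pickle any callable, and pickle.load executes it during deserialization. The payload here is a harmless print, and pickletools is used to inspect the opcode stream without loading it; per the report, the nullifAI models abuse the same mechanism, with the payload at the start of a stream whose remainder is deliberately broken.

```python
import pickle
import pickletools

class NotAModel:
    """Any pickled object can smuggle code: __reduce__ tells pickle what to call on load."""
    def __reduce__(self):
        # A real payload would be a reverse shell; a benign print stands in for it here.
        return (print, ("code executed during unpickling",))

blob = pickle.dumps(NotAModel())

# Static inspection: walk the opcode stream without executing anything.
# Scanners can flag GLOBAL/STACK_GLOBAL opcodes that pull in callables.
pickletools.dis(blob)

# Loading the blob runs the embedded callable immediately.
pickle.loads(blob)
```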
Italy Bans Chinese DeepSeek AI Over Data Privacy and Ethical Concerns

Jan 31, 2025 AI Ethics / Machine Learning
Italy's data protection watchdog has blocked Chinese artificial intelligence (AI) firm DeepSeek's service within the country, citing a lack of information on its use of users' personal data. The development comes days after the authority, the Garante, sent a series of questions to DeepSeek, asking about its data handling practices and where it obtained its training data. In particular, it wanted to know what personal data is collected by its web platform and mobile app, from which sources, for what purposes, on what legal basis, and whether it is stored in China. In a statement issued January 30, 2025, the Garante said it arrived at the decision after DeepSeek provided information that it said was "completely insufficient." The entities behind the service, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, have "declared that they do not operate in Italy and that European legislation does not apply to them," it...
New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%

Jan 03, 2025 Machine Learning / Vulnerability
Cybersecurity researchers have shed light on a new jailbreak technique that could be used to get past a large language model's (LLM) safety guardrails and produce potentially harmful or malicious responses. The multi-turn (aka many-shot) attack strategy has been codenamed Bad Likert Judge by Palo Alto Networks Unit 42 researchers Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and Danny Tsechansky. "The technique asks the target LLM to act as a judge scoring the harmfulness of a given response using the Likert scale, a rating scale measuring a respondent's agreement or disagreement with a statement," the Unit 42 team said. "It then asks the LLM to generate responses that contain examples that align with the scales. The example that has the highest Likert scale can potentially contain the harmful content." The explosion in popularity of artificial intelligence in recent years has also led to a new class of security exploits called prompt in...
AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Cases

Dec 23, 2024 Machine Learning / Threat Analysis
Cybersecurity researchers have found that it's possible to use large language models (LLMs) to generate new variants of malicious JavaScript code at scale in a manner that can better evade detection. "Although LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect," Palo Alto Networks Unit 42 researchers said in a new analysis. "Criminals can prompt LLMs to perform transformations that are much more natural-looking, which makes detecting this malware more challenging." With enough transformations over time, the approach could have the advantage of degrading the performance of malware classification systems, tricking them into believing that a piece of nefarious code is actually benign. While LLM providers have increasingly enforced security guardrails to prevent them from going off the rails and producing unintended output, bad actors have advertised tools like WormGPT...
Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks

Dec 06, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed multiple security flaws impacting open-source machine learning (ML) tools and frameworks such as MLflow, H2O, PyTorch, and MLeap that could pave the way for code execution. The vulnerabilities, discovered by JFrog, are part of a broader collection of 22 security shortcomings the supply chain security company first disclosed last month. Unlike the first set that involved flaws on the server-side, the newly detailed ones allow exploitation of ML clients and reside in libraries that handle safe model formats like Safetensors. "Hijacking an ML client in an organization can allow the attackers to perform extensive lateral movement within the organization," the company said. "An ML client is very likely to have access to important ML services such as ML Model Registries or MLOps Pipelines." This, in turn, could expose sensitive information such as model registry credentials, effectively permitting a malicious actor to back...
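As a point of reference on the formats involved, here is a minimal sketch of saving and loading weights through Safetensors rather than raw pickle; it assumes the safetensors and torch packages and a hypothetical model.safetensors file. The JFrog findings concern flaws in the client-side libraries that handle such formats, so even "safe" loaders should only be pointed at trusted sources.

```python
import torch
from safetensors.torch import load_file, save_file

# Hypothetical tensors standing in for real model weights.
state_dict = {"linear.weight": torch.randn(4, 4), "linear.bias": torch.zeros(4)}

# Safetensors stores raw tensor data plus a JSON header, so reading the file
# back does not execute arbitrary code the way unpickling can.
save_file(state_dict, "model.safetensors")
restored = load_file("model.safetensors")

# When a pickle-based checkpoint is unavoidable, recent PyTorch releases can
# restrict deserialization to plain tensors and containers:
# torch.save(state_dict, "model.pt")
# restored = torch.load("model.pt", weights_only=True)

print({name: tuple(tensor.shape) for name, tensor in restored.items()})
```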
Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform

Nov 15, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed two security flaws in Google's Vertex machine learning (ML) platform that, if successfully exploited, could allow malicious actors to escalate privileges and exfiltrate models from the cloud. "By exploiting custom job permissions, we were able to escalate our privileges and gain unauthorized access to all data services in the project," Palo Alto Networks Unit 42 researchers Ofir Balassiano and Ofir Shaty said in an analysis published earlier this week. "Deploying a poisoned model in Vertex AI led to the exfiltration of all other fine-tuned models, posing a serious proprietary and sensitive data exfiltration attack risk." Vertex AI is Google's ML platform for training and deploying custom ML models and artificial intelligence (AI) applications at scale. It was first introduced in May 2021. Crucial to leveraging the privilege escalation flaw is a feature called Vertex AI Pipelines, which allows users to automat...
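Because the researchers' escalation path ran through the permissions granted to custom jobs, a hedged sketch of the defensive side is shown below: submitting a Vertex AI custom job under a dedicated, minimally privileged service account rather than a broadly privileged default one. This is not the researchers' exploit or Google's recommended fix verbatim; project, bucket, image, and account names are placeholders, and worker pool settings will differ per workload.

```python
from google.cloud import aiplatform

# Placeholder project, region, and staging bucket.
aiplatform.init(
    project="example-project",
    location="us-central1",
    staging_bucket="gs://example-staging-bucket",
)

job = aiplatform.CustomJob(
    display_name="training-job",
    worker_pool_specs=[
        {
            "machine_spec": {"machine_type": "n1-standard-4"},
            "replica_count": 1,
            "container_spec": {
                "image_uri": "us-docker.pkg.dev/example-project/repo/train:latest"
            },
        }
    ],
)

# Run the job as a dedicated service account holding only the roles the
# training code needs, limiting what a hijacked or poisoned job can reach.
job.run(service_account="vertex-train-min@example-project.iam.gserviceaccount.com")
```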
How AI Is Transforming IAM and Identity Security

Nov 15, 2024 Machine Learning / Identity Security
In recent years, artificial intelligence (AI) has begun revolutionizing Identity Access Management (IAM), reshaping how cybersecurity is approached in this crucial field. Leveraging AI in IAM is about tapping into its analytical capabilities to monitor access patterns and identify anomalies that could signal a potential security breach. The focus has expanded beyond merely managing human identities — now, autonomous systems, APIs, and connected devices also fall within the realm of AI-driven IAM, creating a dynamic security ecosystem that adapts and evolves in response to sophisticated cyber threats.
The Role of AI and Machine Learning in IAM
AI and machine learning (ML) are creating a more robust, proactive IAM system that continuously learns from the environment to enhance security. Let's explore how AI impacts key IAM components:
Intelligent Monitoring and Anomaly Detection
AI enables continuous monitoring of both human and non-human identities, including APIs, service acc...
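To make the anomaly-detection idea concrete, here is a small sketch using scikit-learn's IsolationForest on made-up login features (hour of day, resources touched, failed attempts). The features and thresholds are illustrative only; production IAM analytics would draw on far richer signals for both human and machine identities.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" access events: [hour_of_day, resources_accessed, failed_logins]
normal = np.column_stack([
    rng.normal(13, 2, 500),   # mostly business hours
    rng.poisson(5, 500),      # a handful of resources per session
    rng.poisson(0.2, 500),    # failures are rare
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score two new events: an ordinary session and an off-hours burst of activity.
events = np.array([
    [14, 6, 0],    # typical afternoon session
    [3, 40, 7],    # 3 a.m., many resources, repeated failures
])
print(model.predict(events))  # 1 = looks normal, -1 = flagged as anomalous
```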
Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation

Nov 11, 2024 Machine Learning / Vulnerability
Cybersecurity researchers have uncovered nearly two dozen security flaws spanning 15 different machine learning (ML) related open-source projects. These comprise vulnerabilities discovered both on the server- and client-side, software supply chain security firm JFrog said in an analysis published last week. The server-side weaknesses "allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipelines," it said. The vulnerabilities, discovered in Weave, ZenML, Deep Lake, Vanna.AI, and Mage AI, have been broken down into broader sub-categories that allow for remotely hijacking model registries, ML database frameworks, and taking over ML Pipelines. A brief description of the identified flaws is below -
CVE-2024-7340 (CVSS score: 8.8) - A directory traversal vulnerability in the Weave ML toolkit that allows for reading files across the whole filesystem, effectively allowing a low-privileged authenticated user to es...
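Directory traversal bugs of the kind described for Weave generally come down to joining an untrusted path fragment without confining the result to an allowed directory. The sketch below is a generic mitigation pattern, not Weave's actual code; the base directory and file names are hypothetical.

```python
from pathlib import Path

BASE_DIR = Path("/srv/app/artifacts").resolve()

def safe_open(user_supplied: str):
    """Open a file only if it resolves inside BASE_DIR (requires Python 3.9+ for is_relative_to)."""
    candidate = (BASE_DIR / user_supplied).resolve()
    # Reject "../../etc/passwd"-style inputs that escape the base directory.
    if not candidate.is_relative_to(BASE_DIR):
        raise PermissionError(f"path escapes artifact directory: {user_supplied}")
    return candidate.open("rb")

# safe_open("models/weights.bin")    # allowed
# safe_open("../../../etc/passwd")   # raises PermissionError
```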
Google's AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database Engine

Nov 04, 2024 Artificial Intelligence / Vulnerability
Google said it discovered a zero-day vulnerability in the SQLite open-source database engine using its large language model (LLM) assisted framework called Big Sleep (formerly Project Naptime). The tech giant described the development as the "first real-world vulnerability" uncovered using the artificial intelligence (AI) agent. "We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software," the Big Sleep team said in a blog post shared with The Hacker News. The vulnerability in question is a stack buffer underflow in SQLite, which occurs when a piece of software references a memory location prior to the beginning of the memory buffer, thereby resulting in a crash or arbitrary code execution. "This typically occurs when a pointer or its index is decremented to a position before the buffer, when pointer arithmetic results in a position before the beginning of t...
Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

Oct 29, 2024 AI Security / Vulnerability
A little over three dozen security vulnerabilities have been disclosed in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could lead to remote code execution and information theft. The flaws, identified in tools like ChuanhuChatGPT, Lunary, and LocalAI, have been reported as part of Protect AI's Huntr bug bounty platform. The most severe of the flaws are two shortcomings impacting Lunary, a production toolkit for large language models (LLMs) -
CVE-2024-7474 (CVSS score: 9.1) - An Insecure Direct Object Reference (IDOR) vulnerability that could allow an authenticated user to view or delete external users, resulting in unauthorized data access and potential data loss
CVE-2024-7475 (CVSS score: 9.1) - An improper access control vulnerability that allows an attacker to update the SAML configuration, thereby making it possible to log in as an unauthorized user and access sensitive information
Also discovered in Lunary is anot...
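The Lunary IDOR is an instance of a broader pattern: an endpoint trusts a client-supplied object ID without checking that the caller is allowed to act on it. Below is a minimal, framework-agnostic sketch of the missing authorization check; the data model and function names are hypothetical and are not Lunary's code.

```python
# Hypothetical in-memory store mapping user IDs to account records.
USERS = {
    "u-1": {"id": "u-1", "org": "org-A", "email": "alice@example.com"},
    "u-2": {"id": "u-2", "org": "org-B", "email": "bob@example.com"},
}

def delete_user_insecure(target_id: str) -> None:
    # IDOR: any authenticated caller can delete any user simply by guessing an ID.
    USERS.pop(target_id, None)

def delete_user(caller: dict, target_id: str) -> None:
    """Authorize against the caller's organization before acting on the object."""
    target = USERS.get(target_id)
    if target is None or target["org"] != caller["org"]:
        raise PermissionError("caller is not allowed to manage this user")
    USERS.pop(target_id)

# delete_user(USERS["u-1"], "u-2")  # raises PermissionError instead of deleting
```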