#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
DevSecOps

machine learning | Breaking Cybersecurity News | The Hacker News

Category — machine learning
Google Confirms Android SafetyCore Enables AI-Powered On-Device Content Classification

Google Confirms Android SafetyCore Enables AI-Powered On-Device Content Classification

Feb 11, 2025 Mobile Security / Machine Learning
Google has stepped in to clarify that a newly introduced Android System SafetyCore app does not perform any client-side scanning of content. "Android provides many on-device protections that safeguard users against threats like malware, messaging spam and abuse protections, and phone scam protections, while preserving user privacy and keeping users in control of their data," a spokesperson for the company told The Hacker News when reached for comment. "SafetyCore is a new Google system service for Android 9+ devices that provides the on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users are in control over SafetyCore and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature." SafetyCore (package name "com.google.android.safetycore") was first introduced by Google in October 2024, as part of a set of security measures designed to...
Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection

Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection

Feb 08, 2025 Artificial Intelligence / Supply Chain Security
Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of "broken" pickle files to evade detection. "The pickle files extracted from the mentioned PyTorch archives revealed the malicious Python content at the beginning of the file," ReversingLabs researcher Karlo Zanki said in a report shared with The Hacker News. "In both cases, the malicious payload was a typical platform-aware reverse shell that connects to a hard-coded IP address." The approach has been dubbed nullifAI, as it involves clearcut attempts to sidestep existing safeguards put in place to identify malicious models. The Hugging Face repositories have been listed below - glockr1/ballr7 who-r-u0000/0000000000000000000000000000000000000 It's believed that the models are more of a proof-of-concept (PoC) than an active supply chain attack scenario. The pickle serialization format, used commonly for dis...
Watch Out For These 8 Cloud Security Shifts in 2025

Watch Out For These 8 Cloud Security Shifts in 2025

Feb 04, 2025Threat Detection / Cloud Security
As cloud security evolves in 2025 and beyond, organizations must adapt to both new and evolving realities, including the increasing reliance on cloud infrastructure for AI-driven workflows and the vast quantities of data being migrated to the cloud. But there are other developments that could impact your organizations and drive the need for an even more robust security strategy. Let's take a look… #1: Increased Threat Landscape Encourages Market Consolidation Cyberattacks targeting cloud environments are becoming more sophisticated, emphasizing the need for security solutions that go beyond detection. Organizations will need proactive defense mechanisms to prevent risks from reaching production. Because of this need, the market will favor vendors offering comprehensive, end-to-end security platforms that streamline risk mitigation and enhance operational efficiency. #2: Cloud Security Unifies with SOC Priorities Security operations centers (SOC) and cloud security functions are c...
Italy Bans Chinese DeepSeek AI Over Data Privacy and Ethical Concerns

Italy Bans Chinese DeepSeek AI Over Data Privacy and Ethical Concerns

Jan 31, 2025 AI Ethics / Machine Learning
Italy's data protection watchdog has blocked Chinese artificial intelligence (AI) firm DeepSeek's service within the country, citing a lack of information on its use of users' personal data. The development comes days after the authority, the Garante, sent a series of questions to DeepSeek, asking about its data handling practices and where it obtained its training data. In particular, it wanted to know what personal data is collected by its web platform and mobile app, from which sources, for what purposes, on what legal basis, and whether it is stored in China. In a statement issued January 30, 2025, the Garante said it arrived at the decision after DeepSeek provided information that it said was "completely insufficient." The entities behind the service, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, have "declared that they do not operate in Italy and that European legislation does not apply to them," it...
cyber security

Webinar: 5 Ways New AI Agents Can Automate Identity Attacks | Register Now

websitePush SecurityAI Agents / Identity Security
Watch how Computer-Using Agents can be used by attackers to automate account takeover and exploitation.
New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%

New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%

Jan 03, 2025 Machine Learning / Vulnerability
Cybersecurity researchers have shed light on a new jailbreak technique that could be used to get past a large language model's (LLM) safety guardrails and produce potentially harmful or malicious responses. The multi-turn (aka many-shot) attack strategy has been codenamed Bad Likert Judge by Palo Alto Networks Unit 42 researchers Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and Danny Tsechansky. "The technique asks the target LLM to act as a judge scoring the harmfulness of a given response using the Likert scale , a rating scale measuring a respondent's agreement or disagreement with a statement," the Unit 42 team said . "It then asks the LLM to generate responses that contain examples that align with the scales. The example that has the highest Likert scale can potentially contain the harmful content." The explosion in popularity of artificial intelligence in recent years has also led to a new class of security exploits called prompt in...
AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Case

AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Case

Dec 23, 2024 Machine Learning / Threat Analysis
Cybersecurity researchers have found that it's possible to use large language models (LLMs) to generate new variants of malicious JavaScript code at scale in a manner that can better evade detection. "Although LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect," Palo Alto Networks Unit 42 researchers said in a new analysis. "Criminals can prompt LLMs to perform transformations that are much more natural-looking, which makes detecting this malware more challenging." With enough transformations over time, the approach could have the advantage of degrading the performance of malware classification systems, tricking them into believing that a piece of nefarious code is actually benign. While LLM providers have increasingly enforced security guardrails to prevent them from going off the rails and producing unintended output, bad actors have advertised tools like WormGPT...
Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks

Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks

Dec 06, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed multiple security flaws impacting open-source machine learning (ML) tools and frameworks such as MLflow, H2O, PyTorch, and MLeap that could pave the way for code execution. The vulnerabilities, discovered by JFrog, are part of a broader collection of 22 security shortcomings the supply chain security company first disclosed last month. Unlike the first set that involved flaws on the server-side, the newly detailed ones allow exploitation of ML clients and reside in libraries that handle safe model formats like Safetensors . "Hijacking an ML client in an organization can allow the attackers to perform extensive lateral movement within the organization," the company said . "An ML client is very likely to have access to important ML services such as ML Model Registries or MLOps Pipelines." This, in turn, could expose sensitive information such as model registry credentials, effectively permitting a malicious actor to back...
Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform

Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform

Nov 15, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed two security flaws in Google's Vertex machine learning (ML) platform that, if successfully exploited, could allow malicious actors to escalate privileges and exfiltrate models from the cloud. "By exploiting custom job permissions, we were able to escalate our privileges and gain unauthorized access to all data services in the project," Palo Alto Networks Unit 42 researchers Ofir Balassiano and Ofir Shaty said in an analysis published earlier this week. "Deploying a poisoned model in Vertex AI led to the exfiltration of all other fine-tuned models, posing a serious proprietary and sensitive data exfiltration attack risk." Vertex AI is Google's ML platform for training and deploying custom ML models and artificial intelligence (AI) applications at scale. It was first introduced in May 2021. Crucial to leveraging the privilege escalation flaw is a feature called Vertex AI Pipelines , which allows users to automat...
How AI Is Transforming IAM and Identity Security

How AI Is Transforming IAM and Identity Security

Nov 15, 2024 Machine Learning / Identity Security
In recent years, artificial intelligence (AI) has begun revolutionizing Identity Access Management (IAM), reshaping how cybersecurity is approached in this crucial field. Leveraging AI in IAM is about tapping into its analytical capabilities to monitor access patterns and identify anomalies that could signal a potential security breach. The focus has expanded beyond merely managing human identities — now, autonomous systems, APIs, and connected devices also fall within the realm of AI-driven IAM, creating a dynamic security ecosystem that adapts and evolves in response to sophisticated cyber threats. The Role of AI and Machine Learning in IAM AI and machine learning (ML) are creating a more robust, proactive IAM system that continuously learns from the environment to enhance security. Let's explore how AI impacts key IAM components: Intelligent Monitoring and Anomaly Detection AI enables continuous monitoring of both human and non-human identities , including APIs, service acc...
Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation

Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation

Nov 11, 2024 Machine Learning / Vulnerability
Cybersecurity researchers have uncovered nearly two dozen security flaws spanning 15 different machine learning (ML) related open-source projects. These comprise vulnerabilities discovered both on the server- and client-side, software supply chain security firm JFrog said in an analysis published last week. The server-side weaknesses "allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipelines," it said . The vulnerabilities, discovered in Weave, ZenML, Deep Lake, Vanna.AI, and Mage AI, have been broken down into broader sub-categories that allow for remotely hijacking model registries, ML database frameworks, and taking over ML Pipelines. A brief description of the identified flaws is below - CVE-2024-7340 (CVSS score: 8.8) - A directory traversal vulnerability in the Weave ML toolkit that allows for reading files across the whole filesystem, effectively allowing a low-privileged authenticated user to es...
Google’s AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database Engine

Google's AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database Engine

Nov 04, 2024 Artificial Intelligence / Vulnerability
Google said it discovered a zero-day vulnerability in the SQLite open-source database engine using its large language model (LLM) assisted framework called Big Sleep (formerly Project Naptime). The tech giant described the development as the "first real-world vulnerability" uncovered using the artificial intelligence (AI) agent. "We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software," the Big Sleep team said in a blog post shared with The Hacker News. The vulnerability in question is a stack buffer underflow in SQLite, which occurs when a piece of software references a memory location prior to the beginning of the memory buffer, thereby resulting in a crash or arbitrary code execution. "This typically occurs when a pointer or its index is decremented to a position before the buffer, when pointer arithmetic results in a position before the beginning of t...
Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

Oct 29, 2024 AI Security / Vulnerability
A little over three dozen security vulnerabilities have been disclosed in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could lead to remote code execution and information theft. The flaws, identified in tools like ChuanhuChatGPT, Lunary, and LocalAI, have been reported as part of Protect AI's Huntr bug bounty platform. The most severe of the flaws are two shortcomings impacting Lunary, a production toolkit for large language models (LLMs) - CVE-2024-7474 (CVSS score: 9.1) - An Insecure Direct Object Reference (IDOR) vulnerability that could allow an authenticated user to view or delete external users, resulting in unauthorized data access and potential data loss CVE-2024-7475 (CVSS score: 9.1) - An improper access control vulnerability that allows an attacker to update the SAML configuration, thereby making it possible to log in as an unauthorized user and access sensitive information Also discovered in Lunary is anot...
Researchers Reveal 'Deceptive Delight' Method to Jailbreak AI Models

Researchers Reveal 'Deceptive Delight' Method to Jailbreak AI Models

Oct 23, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have shed light on a new adversarial technique that could be used to jailbreak large language models (LLMs) during the course of an interactive conversation by sneaking in an undesirable instruction between benign ones. The approach has been codenamed Deceptive Delight by Palo Alto Networks Unit 42, which described it as both simple and effective, achieving an average attack success rate (ASR) of 64.6% within three interaction turns. "Deceptive Delight is a multi-turn technique that engages large language models (LLM) in an interactive conversation, gradually bypassing their safety guardrails and eliciting them to generate unsafe or harmful content," Unit 42's Jay Chen and Royce Lu said. It's also a little different from multi-turn jailbreak (aka many-shot jailbreak) methods like Crescendo , wherein unsafe or restricted topics are sandwiched between innocuous instructions, as opposed to gradually leading the model to produce harmful outpu...
The Rise of Zero-Day Vulnerabilities: Why Traditional Security Solutions Fall Short

The Rise of Zero-Day Vulnerabilities: Why Traditional Security Solutions Fall Short

Oct 15, 2024 Threat Detection / Machine Learning
In recent years, the number and sophistication of zero-day vulnerabilities have surged, posing a critical threat to organizations of all sizes. A zero-day vulnerability is a security flaw in software that is unknown to the vendor and remains unpatched at the time of discovery. Attackers exploit these flaws before any defensive measures can be implemented, making zero-days a potent weapon for cybercriminals. A recent example is, for instance, CVE-2024-0519 in Google Chrome: this high-severity vulnerability was actively exploited in the wild and involved an out-of-bounds memory access issue in the V8 JavaScript engine. It allowed remote attackers to access sensitive information or trigger a crash by exploiting heap corruption.  Also, the zero-day vulnerability at Rackspace caused massive trouble. This incident was a zero-day remote code execution vulnerability in ScienceLogic's monitoring application that led to the compromise of Rackspace's internal systems. The breach expose...
The Value of AI-Powered Identity

The Value of AI-Powered Identity

Oct 08, 2024 Machine Learning / Data Security
Introduction Artificial intelligence (AI) deepfakes and misinformation may cause worry in the world of technology and investment, but this powerful, foundational technology has the potential to benefit organizations of all kinds when harnessed appropriately. In the world of cybersecurity, one of the most important areas of application of AI is augmenting and enhancing identity management systems. AI-powered identity lifecycle management is at the vanguard of digital identity and is used to enhance security, streamline governance and improve the UX of an identity system. Benefits of an AI-powered identity AI is a technology that crosses barriers between traditionally opposing business area drivers, bringing previously conflicting areas together: AI enables better operational efficiency by reducing risk and improving security AI enables businesses to achieve goals by securing cyber-resilience AI facilitates agile and secure access by ensuring regulatory compliance AI and unifi...
EPSS vs. CVSS: What’s the Best Approach to Vulnerability Prioritization?

EPSS vs. CVSS: What's the Best Approach to Vulnerability Prioritization?

Sep 26, 2024 Vulnerability Management / Security Automation
Many businesses rely on the Common Vulnerability Scoring System (CVSS) to assess the severity of vulnerabilities for prioritization. While these scores provide some insight into the potential impact of a vulnerability, they don't factor in real-world threat data, such as the likelihood of exploitation. With new vulnerabilities discovered daily, teams don't have the time - or the budget - to waste on fixing vulnerabilities that won't actually reduce risk. Read on to learn more about how CVSS and EPSS compare and why using EPSS is a game changer for your vulnerability prioritization process.  What is vulnerability prioritization? Vulnerability prioritization is the process of evaluating and ranking vulnerabilities based on the potential impact they could have on an organization. The goal is to help security teams determine which vulnerabilities should be addressed, in what timeframe, or if they need to be fixed at all. This process ensures that the most critical risks are mitigat...
Expert Insights / Articles Videos
Cybersecurity Resources