
machine learning | Breaking Cybersecurity News | The Hacker News

Category — machine learning
Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks

Dec 06, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed multiple security flaws impacting open-source machine learning (ML) tools and frameworks such as MLflow, H2O, PyTorch, and MLeap that could pave the way for code execution. The vulnerabilities, discovered by JFrog, are part of a broader collection of 22 security shortcomings the supply chain security company first disclosed last month. Unlike the first set, which involved server-side flaws, the newly detailed ones allow exploitation of ML clients and reside in libraries that handle safe model formats like Safetensors. "Hijacking an ML client in an organization can allow the attackers to perform extensive lateral movement within the organization," the company said. "An ML client is very likely to have access to important ML services such as ML Model Registries or MLOps Pipelines." This, in turn, could expose sensitive information such as model registry credentials, effectively permitting a malicious actor to back...
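The client-side risk described above can be sketched in a few lines. This is a generic illustration of why pickle-based model formats are dangerous to load, not JFrog's proof of concept; the class name and payload are hypothetical, and a harmless callable stands in for an attacker's command.

```python
import pickle

# Hypothetical illustration: pickle-based model files can run code at load
# time, which is why tensor-only formats like Safetensors exist.
class NotReallyAModel:
    def __reduce__(self):
        # A real attacker would return something like (os.system, ("...",));
        # a harmless callable shows that loading invokes it either way.
        return (str.upper, ("code ran at load time",))

blob = pickle.dumps(NotReallyAModel())
result = pickle.loads(blob)   # returns the callable's result, not a model
print(result)                 # → CODE RAN AT LOAD TIME
```

Loading the blob never reconstructs a model object at all; deserialization itself executes the attacker-chosen call, which is the property the safe-format libraries are meant to rule out.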
Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform

Nov 15, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed two security flaws in Google's Vertex machine learning (ML) platform that, if successfully exploited, could allow malicious actors to escalate privileges and exfiltrate models from the cloud. "By exploiting custom job permissions, we were able to escalate our privileges and gain unauthorized access to all data services in the project," Palo Alto Networks Unit 42 researchers Ofir Balassiano and Ofir Shaty said in an analysis published earlier this week. "Deploying a poisoned model in Vertex AI led to the exfiltration of all other fine-tuned models, posing a serious proprietary and sensitive data exfiltration attack risk." Vertex AI is Google's ML platform for training and deploying custom ML models and artificial intelligence (AI) applications at scale. It was first introduced in May 2021. Crucial to leveraging the privilege escalation flaw is a feature called Vertex AI Pipelines, which allows users to automat...
Crowdstrike Named A Leader In Endpoint Protection Platforms

Nov 22, 2024 Endpoint Security / Threat Detection
CrowdStrike is named a Leader in the 2024 Gartner® Magic Quadrant™ for Endpoint Protection Platforms for the fifth consecutive time, positioned highest on Ability to Execute and furthest to the right on Completeness of Vision.
How AI Is Transforming IAM and Identity Security

Nov 15, 2024 Machine Learning / Identity Security
In recent years, artificial intelligence (AI) has begun revolutionizing Identity Access Management (IAM), reshaping how cybersecurity is approached in this crucial field. Leveraging AI in IAM is about tapping into its analytical capabilities to monitor access patterns and identify anomalies that could signal a potential security breach. The focus has expanded beyond merely managing human identities — now, autonomous systems, APIs, and connected devices also fall within the realm of AI-driven IAM, creating a dynamic security ecosystem that adapts and evolves in response to sophisticated cyber threats. The Role of AI and Machine Learning in IAM AI and machine learning (ML) are creating a more robust, proactive IAM system that continuously learns from the environment to enhance security. Let's explore how AI impacts key IAM components: Intelligent Monitoring and Anomaly Detection AI enables continuous monitoring of both human and non-human identities , including APIs, service acc...
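The anomaly-detection idea behind AI-driven IAM can be sketched in miniature. This is a toy statistical outlier check, not a production identity platform: it assumes access logs reduced to a single made-up feature (login hour), where real systems model many signals with far richer methods.

```python
from statistics import mean, stdev

# Hypothetical feature: the hour of day at which one account logs in.
# Seven logins cluster around business hours; one happens at 3 a.m.
login_hours = [9, 9, 10, 8, 9, 10, 9, 3]

mu, sigma = mean(login_hours), stdev(login_hours)

# Flag any login more than two standard deviations from the mean —
# the crudest possible stand-in for "identify anomalies in access patterns".
anomalies = [h for h in login_hours if abs(h - mu) > 2 * sigma]
print(anomalies)  # → [3]
```

The 3 a.m. login is the only value flagged; in an IAM context that flag would feed a step-up authentication or review workflow rather than an automatic block.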
Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation

Nov 11, 2024 Machine Learning / Vulnerability
Cybersecurity researchers have uncovered nearly two dozen security flaws spanning 15 different machine learning (ML) related open-source projects. These comprise vulnerabilities discovered both on the server- and client-side, software supply chain security firm JFrog said in an analysis published last week. The server-side weaknesses "allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipelines," it said. The vulnerabilities, discovered in Weave, ZenML, Deep Lake, Vanna.AI, and Mage AI, have been broken down into broader sub-categories covering remote hijacking of model registries and ML database frameworks, and takeover of ML pipelines. A brief description of the identified flaws is below -
CVE-2024-7340 (CVSS score: 8.8) - A directory traversal vulnerability in the Weave ML toolkit that allows for reading files across the whole filesystem, effectively allowing a low-privileged authenticated user to es...
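The directory-traversal class behind CVE-2024-7340 can be sketched generically. This is a hypothetical file-serving handler, not Weave's actual code: the vulnerable version joins user input onto a base directory without validation, while the fixed version resolves the path and rejects anything that escapes the base.

```python
from pathlib import Path

# Hypothetical model-store base directory.
BASE = Path("/srv/models").resolve()

def fetch_unsafe(name: str) -> Path:
    # Vulnerable: "../" sequences in `name` escape BASE entirely.
    return BASE / name

def fetch_safe(name: str) -> Path:
    candidate = (BASE / name).resolve()
    if not candidate.is_relative_to(BASE):  # Python 3.9+
        raise ValueError("path traversal attempt")
    return candidate

# The classic payload walks up and out of the intended directory.
print(fetch_unsafe("../../etc/passwd"))  # points outside /srv/models
```

Resolving before checking matters: a prefix string comparison on the unresolved path would still accept `"../"` sequences, which is exactly the low-privilege filesystem read the advisory describes.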
Google's AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database Engine

Nov 04, 2024 Artificial Intelligence / Vulnerability
Google said it discovered a zero-day vulnerability in the SQLite open-source database engine using its large language model (LLM) assisted framework called Big Sleep (formerly Project Naptime). The tech giant described the development as the "first real-world vulnerability" uncovered using the artificial intelligence (AI) agent. "We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software," the Big Sleep team said in a blog post shared with The Hacker News. The vulnerability in question is a stack buffer underflow in SQLite, which occurs when a piece of software references a memory location prior to the beginning of the memory buffer, thereby resulting in a crash or arbitrary code execution. "This typically occurs when a pointer or its index is decremented to a position before the buffer, when pointer arithmetic results in a position before the beginning of t...
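A stack buffer underflow itself cannot be reproduced in a memory-safe language, but the faulty pattern the excerpt describes — an index decremented to a position before the start of a buffer — can be sketched. This is a hypothetical scanner routine, not SQLite's code; Python is used only to show the missing lower-bounds check.

```python
# Hypothetical sketch of the bug class, not SQLite's code: an index is
# decremented with no check that it stays at or after the buffer start.
def prev_item_unsafe(buf: list, idx: int) -> int:
    # idx == 0 yields index -1; in C this reads memory before the buffer
    # (undefined behavior: crash or potential arbitrary code execution).
    return buf[idx - 1]

def prev_item_safe(buf: list, idx: int) -> int:
    if idx <= 0:
        raise IndexError("underflow: position before start of buffer")
    return buf[idx - 1]

# Python masks the bug by wrapping the negative index to the list's end,
# which is why the bounds check must be explicit.
print(prev_item_unsafe([10, 20, 30], 0))  # → 30
```

The guard in `prev_item_safe` is the shape of fix such an underflow typically receives: reject the decrement before the read, rather than relying on the read to fail.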
Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

Oct 29, 2024 AI Security / Vulnerability
A little over three dozen security vulnerabilities have been disclosed in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could lead to remote code execution and information theft. The flaws, identified in tools like ChuanhuChatGPT, Lunary, and LocalAI, have been reported as part of Protect AI's Huntr bug bounty platform. The most severe of the flaws are two shortcomings impacting Lunary, a production toolkit for large language models (LLMs) -
CVE-2024-7474 (CVSS score: 9.1) - An Insecure Direct Object Reference (IDOR) vulnerability that could allow an authenticated user to view or delete external users, resulting in unauthorized data access and potential data loss
CVE-2024-7475 (CVSS score: 9.1) - An improper access control vulnerability that allows an attacker to update the SAML configuration, thereby making it possible to log in as an unauthorized user and access sensitive information
Also discovered in Lunary is anot...
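The IDOR pattern behind a flaw like CVE-2024-7474 can be sketched generically. These are hypothetical handlers, not Lunary's code: the vulnerable version treats "authenticated" as sufficient, while the fix adds an object-level ownership check before acting on the requested ID.

```python
# Hypothetical user store: records keyed by ID, each with an owner.
records = {1: {"owner": "alice"}, 2: {"owner": "bob"}}

def delete_record_idor(requester: str, record_id: int) -> bool:
    # Vulnerable: any authenticated requester may delete *any* record,
    # simply by supplying its ID — the essence of an IDOR.
    return records.pop(record_id, None) is not None

def delete_record_fixed(requester: str, record_id: int) -> bool:
    record = records.get(record_id)
    if record is None or record["owner"] != requester:
        return False          # object-level authorization check
    del records[record_id]
    return True

print(delete_record_fixed("alice", 2))  # → False (alice doesn't own it)
print(delete_record_fixed("bob", 2))    # → True
```

The fix is authorization, not authentication: the server must verify that the requester is entitled to the specific object referenced, instead of trusting the client-supplied identifier.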
Researchers Reveal 'Deceptive Delight' Method to Jailbreak AI Models

Oct 23, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have shed light on a new adversarial technique that could be used to jailbreak large language models (LLMs) during the course of an interactive conversation by sneaking in an undesirable instruction between benign ones. The approach has been codenamed Deceptive Delight by Palo Alto Networks Unit 42, which described it as both simple and effective, achieving an average attack success rate (ASR) of 64.6% within three interaction turns. "Deceptive Delight is a multi-turn technique that engages large language models (LLM) in an interactive conversation, gradually bypassing their safety guardrails and eliciting them to generate unsafe or harmful content," Unit 42's Jay Chen and Royce Lu said. It is also a little different from multi-turn jailbreak (aka many-shot jailbreak) methods like Crescendo, wherein unsafe or restricted topics are sandwiched between innocuous instructions, as opposed to gradually leading the model to produce harmful outpu...
The Rise of Zero-Day Vulnerabilities: Why Traditional Security Solutions Fall Short

Oct 15, 2024 Threat Detection / Machine Learning
In recent years, the number and sophistication of zero-day vulnerabilities have surged, posing a critical threat to organizations of all sizes. A zero-day vulnerability is a security flaw in software that is unknown to the vendor and remains unpatched at the time of discovery. Attackers exploit these flaws before any defensive measures can be implemented, making zero-days a potent weapon for cybercriminals. A recent example is CVE-2024-0519 in Google Chrome: this high-severity vulnerability was actively exploited in the wild and involved an out-of-bounds memory access issue in the V8 JavaScript engine. It allowed remote attackers to access sensitive information or trigger a crash by exploiting heap corruption. The zero-day vulnerability at Rackspace also caused massive trouble. This incident was a zero-day remote code execution vulnerability in ScienceLogic's monitoring application that led to the compromise of Rackspace's internal systems. The breach expose...
The Value of AI-Powered Identity

Oct 08, 2024 Machine Learning / Data Security
Introduction Artificial intelligence (AI) deepfakes and misinformation may cause worry in the world of technology and investment, but this powerful, foundational technology has the potential to benefit organizations of all kinds when harnessed appropriately. In the world of cybersecurity, one of the most important areas of application of AI is augmenting and enhancing identity management systems. AI-powered identity lifecycle management is at the vanguard of digital identity and is used to enhance security, streamline governance and improve the UX of an identity system. Benefits of an AI-powered identity AI is a technology that crosses barriers between traditionally opposing business area drivers, bringing previously conflicting areas together:
- AI enables better operational efficiency by reducing risk and improving security
- AI enables businesses to achieve goals by securing cyber-resilience
- AI facilitates agile and secure access by ensuring regulatory compliance
AI and unifi...
EPSS vs. CVSS: What's the Best Approach to Vulnerability Prioritization?

Sep 26, 2024 Vulnerability Management / Security Automation
Many businesses rely on the Common Vulnerability Scoring System (CVSS) to assess the severity of vulnerabilities for prioritization. While these scores provide some insight into the potential impact of a vulnerability, they don't factor in real-world threat data, such as the likelihood of exploitation. With new vulnerabilities discovered daily, teams don't have the time - or the budget - to waste on fixing vulnerabilities that won't actually reduce risk. Read on to learn more about how CVSS and EPSS compare and why using EPSS is a game changer for your vulnerability prioritization process.  What is vulnerability prioritization? Vulnerability prioritization is the process of evaluating and ranking vulnerabilities based on the potential impact they could have on an organization. The goal is to help security teams determine which vulnerabilities should be addressed, in what timeframe, or if they need to be fixed at all. This process ensures that the most critical risks are mitigat...
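The difference between severity-driven and likelihood-driven prioritization can be shown with a few lines of sorting. The CVE names and scores below are made up for illustration; the point is only that ranking by EPSS (probability of exploitation) reorders the queue relative to ranking by CVSS (severity) alone.

```python
# Hypothetical backlog: CVSS measures severity, EPSS estimates the
# probability of exploitation in the wild (0.0 to 1.0).
vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.02},
    {"cve": "CVE-B", "cvss": 7.5, "epss": 0.89},
    {"cve": "CVE-C", "cvss": 8.8, "epss": 0.45},
]

by_cvss = [v["cve"] for v in sorted(vulns, key=lambda v: v["cvss"], reverse=True)]
by_epss = [v["cve"] for v in sorted(vulns, key=lambda v: v["epss"], reverse=True)]

print(by_cvss)  # → ['CVE-A', 'CVE-C', 'CVE-B']
print(by_epss)  # → ['CVE-B', 'CVE-C', 'CVE-A']
```

The most severe finding (CVE-A) drops to the bottom of the likelihood-ranked list because it is rarely exploited, while the moderate-severity but actively exploited CVE-B jumps to the top — the reordering the article argues saves remediation time.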
Researchers Identify Over 20 Supply Chain Vulnerabilities in MLOps Platforms

Aug 26, 2024 ML Security / Artificial Intelligence
Cybersecurity researchers are warning about the security risks in the machine learning (ML) software supply chain following the discovery of more than 20 vulnerabilities that could be exploited to target MLOps platforms. These vulnerabilities, which are described as inherent and implementation-based flaws, could have severe consequences, ranging from arbitrary code execution to loading malicious datasets. MLOps platforms offer the ability to design and execute an ML model pipeline, with a model registry acting as a repository used to store and version trained ML models. These models can then be embedded within an application or allow other clients to query them using an API (aka model-as-a-service). "Inherent vulnerabilities are vulnerabilities that are caused by the underlying formats and processes used in the target technology," JFrog researchers said in a detailed report. Some examples of inherent vulnerabilities include abusing ML models to run code of the attacker...
The AI Hangover is Here – The End of the Beginning

Aug 12, 2024 AI Technology / Machine Learning
After a good year of sustained exuberance, the hangover is finally here. It's a gentle one (for now), as the market corrects the share price of the major players (like Nvidia, Microsoft, and Google), while other players reassess the market and adjust priorities. Gartner calls it the trough of disillusionment , when interest wanes and implementations fail to deliver the promised breakthroughs. Producers of the technology shake out or fail. Investment continues only if the surviving providers improve their products to the satisfaction of early adopters.  Let's be clear, this was always going to be the case: the post-human revolution promised by the AI cheerleaders was never a realistic goal, and the incredible excitement triggered by the early LLMs was not based on market success.  AI is here to stay  What's next for AI then? Well, if it follows the Gartner hype cycle, the deep crash is followed by the slope of enlightenment where the maturing technology regains its foo...
Safeguard Personal and Corporate Identities with Identity Intelligence

Jul 19, 2024 Machine Learning / Corporate Security
Learn about critical threats that can impact your organization and the bad actors behind them from Cybersixgill's threat experts. Each story shines a light on underground activities, the threat actors involved, and why you should care, along with what you can do to mitigate risk.  In the current cyber threat landscape, the protection of personal and corporate identities has become vital. Once in the hands of cybercriminals, compromised credentials and accounts provide unauthorized access to corporations' sensitive information and an entry point to launch costly ransomware and other malware attacks. To properly mitigate threats stemming from compromised credentials and accounts, organizations need identity intelligence. Understanding the significance of identity intelligence and the benefits it delivers is foundational to maintaining a secure posture and minimizing risk.  There is a perception that security teams and threat analysts are already overloaded by too much data. ...