AI-as-a-Service | Breaking Cybersecurity News | The Hacker News

AI Company Hugging Face Detects Unauthorized Access to Its Spaces Platform

Jun 01, 2024 AI-as-a-Service / Data Breach
Artificial Intelligence (AI) company Hugging Face on Friday disclosed that it detected unauthorized access to its Spaces platform earlier this week. "We have suspicions that a subset of Spaces' secrets could have been accessed without authorization," it said in an advisory. Spaces offers a way for users to create, host, and share AI and machine learning (ML) applications. It also functions as a discovery service to look up AI apps made by other users on the platform. In response to the security event, Hugging Face said it is revoking a number of HF tokens present in those secrets and notifying affected users via email. "We recommend you refresh any key or token and consider switching your HF tokens to fine-grained access tokens which are the new default," it added. Hugging Face, however, did not disclose how many users are affected by the incident, which is currently under further investigation. It has...
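For readers rotating credentials in line with that guidance, a minimal sketch of verifying a freshly issued fine-grained token with the standard huggingface_hub client is shown below; the token value and repository ID are placeholders, not values from the advisory.

```python
# Sketch: confirm a newly rotated fine-grained Hugging Face token works
# and maps to the expected account. Assumes the standard `huggingface_hub`
# client; the token and repo ID below are placeholders.
from huggingface_hub import HfApi

NEW_TOKEN = "hf_xxx"  # placeholder: the fine-grained token issued after rotation

api = HfApi(token=NEW_TOKEN)

# Check that the token authenticates and which account it belongs to.
identity = api.whoami()
print("Authenticated as:", identity["name"])

# Spot-check that the token's scope covers only what you expect,
# e.g. read access to one private repo rather than the whole account.
print(api.list_repo_files("your-org/your-private-model"))  # placeholder repo ID
```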
Experts Find Flaw in Replicate AI Service Exposing Customers' Models and Data

May 25, 2024 Machine Learning / Data Breach
Cybersecurity researchers have discovered a critical security flaw in the artificial intelligence (AI)-as-a-service provider Replicate that could have allowed threat actors to gain access to proprietary AI models and sensitive information. "Exploitation of this vulnerability would have allowed unauthorized access to the AI prompts and results of all Replicate's platform customers," cloud security firm Wiz said in a report published this week. The issue stems from the fact that AI models are typically packaged in formats that allow arbitrary code execution, which an attacker could weaponize to perform cross-tenant attacks by means of a malicious model. Replicate makes use of an open-source tool called Cog to containerize and package machine learning models that can then be deployed either in a self-hosted environment or to Replicate. Wiz said that it created a rogue Cog container and uploaded it to Replicate, ultimately employing it to...
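The root issue Wiz highlights, that common model formats execute code when loaded, is easy to see with Python's pickle, which many ML serialization paths build on. The snippet below is a generic illustration of that behavior, not Replicate's or Cog's actual code.

```python
# Illustration of why pickle-based model formats permit arbitrary code
# execution: unpickling replays __reduce__, which can name any callable.
# Generic example only; not code from Replicate, Cog, or Wiz's research.
import pickle


class NotReallyAModel:
    def __reduce__(self):
        # Whatever callable and arguments are returned here run when the
        # "model" file is deserialized by the victim.
        import os
        return (os.system, ("echo code ran during model load",))


payload = pickle.dumps(NotReallyAModel())

# Any service that deserializes the file executes the embedded command.
pickle.loads(payload)
```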
Want to Grow Vulnerability Management into Exposure Management? Start Here!

Dec 05, 2024 Attack Surface / Exposure Management
Vulnerability Management (VM) has long been a cornerstone of organizational cybersecurity. Nearly as old as the discipline of cybersecurity itself, it aims to help organizations identify and address potential security issues before they become serious problems. Yet, in recent years, the limitations of this approach have become increasingly evident. At its core, Vulnerability Management remains essential for identifying and addressing weaknesses. But as time marches on and attack avenues evolve, this approach is beginning to show its age. In a recent report, How to Grow Vulnerability Management into Exposure Management (Gartner, 8 November 2024, Mitchell Schneider et al.), we believe Gartner® addresses this point precisely and demonstrates how organizations can – and must – shift from a vulnerability-centric strategy to a broader Exposure Management (EM) framework. We feel it's more than a worthwhile read an...
AI-as-a-Service Providers Vulnerable to PrivEsc and Cross-Tenant Attacks

Apr 05, 2024 Artificial Intelligence / Supply Chain Attack
New research has found that artificial intelligence (AI)-as-a-service providers such as Hugging Face are susceptible to two critical risks that could allow threat actors to escalate privileges, gain cross-tenant access to other customers' models, and even take over the continuous integration and continuous deployment (CI/CD) pipelines. "Malicious models represent a major risk to AI systems, especially for AI-as-a-service providers because potential attackers may leverage these models to perform cross-tenant attacks," Wiz researchers Shir Tamari and Sagi Tzadik said. "The potential impact is devastating, as attackers may be able to access the millions of private AI models and apps stored within AI-as-a-service providers." The development comes as machine learning pipelines have emerged as a brand new supply chain attack vector, with repositories like Hugging Face becoming an attractive target for staging adversarial attacks designed to glean sensitive infor...
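A common mitigation for the malicious-model risk described above is to avoid executable serialization formats when consuming third-party weights. The following is a hedged sketch using the safetensors format and PyTorch's weights_only loading option; the file paths are placeholders and the snippet assumes both libraries are installed.

```python
# Sketch of safer weight loading when pulling third-party models.
# File paths are placeholders; assumes `torch` and `safetensors` are installed.
import torch
from safetensors.torch import load_file

# Preferred: safetensors files store raw tensors only, so loading them
# cannot trigger arbitrary code execution the way pickle-based files can.
state_dict = load_file("downloaded_model.safetensors")

# Alternative when only a pickle-based checkpoint is available: restrict
# deserialization to plain tensors and containers instead of arbitrary objects.
state_dict = torch.load("downloaded_model.bin", weights_only=True)
```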