#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News

AI Security | Breaking Cybersecurity News | The Hacker News

Category — AI Security
A Guide to Securing AI App Development: Join This Cybersecurity Webinar

A Guide to Securing AI App Development: Join This Cybersecurity Webinar

Dec 02, 2024 AI Security / Data Protection
Artificial Intelligence (AI) is no longer a far-off dream—it's here, changing the way we live. From ordering coffee to diagnosing diseases, it's everywhere. But while you're creating the next big AI-powered app, hackers are already figuring out ways to break it. Every AI app is an opportunity—and a potential risk. The stakes are huge: data leaks, downtime, and even safety threats if security isn't built in. With AI adoption moving fast, securing your projects is no longer optional—it's a must. Join Liqian Lim, Senior Product Marketing Manager at Snyk, for an exclusive webinar that's all about securing the future of AI development. Titled " Building Tomorrow, Securely: Securing the Use of AI in App Development ," this session will arm you with the knowledge and tools to tackle the challenges of AI-powered innovation. What You'll Learn: Get AI-Ready: How to make your AI projects secure from the start. Spot Hidden Risks: Uncover threats you might not see coming. Understand the Ma...
Microsoft Fixes AI, Cloud, and ERP Security Flaws; One Exploited in Active Attacks

Microsoft Fixes AI, Cloud, and ERP Security Flaws; One Exploited in Active Attacks

Nov 29, 2024 AI Security / Cloud Security
Microsoft has addressed four security flaws impacting its artificial intelligence (AI), cloud, enterprise resource planning, and Partner Center offerings, including one that it said has been exploited in the wild. The vulnerability that has been tagged with an "Exploitation Detected" assessment is CVE-2024-49035 (CVSS score: 8.7), a privilege escalation flaw in partner.microsoft[.]com. "An improper access control vulnerability in partner.microsoft[.]com allows an unauthenticated attacker to elevate privileges over a network," the tech giant said in an advisory released this week. Microsoft credited Gautam Peri, Apoorv Wadhwa, and an anonymous researcher for reporting the flaw, but did not reveal any specifics on how it's being exploited in real-world attacks. Fixes for the shortcomings are being rolled out automatically as part of updates to the online version of Microsoft Power Apps. Also addressed by Redmond are three other vulnerabilities, two of which...
Want to Grow Vulnerability Management into Exposure Management? Start Here!

Want to Grow Vulnerability Management into Exposure Management? Start Here!

Dec 05, 2024Attack Surface / Exposure Management
Vulnerability Management (VM) has long been a cornerstone of organizational cybersecurity. Nearly as old as the discipline of cybersecurity itself, it aims to help organizations identify and address potential security issues before they become serious problems. Yet, in recent years, the limitations of this approach have become increasingly evident.  At its core, Vulnerability Management processes remain essential for identifying and addressing weaknesses. But as time marches on and attack avenues evolve, this approach is beginning to show its age. In a recent report, How to Grow Vulnerability Management into Exposure Management (Gartner, How to Grow Vulnerability Management Into Exposure Management, 8 November 2024, Mitchell Schneider Et Al.), we believe Gartner® addresses this point precisely and demonstrates how organizations can – and must – shift from a vulnerability-centric strategy to a broader Exposure Management (EM) framework. We feel it's more than a worthwhile read an...
Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation

Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation

Nov 11, 2024 Machine Learning / Vulnerability
Cybersecurity researchers have uncovered nearly two dozen security flaws spanning 15 different machine learning (ML) related open-source projects. These comprise vulnerabilities discovered both on the server- and client-side, software supply chain security firm JFrog said in an analysis published last week. The server-side weaknesses "allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipelines," it said . The vulnerabilities, discovered in Weave, ZenML, Deep Lake, Vanna.AI, and Mage AI, have been broken down into broader sub-categories that allow for remotely hijacking model registries, ML database frameworks, and taking over ML Pipelines. A brief description of the identified flaws is below - CVE-2024-7340 (CVSS score: 8.8) - A directory traversal vulnerability in the Weave ML toolkit that allows for reading files across the whole filesystem, effectively allowing a low-privileged authenticated user to es...
cyber security

Innovate Securely: Top Strategies to Harmonize AppSec and R&D Teams

websiteBackslashApplication Security
Tackle common challenges to make security and innovation work seamlessly.
Critical Flaws in Ollama AI Framework Could Enable DoS, Model Theft, and Poisoning

Critical Flaws in Ollama AI Framework Could Enable DoS, Model Theft, and Poisoning

Nov 04, 2024 Vulnerability / Cyber Threat
Cybersecurity researchers have disclosed six security flaws in the Ollama artificial intelligence (AI) framework that could be exploited by a malicious actor to perform various actions, including denial-of-service, model poisoning, and model theft. "Collectively, the vulnerabilities could allow an attacker to carry out a wide-range of malicious actions with a single HTTP request, including denial-of-service (DoS) attacks, model poisoning, model theft, and more," Oligo Security researcher Avi Lumelsky said in a report published last week. Ollama is an open-source application that allows users to deploy and operate large language models (LLMs) locally on Windows, Linux, and macOS devices. Its project repository on GitHub has been forked 7,600 times to date. A brief description of the six vulnerabilities is below - CVE-2024-39719 (CVSS score: 7.5) - A vulnerability that an attacker can exploit using /api/create an endpoint to determine the existence of a file in the se...
Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

Oct 29, 2024 AI Security / Vulnerability
A little over three dozen security vulnerabilities have been disclosed in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could lead to remote code execution and information theft. The flaws, identified in tools like ChuanhuChatGPT, Lunary, and LocalAI, have been reported as part of Protect AI's Huntr bug bounty platform. The most severe of the flaws are two shortcomings impacting Lunary, a production toolkit for large language models (LLMs) - CVE-2024-7474 (CVSS score: 9.1) - An Insecure Direct Object Reference (IDOR) vulnerability that could allow an authenticated user to view or delete external users, resulting in unauthorized data access and potential data loss CVE-2024-7475 (CVSS score: 9.1) - An improper access control vulnerability that allows an attacker to update the SAML configuration, thereby making it possible to log in as an unauthorized user and access sensitive information Also discovered in Lunary is anot...
OpenAI Blocks 20 Global Malicious Campaigns Using AI for Cybercrime and Disinformation

OpenAI Blocks 20 Global Malicious Campaigns Using AI for Cybercrime and Disinformation

Oct 10, 2024 Cybercrime / Disinformation
OpenAI on Wednesday said it has disrupted more than 20 operations and deceptive networks across the world that attempted to use its platform for malicious purposes since the start of the year. This activity encompassed debugging malware, writing articles for websites, generating biographies for social media accounts, and creating AI-generated profile pictures for fake accounts on X. "Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," the artificial intelligence (AI) company said . It also said it disrupted activity that generated social media content related to elections in the U.S., Rwanda, and to a lesser extent India and the European Union, and that none of these networks attracted viral engagement or sustained audiences. This included efforts undertaken by an Israeli commercial company named STOIC (als...
5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage

5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage

Oct 01, 2024 Generative AI / Data Protection
Since its emergence, Generative AI has revolutionized enterprise productivity. GenAI tools enable faster and more effective software development, financial analysis, business planning, and customer engagement. However, this business agility comes with significant risks, particularly the potential for sensitive data leakage. As organizations attempt to balance productivity gains with security concerns, many have been forced to choose between unrestricted GenAI usage to banning it altogether. A new e-guide by LayerX titled 5 Actionable Measures to Prevent Data Leakage Through Generative AI Tools is designed to help organizations navigate the challenges of GenAI usage in the workplace. The guide offers practical steps for security managers to protect sensitive corporate data while still reaping the productivity benefits of GenAI tools like ChatGPT. This approach is intended to allow companies to strike the right balance between innovation and security. Why Worry About ChatGPT? The e...
ChatGPT macOS Flaw Could've Enabled Long-Term Spyware via Memory Function

ChatGPT macOS Flaw Could've Enabled Long-Term Spyware via Memory Function

Sep 25, 2024 Artificial Intelligence / Vulnerability
A now-patched security vulnerability in OpenAI's ChatGPT app for macOS could have made it possible for attackers to plant long-term persistent spyware into the artificial intelligence (AI) tool's memory. The technique, dubbed SpAIware , could be abused to facilitate "continuous data exfiltration of any information the user typed or responses received by ChatGPT, including any future chat sessions," security researcher Johann Rehberger said . The issue, at its core, abuses a feature called memory , which OpenAI introduced earlier this February before rolling it out to ChatGPT Free, Plus, Team, and Enterprise users at the start of the month. What it does is essentially allow ChatGPT to remember certain things across chats so that it saves users the effort of repeating the same information over and over again. Users also have the option to instruct the program to forget something. "ChatGPT's memories evolve with your interactions and aren't linked to s...
Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

Aug 27, 2024 AI Security / Vulnerability
Details have emerged about a now-patched vulnerability in Microsoft 365 Copilot that could enable the theft of sensitive user information using a technique called ASCII smuggling. " ASCII Smuggling is a novel technique that uses special Unicode characters that mirror ASCII but are actually not visible in the user interface," security researcher Johann Rehberger said . "This means that an attacker can have the [large language model] render, to the user, invisible data, and embed them within clickable hyperlinks. This technique basically stages the data for exfiltration!" The entire attack strings together a number of attack methods to fashion them into a reliable exploit chain. This includes the following steps - Trigger prompt injection via malicious content concealed in a document shared on the chat to seize control of the chatbot Using a prompt injection payload to instruct Copilot to search for more emails and documents, a technique called automatic too...
Researchers Uncover Vulnerabilities in AI-Powered Azure Health Bot Service

Researchers Uncover Vulnerabilities in AI-Powered Azure Health Bot Service

Aug 13, 2024 Healthcare / Vulnerability
Cybersecurity researchers have discovered two security flaws in Microsoft's Azure Health Bot Service that, if exploited, could permit a malicious actor to achieve lateral movement within customer environments and access sensitive patient data. The critical issues, now patched by Microsoft, could have allowed access to cross-tenant resources within the service, Tenable said in a new report shared with The Hacker News. The Azure AI Health Bot Service is a cloud platform that enables developers in healthcare organizations to build and deploy AI-powered virtual health assistants and create copilots to manage administrative workloads and engage with their patients. This includes bots created by insurance service providers to allow customers to look up the status of a claim and ask questions about benefits and services, as well as bots managed by healthcare entities to help patients find appropriate care or look up nearby doctors. Tenable's research specifically focuses on on...
Experts Uncover Severe AWS Flaws Leading to RCE, Data Theft, and Full-Service Takeovers

Experts Uncover Severe AWS Flaws Leading to RCE, Data Theft, and Full-Service Takeovers

Aug 09, 2024 Cloud Security / Data Protection
Cybersecurity researchers have discovered multiple critical flaws in Amazon Web Services (AWS) offerings that, if successfully exploited, could result in serious consequences. "The impact of these vulnerabilities range between remote code execution (RCE), full-service user takeover (which might provide powerful administrative access), manipulation of AI modules, exposing sensitive data, data exfiltration, and denial-of-service," cloud security firm Aqua said in a detailed report shared with The Hacker News. Following responsible disclosure in February 2024, Amazon addressed the shortcomings over several months from March to June. The findings were presented at Black Hat USA 2024. Central to the issue, dubbed Bucket Monopoly, is an attack vector referred to as Shadow Resource, which, in this case, refers to the automatic creation of an AWS S3 bucket when using services like CloudFormation, Glue, EMR, SageMaker, ServiceCatalog, and CodeStar. The S3 bucket name created in...
SAP AI Core Vulnerabilities Expose Customer Data to Cyber Attacks

SAP AI Core Vulnerabilities Expose Customer Data to Cyber Attacks

Jul 18, 2024 Cloud Security / Enterprise Security
Cybersecurity researchers have uncovered security shortcomings in SAP AI Core cloud-based platform for creating and deploying predictive artificial intelligence (AI) workflows that could be exploited to get hold of access tokens and customer data. The five vulnerabilities have been collectively dubbed SAPwned by cloud security firm Wiz. "The vulnerabilities we found could have allowed attackers to access customers' data and contaminate internal artifacts – spreading to related services and other customers' environments," security researcher Hillai Ben-Sasson said in a report shared with The Hacker News. Following responsible disclosure on January 25, 2024, the weaknesses were addressed by SAP as of May 15, 2024. In a nutshell, the flaws make it possible to obtain unauthorized access to customers' private artifacts and credentials to cloud environments like Amazon Web Services (AWS), Microsoft Azure, and SAP HANA Cloud. They could also be used to modify D...
Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Jun 27, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a high-severity security flaw in the Vanna.AI library that could be exploited to achieve remote code execution vulnerability via prompt injection techniques. The vulnerability, tracked as CVE-2024-5565 (CVSS score: 8.1), relates to a case of prompt injection in the "ask" function that could be exploited to trick the library into executing arbitrary commands, supply chain security firm JFrog said . Vanna is a Python-based machine learning library that allows users to chat with their SQL database to glean insights by "just asking questions" (aka prompts) that are translated into an equivalent SQL query using a large language model (LLM). The rapid rollout of generative artificial intelligence (AI) models in recent years has brought to the fore the risks of exploitation by malicious actors, who can weaponize the tools by providing adversarial inputs that bypass the safety mechanisms built into them. One such prominent clas...
How to Accelerate Vendor Risk Assessments in the Age of SaaS Sprawl

How to Accelerate Vendor Risk Assessments in the Age of SaaS Sprawl

Mar 21, 2024 SaaS Security / Endpoint Security
In today's digital-first business environment dominated by SaaS applications, organizations increasingly depend on third-party vendors for essential cloud services and software solutions. As more vendors and services are added to the mix, the complexity and potential vulnerabilities within the  SaaS supply chain  snowball quickly. That's why effective vendor risk management (VRM) is a critical strategy in identifying, assessing, and mitigating risks to protect organizational assets and data integrity. Meanwhile, common approaches to vendor risk assessments are too slow and static for the modern world of SaaS. Most organizations have simply adapted their legacy evaluation techniques for on-premise software to apply to SaaS providers. This not only creates massive bottlenecks, but also causes organizations to inadvertently accept far too much risk. To effectively adapt to the realities of modern work, two major aspects need to change: the timeline of initial assessment must s...
Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Mar 04, 2024 AI Security / Vulnerability
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform. These include instances where loading a  pickle file  leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims' machines through what is commonly referred to as a 'backdoor,'" senior security researcher David Cohen  said . "This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state." Specifically, the rogue model initiates a reverse shell connection to 210.117.212[.]93, an IP address that belongs to the Korea Research...
Expert Insights / Articles Videos
Cybersecurity Resources