#1 Trusted Cybersecurity News Platform
Followed by 4.50+ million
The Hacker News Logo
Subscribe – Get Latest News
AI Security

Prompt Injection | Breaking Cybersecurity News | The Hacker News

Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Jun 27, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a high-severity security flaw in the Vanna.AI library that could be exploited to achieve remote code execution vulnerability via prompt injection techniques. The vulnerability, tracked as CVE-2024-5565 (CVSS score: 8.1), relates to a case of prompt injection in the "ask" function that could be exploited to trick the library into executing arbitrary commands, supply chain security firm JFrog said . Vanna is a Python-based machine learning library that allows users to chat with their SQL database to glean insights by "just asking questions" (aka prompts) that are translated into an equivalent SQL query using a large language model (LLM). The rapid rollout of generative artificial intelligence (AI) models in recent years has brought to the fore the risks of exploitation by malicious actors, who can weaponize the tools by providing adversarial inputs that bypass the safety mechanisms built into them. One such prominent clas
Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

Mar 13, 2024 Large Language Model / AI Security
Google's  Gemini  large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API. The first vulnerability involves getting around security guardrails to leak the system prompts (or a system message), which are designed to set conversation-wide instructions to the LLM to help it generate more useful responses, by asking the model to output its "foundational instructions" in a markdown block. "A system message can be used to inform the LLM about the context," Microsoft  notes  in its documentation about LLM prompt engineering. "The context may be the type of conversation it is engaging in, or the function it is supposed to perform. It helps the LLM generate more appropriate responses.&qu
How to Increase Engagement with Your Cybersecurity Clients Through vCISO Reporting

How to Increase Engagement with Your Cybersecurity Clients Through vCISO Reporting

Jul 22, 2024vCISO / Business Security
As a vCISO, you are responsible for your client's cybersecurity strategy and risk governance. This incorporates multiple disciplines, from research to execution to reporting. Recently, we published a comprehensive playbook for vCISOs, "Your First 100 Days as a vCISO – 5 Steps to Success" , which covers all the phases entailed in launching a successful vCISO engagement, along with recommended actions to take, and step-by-step examples.  Following the success of the playbook and the requests that have come in from the MSP/MSSP community, we decided to drill down into specific parts of vCISO reporting and provide more color and examples. In this article, we focus on how to create compelling narratives within a report, which has a significant impact on the overall MSP/MSSP value proposition.  This article brings the highlights of a recent guided workshop we held, covering what makes a successful report and how it can be used to enhance engagement with your cyber security clients.
Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Mar 04, 2024 AI Security / Vulnerability
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform. These include instances where loading a  pickle file  leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims' machines through what is commonly referred to as a 'backdoor,'" senior security researcher David Cohen  said . "This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state." Specifically, the rogue model initiates a reverse shell connection to 210.117.212[.]93, an IP address that belongs to the Korea Research
cyber security

Free OAuth Investigation Checklist - How to Uncover Risky or Malicious Grants

websiteNudge SecuritySaaS Security / Supply Chain
OAuth grants provide yet another way for attackers to compromise identities. Download our free checklist to learn what to look for and where when reviewing OAuth grants for potential risks.
Expert Insights
Cybersecurity Resources