#1 Trusted Cybersecurity News Platform
Followed by 4.50+ million
The Hacker News Logo
Subscribe – Get Latest News
AI Security

Large language model | Breaking Cybersecurity News | The Hacker News

Researchers Uncover 'LLMjacking' Scheme Targeting Cloud-Hosted AI Models

Researchers Uncover 'LLMjacking' Scheme Targeting Cloud-Hosted AI Models

May 10, 2024 Vulnerability / Cloud Security
Cybersecurity researchers have discovered a novel attack that employs stolen cloud credentials to target cloud-hosted large language model (LLM) services with the goal of selling access to other threat actors. The attack technique has been codenamed  LLMjacking  by the Sysdig Threat Research Team. "Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers," security researcher Alessandro Brucato  said . "In this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted." The intrusion pathway used to pull off the scheme entails breaching a system running a vulnerable version of the Laravel Framework (e.g.,  CVE-2021-3129 ), followed by getting hold of Amazon Web Services (AWS) credentials to access the LLM services. Among the tools used is an  open-source Python script  that checks and validates keys for various offering
U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

Apr 30, 2024 Machine Learning / National Security
The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats. "These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats both to and from, and involving AI systems," the Department of Homeland Security (DHS)  said  Monday. In addition, the agency said it's working to facilitate safe, responsible, and trustworthy use of the technology in a manner that does not infringe on individuals' privacy, civil rights, and civil liberties. The new guidance concerns the use of AI to augment and scale attacks on critical infrastructure, adversarial manipulation of AI systems, and shortcomings in such tools that could result in unintended consequences, necessitating the need for transparency and secure by design practices to evaluate and mitigate AI risks. Specifically, this spans four diffe
How to Increase Engagement with Your Cybersecurity Clients Through vCISO Reporting

How to Increase Engagement with Your Cybersecurity Clients Through vCISO Reporting

Jul 22, 2024vCISO / Business Security
As a vCISO, you are responsible for your client's cybersecurity strategy and risk governance. This incorporates multiple disciplines, from research to execution to reporting. Recently, we published a comprehensive playbook for vCISOs, "Your First 100 Days as a vCISO – 5 Steps to Success" , which covers all the phases entailed in launching a successful vCISO engagement, along with recommended actions to take, and step-by-step examples.  Following the success of the playbook and the requests that have come in from the MSP/MSSP community, we decided to drill down into specific parts of vCISO reporting and provide more color and examples. In this article, we focus on how to create compelling narratives within a report, which has a significant impact on the overall MSP/MSSP value proposition.  This article brings the highlights of a recent guided workshop we held, covering what makes a successful report and how it can be used to enhance engagement with your cyber security clients.
Microsoft Warns: North Korean Hackers Turn to AI-Fueled Cyber Espionage

Microsoft Warns: North Korean Hackers Turn to AI-Fueled Cyber Espionage

Apr 22, 2024 Cryptocurrency / Artificial Intelligence
Microsoft has revealed that North Korea-linked state-sponsored cyber actors have begun to use artificial intelligence (AI) to make their operations more effective and efficient. "They are learning to use tools powered by AI large language models (LLM) to make their operations more efficient and effective," the tech giant  said  in its latest report on East Asia hacking groups. The company specifically highlighted a group named  Emerald Sleet  (aka Kimusky or TA427), which has been observed using LLMs to bolster spear-phishing efforts aimed at Korean Peninsula experts. The adversary is also said to have relied on the latest advancements in AI to research vulnerabilities and conduct reconnaissance on organizations and experts focused on North Korea, joining  hacking crews from China , who have turned to AI-generated content for influence operations. It further employed LLMs to troubleshoot technical issues, conduct basic scripting tasks, and draft content for spear-phishi
cyber security

Free OAuth Investigation Checklist - How to Uncover Risky or Malicious Grants

websiteNudge SecuritySaaS Security / Supply Chain
OAuth grants provide yet another way for attackers to compromise identities. Download our free checklist to learn what to look for and where when reviewing OAuth grants for potential risks.
Third-Party ChatGPT Plugins Could Lead to Account Takeovers

Third-Party ChatGPT Plugins Could Lead to Account Takeovers

Mar 15, 2024 Data Privacy / Artificial Intelligence
Cybersecurity researchers have found that third-party plugins available for OpenAI ChatGPT could act as a new attack surface for threat actors looking to gain unauthorized access to sensitive data. According to  new research  published by Salt Labs, security flaws found directly in ChatGPT and within the ecosystem could allow attackers to install malicious plugins without users' consent and hijack accounts on third-party websites like GitHub. ChatGPT plugins , as the name implies, are tools designed to run on top of the large language model (LLM) with the aim of accessing up-to-date information, running computations, or accessing third-party services. OpenAI has since also introduced  GPTs , which are bespoke versions of ChatGPT tailored for specific use cases, while reducing third-party service dependencies. As of March 19, 2024, ChatGPT users  will no longer  be able to install new plugins or create new conversations with existing plugins. One of the flaws unearthed by Salt
Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

Mar 13, 2024 Large Language Model / AI Security
Google's  Gemini  large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API. The first vulnerability involves getting around security guardrails to leak the system prompts (or a system message), which are designed to set conversation-wide instructions to the LLM to help it generate more useful responses, by asking the model to output its "foundational instructions" in a markdown block. "A system message can be used to inform the LLM about the context," Microsoft  notes  in its documentation about LLM prompt engineering. "The context may be the type of conversation it is engaging in, or the function it is supposed to perform. It helps the LLM generate more appropriate responses.&qu
Three Tips to Protect Your Secrets from AI Accidents

Three Tips to Protect Your Secrets from AI Accidents

Feb 26, 2024 Data Privacy / Machine Learning
Last year, the Open Worldwide Application Security Project (OWASP) published multiple versions of the " OWASP Top 10 For Large Language Models ," reaching a 1.0 document in August and a 1.1 document in October. These documents not only demonstrate the rapidly evolving nature of Large Language Models, but the evolving ways in which they can be attacked and defended. We're going to talk in this article about four items in that top 10 that are most able to contribute to the accidental disclosure of secrets such as passwords, API keys, and more. We're already aware that LLMs can reveal secrets because it's happened. In early 2023, GitGuardian reported it found over 10 million secrets in public Github commits. Github's Copilot AI coding tool was trained on public commits, and in September of 2023, researchers at the University of Hong Kong published a paper on how they created an algorithm that generated 900 prompts designed to get Copilot to reveal secrets from
Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Feb 23, 2024 Red Teaming / Artificial Intelligence
Microsoft has released an open access automation framework called  PyRIT  (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team lead at Microsoft,  said . The company said PyRIT could be used to assess the robustness of large language model (LLM) endpoints against different harm categories such as fabrication (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment). It can also be used to identify security harms ranging from malware generation to jailbreaking, as well as privacy harms like identity theft. PyRIT comes with five interfaces: target, datasets, scoring engine, the ability to support multiple attack strategies, and incorporating a memory component that can either take the
Cybersecurity
Expert Insights
Cybersecurity Resources