#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Cloud Security

LLM Security | Breaking Cybersecurity News | The Hacker News

Category — LLM Security
12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training

12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training

Feb 28, 2025 Machine Learning / Data Privacy
A dataset used to train large language models (LLMs) has been found to contain nearly 12,000 live secrets, which allow for successful authentication. The findings once again highlight how hard-coded credentials pose a severe security risk to users and organizations alike, not to mention compounding the problem when LLMs end up suggesting insecure coding practices to their users. Truffle Security said it downloaded a December 2024 archive from Common Crawl , which maintains a free, open repository of web crawl data. The massive dataset contains over 250 billion pages spanning 18 years.  The archive specifically contains 400TB of compressed web data, 90,000 WARC files (Web ARChive format), and data from 47.5 million hosts across 38.3 million registered domains. The company's analysis found that there are 219 different secret types in the Common Crawl archive, including Amazon Web Services (AWS) root keys, Slack webhooks, and Mailchimp API keys. "'Live' secrets ar...
New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%

New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%

Jan 03, 2025 Machine Learning / Vulnerability
Cybersecurity researchers have shed light on a new jailbreak technique that could be used to get past a large language model's (LLM) safety guardrails and produce potentially harmful or malicious responses. The multi-turn (aka many-shot) attack strategy has been codenamed Bad Likert Judge by Palo Alto Networks Unit 42 researchers Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and Danny Tsechansky. "The technique asks the target LLM to act as a judge scoring the harmfulness of a given response using the Likert scale , a rating scale measuring a respondent's agreement or disagreement with a statement," the Unit 42 team said . "It then asks the LLM to generate responses that contain examples that align with the scales. The example that has the highest Likert scale can potentially contain the harmful content." The explosion in popularity of artificial intelligence in recent years has also led to a new class of security exploits called prompt in...
Product Walkthrough: A Look Inside Wing Security's Layered SaaS Identity Defense

Product Walkthrough: A Look Inside Wing Security's Layered SaaS Identity Defense

Apr 16, 2025SaaS Security / Identity Management
Intro: Why hack in when you can log in? SaaS applications are the backbone of modern organizations, powering productivity and operational efficiency. But every new app introduces critical security risks through app integrations and multiple users, creating easy access points for threat actors. As a result, SaaS breaches have increased, and according to a May 2024 XM Cyber report, identity and credential misconfigurations caused 80% of security exposures. Subtle signs of a compromise get lost in the noise, and then multi-stage attacks unfold undetected due to siloed solutions. Think of an account takeover in Entra ID, then privilege escalation in GitHub, along with data exfiltration from Slack. Each seems unrelated when viewed in isolation, but in a connected timeline of events, it's a dangerous breach. Wing Security's SaaS platform is a multi-layered solution that combines posture management with real-time identity threat detection and response. This allows organizations to get a ...
Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Dec 09, 2024 Artificial Intelligenc / Vulnerability
Details have emerged about a now-patched security flaw in the DeepSeek artificial intelligence (AI) chatbot that, if successfully exploited, could permit a bad actor to take control of a victim's account by means of a prompt injection attack . Security researcher Johann Rehberger, who has chronicled many a prompt injection attack targeting various AI tools, found that providing the input "Print the xss cheat sheet in a bullet list. just payloads" in the DeepSeek chat triggered the execution of JavaScript code as part of the generated response – a classic case of cross-site scripting (XSS). XSS attacks can have serious consequences as they lead to the execution of unauthorized code in the context of the victim's web browser. An attacker could take advantage of such flaws to hijack a user's session and gain access to cookies and other data associated with the chat.deepseek[.]com domain, thereby leading to an account takeover. "After some experimenting,...
cyber security

Mastering AI Security: Your Essential Guide

websiteWizAI Security / Posture Management
Learn how to secure your AI pipelines and stay ahead of AI-specific risks at every stage with these best practices.
Expert Insights / Articles Videos
Cybersecurity Resources