#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
AWS EKS Security Best Practices

AI Security | Breaking Cybersecurity News | The Hacker News

Category — AI Security
Your AI Agents Might Be Leaking Data — Watch this Webinar to Learn How to Stop It

Your AI Agents Might Be Leaking Data — Watch this Webinar to Learn How to Stop It

Jul 04, 2025 AI Security / Enterprise Security
Generative AI is changing how businesses work, learn, and innovate. But beneath the surface, something dangerous is happening. AI agents and custom GenAI workflows are creating new, hidden ways for sensitive enterprise data to leak —and most teams don't even realize it. If you're building, deploying, or managing AI systems, now is the time to ask: Are your AI agents exposing confidential data without your knowledge? Most GenAI models don't intentionally leak data. But here's the problem: these agents are often plugged into corporate systems—pulling from SharePoint, Google Drive, S3 buckets, and internal tools to give smart answers. And that's where the risks begin. Without tight access controls, governance policies, and oversight, a well-meaning AI can accidentally expose sensitive information to the wrong users—or worse, to the internet. Imagine a chatbot revealing internal salary data. Or an assistant surfacing unreleased product designs during a casual query. This isn't hypot...
Hackers Using PDFs to Impersonate Microsoft, DocuSign, and More in Callback Phishing Campaigns

Hackers Using PDFs to Impersonate Microsoft, DocuSign, and More in Callback Phishing Campaigns

Jul 02, 2025 Vulnerability / Cybercrime
Cybersecurity researchers are calling attention to phishing campaigns that impersonate popular brands and trick targets into calling phone numbers operated by threat actors. "A significant portion of email threats with PDF payloads persuade victims to call adversary-controlled phone numbers, displaying another popular social engineering technique known as Telephone-Oriented Attack Delivery (TOAD), also known as callback phishing," Cisco Talos researcher Omid Mirzaei said in a report shared with The Hacker News. An analysis of phishing emails with PDF attachments between May 5 and June 5, 2025, has revealed Microsoft and Docusign to be the most impersonated brands. NortonLifeLock, PayPal, and Geek Squad are among the most impersonated brands in TOAD emails with PDF attachments. The activity is part of wider phishing attacks that attempt to leverage the trust people have with popular brands to initiate malicious actions. These messages typically incorporate PDF attachments...
Vercel's v0 AI Tool Weaponized by Cybercriminals to Rapidly Create Fake Login Pages at Scale

Vercel's v0 AI Tool Weaponized by Cybercriminals to Rapidly Create Fake Login Pages at Scale

Jul 02, 2025 AI Security / Phishing
Unknown threat actors have been observed weaponizing v0 , a generative artificial intelligence (AI) tool from Vercel, to design fake sign-in pages that impersonate their legitimate counterparts. "This observation signals a new evolution in the weaponization of Generative AI by threat actors who have demonstrated an ability to generate a functional phishing site from simple text prompts," Okta Threat Intelligence researchers Houssem Eddine Bordjiba and Paula De la Hoz said . v0 is an AI-powered offering from Vercel that allows users to create basic landing pages and full-stack apps using natural language prompts. The identity services provider said it has observed scammers using the technology to develop convincing replicas of login pages associated with multiple brands, including an unnamed customer of its own. Following responsible disclosure, Vercel has blocked access to these phishing sites. The threat actors behind the campaign have also been found to host other ...
cyber security

SaaS Security Made Simple

websiteAppomniSaaS Security / SSPM
Simplify SaaS security with a vendor checklist, RFP, and expert guidance.
The Hidden Risks of SaaS: Why Built-In Protections Aren't Enough for Modern Data Resilience

The Hidden Risks of SaaS: Why Built-In Protections Aren't Enough for Modern Data Resilience

Jun 26, 2025Data Protection / Compliance
SaaS Adoption is Skyrocketing, Resilience Hasn't Kept Pace SaaS platforms have revolutionized how businesses operate. They simplify collaboration, accelerate deployment, and reduce the overhead of managing infrastructure. But with their rise comes a subtle, dangerous assumption: that the convenience of SaaS extends to resilience. It doesn't. These platforms weren't built with full-scale data protection in mind . Most follow a shared responsibility model — wherein the provider ensures uptime and application security, but the data inside is your responsibility. In a world of hybrid architectures, global teams, and relentless cyber threats, that responsibility is harder than ever to manage. Modern organizations are being stretched across: Hybrid and multi-cloud environments with decentralized data sprawl Complex integration layers between IaaS, SaaS, and legacy systems Expanding regulatory pressure with steeper penalties for noncompliance Escalating ransomware threats and inside...
Critical Vulnerability in Anthropic's MCP Exposes Developer Machines to Remote Exploits

Critical Vulnerability in Anthropic's MCP Exposes Developer Machines to Remote Exploits

Jul 01, 2025 Vulnerability / AI Security
Cybersecurity researchers have discovered a critical security vulnerability in artificial intelligence (AI) company Anthropic's Model Context Protocol ( MCP ) Inspector project that could result in remote code execution (RCE) and allow an attacker to gain complete access to the hosts. The vulnerability, tracked as CVE-2025-49596 , carries a CVSS score of 9.4 out of a maximum of 10.0. "This is one of the first critical RCEs in Anthropic's MCP ecosystem, exposing a new class of browser-based attacks against AI developer tools," Oligo Security's Avi Lumelsky said in a report published last week. "With code execution on a developer's machine, attackers can steal data, install backdoors, and move laterally across networks - highlighting serious risks for AI teams, open-source projects, and enterprise adopters relying on MCP." MCP, introduced by Anthropic in November 2024, is an open protocol that standardizes the way large language model (LLM) appli...
Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content

Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content

Jun 23, 2025 LLM Security / AI Security
Cybersecurity researchers are calling attention to a new jailbreaking method called Echo Chamber that could be leveraged to trick popular large language models (LLMs) into generating undesirable responses, irrespective of the safeguards put in place. "Unlike traditional jailbreaks that rely on adversarial phrasing or character obfuscation, Echo Chamber weaponizes indirect references, semantic steering, and multi-step inference," NeuralTrust researcher Ahmad Alobaid said in a report shared with The Hacker News. "The result is a subtle yet powerful manipulation of the model's internal state, gradually leading it to produce policy-violating responses." While LLMs have steadily incorporated various guardrails to combat prompt injections and jailbreaks , the latest research shows that there exist techniques that can yield high success rates with little to no technical expertise. It also serves to highlight a persistent challenge associated with developing eth...
Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

Jun 23, 2025 Artificial Intelligence / AI Security
Google has revealed the various safety measures that are being incorporated into its generative artificial intelligence (AI) systems to mitigate emerging attack vectors like indirect prompt injections and improve the overall security posture for agentic AI systems. "Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources," Google's GenAI security team said . These external sources can take the form of email messages, documents, or even calendar invites that trick the AI systems into exfiltrating sensitive data or performing other malicious actions. The tech giant said it has implemented what it described as a "layered" defense strategy that is designed to increase the difficulty, expense, and complexity required to pull off an attack against its systems. These efforts span model hardening, introducing purpose-built mac...
LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents

LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents

Jun 17, 2025 Vulnerability / LLM Security
Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts. The vulnerability, which carries a CVSS score of 8.8 out of a maximum of 10.0, has been codenamed AgentSmith by Noma Security. LangSmith is an observability and evaluation platform that allows users to develop, test, and monitor large language model (LLM) applications, including those built using LangChain. The service also offers what's called a LangChain Hub , which acts as a repository for all publicly listed prompts, agents, and models. "This newly identified vulnerability exploited unsuspecting users who adopt an agent containing a pre-configured malicious proxy server uploaded to 'Prompt Hub,'" researchers Sasi Levi and Gal Moyal said in a report shared with The Hacker News. "Once adopted, the malicious proxy discreetly intercepted all user communicatio...
New Flodrix Botnet Variant Exploits Langflow AI Server RCE Bug to Launch DDoS Attacks

New Flodrix Botnet Variant Exploits Langflow AI Server RCE Bug to Launch DDoS Attacks

Jun 17, 2025 Botnet / Vulnerability
Cybersecurity researchers have called attention to a new campaign that's actively exploiting a recently disclosed critical security flaw in Langflow to deliver the Flodrix botnet malware. "Attackers use the vulnerability to execute downloader scripts on compromised Langflow servers, which in turn fetch and install the Flodrix malware," Trend Micro researchers Aliakbar Zahravi, Ahmed Mohamed Ibrahim, Sunil Bharti, and Shubham Singh said in a technical report published today. The activity entails the exploitation of CVE-2025-3248 (CVSS score: 9.8), a missing authentication vulnerability in Langflow , a Python-based "visual framework" for building artificial intelligence (AI) applications. Successful exploitation of the flaw could enable unauthenticated attackers to execute arbitrary code via crafted HTTP requests. It was patched by Langflow in March 2025 with version 1.3.0. Last month, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) flagg...
PyPI, npm, and AI Tools Exploited in Malware Surge Targeting DevOps and Cloud Environments

PyPI, npm, and AI Tools Exploited in Malware Surge Targeting DevOps and Cloud Environments

Jun 16, 2025 Malware / DevOps
Cybersecurity researchers from  SafeDep and Veracode detailed a number of malware-laced npm packages that are designed to execute remote code and download additional payloads. The packages in question are listed below - eslint-config-airbnb-compat (676 Downloads) ts-runtime-compat-check (1,588 Downloads) solders (983 Downloads) @mediawave/lib (386 Downloads) All the identified npm packages have since been taken down from npm, but not before they were downloaded hundreds of times from the package registry.  SafeDep's analysis of eslint-config-airbnb-compat found that the JavaScript library has ts-runtime-compat-check listed as a dependency, which, in turn, contacts an external server defined in the former package ("proxy.eslint-proxy[.]site") to retrieve and execute a Base64-encoded string. The exact nature of the payload is unknown. "It implements a multi-stage remote code execution attack using a transitive dependency to hide the malicious code,"...
Non-Human Identities: How to Address the Expanding Security Risk

Non-Human Identities: How to Address the Expanding Security Risk

Jun 12, 2025 DevOps / AI Security
Human identities management and control is pretty well done with its set of dedicated tools, frameworks, and best practices. This is a very different world when it comes to Non-human identities also referred to as machine identities. GitGuardian's end-to-end NHI security platform is here to close the gap. Enterprises are Losing Track of Their Machine Identities Machine identities–service accounts, API keys, bots, automation, and workload identities–that now outnumber humans by up to 100:1 are in fact a massive blind spot in companies' security landscape: Without robust governance, NHIs become a prime target for attackers. Orphaned credentials, over-privileged accounts, and "zombie" secrets are proliferating—especially as organizations accelerate cloud adoption, integrate AI-powered agents, and automate their infrastructure . Secrets Sprawl: The New Attack Surface GitGuardian's research shows that 70% of valid secrets detected in public repositories in 2022 remained active in ...
Redefining Cyber Value: Why Business Impact Should Lead the Security Conversation

Redefining Cyber Value: Why Business Impact Should Lead the Security Conversation

Jun 05, 2025 Risk Management / Operational Resilience
Security teams face growing demands with more tools, more data, and higher expectations than ever. Boards approve large security budgets, yet still ask the same question: what is the business getting in return? CISOs respond with reports on controls and vulnerability counts – but executives want to understand risk in terms of financial exposure, operational impact, and avoiding loss. The disconnect has become difficult to ignore. The average cost of a breach has reached $4.88 million, according to recent IBM data . That figure reflects not just incident response but also downtime, lost productivity, customer attrition, and the extended effort required to restore operations and trust. The fallout is rarely confined to security. Security leaders need a model that brings those consequences into view before they surface. A Business Value Assessment (BVA) offers that model. It links exposures to cost, prioritization to return, and prevention to tangible value. This article will explain ...
Your SaaS Data Isn't Safe: Why Traditional DLP Solutions Fail in the Browser Era

Your SaaS Data Isn't Safe: Why Traditional DLP Solutions Fail in the Browser Era

Jun 04, 2025 Browser Security / Enterprise Security
Traditional data leakage prevention (DLP) tools aren't keeping pace with the realities of how modern businesses use SaaS applications. Companies today rely heavily on SaaS platforms like Google Workspace, Salesforce, Slack, and generative AI tools, significantly altering the way sensitive information is handled. In these environments, data rarely appears as traditional files or crosses networks in ways endpoint or network-based DLP tools can monitor. Yet, most companies continue using legacy DLP systems, leaving critical security gaps. A new white paper, Rethinking DLP For The SaaS Era: Why Browser-Centric DLP is the New Mandate , identifies precisely why current DLP methods struggle to secure modern SaaS-driven workflows. It also explores how browser-native security addresses these gaps by focusing security efforts exactly where user interactions occur, in the browser. Why Traditional DLP Tools Fall Short Traditional DLP solutions were built for a simpler time when sensitive...
Malicious PyPI, npm, and Ruby Packages Exposed in Ongoing Open-Source Supply Chain Attacks

Malicious PyPI, npm, and Ruby Packages Exposed in Ongoing Open-Source Supply Chain Attacks

Jun 04, 2025 Supply Chain Attack / DevOps
Several malicious packages have been uncovered across the npm, Python, and Ruby package repositories that drain funds from cryptocurrency wallets, erase entire codebases after installation, and exfiltrate Telegram API tokens, once again demonstrating the variety of supply chain threats lurking in open-source ecosystems. The findings come from multiple reports published by Checkmarx, ReversingLabs, Safety, and Socket in recent weeks. The list of identified packages across these platforms are listed below - Socket noted that the two malicious gems were published by a threat actor under the aliases Bùi nam, buidanhnam, and si_mobile merely days after Vietnam ordered a nationwide ban on the Telegram messaging app late last month for allegedly not cooperating with the government to tackle illicit activities related to fraud, drug trafficking, and terrorism. "These gems silently exfiltrate all data sent to the Telegram API by redirecting traffic through a command-and-control (C2...
Expert Insights Articles Videos
Cybersecurity Resources