AWS EKS Security Best Practices

AI Security | Breaking Cybersecurity News | The Hacker News

Product Walkthrough: A Look Inside Pillar's AI Security Platform

Jul 30, 2025 DevSecOps / AI Security
In this article, we will provide a brief overview of Pillar Security's platform to better understand how it is tackling AI security challenges. Pillar Security is building a platform to cover the entire software development and deployment lifecycle with the goal of providing trust in AI systems. Using its holistic approach, the platform introduces new ways of detecting AI threats, beginning at pre-planning stages and going all the way through runtime. Along the way, users gain visibility into the security posture of their applications while enabling safe AI execution. Pillar is uniquely suited to the challenges inherent in AI security. Co-founder and CEO Dor Sarig comes from a cyber-offensive background, having spent a decade leading security operations for governmental and enterprise organizations. In contrast, co-founder and CTO Ziv Karlinger spent over ten years developing defensive techniques, defending against financial cybercrime and securing supply chains. Together, the...
Google Launches DBSC Open Beta in Chrome and Enhances Patch Transparency via Project Zero

Jul 30, 2025 Device Security / AI Security
Google has announced that it's moving a security feature called Device Bound Session Credentials (DBSC) into open beta to ensure that users are safeguarded against session cookie theft attacks. DBSC, first introduced as a prototype in April 2024, is designed to bind authentication sessions to a device so as to prevent threat actors from using stolen cookies to sign in to victims' accounts and gain unauthorized access from a separate device under their control. "Available in the Chrome browser on Windows, DBSC strengthens security after you are logged in and helps bind a session cookie – small files used by websites to remember user information – to the device a user authenticated from," Andy Wen, senior director of product management at Google Workspace, said. DBSC is not only meant to secure user accounts post-authentication. It makes it a lot more difficult for bad actors to reuse session cookies and improves session integrity. The company also noted that pas...
Wiz Uncovers Critical Access Bypass Flaw in AI-Powered Vibe Coding Platform Base44

Jul 29, 2025 LLM Security / Vulnerability
Cybersecurity researchers have disclosed a now-patched critical security flaw in a popular vibe coding platform called Base44 that could allow unauthorized access to private applications built by its users. "The vulnerability we discovered was remarkably simple to exploit – by providing only a non-secret 'app_id' value to undocumented registration and email verification endpoints, an attacker could have created a verified account for private applications on their platform," cloud security firm Wiz said in a report shared with The Hacker News. The net result of this issue is that it bypasses all authentication controls, including Single Sign-On (SSO) protections, granting full access to all the private applications and data contained within them. Following responsible disclosure on July 9, 2025, an official fix was rolled out by Wix, which owns Base44, within 24 hours. There is no evidence that the issue was ever maliciously exploited in the wild. While vibe codin...
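The underlying bug class is worth illustrating. The sketch below is hypothetical (Base44's real endpoints are undocumented, and every name here is invented): when a registration or verification endpoint gates access only on a value the client already knows, such as a public app_id, every other authentication layer is moot.

```javascript
// Hypothetical sketch of the bug class; names and logic are illustrative,
// not Base44's actual implementation.
const users = new Map();

function registerUser(appId, email) {
  // VULNERABLE: app_id is non-secret (it ships in client-side code), yet it
  // is the only value checked. SSO and other front-door controls never run.
  users.set(email, { appId, verified: false });
  return { ok: true };
}

function verifyEmail(appId, email) {
  const user = users.get(email);
  // VULNERABLE: no proof of mailbox ownership is ever required.
  if (user && user.appId === appId) user.verified = true;
  return user;
}

// Attacker flow: harvest the public app_id, then mint a verified account.
registerUser('public-app-id', 'attacker@example.com');
const account = verifyEmail('public-app-id', 'attacker@example.com');
console.log(account.verified); // true: the app now treats this as a legitimate user
```

The fix for this pattern is to require a secret the client cannot derive (a server-issued, single-use verification token) before flipping any trust bit.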
Designing Identity for Trust at Scale—With Privacy, AI, and Seamless Logins in Mind

Jul 24, 2025
Is Managing Customer Logins and Data Giving You Headaches? You're Not Alone! Today, we all expect super-fast, secure, and personalized online experiences. But let's be honest, we're also more careful about how our data is used. If something feels off, trust can vanish in an instant. Add to that the lightning-fast changes AI is bringing to everything from how we log in to spotting online fraud, and it's a whole new ball game! If you're dealing with logins, data privacy, bringing new users on board, or building digital trust, this webinar is for you. Join us for "Navigating Customer Identity in the AI Era," where we'll dive into the Auth0 2025 Customer Identity Trends Report. We'll show you what's working, what's not, and how to tweak your strategy for the year ahead. In just one session, you'll get practical answers to real-world challenges like: How AI is changing what users expect – and where they're starting to push ba...
Why React Didn't Kill XSS: The New JavaScript Injection Playbook

Jul 29, 2025 AI Security / Software Engineering
React conquered XSS? Think again. That's the reality facing JavaScript developers in 2025, where attackers have quietly evolved their injection techniques to exploit everything from prototype pollution to AI-generated code, bypassing the very frameworks designed to keep applications secure. Full 47-page guide with framework-specific defenses (PDF, free). JavaScript conquered the web, but with that victory came new battlefields. While developers embraced React, Vue, and Angular, attackers evolved their tactics, exploiting AI prompt injection, supply chain compromises, and prototype pollution in ways traditional security measures can't catch. A Wake-up Call: The Polyfill.io Attack In June 2024, a single JavaScript injection attack compromised over 100,000 websites, making it the biggest incident of its kind that year. The Polyfill.io supply chain attack, where a Chinese company acquired a trusted JavaScript library and weaponized it to inject malicious code, affected major pl...
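Prototype pollution, one of the techniques the article highlights, sidesteps React's output encoding entirely because it corrupts application state rather than injecting markup. A minimal sketch of the vulnerable pattern, a naive recursive merge of attacker-controlled JSON of the kind commonly found in utility helpers:

```javascript
// A naive deep merge: the classic prototype pollution sink.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === 'object' && source[key] !== null) {
      target[key] = merge(target[key] || {}, source[key]); // recurses into "__proto__"
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-controlled input, e.g. a JSON request body. JSON.parse creates an
// OWN property named "__proto__", which Object.keys happily enumerates.
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
merge({}, payload); // the recursion walks into Object.prototype and plants the flag

// Every plain object in the process is now polluted: no markup was injected,
// so framework output-escaping never had anything to catch.
const victim = {};
console.log(victim.isAdmin); // true
```

Typical defenses: build lookup maps with `Object.create(null)`, freeze `Object.prototype`, or reject `__proto__`, `constructor`, and `prototype` keys inside merge helpers.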
Hackers Breach Toptal GitHub, Publish 10 Malicious npm Packages With 5,000 Downloads

Jul 28, 2025 Malware / Developer Tools
In what's the latest instance of a software supply chain attack, unknown threat actors managed to compromise Toptal's GitHub organization account and leveraged that access to publish 10 malicious packages to the npm registry. The packages contained code to exfiltrate GitHub authentication tokens and destroy victim systems, Socket said in a report published last week. In addition, 73 repositories associated with the organization were made public. The affected packages are as follows: @toptal/picasso-tailwind, @toptal/picasso-charts, @toptal/picasso-shared, @toptal/picasso-provider, @toptal/picasso-select, @toptal/picasso-quote, @toptal/picasso-forms, @xene/core, @toptal/picasso-utils, and @toptal/picasso-typograph. All the Node.js libraries were embedded with identical payloads in their package.json files, attracting a total of about 5,000 downloads before they were removed from the repository. The nefarious code has been found to specifically target the preinstall and p...
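For context on why package.json is such an effective delivery vehicle: npm runs lifecycle scripts such as `preinstall` and `postinstall` automatically when a package is installed, so any command referenced there executes on the developer's machine. A hypothetical illustration of the mechanism (these script names and files are invented, not the actual Toptal payload):

```json
{
  "name": "example-package",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node ./collect.js",
    "postinstall": "node ./cleanup.js"
  }
}
```

In a real attack, scripts like these would exfiltrate tokens or wipe files. Installing with `npm install --ignore-scripts` (or setting `ignore-scripts=true` in .npmrc) disables this execution path at the cost of breaking packages that legitimately need install hooks.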
EncryptHub Targets Web3 Developers Using Fake AI Platforms to Deploy Fickle Stealer Malware

Jul 20, 2025 AI Security / Infostealers
The financially motivated threat actor known as EncryptHub (aka LARVA-208 and Water Gamayun) has been attributed to a new campaign that's targeting Web3 developers to infect them with information stealer malware. "LARVA-208 has evolved its tactics, using fake AI platforms (e.g., Norlax AI, mimicking Teampilot) to lure victims with job offers or portfolio review requests," Swiss cybersecurity company PRODAFT said in a statement shared with The Hacker News. While the group has a history of deploying ransomware, the latest findings demonstrate an evolution of its tactics and a diversification of its monetization methods by using stealer malware to harvest data from cryptocurrency wallets. EncryptHub's focus on Web3 developers isn't random—these individuals often manage crypto wallets, access to smart contract repositories, or sensitive test environments. Many operate as freelancers or work across multiple decentralized projects, making them harder to protect wit...
Critical NVIDIA Container Toolkit Flaw Allows Privilege Escalation on AI Cloud Services

Jul 18, 2025 Cloud Security / AI Security
Cybersecurity researchers have disclosed a critical container escape vulnerability in the NVIDIA Container Toolkit that could pose a severe threat to managed AI cloud services. The vulnerability, tracked as CVE-2025-23266, carries a CVSS score of 9.0 out of 10.0. It has been codenamed NVIDIAScape by Google-owned cloud security company Wiz. "NVIDIA Container Toolkit for all platforms contains a vulnerability in some hooks used to initialize the container, where an attacker could execute arbitrary code with elevated permissions," NVIDIA said in an advisory for the bug. "A successful exploit of this vulnerability might lead to escalation of privileges, data tampering, information disclosure, and denial-of-service." The shortcoming impacts all versions of NVIDIA Container Toolkit up to and including 1.17.7 and NVIDIA GPU Operator up to and including 25.3.0. It has been addressed by the GPU maker in versions 1.17.8 and 25.3.1, respectively. The NVIDIA Container...
AI Agents Act Like Employees With Root Access—Here's How to Regain Control

Jul 16, 2025 Identity Management / AI Security
The AI gold rush is on. But without identity-first security, every deployment becomes an open door. Most organizations secure native AI like a web app, but it behaves more like a junior employee with root access and no manager. From Hype to High Stakes Generative AI has moved beyond the hype cycle. Enterprises are: deploying LLM copilots to accelerate software development; automating customer service workflows with AI agents; and integrating AI into financial operations and decision-making. Whether building with open-source models or plugging into platforms like OpenAI or Anthropic, the goal is speed and scale. But what most teams miss is this: every LLM access point or website is a new identity edge. And every integration adds risk unless identity and device posture are enforced. What Is the AI Build vs. Buy Dilemma? Most enterprises face a pivotal decision: Build: create in-house agents tailored to internal systems and workflows. Buy: adopt commercial AI tools and SaaS integ...
Deepfakes. Fake Recruiters. Cloned CFOs — Learn How to Stop AI-Driven Attacks in Real Time

Jul 16, 2025 AI Security / Fraud Detection
Social engineering attacks have entered a new era—and they're coming fast, smart, and deeply personalized. It's no longer just suspicious emails in your spam folder. Today's attackers use generative AI, stolen branding assets, and deepfake tools to mimic your executives, hijack your social channels, and create convincing fakes of your website, emails, and even voice. They don't just spoof—they impersonate. Modern attackers aren't relying on chance. They're running long-term, multi-channel campaigns across email, LinkedIn, SMS, and even support portals—targeting your employees, customers, and partners. Whether it's a fake recruiter reaching out on LinkedIn, a lookalike login page sent via text, or a cloned CFO demanding a wire transfer, the tactics are faster, more adaptive, and increasingly automated using AI. The result? Even trained users are falling for sophisticated fakes—because they're not just phishing links anymore. They're operations. This Webinar Shows You How to Fight...
Securing Agentic AI: How to Protect the Invisible Identity Access

Jul 15, 2025 Automation / Risk Management
AI agents promise to automate everything from financial reconciliations to incident response. Yet every time an AI agent spins up a workflow, it has to authenticate somewhere; often with a high-privilege API key, OAuth token, or service account that defenders can't easily see. These "invisible" non-human identities (NHIs) now outnumber human accounts in most cloud environments, and they have become one of the ripest targets for attackers. Astrix's Field CTO Jonathan Sander put it bluntly in a recent Hacker News webinar: "One dangerous habit we've had for a long time is trusting application logic to act as the guardrails. That doesn't work when your AI agent is powered by LLMs that don't stop and think when they're about to do something wrong. They just do it." Why AI Agents Redefine Identity Risk Autonomy changes everything: An AI agent can chain multiple API calls and modify data without a human in the loop. If the underlying credential is exposed or overprivileged, each addit...
GPUHammer: New RowHammer Attack Variant Degrades AI Models on NVIDIA GPUs

Jul 12, 2025 AI Security / Vulnerability
NVIDIA is urging customers to enable System-level Error Correction Codes (ECC) as a defense against a variant of a RowHammer attack demonstrated against its graphics processing units (GPUs). "Risk of successful exploitation from RowHammer attacks varies based on DRAM device, platform, design specification, and system settings," the GPU maker said in an advisory released this week. Dubbed GPUHammer, the attacks mark the first-ever RowHammer exploit demonstrated against NVIDIA's GPUs (e.g., NVIDIA A6000 GPU with GDDR6 Memory), allowing malicious GPU users to tamper with other users' data by triggering bit flips in GPU memory. The most concerning consequence of this behavior, University of Toronto researchers found, is the degradation of an artificial intelligence (AI) model's accuracy from 80% to less than 1%. RowHammer is to modern DRAM what Spectre and Meltdown are to contemporary CPUs. While both are hardware-level security vulnerabilities, Row...
Over 600 Laravel Apps Exposed to Remote Code Execution Due to Leaked APP_KEYs on GitHub

Jul 12, 2025 Application Security / DevOps
Cybersecurity researchers have discovered a serious security issue that allows leaked Laravel APP_KEYs to be weaponized to gain remote code execution capabilities on hundreds of applications. "Laravel's APP_KEY, essential for encrypting sensitive data, is often leaked publicly (e.g., on GitHub)," GitGuardian said. "If attackers get access to this key, they can exploit a deserialization flaw to execute arbitrary code on the server – putting data and infrastructure at risk." The company, in collaboration with Synacktiv, said it was able to extract more than 260,000 APP_KEYs from GitHub from 2018 to May 30, 2025, identifying over 600 vulnerable Laravel applications in the process. GitGuardian said it observed over 10,000 unique APP_KEYs across GitHub, of which 400 APP_KEYs were validated as functional. APP_KEY is a random 32-byte encryption key that's generated during the installation of Laravel. Stored in the .env file of the application, it's used ...
Critical mcp-remote Vulnerability Enables Remote Code Execution, Impacting 437,000+ Downloads

Jul 10, 2025 Vulnerability / AI Security
Cybersecurity researchers have discovered a critical vulnerability in the open-source mcp-remote project that could result in the execution of arbitrary operating system (OS) commands. The vulnerability, tracked as CVE-2025-6514, carries a CVSS score of 9.6 out of 10.0. "The vulnerability allows attackers to trigger arbitrary OS command execution on the machine running mcp-remote when it initiates a connection to an untrusted MCP server, posing a significant risk to users – a full system compromise," Or Peles, JFrog Vulnerability Research Team Leader, said. Mcp-remote is a tool that sprang forth following Anthropic's release of Model Context Protocol (MCP), an open-source framework that standardizes the way large language model (LLM) applications integrate and share data with external data sources and services. It acts as a local proxy, enabling MCP clients like Claude Desktop to communicate with remote MCP servers, as opposed to running them locally on the same...
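For users of MCP clients such as Claude Desktop, the practical mitigations amount to upgrading mcp-remote to a patched release and connecting only to trusted servers over HTTPS. The fragment below follows the common `mcpServers` config shape; the server name and URL are placeholders, not real endpoints:

```json
{
  "mcpServers": {
    "example-remote": {
      "command": "npx",
      "args": ["mcp-remote@latest", "https://trusted.example.com/mcp"]
    }
  }
}
```

Pinning `@latest` (or an explicit patched version) ensures npx does not keep resolving a cached vulnerable build; plain-HTTP URLs and servers you don't control should be avoided entirely.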
SEO Poisoning Campaign Targets 8,500+ SMB Users with Malware Disguised as AI Tools

Jul 07, 2025 Malware / Malvertising
Cybersecurity researchers have disclosed a malicious campaign that leverages search engine optimization (SEO) poisoning techniques to deliver a known malware loader called Oyster (aka Broomstick or CleanUpLoader). The malvertising activity, per Arctic Wolf, promotes fake websites hosting trojanized versions of legitimate tools like PuTTY and WinSCP, aiming to trick software professionals searching for these programs into installing them instead. "Upon execution, a backdoor known as Oyster/Broomstick is installed," the company said in a brief published last week. "Persistence is established by creating a scheduled task that runs every three minutes, executing a malicious DLL (twain_96.dll) via rundll32.exe using the DllRegisterServer export, indicating the use of DLL registration as part of the persistence mechanism." The names of some of the bogus websites are listed below - updaterputty[.]com, zephyrhype[.]com, putty[.]run, putty[.]bet, and puttyy[.]org...
Your AI Agents Might Be Leaking Data — Watch this Webinar to Learn How to Stop It

Jul 04, 2025 AI Security / Enterprise Security
Generative AI is changing how businesses work, learn, and innovate. But beneath the surface, something dangerous is happening. AI agents and custom GenAI workflows are creating new, hidden ways for sensitive enterprise data to leak—and most teams don't even realize it. If you're building, deploying, or managing AI systems, now is the time to ask: Are your AI agents exposing confidential data without your knowledge? Most GenAI models don't intentionally leak data. But here's the problem: these agents are often plugged into corporate systems—pulling from SharePoint, Google Drive, S3 buckets, and internal tools to give smart answers. And that's where the risks begin. Without tight access controls, governance policies, and oversight, a well-meaning AI can accidentally expose sensitive information to the wrong users—or worse, to the internet. Imagine a chatbot revealing internal salary data. Or an assistant surfacing unreleased product designs during a casual query. This isn't hypot...