#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
AWS EKS Security Best Practices

AI Security | Breaking Cybersecurity News | The Hacker News

Category — AI Security
AI Agents Act Like Employees With Root Access—Here's How to Regain Control

AI Agents Act Like Employees With Root Access—Here's How to Regain Control

7月 16, 2025 Identity Management / AI Security
The AI gold rush is on. But without identity-first security, every deployment becomes an open door. Most organizations secure native AI like a web app, but it behaves more like a junior employee with root access and no manager. From Hype to High Stakes Generative AI has moved beyond the hype cycle. Enterprises are: Deploying LLM copilots to accelerate software development Automating customer service workflows with AI agents Integrating AI into financial operations and decision-making Whether building with open-source models or plugging into platforms like OpenAI or Anthropic, the goal is speed and scale. But what most teams miss is this: Every LLM access point or website is a new identity edge. And every integration adds risk unless identity and device posture are enforced. What Is the AI Build vs. Buy Dilemma? Most enterprises face a pivotal decision: Build : Create in-house agents tailored to internal systems and workflows Buy : Adopt commercial AI tools and SaaS integ...
Deepfakes. Fake Recruiters. Cloned CFOs — Learn How to Stop AI-Driven Attacks in Real Time

Deepfakes. Fake Recruiters. Cloned CFOs — Learn How to Stop AI-Driven Attacks in Real Time

7月 16, 2025 AI Security / Fraud Detection
Social engineering attacks have entered a new era—and they're coming fast, smart, and deeply personalized. It's no longer just suspicious emails in your spam folder. Today's attackers use generative AI, stolen branding assets, and deepfake tools to mimic your executives, hijack your social channels, and create convincing fakes of your website, emails, and even voice. They don't just spoof— they impersonate. Modern attackers aren't relying on chance. They're running long-term, multi-channel campaigns across email, LinkedIn, SMS, and even support portals—targeting your employees, customers, and partners. Whether it's a fake recruiter reaching out on LinkedIn, a lookalike login page sent via text, or a cloned CFO demanding a wire transfer, the tactics are faster, more adaptive, and increasingly automated using AI. The result? Even trained users are falling for sophisticated fakes—because they're not just phishing links anymore. They're operations. This Webinar Shows You How to Fight...
Securing Agentic AI: How to Protect the Invisible Identity Access

Securing Agentic AI: How to Protect the Invisible Identity Access

7月 15, 2025 Automation / Risk Management
AI agents promise to automate everything from financial reconciliations to incident response. Yet every time an AI agent spins up a workflow, it has to authenticate somewhere; often with a high-privilege API key, OAuth token, or service account that defenders can't easily see. These "invisible" non-human identities (NHIs) now outnumber human accounts in most cloud environments, and they have become one of the ripest targets for attackers. Astrix's Field CTO Jonathan Sander put it bluntly in a recent Hacker News webinar : "One dangerous habit we've had for a long time is trusting application logic to act as the guardrails. That doesn't work when your AI agent is powered by LLMs that don't stop and think when they're about to do something wrong. They just do it." Why AI Agents Redefine Identity Risk Autonomy changes everything: An AI agent can chain multiple API calls and modify data without a human in the loop. If the underlying credential is exposed or overprivileged, each addit...
cyber security

New Webinar: Identity Attacks Have Changed — Have Your IR Playbooks?

websitePush SecurityThreat Detection / Identity Security
With modern identity sprawl, the blast radius of a breach is bigger than ever. Are you prepared? Sign up now.
cyber security

AI Can Personalize Everything—Except Trust. Here's How to Build It Anyway

websiteTHN WebinarIdentity Management / AI Security
We'll unpack how leading teams are using AI, privacy-first design, and seamless logins to earn user trust and stay ahead in 2025.
GPUHammer: New RowHammer Attack Variant Degrades AI Models on NVIDIA GPUs

GPUHammer: New RowHammer Attack Variant Degrades AI Models on NVIDIA GPUs

7月 12, 2025 AI Security / Vulnerability
NVIDIA is urging customers to enable System-level Error Correction Codes (ECC) as a defense against a variant of a RowHammer attack demonstrated against its graphics processing units (GPUs). "Risk of successful exploitation from RowHammer attacks varies based on DRAM device, platform, design specification, and system settings," the GPU maker said in an advisory released this week. Dubbed GPUHammer , the attacks mark the first-ever RowHammer exploit demonstrated against NVIDIA's GPUs (e.g., NVIDIA A6000 GPU with GDDR6 Memory), causing malicious GPU users to tamper with other users' data by triggering bit flips in GPU memory. The most concerning consequence of this behavior, University of Toronto researchers found, is the degradation of an artificial intelligence (AI) model's accuracy from 80% to less than 1%. RowHammer is to modern DRAMs just like how Spectre and Meltdown are to contemporary CPUs. While both are hardware-level security vulnerabilities, Row...
Over 600 Laravel Apps Exposed to Remote Code Execution Due to Leaked APP_KEYs on GitHub

Over 600 Laravel Apps Exposed to Remote Code Execution Due to Leaked APP_KEYs on GitHub

7月 12, 2025 Application Security / DevOps
Cybersecurity researchers have discovered a serious security issue that allows leaked Laravel APP_KEYs to be weaponized to gain remote code execution capabilities on hundreds of applications. "Laravel's APP_KEY, essential for encrypting sensitive data, is often leaked publicly (e.g., on GitHub)," GitGuardian said . "If attackers get access to this key, they can exploit a deserialization flaw to execute arbitrary code on the server – putting data and infrastructure at risk." The company, in collaboration with Synacktiv, said it was able to extract more than 260,000 APP_KEYs from GitHub from 2018 to May 30, 2025, identifying over 600 vulnerable Laravel applications in the process. GitGuardian said it observed over 10,000 unique APP_KEYs across GitHub, of which 400 APP_KEYs were validated as functional. APP_KEY is a random 32-byte encryption key that's generated during the installation of Laravel. Stored in the .env file of the application, it's used ...
Critical mcp-remote Vulnerability Enables Remote Code Execution, Impacting 437,000+ Downloads

Critical mcp-remote Vulnerability Enables Remote Code Execution, Impacting 437,000+ Downloads

7月 10, 2025 Vulnerability / AI Security
Cybersecurity researchers have discovered a critical vulnerability in the open-source mcp-remote project that could result in the execution of arbitrary operating system (OS) commands. The vulnerability, tracked as CVE-2025-6514 , carries a CVSS score of 9.6 out of 10.0. "The vulnerability allows attackers to trigger arbitrary OS command execution on the machine running mcp-remote when it initiates a connection to an untrusted MCP server, posing a significant risk to users – a full system compromise," Or Peles, JFrog Vulnerability Research Team Leader, said . Mcp-remote is a tool that sprang forth following Anthropic's release of Model Context Protocol (MCP), an open-source framework that standardizes the way large language model (LLM) applications integrate and share data with external data sources and services. It acts as a local proxy, enabling MCP clients like Claude Desktop to communicate with remote MCP servers, as opposed to running them locally on the same...
SEO Poisoning Campaign Targets 8,500+ SMB Users with Malware Disguised as AI Tools

SEO Poisoning Campaign Targets 8,500+ SMB Users with Malware Disguised as AI Tools

7月 07, 2025 Malware / Malvertising
Cybersecurity researchers have disclosed a malicious campaign that leverages search engine optimization ( SEO ) poisoning techniques to deliver a known malware loader called Oyster (aka Broomstick or CleanUpLoader). The malvertising activity, per Arctic Wolf, promotes fake websites hosting trojanized versions of legitimate tools like PuTTY and WinSCP, aiming to trick software professionals searching for these programs into installing them instead. "Upon execution, a backdoor known as Oyster/Broomstick is installed," the company said in a brief published last week. "Persistence is established by creating a scheduled task that runs every three minutes, executing a malicious DLL (twain_96.dll) via rundll32.exe using the DllRegisterServer export, indicating the use of DLL registration as part of the persistence mechanism." The names of some of the bogus websites are listed below - updaterputty[.]com zephyrhype[.]com putty[.]run putty[.]bet, and puttyy[.]org...
Your AI Agents Might Be Leaking Data — Watch this Webinar to Learn How to Stop It

Your AI Agents Might Be Leaking Data — Watch this Webinar to Learn How to Stop It

7月 04, 2025 AI Security / Enterprise Security
Generative AI is changing how businesses work, learn, and innovate. But beneath the surface, something dangerous is happening. AI agents and custom GenAI workflows are creating new, hidden ways for sensitive enterprise data to leak —and most teams don't even realize it. If you're building, deploying, or managing AI systems, now is the time to ask: Are your AI agents exposing confidential data without your knowledge? Most GenAI models don't intentionally leak data. But here's the problem: these agents are often plugged into corporate systems—pulling from SharePoint, Google Drive, S3 buckets, and internal tools to give smart answers. And that's where the risks begin. Without tight access controls, governance policies, and oversight, a well-meaning AI can accidentally expose sensitive information to the wrong users—or worse, to the internet. Imagine a chatbot revealing internal salary data. Or an assistant surfacing unreleased product designs during a casual query. This isn't hypot...
Hackers Using PDFs to Impersonate Microsoft, DocuSign, and More in Callback Phishing Campaigns

Hackers Using PDFs to Impersonate Microsoft, DocuSign, and More in Callback Phishing Campaigns

7月 02, 2025 Vulnerability / Cybercrime
Cybersecurity researchers are calling attention to phishing campaigns that impersonate popular brands and trick targets into calling phone numbers operated by threat actors. "A significant portion of email threats with PDF payloads persuade victims to call adversary-controlled phone numbers, displaying another popular social engineering technique known as Telephone-Oriented Attack Delivery (TOAD), also known as callback phishing," Cisco Talos researcher Omid Mirzaei said in a report shared with The Hacker News. An analysis of phishing emails with PDF attachments between May 5 and June 5, 2025, has revealed Microsoft and Docusign to be the most impersonated brands. NortonLifeLock, PayPal, and Geek Squad are among the most impersonated brands in TOAD emails with PDF attachments. The activity is part of wider phishing attacks that attempt to leverage the trust people have with popular brands to initiate malicious actions. These messages typically incorporate PDF attachments...
Vercel's v0 AI Tool Weaponized by Cybercriminals to Rapidly Create Fake Login Pages at Scale

Vercel's v0 AI Tool Weaponized by Cybercriminals to Rapidly Create Fake Login Pages at Scale

7月 02, 2025 AI Security / Phishing
Unknown threat actors have been observed weaponizing v0 , a generative artificial intelligence (AI) tool from Vercel, to design fake sign-in pages that impersonate their legitimate counterparts. "This observation signals a new evolution in the weaponization of Generative AI by threat actors who have demonstrated an ability to generate a functional phishing site from simple text prompts," Okta Threat Intelligence researchers Houssem Eddine Bordjiba and Paula De la Hoz said . v0 is an AI-powered offering from Vercel that allows users to create basic landing pages and full-stack apps using natural language prompts. The identity services provider said it has observed scammers using the technology to develop convincing replicas of login pages associated with multiple brands, including an unnamed customer of its own. Following responsible disclosure, Vercel has blocked access to these phishing sites. The threat actors behind the campaign have also been found to host other ...
Critical Vulnerability in Anthropic's MCP Exposes Developer Machines to Remote Exploits

Critical Vulnerability in Anthropic's MCP Exposes Developer Machines to Remote Exploits

7月 01, 2025 Vulnerability / AI Security
Cybersecurity researchers have discovered a critical security vulnerability in artificial intelligence (AI) company Anthropic's Model Context Protocol ( MCP ) Inspector project that could result in remote code execution (RCE) and allow an attacker to gain complete access to the hosts. The vulnerability, tracked as CVE-2025-49596 , carries a CVSS score of 9.4 out of a maximum of 10.0. "This is one of the first critical RCEs in Anthropic's MCP ecosystem, exposing a new class of browser-based attacks against AI developer tools," Oligo Security's Avi Lumelsky said in a report published last week. "With code execution on a developer's machine, attackers can steal data, install backdoors, and move laterally across networks - highlighting serious risks for AI teams, open-source projects, and enterprise adopters relying on MCP." MCP, introduced by Anthropic in November 2024, is an open protocol that standardizes the way large language model (LLM) appli...
Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content

Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content

6月 23, 2025 LLM Security / AI Security
Cybersecurity researchers are calling attention to a new jailbreaking method called Echo Chamber that could be leveraged to trick popular large language models (LLMs) into generating undesirable responses, irrespective of the safeguards put in place. "Unlike traditional jailbreaks that rely on adversarial phrasing or character obfuscation, Echo Chamber weaponizes indirect references, semantic steering, and multi-step inference," NeuralTrust researcher Ahmad Alobaid said in a report shared with The Hacker News. "The result is a subtle yet powerful manipulation of the model's internal state, gradually leading it to produce policy-violating responses." While LLMs have steadily incorporated various guardrails to combat prompt injections and jailbreaks , the latest research shows that there exist techniques that can yield high success rates with little to no technical expertise. It also serves to highlight a persistent challenge associated with developing eth...
Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

6月 23, 2025 Artificial Intelligence / AI Security
Google has revealed the various safety measures that are being incorporated into its generative artificial intelligence (AI) systems to mitigate emerging attack vectors like indirect prompt injections and improve the overall security posture for agentic AI systems. "Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources," Google's GenAI security team said . These external sources can take the form of email messages, documents, or even calendar invites that trick the AI systems into exfiltrating sensitive data or performing other malicious actions. The tech giant said it has implemented what it described as a "layered" defense strategy that is designed to increase the difficulty, expense, and complexity required to pull off an attack against its systems. These efforts span model hardening, introducing purpose-built mac...
Expert Insights Articles Videos
Cybersecurity Resources