#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Maximizing Efficiency and Security in Government Cloud Environments

artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
Meta Disrupts Influence Ops Targeting Romania, Azerbaijan, and Taiwan with Fake Personas

Meta Disrupts Influence Ops Targeting Romania, Azerbaijan, and Taiwan with Fake Personas

May 30, 2025 Artificial Intelligence / Disinformation
Meta on Thursday revealed that it disrupted three covert influence operations originating from Iran, China, and Romania during the first quarter of 2025. "We detected and removed these campaigns before they were able to build authentic audiences on our apps," the social media giant said in its quarterly Adversarial Threat Report. This included a network of 658 accounts on Facebook, 14 Pages, and two accounts on Instagram that targeted Romania across several platforms, including Meta's services, TikTok, X, and YouTube. One of the pages in question had about 18,300 followers. The threat actors behind the activity leveraged fake accounts to manage Facebook Pages, direct users to off-platform websites, and share comments on posts by politicians and news entities. The accounts masqueraded as locals living in Romania and posted content related to sports, travel, or local news. While a majority of these comments did not receive any engagement from authentic audiences, Met...
Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools

Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools

May 29, 2025 Artificial Intelligence / Cybercrime
Fake installers for popular artificial intelligence (AI) tools like OpenAI ChatGPT and InVideo AI are being used as lures to propagate various threats, such as the CyberLock and Lucky_Gh0$t ransomware families, and a new malware dubbed Numero. "CyberLock ransomware, developed using PowerShell, primarily focuses on encrypting specific files on the victim's system," Cisco Talos researcher Chetan Raghuprasad said in a report published today. "Lucky_Gh0$t ransomware is yet another variant of the Yashma ransomware, which is the sixth iteration of the Chaos ransomware series, featuring only minor modifications to the ransomware binary." Numero, on the other hand, is a destructive malware that impacts victims by manipulating the graphical user interface (GUI) components of their Windows operating system, thereby rendering the machines unusable. The cybersecurity company said the legitimate versions of the AI tools are popular in the business-to-business (B2B) sal...
AI Agents and the Non‑Human Identity Crisis: How to Deploy AI More Securely at Scale

AI Agents and the Non‑Human Identity Crisis: How to Deploy AI More Securely at Scale

May 27, 2025 Artificial Intelligence / Cloud Identity
Artificial intelligence is driving a massive shift in enterprise productivity, from GitHub Copilot's code completions to chatbots that mine internal knowledge bases for instant answers. Each new agent must authenticate to other services, quietly swelling the population of non‑human identities (NHIs) across corporate clouds. That population is already overwhelming the enterprise: many companies now juggle at least 45 machine identities for every human user . Service accounts, CI/CD bots, containers, and AI agents all need secrets, most commonly in the form of API keys, tokens, or certificates, to connect securely to other systems to do their work. GitGuardian's State of Secrets Sprawl 2025 report reveals the cost of this sprawl: over 23.7 million secrets surfaced on public GitHub in 2024 alone. And instead of making the situation better, repositories with Copilot enabled the leak of secrets 40 percent more often .  NHIs Are Not People Unlike human beings logging into systems, ...
cyber security

Navigating the Maze: How to Choose the Best Threat Detection Solution

websiteSygniaThreat Detection / Cybersecurity
Discover how to continuously protect your critical assets with the right MDR strategy. Download the Guide.
The Persistence Problem: Why Exposed Credentials Remain Unfixed—and How to Change That

The Persistence Problem: Why Exposed Credentials Remain Unfixed—and How to Change That

May 12, 2025Secrets Management / DevSecOps
Detecting leaked credentials is only half the battle. The real challenge—and often the neglected half of the equation—is what happens after detection. New research from GitGuardian's State of Secrets Sprawl 2025 report reveals a disturbing trend: the vast majority of exposed company secrets discovered in public repositories remain valid for years after detection, creating an expanding attack surface that many organizations are failing to address. According to GitGuardian's analysis of exposed secrets across public GitHub repositories, an alarming percentage of credentials detected as far back as 2022 remain valid today: "Detecting a leaked secret is just the first step," says GitGuardian's research team. "The true challenge lies in swift remediation." Why Exposed Secrets Remain Valid This persistent validity suggests two troubling possibilities: either organizations are unaware their credentials have been exposed (a security visibility problem),...
GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts

GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts

May 23, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have discovered an indirect prompt injection flaw in GitLab's artificial intelligence (AI) assistant Duo that could have allowed attackers to steal source code and inject untrusted HTML into its responses, which could then be used to direct victims to malicious websites. GitLab Duo is an artificial intelligence (AI)-powered coding assistant that enables users to write, review, and edit code. Built using Anthropic's Claude models, the service was first launched in June 2023. But as Legit Security found , GitLab Duo Chat has been susceptible to an indirect prompt injection flaw that permits attackers to "steal source code from private projects, manipulate code suggestions shown to other users, and even exfiltrate confidential, undisclosed zero-day vulnerabilities." Prompt injection refers to a class of vulnerabilities common in AI systems that enable threat actors to weaponize large language models (LLMs) to manipulate responses to user...
Fileless Remcos RAT Delivered via LNK Files and MSHTA in PowerShell-Based Attacks

Fileless Remcos RAT Delivered via LNK Files and MSHTA in PowerShell-Based Attacks

May 16, 2025 Malware / Cyber Attack
Cybersecurity researchers have shed light on a new malware campaign that makes use of a PowerShell-based shellcode loader to deploy a remote access trojan called Remcos RAT. "Threat actors delivered malicious LNK files embedded within ZIP archives, often disguised as Office documents," Qualys security researcher Akshay Thorve said in a technical report. "The attack chain leverages mshta.exe for proxy execution during the initial stage." The latest wave of attacks, as detailed by Qualys, employs tax-related lures to entice users into opening a malicious ZIP archive containing a Windows shortcut (LNK) file, which, in turn, makes use of mshta.exe, a legitimate Microsoft tool used to run HTML Applications (HTA). The binary is used to execute an obfuscated HTA file named "xlab22.hta" hosted on a remote server, which incorporates Visual Basic Script code to download a PowerShell script, a decoy PDF, and another HTA file similar to xlab22.hta called "3...
Fake AI Tools Used to Spread Noodlophile Malware, Targeting 62,000+ via Facebook Lures

Fake AI Tools Used to Spread Noodlophile Malware, Targeting 62,000+ via Facebook Lures

May 12, 2025 Malware / Artificial Intelligence
Threat actors have been observed leveraging fake artificial intelligence (AI)-powered tools as a lure to entice users into downloading an information stealer malware dubbed Noodlophile . "Instead of relying on traditional phishing or cracked software sites, they build convincing AI-themed platforms – often advertised via legitimate-looking Facebook groups and viral social media campaigns," Morphisec researcher Shmuel Uzan said in a report published last week. Posts shared on these pages have been found to attract over 62,000 views on a single post, indicating that users looking for AI tools for video and image editing are the target of this campaign. Some of the fake social media pages identified include Luma Dreammachine Al, Luma Dreammachine, and gratistuslibros. Users who land on the social media posts are urged to click on links that advertise AI-powered content creation services, including videos, logos, images, and even websites. One of the bogus websites masquerad...
Deploying AI Agents? Learn to Secure Them Before Hackers Strike Your Business

Deploying AI Agents? Learn to Secure Them Before Hackers Strike Your Business

May 09, 2025 Artificial Intelligence / Software Security
AI agents are changing the way businesses work. They can answer questions, automate tasks, and create better user experiences. But with this power comes new risks — like data leaks, identity theft, and malicious misuse. If your company is exploring or already using AI agents, you need to ask:  Are they secure? AI agents work with sensitive data and make real-time decisions. If they're not protected, attackers can exploit them to steal information, spread misinformation, or take control of systems. Join Michelle Agroskin, Product Marketing Manager at Auth0 , for a free, expert-led webinar — Building AI Agents Securely  — that breaks down the most important AI security issues and what you can do about them. What You'll Learn: What AI Agents Are: Understand how AI agents work and why they're different from chatbots or traditional apps. What Can Go Wrong: Learn about real risks — like adversarial attacks, data leakage, and identity misuse. How to Secure Them: Discover prov...
Google Rolls Out On-Device AI Protections to Detect Scams in Chrome and Android

Google Rolls Out On-Device AI Protections to Detect Scams in Chrome and Android

May 09, 2025 Artificial Intelligence / Online Fraud
Google on Thursday announced it's rolling out new artificial intelligence (AI)-powered countermeasures to combat scams across Chrome, Search, and Android. The tech giant said it will begin using Gemini Nano , its on-device large language model (LLM), to improve Safe Browsing in Chrome 137 on desktops. "The on-device approach provides instant insight on risky websites and allows us to offer protection, even against scams that haven't been seen before. Gemini Nano's LLM is perfect for this use because of its ability to distill the varied, complex nature of websites, helping us adapt to new scam tactics more quickly," the company said . Google noted that it's already using this AI-driven approach to tackle remote tech support scams, which often seek to trick users into parting with their personal or financial information under the pretext of a non-existent computer problem. This works by evaluating the web pages using the LLM for potential signals that are...
Claude AI Exploited to Operate 100+ Fake Political Personas in Global Influence Campaign

Claude AI Exploited to Operate 100+ Fake Political Personas in Global Influence Campaign

May 01, 2025 Artificial Intelligence / Disinformation
Artificial intelligence (AI) company Anthropic has revealed that unknown threat actors leveraged its Claude chatbot for an "influence-as-a-service" operation to engage with authentic accounts across Facebook and X. The sophisticated activity, branded as financially-motivated, is said to have used its AI tool to orchestrate 100 distinct personas on the two social media platforms, creating a network of "politically-aligned accounts" that engaged with "10s of thousands" of authentic accounts. The now-disrupted operation, Anthropic researchers said, prioritized persistence and longevity over vitality and sought to amplify moderate political perspectives that supported or undermined European, Iranian, the United Arab Emirates (U.A.E.), and Kenyan interests. These included promoting the U.A.E. as a superior business environment while being critical of European regulatory frameworks, focusing on energy security narratives for European audiences, and cultural...
Researchers Demonstrate How MCP Prompt Injection Can Be Used for Both Attack and Defense

Researchers Demonstrate How MCP Prompt Injection Can Be Used for Both Attack and Defense

Apr 30, 2025 Artificial Intelligence / Email Security
As the field of artificial intelligence (AI) continues to evolve at a rapid pace, fresh research has found how techniques that render the Model Context Protocol ( MCP ) susceptible to prompt injection attacks could be used to develop security tooling or identify malicious tools, according to a new report from Tenable. MCP, launched by Anthropic in November 2024, is a framework designed to connect Large Language Models (LLMs) with external data sources and services, and make use of model-controlled tools to interact with those systems to enhance the accuracy, relevance, and utility of AI applications. It follows a client-server architecture, allowing hosts with MCP clients such as Claude Desktop or Cursor to communicate with different MCP servers, each of which exposes specific tools and capabilities. While the open standard offers a unified interface to access various data sources and even switch between LLM providers, they also come with a new set of risks, ranging from exc...
Meta Launches LlamaFirewall Framework to Stop AI Jailbreaks, Injections, and Insecure Code

Meta Launches LlamaFirewall Framework to Stop AI Jailbreaks, Injections, and Insecure Code

Apr 30, 2025 Secure Coding / Vulnerability
Meta on Tuesday announced LlamaFirewall , an open-source framework designed to secure artificial intelligence (AI) systems against emerging cyber risks such as prompt injection, jailbreaks, and insecure code, among others. The framework , the company said, incorporates three guardrails, including PromptGuard 2, Agent Alignment Checks, and CodeShield. PromptGuard 2 is designed to detect direct jailbreak and prompt injection attempts in real-time, while Agent Alignment Checks is capable of inspecting agent reasoning for possible goal hijacking and indirect prompt injection scenarios. CodeShield refers to an online static analysis engine that seeks to prevent the generation of insecure or dangerous code by AI agents. "LlamaFirewall is built to serve as a flexible, real-time guardrail framework for securing LLM-powered applications," the company said in a GitHub description of the project. "Its architecture is modular, enabling security teams and developers to com...
WhatsApp Launches Private Processing to Enable AI Features While Protecting Message Privacy

WhatsApp Launches Private Processing to Enable AI Features While Protecting Message Privacy

Apr 29, 2025 Artificial Intelligence / Data Protection
Popular messaging app WhatsApp on Tuesday unveiled a new technology called Private Processing to enable artificial intelligence (AI) capabilities in a privacy-preserving manner. "Private Processing will allow users to leverage powerful optional AI features – like summarizing unread messages or editing help – while preserving WhatsApp's core privacy promise," the Meta-owned service said in a statement shared with The Hacker News. With the introduction of the latest feature, the idea is to facilitate the use of AI features while still keeping users' messages private. It's expected to be made available in the coming weeks. The capability, in a nutshell, allows users to initiate a request to process messages using AI within a secure environment called the confidential virtual machine (CVM) such that no other party, including Meta and WhatsApp, can access them. Confidential processing is one of the three tenets that underpin the feature, the others being - Enf...
New Reports Uncover Jailbreaks, Unsafe Code, and Data Theft Risks in Leading AI Systems

New Reports Uncover Jailbreaks, Unsafe Code, and Data Theft Risks in Leading AI Systems

Apr 29, 2025 Vulnerability / Artificial Intelligence
Various generative artificial intelligence (GenAI) services have been found vulnerable to two types of jailbreak attacks that make it possible to produce illicit or dangerous content. The first of the two techniques, codenamed Inception, instructs an AI tool to imagine a fictitious scenario, which can then be adapted into a second scenario within the first one where there exists no safety guardrails . "Continued prompting to the AI within the second scenarios context can result in bypass of safety guardrails and allow the generation of malicious content," the CERT Coordination Center (CERT/CC) said in an advisory released last week. The second jailbreak is realized by prompting the AI for information on how not to reply to a specific request.  "The AI can then be further prompted with requests to respond as normal, and the attacker can then pivot back and forth between illicit questions that bypass safety guardrails and normal prompts," CERT/CC added. Success...
Expert Insights Articles Videos
Cybersecurity Resources