#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Salesforce Security Handbook

artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
The AI SOC Stack of 2026: What Sets Top-Tier Platforms Apart?

The AI SOC Stack of 2026: What Sets Top-Tier Platforms Apart?

Oct 10, 2025 Artificial Intelligence / Threat Detection
The SOC of 2026 will no longer be a human-only battlefield. As organizations scale and threats evolve in sophistication and velocity, a new generation of AI-powered agents is reshaping how Security Operations Centers (SOCs) detect, respond, and adapt. But not all AI SOC platforms are created equal. From prompt-dependent copilots to autonomous, multi-agent systems, the current market offers everything from smart assistants to force-multiplying automation. While adoption is still early— estimated at 1–5% penetration according to Gartner —the shift is undeniable. SOC teams must now ask a fundamental question: What type of AI belongs in my security stack? The Limits of Traditional SOC Automation Despite promises from legacy SOAR platforms and rule-based SIEM enhancements, many security leaders still face the same core challenges: Analyst alert fatigue from redundant low-fidelity triage tasks Manual context correlation across disparate tools and logs Disjointed and static detect...
From HealthKick to GOVERSHELL: The Evolution of UTA0388's Espionage Malware

From HealthKick to GOVERSHELL: The Evolution of UTA0388's Espionage Malware

Oct 09, 2025 Cyber Espionage / Artificial Intelligence
A China-aligned threat actor codenamed UTA0388 has been attributed to a series of spear-phishing campaigns targeting North America, Asia, and Europe that are designed to deliver a Go-based implant known as GOVERSHELL . "The initially observed campaigns were tailored to the targets, and the messages purported to be sent by senior researchers and analysts from legitimate-sounding, completely fabricated organizations," Volexity said in a Wednesday report. "The goal of these spear phishing campaigns was to socially engineer targets into clicking links that led to a remotely hosted archive containing a malicious payload." Since then, the threat actor behind the attacks is said to have leveraged different lures and fictional identities, spanning several languages, including English, Chinese, Japanese, French, and German. Early iterations of the campaigns have been found to embed links to phishing content either hosted on a cloud-based service or their own infrastruc...
ThreatsDay Bulletin: MS Teams Hack, MFA Hijacking, $2B Crypto Heist, Apple Siri Probe & More

ThreatsDay Bulletin: MS Teams Hack, MFA Hijacking, $2B Crypto Heist, Apple Siri Probe & More

Oct 09, 2025 Cybersecurity / Hacking News
Cyber threats are evolving faster than ever. Attackers now combine social engineering, AI-driven manipulation, and cloud exploitation to breach targets once considered secure. From communication platforms to connected devices, every system that enhances convenience also expands the attack surface. This edition of ThreatsDay Bulletin explores these converging risks and the safeguards that help preserve trust in an increasingly intelligent threat landscape. How Threat Actors Abuse Microsoft Teams Attackers Abuse Microsoft Teams for Extortion, Social Engineering, and Financial Theft Microsoft detailed the various ways threat actors can abuse its Teams chat software at various stages of the attack chain, even using it to support financial theft through extortion, social engineering, or technical means. " Octo Tempest has used communication apps, including Teams, to send taunting and threatening messages to organizations, defenders, and incident response teams as p...
cyber security

New Webinar: Analyzing Real-world ClickFix Attacks

websitePush SecurityBrowser Security / Threat Detection
Learn how ClickFix-style attacks are bypassing detection controls, and what security teams can do about it.
cyber security

Weaponized GenAI + Extortion-First Strategies Fueling a New Age of Ransomware

websiteZscalerRansomware / Endpoint Security
Trends and insights based on expert analysis of public leak sites, ransomware samples and attack data.
From Phishing to Malware: AI Becomes Russia's New Cyber Weapon in War on Ukraine

From Phishing to Malware: AI Becomes Russia's New Cyber Weapon in War on Ukraine

Oct 09, 2025 Artificial Intelligence / Malware
Russian hackers' adoption of artificial intelligence (AI) in cyber attacks against Ukraine has reached a new level in the first half of 2025 (H1 2025), the country's State Service for Special Communications and Information Protection (SSSCIP) said. "Hackers now employ it not only to generate phishing messages, but some of the malware samples we have analyzed show clear signs of being generated with AI – and attackers are certainly not going to stop there," the agency said in a report published Wednesday. SSSCIP said 3,018 cyber incidents were recorded during the time period, up from 2,575 in the second half of 2024 (H2 2024). Local authorities and military entities witnessed an increase in attacks compared to H2 2024, while those targeting government and energy sectors declined. One notable attack observed involved UAC-0219's use of malware called WRECKSTEEL in attacks aimed at state administration bodies and critical infrastructure facilities in the country...
Severe Framelink Figma MCP Vulnerability Lets Hackers Execute Code Remotely

Severe Framelink Figma MCP Vulnerability Lets Hackers Execute Code Remotely

Oct 08, 2025 Vulnerability / Software Security
Cybersecurity researchers have disclosed details of a now-patched vulnerability in the popular figma-developer-mcp Model Context Protocol ( MCP ) server that could allow attackers to achieve code execution. The vulnerability, tracked as CVE-2025-53967 (CVSS score: 7.5), is a command injection bug stemming from the unsanitized use of user input, opening the door to a scenario where an attacker can send arbitrary system commands. "The server constructs and executes shell commands using unvalidated user input directly within command-line strings. This introduces the possibility of shell metacharacter injection (|, >, &&, etc.)," according to a GitHub advisory for the flaw. "Successful exploitation can lead to remote code execution under the server process's privileges." Given that the Framelink Figma MCP server exposes various tools to perform operations in Figma using artificial intelligence (AI)-powered coding agents like Cursor, an attacker co...
OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks

OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks

Oct 08, 2025 Artificial Intelligence / Threat Intelligence
OpenAI on Tuesday said it disrupted three activity clusters for misusing its ChatGPT artificial intelligence (AI) tool to facilitate malware development. This includes a Russian‑language threat actor, who is said to have used the chatbot to help develop and refine a remote access trojan (RAT), a credential stealer with an aim to evade detection. The operator also used several ChatGPT accounts to prototype and troubleshoot technical components that enable post‑exploitation and credential theft. "These accounts appear to be affiliated with Russian-speaking criminal groups, as we observed them posting evidence of their activities in a Telegram channel dedicated to those actors," OpenAI said. The AI company said while its large language models (LLMs) refused the threat actor's direct requests to produce malicious content, they worked around the limitation by creating building-block code, which was then assembled to create the workflows. Some of the produced output invo...
Google's New AI Doesn't Just Find Vulnerabilities — It Rewrites Code to Patch Them

Google's New AI Doesn't Just Find Vulnerabilities — It Rewrites Code to Patch Them

Oct 07, 2025 Artificial Intelligence / Software Security
Google's DeepMind division on Monday announced an artificial intelligence (AI)-powered agent called CodeMender that automatically detects, patches, and rewrites vulnerable code to prevent future exploits. The efforts add to the company's ongoing efforts to improve AI-powered vulnerability discovery, such as Big Sleep and OSS-Fuzz . DeepMind said the AI agent is designed to be both reactive and proactive, by fixing new vulnerabilities as soon as they are spotted as well as rewriting and securing existing codebases with an aim to eliminate whole classes of vulnerabilities in the process. "By automatically creating and applying high-quality security patches, CodeMender's AI-powered agent helps developers and maintainers focus on what they do best — building good software," DeepMind researchers Raluca Ada Popa and Four Flynn said . "Over the past six months that we've been building CodeMender, we have already upstreamed 72 security fixes to open source proje...
New Research: AI Is Already the #1 Data Exfiltration Channel in the Enterprise

New Research: AI Is Already the #1 Data Exfiltration Channel in the Enterprise

Oct 07, 2025 Artificial Intelligence / Browser Security
For years, security leaders have treated artificial intelligence as an "emerging" technology, something to keep an eye on but not yet mission-critical. A new Enterprise AI and SaaS Data Security Report by AI & Browser Security company LayerX proves just how outdated that mindset has become. Far from a future concern, AI is already the single largest uncontrolled channel for corporate data exfiltration—bigger than shadow SaaS or unmanaged file sharing. The findings, drawn from real-world enterprise browsing telemetry, reveal a counterintuitive truth: the problem with AI in enterprises isn't tomorrow's unknowns, it's today's everyday workflows. Sensitive data is already flowing into ChatGPT, Claude, and Copilot at staggering rates, mostly through unmanaged accounts and invisible copy/paste channels. Traditional DLP tools—built for sanctioned, file-based environments—aren't even looking in the right direction. From "Emerging" to Essential in Record Time In just two years, AI tool...
5 Critical Questions For Adopting an AI Security Solution

5 Critical Questions For Adopting an AI Security Solution

Oct 06, 2025 Artificial Intelligence / Data Protection
In the era of rapidly advancing artificial intelligence (AI) and cloud technologies, organizations are increasingly implementing security measures to protect sensitive data and ensure regulatory compliance. Among these measures, AI-SPM (AI Security Posture Management) solutions have gained traction to secure AI pipelines, sensitive data assets, and the overall AI ecosystem. These solutions help organizations identify risks, control security policies, and protect data and algorithms critical to their operations.  However, not all AI-SPM tools are created equal. When evaluating potential solutions, organizations often struggle to pinpoint which questions to ask to make an informed decision. To help you navigate this complex space, here are five critical questions every organization should ask when selecting an AI-SPM solution: 1: Does the solution offer comprehensive visibility and control over AI and associated data risk? With the proliferation of AI models across enterprises, m...
Learn How Leading Security Teams Blend AI + Human Workflows (Free Webinar)

Learn How Leading Security Teams Blend AI + Human Workflows (Free Webinar)

Oct 01, 2025 Automation / IT Operations
AI is changing automation—but not always for the better. That's why we're hosting a new webinar, " Workflow Clarity: Where AI Fits in Modern Automation ," with Thomas Kinsella, Co-founder & Chief Customer Officer at Tines, to explore how leading teams are cutting through the hype and building workflows that actually deliver. The rise of AI has changed how organizations think about automation. But here's the reality many teams are quietly wrestling with: AI isn't a silver bullet. Purely human-led workflows buckle under pressure, rigid rules-based automations break the moment reality shifts, and fully autonomous AI agents risk introducing black-box decision-making that's impossible to audit. For cybersecurity and operations leaders, the stakes are even higher. You need workflows that are fast but reliable, powerful but secure, and—above all—explainable. So where does AI really fit in? The Hidden Problem with "All-In" Automation The push to automate everythi...
2025 Cybersecurity Reality Check: Breaches Hidden, Attack Surfaces Growing, and AI Misperceptions Rising

2025 Cybersecurity Reality Check: Breaches Hidden, Attack Surfaces Growing, and AI Misperceptions Rising

Oct 01, 2025 Attack Surface / Artificial Intelligence
Bitdefender's 2025 Cybersecurity Assessment Report paints a sobering picture of today's cyber defense landscape: mounting pressure to remain silent after breaches, a gap between leadership and frontline teams, and a growing urgency to shrink the enterprise attack surface. The annual research combines insights from over 1,200 IT and security professionals across six countries, along with an analysis of 700,000 cyber incidents by Bitdefender Labs. The results reveal hard truths about how organizations are grappling with threats in an increasingly complex environment. Breaches Swept Under the Rug This year's findings spotlight a disturbing trend: 58% of security professionals were told to keep a breach confidential , even when they believed disclosure was necessary. That's a 38% jump since 2023 , suggesting more organizations may be prioritizing optics over transparency. The pressure is especially acute for CISOs and CIOs , who report higher levels of expectation to remain quiet c...
Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

Sep 30, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed three now-patched security vulnerabilities impacting Google's Gemini artificial intelligence (AI) assistant that, if successfully exploited, could have exposed users to major privacy risks and data theft. "They made Gemini vulnerable to search-injection attacks on its Search Personalization Model; log-to-prompt injection attacks against Gemini Cloud Assist; and exfiltration of the user's saved information and location data via the Gemini Browsing Tool," Tenable security researcher Liv Matan said in a report shared with The Hacker News. The vulnerabilities have been collectively codenamed the Gemini Trifecta by the cybersecurity company. They reside in three distinct components of the Gemini suite - A prompt injection flaw in Gemini Cloud Assist that could allow attackers to exploit cloud-based services and compromise cloud resources by taking advantage of the fact that the tool is capable of summarizing logs pulled dir...
Stop Alert Chaos: Context Is the Key to Effective Incident Response

Stop Alert Chaos: Context Is the Key to Effective Incident Response

Sep 30, 2025 Artificial Intelligence / Threat Detection
The Problem: Legacy SOCs and Endless Alert Noise Every SOC leader knows the feeling: hundreds of alerts pouring in, dashboards lighting up like a slot machine, analysts scrambling to keep pace. The harder they try to scale people or buy new tools, the faster the chaos multiplies. The problem is not just volume; it is the model itself. Traditional SOCs start with rules, wait for alerts to fire, and then dump raw signals on analysts. By the time someone pieces together what is really happening, the attacker has already moved on, or moved in. It is a broken loop of noise chasing noise. Flipping the Model: Context Over Chaos Instead of drowning in raw events, treat every incoming signal as a potential opening move in a bigger story. Logs from identity systems, endpoints, cloud workloads, and SIEMs do not just land in separate dashboards; they are normalized, connected, and enriched to form a coherent investigation. A brute-force login attempt on its own is easy to dismiss. But when enh...
Evolving Enterprise Defense to Secure the Modern AI Supply Chain

Evolving Enterprise Defense to Secure the Modern AI Supply Chain

Sep 30, 2025 Artificial Intelligence / Data Protection
The world of enterprise technology is undergoing a dramatic shift. Gen-AI adoption is accelerating at an unprecedented pace, and SaaS vendors are embedding powerful LLMs directly into their platforms. Organizations are embracing AI-powered applications across every function, from marketing and development to finance and HR. This transformation unlocks innovation and efficiency, but it also introduces new risks. Enterprises must balance the promise of AI with the responsibility to protect their data, maintain compliance, and secure their expanding application supply chain. The New Risk Landscape With AI adoption comes a new set of challenges: AI Sprawl : Employees adopt AI tools independently, often without security oversight, creating blind spots and unmanaged risks. Supply Chain Vulnerabilities : interapplication integrations between AI tools and enterprise resources expand the attack surface and introduce dependencies and access paths enterprises can't easily control. Data Exp...
EvilAI Malware Masquerades as AI Tools to Infiltrate Global Organizations

EvilAI Malware Masquerades as AI Tools to Infiltrate Global Organizations

Sep 29, 2025 Malware / Artificial Intelligence
Threat actors have been observed using seemingly legitimate artificial intelligence (AI) tools and software to sneakily slip malware for future attacks on organizations worldwide. According to Trend Micro, the campaign is using productivity or AI-enhanced tools to deliver malware targeting various regions, including Europe, the Americas, and the Asia, Middle East, and Africa (AMEA) region. Manufacturing, government, healthcare, technology, and retail are some of the top sectors affected by the attacks, with India, the U.S., France, Italy, Brazil, Germany, the U.K., Norway, Spain, and Canada emerging as the regions with the most infections, indicating a global spread. "This swift, widespread distribution across multiple regions strongly indicates that EvilAI is not an isolated incident but rather an active and evolving campaign currently circulating in the wild," security researchers Jeffrey Francis Bonaobra, Joshua Aquino, Emmanuel Panopio, Emmanuel Roll, Joshua Lijandro ...
The State of AI in the SOC 2025 - Insights from Recent Study 

The State of AI in the SOC 2025 - Insights from Recent Study 

Sep 29, 2025 Artificial Intelligence / Enterprise Security
Security leaders are embracing AI for triage, detection engineering, and threat hunting as alert volumes and burnout hit breaking points. A comprehensive survey of 282 security leaders at companies across industries reveals a stark reality facing modern Security Operations Centers: alert volumes have reached unsustainable levels, forcing teams to leave critical threats uninvestigated. You can download the full report here . The research, conducted primarily among US-based organizations, shows that AI adoption in security operations has shifted from experimental to essential as teams struggle to keep pace with an ever-growing stream of security alerts. The findings paint a picture of an industry at a tipping point, where traditional SOC models are buckling under operational pressure and AI-powered solutions are emerging as the primary path forward. Alert Volume Reaches Breaking Point Security teams are drowning in alerts, with organizations processing an average of 960 alerts per ...
Microsoft Flags AI-Driven Phishing: LLM-Crafted SVG Files Outsmart Email Security

Microsoft Flags AI-Driven Phishing: LLM-Crafted SVG Files Outsmart Email Security

Sep 29, 2025 Email Security / Artificial Intelligence
Microsoft is calling attention to a new phishing campaign primarily aimed at U.S.-based organizations that has likely utilized code generated using large language models (LLMs) to obfuscate payloads and evade security defenses. "Appearing to be aided by a large language model (LLM), the activity obfuscated its behavior within an SVG file, leveraging business terminology and a synthetic structure to disguise its malicious intent," the Microsoft Threat Intelligence team said in an analysis published last week. The activity, detected on August 28, 2025, shows how threat actors are increasingly adopting artificial intelligence (AI) tools into their workflows, often with the goal of crafting more convincing phishing lures, automating malware obfuscation, and generating code that mimics legitimate content. In the attack chain documented by the Windows maker, bad actors have been observed leveraging an already compromised business email account to send phishing messages to stea...
Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection

Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection

Sep 25, 2025 Vulnerability / AI Security
Cybersecurity researchers have disclosed a critical flaw impacting Salesforce Agentforce , a platform for building artificial intelligence (AI) agents, that could allow attackers to potentially exfiltrate sensitive data from its customer relationship management (CRM) tool by means of an indirect prompt injection. The vulnerability has been codenamed ForcedLeak (CVSS score: 9.4) by Noma Security, which discovered and reported the problem on July 28, 2025. It impacts any organization using Salesforce Agentforce with the Web-to-Lead functionality enabled. "This vulnerability demonstrates how AI agents present a fundamentally different and expanded attack surface compared to traditional prompt-response systems," Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News. One of the most severe threats facing generative artificial intelligence (GenAI) systems today is indirect prompt injection , which occurs when malicious instructions are ins...
Researchers Uncover GPT-4-Powered MalTerminal Malware Creating Ransomware, Reverse Shell

Researchers Uncover GPT-4-Powered MalTerminal Malware Creating Ransomware, Reverse Shell

Sep 20, 2025 Malware / Artificial Intelligence
Cybersecurity researchers have discovered what they say is the earliest example known to date of a malware that bakes in Large Language Model (LLM) capabilities. The malware has been codenamed MalTerminal by SentinelOne SentinelLABS research team. The findings were presented at the LABScon 2025 security conference. In a report examining the malicious use of LLMs, the cybersecurity company said AI models are being increasingly used by threat actors for operational support, as well as for embedding them into their tools – an emerging category called LLM-embedded malware that's exemplified by the appearance of LAMEHUG (aka PROMPTSTEAL) and PromptLock . This includes the discovery of a previously reported Windows executable called MalTerminal that uses OpenAI GPT-4 to dynamically generate ransomware code or a reverse shell. There is no evidence to suggest it was ever deployed in the wild, raising the possibility that it could also be a proof-of-concept malware or red team tool. ...
c
Expert Insights Articles Videos
Cybersecurity Resources