#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Salesforce Security Handbook

artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
Learn How Leading Security Teams Blend AI + Human Workflows (Free Webinar)

Learn How Leading Security Teams Blend AI + Human Workflows (Free Webinar)

Oct 01, 2025 Automation / IT Operations
AI is changing automation—but not always for the better. That's why we're hosting a new webinar, " Workflow Clarity: Where AI Fits in Modern Automation ," with Thomas Kinsella, Co-founder & Chief Customer Officer at Tines, to explore how leading teams are cutting through the hype and building workflows that actually deliver. The rise of AI has changed how organizations think about automation. But here's the reality many teams are quietly wrestling with: AI isn't a silver bullet. Purely human-led workflows buckle under pressure, rigid rules-based automations break the moment reality shifts, and fully autonomous AI agents risk introducing black-box decision-making that's impossible to audit. For cybersecurity and operations leaders, the stakes are even higher. You need workflows that are fast but reliable, powerful but secure, and—above all—explainable. So where does AI really fit in? The Hidden Problem with "All-In" Automation The push to automate everythi...
2025 Cybersecurity Reality Check: Breaches Hidden, Attack Surfaces Growing, and AI Misperceptions Rising

2025 Cybersecurity Reality Check: Breaches Hidden, Attack Surfaces Growing, and AI Misperceptions Rising

Oct 01, 2025 Attack Surface / Artificial Intelligence
Bitdefender's 2025 Cybersecurity Assessment Report paints a sobering picture of today's cyber defense landscape: mounting pressure to remain silent after breaches, a gap between leadership and frontline teams, and a growing urgency to shrink the enterprise attack surface. The annual research combines insights from over 1,200 IT and security professionals across six countries, along with an analysis of 700,000 cyber incidents by Bitdefender Labs. The results reveal hard truths about how organizations are grappling with threats in an increasingly complex environment. Breaches Swept Under the Rug This year's findings spotlight a disturbing trend: 58% of security professionals were told to keep a breach confidential , even when they believed disclosure was necessary. That's a 38% jump since 2023 , suggesting more organizations may be prioritizing optics over transparency. The pressure is especially acute for CISOs and CIOs , who report higher levels of expectation to remain quiet c...
Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

Sep 30, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed three now-patched security vulnerabilities impacting Google's Gemini artificial intelligence (AI) assistant that, if successfully exploited, could have exposed users to major privacy risks and data theft. "They made Gemini vulnerable to search-injection attacks on its Search Personalization Model; log-to-prompt injection attacks against Gemini Cloud Assist; and exfiltration of the user's saved information and location data via the Gemini Browsing Tool," Tenable security researcher Liv Matan said in a report shared with The Hacker News. The vulnerabilities have been collectively codenamed the Gemini Trifecta by the cybersecurity company. They reside in three distinct components of the Gemini suite - A prompt injection flaw in Gemini Cloud Assist that could allow attackers to exploit cloud-based services and compromise cloud resources by taking advantage of the fact that the tool is capable of summarizing logs pulled dir...
cyber security

How to Remove Otter AI from Your Org

websiteNudge SecuritySaaS Security / Artificial Intelligence
AI notetakers like Otter AI spread fast and introduce a slew of data privacy risks. Learn how to find and remove viral notetakers.
cyber security

[Download Report] State of AI in the SOC 2025: What 280+ Security Leaders Say

websiteProphet SecurityAI SOC Analyst
SOC teams face alert overload. Download this report to learn how SOCs are using AI for faster and smarter triage, investigation, and response.
Stop Alert Chaos: Context Is the Key to Effective Incident Response

Stop Alert Chaos: Context Is the Key to Effective Incident Response

Sep 30, 2025 Artificial Intelligence / Threat Detection
The Problem: Legacy SOCs and Endless Alert Noise Every SOC leader knows the feeling: hundreds of alerts pouring in, dashboards lighting up like a slot machine, analysts scrambling to keep pace. The harder they try to scale people or buy new tools, the faster the chaos multiplies. The problem is not just volume; it is the model itself. Traditional SOCs start with rules, wait for alerts to fire, and then dump raw signals on analysts. By the time someone pieces together what is really happening, the attacker has already moved on, or moved in. It is a broken loop of noise chasing noise. Flipping the Model: Context Over Chaos Instead of drowning in raw events, treat every incoming signal as a potential opening move in a bigger story. Logs from identity systems, endpoints, cloud workloads, and SIEMs do not just land in separate dashboards; they are normalized, connected, and enriched to form a coherent investigation. A brute-force login attempt on its own is easy to dismiss. But when enh...
Evolving Enterprise Defense to Secure the Modern AI Supply Chain

Evolving Enterprise Defense to Secure the Modern AI Supply Chain

Sep 30, 2025 Artificial Intelligence / Data Protection
The world of enterprise technology is undergoing a dramatic shift. Gen-AI adoption is accelerating at an unprecedented pace, and SaaS vendors are embedding powerful LLMs directly into their platforms. Organizations are embracing AI-powered applications across every function, from marketing and development to finance and HR. This transformation unlocks innovation and efficiency, but it also introduces new risks. Enterprises must balance the promise of AI with the responsibility to protect their data, maintain compliance, and secure their expanding application supply chain. The New Risk Landscape With AI adoption comes a new set of challenges: AI Sprawl : Employees adopt AI tools independently, often without security oversight, creating blind spots and unmanaged risks. Supply Chain Vulnerabilities : interapplication integrations between AI tools and enterprise resources expand the attack surface and introduce dependencies and access paths enterprises can't easily control. Data Exp...
EvilAI Malware Masquerades as AI Tools to Infiltrate Global Organizations

EvilAI Malware Masquerades as AI Tools to Infiltrate Global Organizations

Sep 29, 2025 Malware / Artificial Intelligence
Threat actors have been observed using seemingly legitimate artificial intelligence (AI) tools and software to sneakily slip malware for future attacks on organizations worldwide. According to Trend Micro, the campaign is using productivity or AI-enhanced tools to deliver malware targeting various regions, including Europe, the Americas, and the Asia, Middle East, and Africa (AMEA) region. Manufacturing, government, healthcare, technology, and retail are some of the top sectors affected by the attacks, with India, the U.S., France, Italy, Brazil, Germany, the U.K., Norway, Spain, and Canada emerging as the regions with the most infections, indicating a global spread. "This swift, widespread distribution across multiple regions strongly indicates that EvilAI is not an isolated incident but rather an active and evolving campaign currently circulating in the wild," security researchers Jeffrey Francis Bonaobra, Joshua Aquino, Emmanuel Panopio, Emmanuel Roll, Joshua Lijandro ...
The State of AI in the SOC 2025 - Insights from Recent Study 

The State of AI in the SOC 2025 - Insights from Recent Study 

Sep 29, 2025 Artificial Intelligence / Enterprise Security
Security leaders are embracing AI for triage, detection engineering, and threat hunting as alert volumes and burnout hit breaking points. A comprehensive survey of 282 security leaders at companies across industries reveals a stark reality facing modern Security Operations Centers: alert volumes have reached unsustainable levels, forcing teams to leave critical threats uninvestigated. You can download the full report here . The research, conducted primarily among US-based organizations, shows that AI adoption in security operations has shifted from experimental to essential as teams struggle to keep pace with an ever-growing stream of security alerts. The findings paint a picture of an industry at a tipping point, where traditional SOC models are buckling under operational pressure and AI-powered solutions are emerging as the primary path forward. Alert Volume Reaches Breaking Point Security teams are drowning in alerts, with organizations processing an average of 960 alerts per ...
Microsoft Flags AI-Driven Phishing: LLM-Crafted SVG Files Outsmart Email Security

Microsoft Flags AI-Driven Phishing: LLM-Crafted SVG Files Outsmart Email Security

Sep 29, 2025 Email Security / Artificial Intelligence
Microsoft is calling attention to a new phishing campaign primarily aimed at U.S.-based organizations that has likely utilized code generated using large language models (LLMs) to obfuscate payloads and evade security defenses. "Appearing to be aided by a large language model (LLM), the activity obfuscated its behavior within an SVG file, leveraging business terminology and a synthetic structure to disguise its malicious intent," the Microsoft Threat Intelligence team said in an analysis published last week. The activity, detected on August 28, 2025, shows how threat actors are increasingly adopting artificial intelligence (AI) tools into their workflows, often with the goal of crafting more convincing phishing lures, automating malware obfuscation, and generating code that mimics legitimate content. In the attack chain documented by the Windows maker, bad actors have been observed leveraging an already compromised business email account to send phishing messages to stea...
Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection

Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection

Sep 25, 2025 Vulnerability / AI Security
Cybersecurity researchers have disclosed a critical flaw impacting Salesforce Agentforce , a platform for building artificial intelligence (AI) agents, that could allow attackers to potentially exfiltrate sensitive data from its customer relationship management (CRM) tool by means of an indirect prompt injection. The vulnerability has been codenamed ForcedLeak (CVSS score: 9.4) by Noma Security, which discovered and reported the problem on July 28, 2025. It impacts any organization using Salesforce Agentforce with the Web-to-Lead functionality enabled. "This vulnerability demonstrates how AI agents present a fundamentally different and expanded attack surface compared to traditional prompt-response systems," Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News. One of the most severe threats facing generative artificial intelligence (GenAI) systems today is indirect prompt injection , which occurs when malicious instructions are ins...
Researchers Uncover GPT-4-Powered MalTerminal Malware Creating Ransomware, Reverse Shell

Researchers Uncover GPT-4-Powered MalTerminal Malware Creating Ransomware, Reverse Shell

Sep 20, 2025 Malware / Artificial Intelligence
Cybersecurity researchers have discovered what they say is the earliest example known to date of a malware that bakes in Large Language Model (LLM) capabilities. The malware has been codenamed MalTerminal by SentinelOne SentinelLABS research team. The findings were presented at the LABScon 2025 security conference. In a report examining the malicious use of LLMs, the cybersecurity company said AI models are being increasingly used by threat actors for operational support, as well as for embedding them into their tools – an emerging category called LLM-embedded malware that's exemplified by the appearance of LAMEHUG (aka PROMPTSTEAL) and PromptLock . This includes the discovery of a previously reported Windows executable called MalTerminal that uses OpenAI GPT-4 to dynamically generate ransomware code or a reverse shell. There is no evidence to suggest it was ever deployed in the wild, raising the possibility that it could also be a proof-of-concept malware or red team tool. ...
ShadowLeak Zero-Click Flaw Leaks Gmail Data via OpenAI ChatGPT Deep Research Agent

ShadowLeak Zero-Click Flaw Leaks Gmail Data via OpenAI ChatGPT Deep Research Agent

Sep 20, 2025 Artificial Intelligence / Cloud Security
Cybersecurity researchers have disclosed a zero-click flaw in OpenAI ChatGPT's Deep Research agent that could allow an attacker to leak sensitive Gmail inbox data with a single crafted email without any user action. The new class of attack has been codenamed ShadowLeak by Radware. Following responsible disclosure on June 18, 2025, the issue was addressed by OpenAI in early August. "The attack utilizes an indirect prompt injection that can be hidden in email HTML (tiny fonts, white-on-white text, layout tricks) so the user never notices the commands, but the agent still reads and obeys them," security researchers Zvika Babo, Gabi Nakibly, and Maor Uziel said . "Unlike prior research that relied on client-side image rendering to trigger the leak, this attack leaks data directly from OpenAI's cloud infrastructure, making it invisible to local or enterprise defenses." Launched by OpenAI in February 2025, Deep Research is an agentic capability built into ...
TA558 Uses AI-Generated Scripts to Deploy Venom RAT in Brazil Hotel Attacks

TA558 Uses AI-Generated Scripts to Deploy Venom RAT in Brazil Hotel Attacks

Sep 17, 2025 Malware / Artificial Intelligence
The threat actor known as TA558 has been attributed to a fresh set of attacks delivering various remote access trojans (RATs) like Venom RAT to breach hotels in Brazil and Spanish-speaking markets. Russian cybersecurity vendor Kaspersky is tracking the activity, observed in summer 2025, to a cluster it tracks as RevengeHotels. "The threat actors continue to employ phishing emails with invoice themes to deliver Venom RAT implants via JavaScript loaders and PowerShell downloaders," the company said . "A significant portion of the initial infector and downloader code in this campaign appears to be generated by large language model (LLM) agents." The findings demonstrate a new trend among cybercriminal groups to leverage artificial intelligence (AI) to bolster their tradecraft. Known to be active since at least 2015, RevengeHotels has a history of hospitality, hotel, and travel organizations in Latin America with the goal of installing malware on compromised syste...
From Quantum Hacks to AI Defenses – Expert Guide to Building Unbreakable Cyber Resilience

From Quantum Hacks to AI Defenses – Expert Guide to Building Unbreakable Cyber Resilience

Sep 17, 2025 Cyber Resilience / Webinar
Quantum computing and AI working together will bring incredible opportunities. Together, the technologies will help us extend innovation further and faster than ever before. But, imagine the flip side, waking up to news that hackers have used a quantum computer to crack your company's encryption overnight, exposing your most sensitive data, rendering much of it untrustworthy. And with your sensitive data exposed, where does that leave trust from your customers? And the cost to mitigate - if that is even possible with your outdated pre-quantum systems? According to IBM, cyber breaches are already hitting businesses with an average of $4.44 million per incident, and as high as $10.22 million in the US, but with quantum and AI working simultaneously, experts warn it could go much higher. In 2025, nearly two-thirds of organizations see quantum computing as the biggest cybersecurity threat looming in the next 3-5 years, while 93% of security leaders are prepping for daily AI-driven a...
Securing the Agentic Era: Introducing Astrix's AI Agent Control Plane

Securing the Agentic Era: Introducing Astrix's AI Agent Control Plane

Sep 16, 2025 AI Security / Enterprise Security
AI agents are rapidly becoming a core part of the enterprise, being embedded across enterprise workflows, operating with autonomy, and making decisions about which systems to access and how to use them. But as agents grow in power and autonomy, so do the risks and threats.  Recent studies show 80% of companies have already experienced unintended AI agent actions, from unauthorized system access to data leaks. These incidents aren't edge cases. They are the inevitable outcome of deploying AI agents at scale without purpose-built security mechanisms. Traditional IAM wasn't designed for this. Agents move too fast, operate 24/7, while relying on non-human identities (NHIs) to define precisely what they can and can't do. How can organizations possibly secure what they cannot see or control? To address this challenge, a new approach is needed—one that enables secure-by-design AI agent deployment across the enterprise. Enter: Astrix's Agent Control Plane (ACP) Astrix's AI Agent Cont...
AI-Powered Villager Pen Testing Tool Hits 11,000 PyPI Downloads Amid Abuse Concerns

AI-Powered Villager Pen Testing Tool Hits 11,000 PyPI Downloads Amid Abuse Concerns

Sep 15, 2025 Artificial Intelligence / Offensive Security
A new artificial intelligence (AI)-powered penetration testing tool linked to a China-based company has attracted nearly 11,000 downloads on the Python Package Index (PyPI) repository, raising concerns that it could be repurposed by cybercriminals for malicious purposes. Dubbed Villager, the framework is assessed to be the work of Cyberspike, which has positioned the tools as a red teaming solution to automate testing workflows. The package was first uploaded to PyPI in late July 2025 by a user named stupidfish001, a former capture the flag (CTF) player for the Chinese HSCSEC team. "The rapid, public availability and automation capabilities create a realistic risk that Villager will follow the Cobalt Strike trajectory: commercially or legitimately developed tooling becoming widely adopted by threat actors for malicious campaigns," Straiker researchers Dan Regalado and Amanda Rousseau said in a report shared with The Hacker News. The emergence of Villager comes shortly ...
Expert Insights Articles Videos
Cybersecurity Resources