
Search results for AI vulnerability discovery | Breaking Cybersecurity News | The Hacker News

Artificial Intelligence – What's all the fuss?

Apr 17, 2025 Artificial Intelligence / Threat Intelligence
Talking about AI: Definitions

Artificial Intelligence (AI) — AI refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human intelligence, such as decision-making and problem-solving. AI is the broadest concept in this field, encompassing various technologies and methodologies, including Machine Learning (ML) and Deep Learning.

Machine Learning (ML) — ML is a subset of AI that focuses on developing algorithms and statistical models that allow machines to learn from and make predictions or decisions based on data. ML is a specific approach within AI, emphasizing data-driven learning and improvement over time.

Deep Learning (DL) — Deep Learning is a specialized subset of ML that uses neural networks with multiple layers to analyze and interpret complex data patterns. This advanced form of ML is particularly effective for tasks such as image and speech recognition, making it a crucial component of many AI applications. Larg...
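The ML definition above — a model that improves its predictions from data rather than from hand-coded rules — can be shown with a toy sketch. This is purely illustrative (not from the article) and uses nothing beyond the Python standard library:

```python
# Toy illustration of "learning from data": fit y = w*x + b by
# ordinary least squares instead of hand-coding the rule.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates for slope and intercept.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# "Training data" generated by the hidden rule y = 2x + 1; the model
# recovers the rule from examples alone.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(w, b)  # → 2.0 1.0
```

The same learn-from-examples loop, scaled up to multi-layer neural networks, is what the DL definition describes.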
The State of Trusted Open Source Report

Apr 02, 2026 DevSecOps / Artificial Intelligence
In December 2025, we shared the first-ever State of Trusted Open Source report, featuring insights from our product data and customer base on open source consumption across our catalog of container image projects, versions, images, language libraries, and builds. These insights shed light on what teams pull, deploy, and maintain day to day, alongside the vulnerabilities and remediation realities these projects face. Fast forward a few months, and software development is accelerating at a pace that most didn't see coming. AI is increasingly embedded across the development lifecycle, from code generation to infrastructure automation, as models become more advanced and better at meeting the demands of modern work. This shift is expanding what teams can build and how quickly they can ship. It is also reshaping the security landscape. Before diving into the numbers, it's important to explain how we perform this analysis. We examined over 2,20...
Mythos Changed the Math on Vulnerability Discovery. Most Teams Aren't Ready for the Remediation Side

Apr 27, 2026 Artificial Intelligence / Enterprise Security
Anthropic’s Claude Mythos Preview has dominated security discussions since its April 7 announcement. Early reporting describes a powerful cybersecurity-focused AI system capable of identifying vulnerabilities at scale, raising serious questions about how quickly organizations can validate, prioritize, and remediate what it finds. The debate that followed has mostly focused on the right questions: Is this a step-change or an incremental advance? Does restricting access to Microsoft, Apple, AWS, and JPMorgan actually reduce risk, or does it just concentrate defensive advantage among the already-well-defended? What happens when adversaries—state actors, criminal enterprises—build equivalent capability? These are important. But there's a quieter operational problem that's getting less airtime, and it's the one that will actually determine whether most organizations survive this shift.

The Discovery-to-Remediation Gap

The Mythos announcement, and the broader AI security...
Google AI "Big Sleep" Stops Exploitation of Critical SQLite Vulnerability Before Hackers Act

Jul 16, 2025 AI Security / Vulnerability
Google on Tuesday revealed that its large language model (LLM)-assisted vulnerability discovery framework identified a security flaw in the SQLite open-source database engine before it could have been exploited in the wild. The vulnerability, tracked as CVE-2025-6965 (CVSS score: 7.2), is a memory corruption flaw affecting all versions prior to 3.50.2. It was discovered by Big Sleep, an artificial intelligence (AI) agent that was launched by Google last year as part of a collaboration between DeepMind and Google Project Zero. "An attacker who can inject arbitrary SQL statements into an application might be able to cause an integer overflow resulting in read off the end of an array," SQLite project maintainers said in an advisory. The tech giant described CVE-2025-6965 as a critical security issue that was "known only to threat actors and was at risk of being exploited." Google did not reveal who the threat actors were. "Through the combination of threa...
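The bug class the advisory describes — an integer overflow that defeats a bounds check and allows a read off the end of an array — can be sketched generically. The snippet below is a hypothetical illustration of that class, not SQLite's actual code; it emulates C's unsigned 32-bit arithmetic in Python to show why the check fails:

```python
# Hypothetical sketch of the bug class (NOT SQLite's actual code):
# a bounds check performed in 32-bit arithmetic, where an
# attacker-chosen count wraps index + count past 2**32 and
# defeats the comparison that was meant to reject the request.
MASK32 = 0xFFFFFFFF

def bounds_check(buf_len, index, count):
    # Emulates unsigned 32-bit addition as C code would perform it.
    return ((index + count) & MASK32) <= buf_len

# Honest out-of-range request: correctly rejected (101 > 4).
print(bounds_check(4, 100, 1))            # → False
# Wrapping count: (100 + (2**32 - 100)) mod 2**32 == 0, and 0 <= 4,
# so the check wrongly reports "in bounds" for a 4-element array,
# and the subsequent read would run off the end of the buffer.
print(bounds_check(4, 100, 2**32 - 100))  # → True
```

The standard fix is to rewrite the check so it cannot wrap, e.g. `count <= buf_len - index` after first verifying `index <= buf_len`.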
Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign

Nov 14, 2025 Cyber Espionage / AI Security
State-sponsored threat actors from China used artificial intelligence (AI) technology developed by Anthropic to orchestrate automated cyber attacks as part of a "highly sophisticated espionage campaign" in mid-September 2025. "The attackers used AI's 'agentic' capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves," the AI upstart said. The activity is assessed to have manipulated Claude Code, Anthropic's AI coding tool, to attempt to break into about 30 global targets spanning large tech companies, financial institutions, chemical manufacturing companies, and government agencies. A subset of these intrusions succeeded. Anthropic has since banned the relevant accounts and enforced defensive mechanisms to flag such attacks. The campaign, GTG-1002, marks the first time a threat actor has leveraged AI to conduct a "large-scale cyber attack" without major human intervention an...
Pentesters: Is AI Coming for Your Role?

Mar 12, 2025 Automation / Penetration Testing
We’ve been hearing the same story for years: AI is coming for your job. In fact, in 2017, McKinsey published a report, Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation, predicting that by 2030, 375 million workers would need to find new jobs or risk being displaced by AI and automation. Cue the anxiety. There have been ongoing whispers about which roles would be impacted, and pentesting has recently come into question. With AI now able to automate tasks such as vulnerability scans and network scans—among other things—and with platforms like PlexTrac adding AI capabilities to cut back on the manual effort, will pentesters be out of a job? Let’s start with some optimism. This year, McKinsey retracted its former prediction that 375 million workers would be displaced by AI, lowering the estimate to roughly 92 million workers. The article continued to ease concerns, stating that although some jobs may become obsolete, it’s more likely that jobs will simply unde...
Project Glasswing Proved AI Can Find the Bugs. Who's Going to Fix Them?

Apr 23, 2026 Artificial Intelligence / Exposure Management
Last week, Anthropic announced Project Glasswing, an AI model so effective at discovering software vulnerabilities that the company took the extraordinary step of postponing its public release. Instead, it has given access to Apple, Microsoft, Google, Amazon, and a coalition of others to find and patch bugs before adversaries can. Mythos Preview, the model that led to Project Glasswing, found vulnerabilities across every major operating system and browser. Some of these bugs had survived decades of human audits, aggressive fuzzing, and open-source scrutiny. One had been sitting for 27 years in OpenBSD, generally considered to be one of the world’s most secure operating systems. It's tempting to file this under "AI lab says their AI is too dangerous," the same playbook OpenAI ran with GPT-2. Not so fast; there's a material difference this time. Mythos didn't just find individual CVEs. It chained four independent bugs into an exploit sequen...
Google's New AI Doesn't Just Find Vulnerabilities — It Rewrites Code to Patch Them

Oct 07, 2025 Artificial Intelligence / Software Security
Google's DeepMind division on Monday announced an artificial intelligence (AI)-powered agent called CodeMender that automatically detects, patches, and rewrites vulnerable code to prevent future exploits. The agent adds to the company's ongoing efforts to improve AI-powered vulnerability discovery, such as Big Sleep and OSS-Fuzz. DeepMind said the AI agent is designed to be both reactive and proactive, fixing new vulnerabilities as soon as they are spotted as well as rewriting and securing existing codebases with the aim of eliminating whole classes of vulnerabilities in the process. "By automatically creating and applying high-quality security patches, CodeMender’s AI-powered agent helps developers and maintainers focus on what they do best — building good software," DeepMind researchers Raluca Ada Popa and Four Flynn said. "Over the past six months that we’ve been building CodeMender, we have already upstreamed 72 security fixes to open source proje...
OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability

Mar 30, 2026 Vulnerability / Enterprise Security
A previously unknown vulnerability in OpenAI's ChatGPT allowed sensitive conversation data to be exfiltrated without user knowledge or consent, according to new findings from Check Point. "A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content," the cybersecurity company said in a report published today. "A backdoored GPT could abuse the same weakness to obtain access to user data without the user's awareness or consent." Following responsible disclosure, OpenAI addressed the issue on February 20, 2026. There is no evidence that the issue was ever exploited in a malicious context. While ChatGPT is built with various guardrails to prevent unauthorized data sharing or the generation of direct outbound network requests, the newly discovered vulnerability bypasses these safeguards entirely by exploiting a side channel originating from the Linux runtime ...
Threat Actors Weaponize HexStrike AI to Exploit Citrix Flaws Within a Week of Disclosure

Sep 03, 2025 Artificial Intelligence / Vulnerability
Threat actors are attempting to leverage a newly released artificial intelligence (AI) offensive security tool called HexStrike AI to exploit recently disclosed security flaws. HexStrike AI, according to its website, is pitched as an AI‑driven security platform that automates reconnaissance and vulnerability discovery with the aim of accelerating authorized red teaming operations, bug bounty hunting, and capture the flag (CTF) challenges. Per information shared on its GitHub repository, the open-source platform integrates with over 150 security tools to facilitate network reconnaissance, web application security testing, reverse engineering, and cloud security. It also supports dozens of specialized AI agents that are fine-tuned for vulnerability intelligence, exploit development, attack chain discovery, and error handling. But according to a report from Check Point, threat actors are trying their hand at the tool to gain an adversarial advantage, attempting to weaponize it to ...
SOC Analysts - Reimagining Their Role Using AI

Jan 30, 2025 AI Security / SOC Automation
The job of a SOC analyst has never been easy. Faced with an overwhelming flood of daily alerts, analysts (and sometimes IT teams doubling as SecOps) must triage thousands of security alerts—often false positives—just to identify a handful of real threats. This relentless, 24/7 work leads to alert fatigue, desensitization, and an increased risk of missing critical security incidents. Studies show that 70% of SOC analysts experience severe stress, and 65% consider leaving their jobs within a year. This makes retention a major challenge for security teams, especially in light of the existing shortage of skilled security analysts. On the operational side, analysts spend more time on repetitive, manual tasks like investigating alerts and resolving and documenting incidents than they do on proactive security measures. Security teams struggle with configuring and maintaining SOAR playbooks as the cyber landscape rapidly changes. To top this all off, tool overload and siloed ...
The AI Arms Race – Why Unified Exposure Management Is Becoming a Boardroom Priority

Mar 31, 2026
The cybersecurity landscape is accelerating at an unprecedented rate. What is emerging is not simply a rise in the number of vulnerabilities or tools, but a dramatic increase in speed. Speed of attack, speed of exploitation, and speed of change across modern environments. This is the defining challenge of the new era of digital warfare: the weaponization of Artificial Intelligence. Threat actors, from nation-states to sophisticated criminal enterprises, are no longer just attacking. They are automating the entire kill chain. In this AI arms race, traditional defensive strategies are no longer sufficient. Periodic point-in-time assessments, manual triage, and human-speed response were already under pressure in fast-moving environments. Against AI-enabled adversaries, they are increasingly inadequate. Solutions like PlexTrac are built to help organizations move beyond fragmented findings, disconnected tools, and slow manual workflows by unifying exposure management, remediation, and...