#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Salesforce Security Handbook

Threat Research | Breaking Cybersecurity News | The Hacker News

Category — Threat Research
Researchers Uncover 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

Researchers Uncover 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

Dec 06, 2025 AI Security / Vulnerability
Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs) that combine prompt injection primitives with legitimate features to achieve data exfiltration and remote code execution. The security shortcomings have been collectively named IDEsaster by security researcher Ari Marzouk (MaccariTA). They affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline, among others. Of these, 24 have been assigned CVE identifiers. "I think the fact that multiple universal attack chains affected each and every AI IDE tested is the most surprising finding of this research," Marzouk told The Hacker News. "All AI IDEs (and coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model. They treat their features as inherently safe because they've been there for years. However, once you add A...
Zero-Click Agentic Browser Attack Can Delete Entire Google Drive Using Crafted Emails

Zero-Click Agentic Browser Attack Can Delete Entire Google Drive Using Crafted Emails

Dec 05, 2025 Email Security / Threat Research
A new agentic browser attack targeting Perplexity's Comet browser that's capable of turning a seemingly innocuous email into a destructive action that wipes a user's entire Google Drive contents, findings from Straiker STAR Labs show. The zero-click Google Drive Wiper technique hinges on connecting the browser to services like Gmail and Google Drive to automate routine tasks by granting them access to read emails, as well as browse files and folders, and perform actions like moving, renaming, or deleting content. For instance, a prompt issued by a benign user might look like this: "Please check my email and complete all my recent organization tasks." This will cause the browser agent to search the inbox for relevant messages and perform the necessary actions. "This behavior reflects excessive agency in LLM-powered assistants where the LLM performs actions that go far beyond the user's explicit request," security researcher Amanda Rousseau said in ...
Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Nov 05, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI's ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users' memories and chat histories without their knowledge. The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI's GPT-4o and GPT-5 models. OpenAI has since addressed some of them .  These issues expose the AI system to indirect prompt injection attacks , allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News. The identified shortcomings are listed below - Indirect prompt injection vulnerability via trusted sites in Browsing Context, which involves asking ChatGPT to summarize the contents of web pages with malicious instructions added...
cyber security

The 2026 CISO Budget Benchmark

websiteWizEnterprise Security / Cloud Security
See how 300+ CISOs are planning 2026 budgets: top trends in AI, cloud, staffing, and tool consolidation shaping next year's security priorities.
cyber security

2025 Cloud Security Survey Report

websiteSentinelOneCloud Security / Identity Protection
Learn from 400+ security leaders and practitioners to get the latest insights and trends on cloud security
Wiz Uncovers Critical Access Bypass Flaw in AI-Powered Vibe Coding Platform Base44

Wiz Uncovers Critical Access Bypass Flaw in AI-Powered Vibe Coding Platform Base44

Jul 29, 2025 LLM Security / Vulnerability
Cybersecurity researchers have disclosed a now-patched critical security flaw in a popular vibe coding platform called Base44 that could allow unauthorized access to private applications built by its users. "The vulnerability we discovered was remarkably simple to exploit -- by providing only a non-secret 'app_id' value to undocumented registration and email verification endpoints, an attacker could have created a verified account for private applications on their platform," cloud security firm Wiz said in a report shared with The Hacker News. A net result of this issue is that it bypasses all authentication controls, including Single Sign-On (SSO) protections, granting full access to all the private applications and data contained within them. Following responsible disclosure on July 9, 2025, an official fix was rolled out by Wix, which owns Base44, within 24 hours. There is no evidence that the issue was ever maliciously exploited in the wild. While vibe codin...
Researchers Reveal 'Deceptive Delight' Method to Jailbreak AI Models

Researchers Reveal 'Deceptive Delight' Method to Jailbreak AI Models

Oct 23, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have shed light on a new adversarial technique that could be used to jailbreak large language models (LLMs) during the course of an interactive conversation by sneaking in an undesirable instruction between benign ones. The approach has been codenamed Deceptive Delight by Palo Alto Networks Unit 42, which described it as both simple and effective, achieving an average attack success rate (ASR) of 64.6% within three interaction turns. "Deceptive Delight is a multi-turn technique that engages large language models (LLM) in an interactive conversation, gradually bypassing their safety guardrails and eliciting them to generate unsafe or harmful content," Unit 42's Jay Chen and Royce Lu said. It's also a little different from multi-turn jailbreak (aka many-shot jailbreak) methods like Crescendo , wherein unsafe or restricted topics are sandwiched between innocuous instructions, as opposed to gradually leading the model to produce harmful outpu...
Expert Insights Articles Videos
Cybersecurity Resources