#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Salesforce Security Handbook

Search results for code jailbreak 2025 | Breaking Cybersecurity News | The Hacker News

Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval

Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval

Aug 05, 2025 AI Security / MCP Protocol
Cybersecurity researchers have disclosed a high-severity security flaw in the artificial intelligence (AI)-powered code editor Cursor that could result in remote code execution. The vulnerability, tracked as CVE-2025-54136 (CVSS score: 7.2), has been codenamed MCPoison by Check Point Research, owing to the fact that it exploits a quirk in the way the software handles modifications to Model Context Protocol (MCP) server configurations. "A vulnerability in Cursor AI allows an attacker to achieve remote and persistent code execution by modifying an already trusted MCP configuration file inside a shared GitHub repository or editing the file locally on the target's machine," Cursor said in an advisory released last week. "Once a collaborator accepts a harmless MCP, the attacker can silently swap it for a malicious command (e.g., calc.exe) without triggering any warning or re-prompt." MCP is an open-standard developed by Anthropic that allows large language mode...
⚡ Weekly Recap: BadCam Attack, WinRAR 0-Day, EDR Killer, NVIDIA Flaws, Ransomware Attacks & More

⚡ Weekly Recap: BadCam Attack, WinRAR 0-Day, EDR Killer, NVIDIA Flaws, Ransomware Attacks & More

Aug 11, 2025
This week, cyber attackers are moving quickly, and businesses need to stay alert. They're finding new weaknesses in popular software and coming up with clever ways to get around security. Even one unpatched flaw could let attackers in, leading to data theft or even taking control of your systems. The clock is ticking—if defenses aren't updated regularly, it could lead to serious damage. The message is clear: don't wait for an attack to happen. Take action now to protect your business. Here's a look at some of the biggest stories in cybersecurity this week: from new flaws in WinRAR and NVIDIA Triton to advanced attack techniques you should know about. Let's get into the details. ⚡ Threat of the Week Trend Micro Warns of Actively Exploited 0-Day — Trend Micro has released temporary mitigations to address critical security flaws in on-premise versions of Apex One Management Console that it said have been exploited in the wild. The vulnerabilities (CVE-2025-54948 and CVE-2025-54987),...
GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts

GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts

May 23, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have discovered an indirect prompt injection flaw in GitLab's artificial intelligence (AI) assistant Duo that could have allowed attackers to steal source code and inject untrusted HTML into its responses, which could then be used to direct victims to malicious websites. GitLab Duo is an artificial intelligence (AI)-powered coding assistant that enables users to write, review, and edit code. Built using Anthropic's Claude models, the service was first launched in June 2023. But as Legit Security found , GitLab Duo Chat has been susceptible to an indirect prompt injection flaw that permits attackers to "steal source code from private projects, manipulate code suggestions shown to other users, and even exfiltrate confidential, undisclosed zero-day vulnerabilities." Prompt injection refers to a class of vulnerabilities common in AI systems that enable threat actors to weaponize large language models (LLMs) to manipulate responses to user...
cyber security

5 Critical Google Workspace Security Settings You Could Be Missing

websiteNudge SecurityGoogle Workspace / SaaS Security
Learn the essential steps you can take today to improve your Google Workspace security posture.
cyber security

Explore the MDR Advantage: From Reactive to Resilient Security Posture

websiteESETEndpoint Protection / Threat Detection
ESET MDR delivers proactive defense, supercharged by AI-driven detection, robust encryption, and 24/7 support.
⚡ THN Weekly Recap: Top Cybersecurity Threats, Tools and Tips [3 February]

⚡ THN Weekly Recap: Top Cybersecurity Threats, Tools and Tips [3 February]

Feb 03, 2025 Cybersecurity / Recap
This week, our news radar shows that every new tech idea comes with its own challenges. A hot AI tool is under close watch, law enforcement is shutting down online spots that help cybercriminals, and teams are busy fixing software bugs that could let attackers in. From better locks on our devices to stopping sneaky tricks online, simple steps are making a big difference.  Let's take a closer look at how these efforts are shaping a safer digital world. ⚡ Threat of the Week DeepSeek's Popularity Invites Scrutiny — The overnight popularity of DeepSeek, an artificial intelligence (AI) platform originating from China, has led to extensive scrutiny of its models, with several analyses finding ways to jailbreak its system and produce malicious or prohibited content. While jailbreaks and prompt injections are a persistent concern in mainstream AI products, the findings also show that the model lacks enough protections to prevent potential abuse by malicious actors . The AI chatbot ha...
Taiwan Bans DeepSeek AI Over National Security Concerns, Citing Data Leakage Risks

Taiwan Bans DeepSeek AI Over National Security Concerns, Citing Data Leakage Risks

Feb 04, 2025 Artificial Intelligence / Data Privacy
Taiwan has become the latest country to ban government agencies from using Chinese startup DeepSeek's Artificial Intelligence (AI) platform, citing security risks. "Government agencies and critical infrastructure should not use DeepSeek, because it endangers national information security," according to a statement released by Taiwan's Ministry of Digital Affairs, per Radio Free Asia . "DeepSeek AI service is a Chinese product. Its operation involves cross-border transmission, and information leakage and other information security concerns." DeepSeek's Chinese origins have prompted authorities from various countries to look into the service's use of personal data. Last week, it was blocked in Italy, citing a lack of information regarding its data handling practices. Several companies have also prohibited access to the chatbot over similar risks. The chatbot has captured much of the mainstream attention over the past few weeks for the fact tha...
Italy Bans Chinese DeepSeek AI Over Data Privacy and Ethical Concerns

Italy Bans Chinese DeepSeek AI Over Data Privacy and Ethical Concerns

Jan 31, 2025 AI Ethics / Machine Learning
Italy's data protection watchdog has blocked Chinese artificial intelligence (AI) firm DeepSeek's service within the country, citing a lack of information on its use of users' personal data. The development comes days after the authority, the Garante, sent a series of questions to DeepSeek, asking about its data handling practices and where it obtained its training data. In particular, it wanted to know what personal data is collected by its web platform and mobile app, from which sources, for what purposes, on what legal basis, and whether it is stored in China. In a statement issued January 30, 2025, the Garante said it arrived at the decision after DeepSeek provided information that it said was "completely insufficient." The entities behind the service, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, have "declared that they do not operate in Italy and that European legislation does not apply to them," it...
Researchers Uncover GPT-5 Jailbreak and Zero-Click AI Agent Attacks Exposing Cloud and IoT Systems

Researchers Uncover GPT-5 Jailbreak and Zero-Click AI Agent Attacks Exposing Cloud and IoT Systems

Aug 09, 2025 Generative AI / Vulnerability
Cybersecurity researchers have uncovered a jailbreak technique to bypass ethical guardrails erected by OpenAI in its latest large language model (LLM) GPT-5 and produce illicit instructions. Generative artificial intelligence (AI) security platform NeuralTrust said it combined a known technique called Echo Chamber with narrative-driven steering to trick the model into producing undesirable responses. "We use Echo Chamber to seed and reinforce a subtly poisonous conversational context, then guide the model with low-salience storytelling that avoids explicit intent signaling," security researcher Martí Jordà said . "This combination nudges the model toward the objective while minimizing triggerable refusal cues." Echo Chamber is a jailbreak approach that was detailed by the company back in June 2025 as a way to deceive an LLM into generating responses to prohibited topics using indirect references, semantic steering, and multi-step inference. In recent weeks, the...
c
Expert Insights Articles Videos
Cybersecurity Resources