#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Salesforce Security Handbook

Search results for bypass-chat-gpt | Breaking Cybersecurity News | The Hacker News

AkiraBot Targets 420,000 Sites with OpenAI-Generated Spam, Bypassing CAPTCHA Protections

AkiraBot Targets 420,000 Sites with OpenAI-Generated Spam, Bypassing CAPTCHA Protections

Apr 10, 2025 Website Security / Cybercrime
Cybersecurity researchers have disclosed details of an artificial intelligence (AI) powered platform called AkiraBot that's used to spam website chats, comment sections, and contact forms to promote dubious search engine optimization (SEO) services such as Akira and ServicewrapGO. "AkiraBot has targeted more than 400,000 websites and successfully spammed at least 80,000 websites since September 2024," SentinelOne researchers Alex Delamotte and Jim Walter said in a report shared with The Hacker News. "The bot uses OpenAI to generate custom outreach messages based on the purpose of the website." Targets of the activity include contact forms and chat widgets present in small to medium-sized business websites, with the framework sharing spam content generated using OpenAI's large language models (LLMs). What makes the "sprawling" Python-based tool stand apart is its ability to craft content such that it can bypass spam filters. It's believe...
New Reports Uncover Jailbreaks, Unsafe Code, and Data Theft Risks in Leading AI Systems

New Reports Uncover Jailbreaks, Unsafe Code, and Data Theft Risks in Leading AI Systems

Apr 29, 2025 Vulnerability / Artificial Intelligence
Various generative artificial intelligence (GenAI) services have been found vulnerable to two types of jailbreak attacks that make it possible to produce illicit or dangerous content. The first of the two techniques, codenamed Inception, instructs an AI tool to imagine a fictitious scenario, which can then be adapted into a second scenario within the first one where there exists no safety guardrails . "Continued prompting to the AI within the second scenarios context can result in bypass of safety guardrails and allow the generation of malicious content," the CERT Coordination Center (CERT/CC) said in an advisory released last week. The second jailbreak is realized by prompting the AI for information on how not to reply to a specific request.  "The AI can then be further prompted with requests to respond as normal, and the attacker can then pivot back and forth between illicit questions that bypass safety guardrails and normal prompts," CERT/CC added. Success...
Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data

Nov 05, 2025 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI's ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users' memories and chat histories without their knowledge. The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI's GPT-4o and GPT-5 models. OpenAI has since addressed some of them .  These issues expose the AI system to indirect prompt injection attacks , allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News. The identified shortcomings are listed below - Indirect prompt injection vulnerability via trusted sites in Browsing Context, which involves asking ChatGPT to summarize the contents of web pages with malicious instructions added...
cyber security

2025 Cloud Security Risk Report

websiteSentinelOneEnterprise Security / Cloud Security
Learn 5 key risks to cloud security such as cloud credential theft, lateral movements, AI services, and more.
cyber security

Traditional Firewalls Are Obsolete in the AI Era

websiteZscalerZero Trust / Cloud Security
It's time for a new security approach that removes your attack surface so you can innovate with AI.
ThreatsDay Bulletin: Stealth Loaders, AI Chatbot Flaws AI Exploits, Docker Hack, and 15 More Stories

ThreatsDay Bulletin: Stealth Loaders, AI Chatbot Flaws AI Exploits, Docker Hack, and 15 More Stories

Dec 25, 2025 Cybersecurity / Hacking News
It's getting harder to tell where normal tech ends and malicious intent begins. Attackers are no longer just breaking in — they're blending in, hijacking everyday tools, trusted apps, and even AI assistants. What used to feel like clear-cut "hacker stories" now looks more like a mirror of the systems we all use. This week's findings show a pattern: precision, patience, and persuasion. The newest campaigns don't shout for attention — they whisper through familiar interfaces, fake updates, and polished code. The danger isn't just in what's being exploited, but in how ordinary it all looks. ThreatsDay pulls these threads together — from corporate networks to consumer tech — revealing how quiet manipulation and automation are reshaping the threat landscape. It's a reminder that the future of cybersecurity won't hinge on bigger walls, but on sharper awareness. Open-source tool exploited Abuse of Nezha for Post-Exploitation Bad actors are le...
Researchers Uncover GPT-4-Powered MalTerminal Malware Creating Ransomware, Reverse Shell

Researchers Uncover GPT-4-Powered MalTerminal Malware Creating Ransomware, Reverse Shell

Sep 20, 2025 Malware / Artificial Intelligence
Cybersecurity researchers have discovered what they say is the earliest example known to date of a malware that bakes in Large Language Model (LLM) capabilities. The malware has been codenamed MalTerminal by SentinelOne SentinelLABS research team. The findings were presented at the LABScon 2025 security conference. In a report examining the malicious use of LLMs, the cybersecurity company said AI models are being increasingly used by threat actors for operational support, as well as for embedding them into their tools – an emerging category called LLM-embedded malware that's exemplified by the appearance of LAMEHUG (aka PROMPTSTEAL) and PromptLock . This includes the discovery of a previously reported Windows executable called MalTerminal that uses OpenAI GPT-4 to dynamically generate ransomware code or a reverse shell. There is no evidence to suggest it was ever deployed in the wild, raising the possibility that it could also be a proof-of-concept malware or red team tool. ...
⚡ Weekly Recap: Fortinet Exploited, China's AI Hacks, PhaaS Empire Falls & More

⚡ Weekly Recap: Fortinet Exploited, China's AI Hacks, PhaaS Empire Falls & More

Nov 17, 2025 Cybersecurity / Hacking News
This week showed just how fast things can go wrong when no one's watching. Some attacks were silent and sneaky. Others used tools we trust every day — like AI, VPNs, or app stores — to cause damage without setting off alarms. It's not just about hacking anymore. Criminals are building systems to make money, spy, or spread malware like it's a business. And in some cases, they're using the same apps and services that businesses rely on — flipping the script without anyone noticing at first. The scary part? Some threats weren't even bugs — just clever use of features we all take for granted. And by the time people figured it out, the damage was done. Let's look at what really happened, why it matters, and what we should all be thinking about now. ⚡ Threat of the Week Silently Patched Fortinet Flaw Comes Under Attack — A vulnerability that was patched by Fortinet in FortiWeb Web Application Firewall (WAF) has been exploited in the wild since early October 2025 by threat actors to c...
Dozens of Chrome Extensions Hacked, Exposing Millions of Users to Data Theft

Dozens of Chrome Extensions Hacked, Exposing Millions of Users to Data Theft

Dec 29, 2024 Endpoint Protection / Browser Security
A new attack campaign has targeted known Chrome browser extensions, leading to at least 35 extensions being compromised and exposing over 2.6 million users to data exposure and credential theft. The attack targeted publishers of browser extensions on the Chrome Web Store via a phishing campaign and used their access permissions to insert malicious code into legitimate extensions in order to steal cookies and user access tokens. The first company to shed light the campaign was cybersecurity firm Cyberhaven, one of whose employees was targeted by a phishing attack on December 24, allowing the threat actors to publish a malicious version of the extension. On December 27, Cyberhaven disclosed that a threat actor compromised its browser extension and injected malicious code to communicate with an external command-and-control (C&C) server located on the domain cyberhavenext[.]pro, download additional configuration files, and exfiltrate user data. The phishing email, which purported...
⚡ Weekly Recap: Hot CVEs, npm Worm Returns, Firefox RCE, M365 Email Raid & More

⚡ Weekly Recap: Hot CVEs, npm Worm Returns, Firefox RCE, M365 Email Raid & More

Dec 01, 2025 Hacking News / Cybersecurity
Hackers aren't kicking down the door anymore. They just use the same tools we use every day — code packages, cloud accounts, email, chat, phones, and "trusted" partners — and turn them against us. One bad download can leak your keys. One weak vendor can expose many customers at once. One guest invite, one link on a phone, one bug in a common tool, and suddenly your mail, chats, repos, and servers are in play. Every story below is a reminder that your "safe" tools might be the real weak spot. ⚡ Threat of the Week Shai-Hulud Returns with More Aggression — The npm registry was targeted a second time by a self-replicating worm that went by the moniker "Sha1-Hulud: The Second Coming," affecting over 800 packages and 27,000 GitHub repositories. Like in the previous iteration, the main objective was to steal sensitive data like API keys, cloud credentials, and npm and GitHub authentication information, and facilitate deeper supply chain compromise in a worm-like fashion. Th...
Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval

Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval

Aug 05, 2025 AI Security / MCP Protocol
Cybersecurity researchers have disclosed a high-severity security flaw in the artificial intelligence (AI)-powered code editor Cursor that could result in remote code execution. The vulnerability, tracked as CVE-2025-54136 (CVSS score: 7.2), has been codenamed MCPoison by Check Point Research, owing to the fact that it exploits a quirk in the way the software handles modifications to Model Context Protocol (MCP) server configurations. "A vulnerability in Cursor AI allows an attacker to achieve remote and persistent code execution by modifying an already trusted MCP configuration file inside a shared GitHub repository or editing the file locally on the target's machine," Cursor said in an advisory released last week. "Once a collaborator accepts a harmless MCP, the attacker can silently swap it for a malicious command (e.g., calc.exe) without triggering any warning or re-prompt." MCP is an open-standard developed by Anthropic that allows large language mode...
Expert Insights Articles Videos
Cybersecurity Resources