
Search results for GPT-in-the-Loop | The Hacker News

Researchers Uncover GPT-5 Jailbreak and Zero-Click AI Agent Attacks Exposing Cloud and IoT Systems

Aug 09, 2025 Generative AI / Vulnerability
Cybersecurity researchers have uncovered a jailbreak technique to bypass the ethical guardrails erected by OpenAI in its latest large language model (LLM), GPT-5, and produce illicit instructions. Generative artificial intelligence (AI) security platform NeuralTrust said it combined a known technique called Echo Chamber with narrative-driven steering to trick the model into producing undesirable responses. "We use Echo Chamber to seed and reinforce a subtly poisonous conversational context, then guide the model with low-salience storytelling that avoids explicit intent signaling," security researcher Martí Jordà said. "This combination nudges the model toward the objective while minimizing triggerable refusal cues." Echo Chamber is a jailbreak approach the company detailed back in June 2025 as a way to deceive an LLM into generating responses to prohibited topics using indirect references, semantic steering, and multi-step inference. In recent weeks, the...
ThreatsDay Bulletin: Stealth Loaders, AI Chatbot Flaws, AI Exploits, Docker Hack, and 15 More Stories

Dec 25, 2025 Cybersecurity / Hacking News
It's getting harder to tell where normal tech ends and malicious intent begins. Attackers are no longer just breaking in — they're blending in, hijacking everyday tools, trusted apps, and even AI assistants. What used to feel like clear-cut "hacker stories" now looks more like a mirror of the systems we all use. This week's findings show a pattern: precision, patience, and persuasion. The newest campaigns don't shout for attention — they whisper through familiar interfaces, fake updates, and polished code. The danger isn't just in what's being exploited, but in how ordinary it all looks. ThreatsDay pulls these threads together — from corporate networks to consumer tech — revealing how quiet manipulation and automation are reshaping the threat landscape. It's a reminder that the future of cybersecurity won't hinge on bigger walls, but on sharper awareness.
Open-source tool exploited: Abuse of Nezha for Post-Exploitation. Bad actors are le...
Someone Created the First AI-Powered Ransomware Using OpenAI's gpt-oss:20b Model

Aug 27, 2025 Ransomware / Artificial Intelligence
Cybersecurity company ESET has disclosed that it discovered an artificial intelligence (AI)-powered ransomware variant codenamed PromptLock. Written in Golang, the newly identified strain uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts in real time. The open-weight language model was released by OpenAI earlier this month. "PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption," ESET said. "These Lua scripts are cross-platform compatible, functioning on Windows, Linux, and macOS." The ransomware code also embeds instructions to craft a custom note based on the "files affected" and whether the infected machine is a personal computer, a company server, or a power distribution controller. It's currently not known who is behind the malware, but ESET told The Hacker News that PromptLoc arti...
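ESET's report describes PromptLock driving a locally hosted gpt-oss:20b model through the Ollama API to produce scripts at run time. As a hedged, benign sketch of what such a local call looks like in Go (this is generic usage of Ollama's /api/generate endpoint with an illustrative prompt, not PromptLock's code), the pattern is roughly:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Request body for Ollama's /api/generate endpoint.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// Only the field we need from the non-streaming reply.
type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// Benign illustrative prompt; PromptLock's actual prompts are hard-coded and malicious.
	body, _ := json.Marshal(generateRequest{
		Model:  "gpt-oss:20b",
		Prompt: "Write a Lua function that returns the current date as a string.",
		Stream: false,
	})

	// Ollama serves its REST API on localhost:11434 by default.
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(out.Response) // the model-generated Lua source
}

Because the model runs locally over Ollama's default port, the generation step needs no call out to OpenAI's hosted API.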
Securing Agentic AI: How to Protect the Invisible Identity Access

Jul 15, 2025 Automation / Risk Management
AI agents promise to automate everything from financial reconciliations to incident response. Yet every time an AI agent spins up a workflow, it has to authenticate somewhere, often with a high-privilege API key, OAuth token, or service account that defenders can't easily see. These "invisible" non-human identities (NHIs) now outnumber human accounts in most cloud environments, and they have become one of the ripest targets for attackers. Astrix's Field CTO Jonathan Sander put it bluntly in a recent Hacker News webinar: "One dangerous habit we've had for a long time is trusting application logic to act as the guardrails. That doesn't work when your AI agent is powered by LLMs that don't stop and think when they're about to do something wrong. They just do it."
Why AI Agents Redefine Identity Risk
Autonomy changes everything: An AI agent can chain multiple API calls and modify data without a human in the loop. If the underlying credential is exposed or overprivileged, each addit...
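Sander's point about not trusting application logic as the guardrail translates into enforcing scope, expiry, and auditing outside the agent itself. As a hedged sketch (the credential type and action names here are hypothetical, not Astrix's or any vendor's API), an external authorization check around a short-lived, narrowly scoped agent credential might look like this in Go:

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// AgentCredential is a hypothetical short-lived, narrowly scoped credential
// issued to an AI agent in place of a long-lived, high-privilege key.
type AgentCredential struct {
	AgentID   string
	Scopes    map[string]bool // actions this credential is allowed to perform
	ExpiresAt time.Time
}

// Authorize enforces the guardrail outside the agent's own logic: it checks
// expiry and scope, and audits every use so the NHI is no longer invisible.
func Authorize(cred AgentCredential, action string) error {
	log.Printf("audit: agent=%s action=%s", cred.AgentID, action)
	if time.Now().After(cred.ExpiresAt) {
		return errors.New("credential expired")
	}
	if !cred.Scopes[action] {
		return fmt.Errorf("action %q outside credential scope", action)
	}
	return nil
}

func main() {
	cred := AgentCredential{
		AgentID:   "reconciliation-agent",
		Scopes:    map[string]bool{"read:invoices": true}, // deliberately narrow
		ExpiresAt: time.Now().Add(15 * time.Minute),       // short-lived
	}

	fmt.Println(Authorize(cred, "read:invoices"))  // permitted
	fmt.Println(Authorize(cred, "delete:records")) // refused: not in scope
}

The design choice is that the agent never holds a broad key; every action passes through a check the defender controls and can observe.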