#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Salesforce Security Handbook

Search results for google code llm | Breaking Cybersecurity News | The Hacker News

Google Uncovers PROMPTFLUX Malware That Uses Gemini AI to Rewrite Its Code Hourly

Google Uncovers PROMPTFLUX Malware That Uses Gemini AI to Rewrite Its Code Hourly

Nov 05, 2025 Artificial Intelligence / Threat Intelligence
Google on Wednesday said it discovered an unknown threat actor using an experimental Visual Basic Script (VB Script) malware dubbed PROMPTFLUX that interacts with its Gemini artificial intelligence (AI) model API to write its own source code for improved obfuscation and evasion. "PROMPTFLUX is written in VB Script and interacts with Gemini's API to request specific VBScript obfuscation and evasion techniques to facilitate 'just-in-time' self-modification, likely to evade static signature-based detection," Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News. The novel feature is part of its "Thinking Robot" component, which periodically queries the large language model (LLM), Gemini 1.5 Flash or later in this case, to obtain new code so as to sidestep detection. This, in turn, is accomplished by using a hard-coded API key to send the query to the Gemini API endpoint. The prompt sent to the model is both highly speci...
Google Rolls Out On-Device AI Protections to Detect Scams in Chrome and Android

Google Rolls Out On-Device AI Protections to Detect Scams in Chrome and Android

May 09, 2025 Artificial Intelligence / Online Fraud
Google on Thursday announced it's rolling out new artificial intelligence (AI)-powered countermeasures to combat scams across Chrome, Search, and Android. The tech giant said it will begin using Gemini Nano , its on-device large language model (LLM), to improve Safe Browsing in Chrome 137 on desktops. "The on-device approach provides instant insight on risky websites and allows us to offer protection, even against scams that haven't been seen before. Gemini Nano's LLM is perfect for this use because of its ability to distill the varied, complex nature of websites, helping us adapt to new scam tactics more quickly," the company said . Google noted that it's already using this AI-driven approach to tackle remote tech support scams, which often seek to trick users into parting with their personal or financial information under the pretext of a non-existent computer problem. This works by evaluating the web pages using the LLM for potential signals that are...
Google's AI-Powered OSS-Fuzz Tool Finds 26 Vulnerabilities in Open-Source Projects

Google's AI-Powered OSS-Fuzz Tool Finds 26 Vulnerabilities in Open-Source Projects

Nov 21, 2024 Artificial Intelligence / Software Security
Google has revealed that its AI-powered fuzzing tool, OSS-Fuzz, has been used to help identify 26 vulnerabilities in various open-source code repositories, including a medium-severity flaw in the OpenSSL cryptographic library. "These particular vulnerabilities represent a milestone for automated vulnerability finding: each was found with AI, using AI-generated and enhanced fuzz targets," Google's open-source security team said in a blog post shared with The Hacker News. The OpenSSL vulnerability in question is CVE-2024-9143 (CVSS score: 4.3), an out-of-bounds memory write bug that can result in an application crash or remote code execution. The issue has been addressed in OpenSSL versions 3.3.3, 3.2.4, 3.1.8, 3.0.16, 1.1.1zb, and 1.0.2zl. Google, which added the ability to leverage large language models (LLMs) to improve fuzzing coverage in OSS-Fuzz in August 2023, said the vulnerability has likely been present in the codebase for two decades and that it "wo...
cyber security

Operationalize Incident Response: Scale Tabletop Exercises with AEV

websiteFiligranIncident Response / Exposure Validation
Learn how to standardize, automate, and scale IR tabletop drills for compliance and team readiness.
cyber security

The Cyber Event of the Year Returns: SANS 2026

websiteSANS InstituteCybersecurity Training / Certification
50+ courses, NetWars, AI Keynote, and a full week of action. Join SANS in Orlando.
Researcher Uncovers 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

Researcher Uncovers 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

Dec 06, 2025 AI Security / Vulnerability
Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs) that combine prompt injection primitives with legitimate features to achieve data exfiltration and remote code execution. The security shortcomings have been collectively named IDEsaster by security researcher Ari Marzouk (MaccariTA), who discovered them over the last six months. They affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline, among others. Of these, 24 have been assigned CVE identifiers. "I think the fact that multiple universal attack chains affected each and every AI IDE tested is the most surprising finding of this research," Marzouk told The Hacker News. "All AI IDEs (and coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model. They treat their features as inherently safe because they've...
"I Had a Dream" and Generative AI Jailbreaks

"I Had a Dream" and Generative AI Jailbreaks

Oct 09, 2023 Artificial Intelligence /
"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS," this is a message from ChatGPT followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by  Moonlock Lab , the screenshots of ChatGPT writing code for a keylogger malware is yet another example of trivial ways to hack large language models and exploit them against their policy of use. In the case of Moonlock Lab, their malware research engineer told ChatGPT about a dream where an attacker was writing code. In the dream, he could only see the three words: "MyHotKeyHandler," "Keylogger," and "macOS." The engineer asked ChatGPT to completely recreate the malicious code and help him stop the attack. After a brief conversation, the AI finally provided the answer. "At times, the code generated isn...
Microsoft Issues Security Fixes for 56 Flaws, Including Active Exploit and Two Zero-Days

Microsoft Issues Security Fixes for 56 Flaws, Including Active Exploit and Two Zero-Days

Dec 10, 2025 Patch Tuesday / Vulnerability
Microsoft closed out 2025 with patches for 56 security flaws in various products across the Windows platform, including one vulnerability that has been actively exploited in the wild. Of the 56 flaws, three are rated Critical, and 53 are rated Important in severity. Two other defects are listed as publicly known at the time of the release. These include 29 privilege escalation, 18 remote code execution, four information disclosure, three denial-of-service, and two spoofing vulnerabilities. In total, Microsoft has addressed a total of 1,275 CVEs in 2025, according to data compiled by Fortra. Tenable's Satnam Narang said 2025 also marks the second consecutive year where the Windows maker has patched over 1,000 CVEs. It's the third time it has done so since Patch Tuesday's inception. The update is in addition to 17 shortcomings the tech giant patched in its Chromium-based Edge browser since the release of the November 2025 Patch Tuesday update . This also consists of a s...
Google Cloud Introduces Security AI Workbench for Faster Threat Detection and Analysis

Google Cloud Introduces Security AI Workbench for Faster Threat Detection and Analysis

Apr 25, 2023 Artificial Intelligence / Threat Detection
Google's cloud division is following in the  footsteps of Microsoft  with the launch of  Security AI Workbench  that leverages generative AI models to gain better visibility into the threat landscape.  Powering the cybersecurity suite is Sec-PaLM, a specialized large language model ( LLM ) that's "fine-tuned for security use cases." The idea is to take advantage of the latest advances in AI to augment point-in-time incident analysis, threat detection, and analytics to counter and prevent new infections by delivering intelligence that's  trusted, relevant, and actionable . To that end, the Security AI Workbench spans a wide range of new AI-powered tools, including  VirusTotal Code Insight  and  Mandiant Breach Analytics for Chronicle , to analyze potentially malicious scripts and alert customers of active breaches in their environments. Users, like with Microsoft's GPT-4-based  Security Copilot , can "conversationally search, analyz...
Google's New AI Doesn't Just Find Vulnerabilities — It Rewrites Code to Patch Them

Google's New AI Doesn't Just Find Vulnerabilities — It Rewrites Code to Patch Them

Oct 07, 2025 Artificial Intelligence / Software Security
Google's DeepMind division on Monday announced an artificial intelligence (AI)-powered agent called CodeMender that automatically detects, patches, and rewrites vulnerable code to prevent future exploits. The efforts add to the company's ongoing efforts to improve AI-powered vulnerability discovery, such as Big Sleep and OSS-Fuzz . DeepMind said the AI agent is designed to be both reactive and proactive, by fixing new vulnerabilities as soon as they are spotted as well as rewriting and securing existing codebases with an aim to eliminate whole classes of vulnerabilities in the process. "By automatically creating and applying high-quality security patches, CodeMender's AI-powered agent helps developers and maintainers focus on what they do best — building good software," DeepMind researchers Raluca Ada Popa and Four Flynn said . "Over the past six months that we've been building CodeMender, we have already upstreamed 72 security fixes to open source proje...
⚡ THN Weekly Recap: Google Secrets Stolen, Windows Hack, New Crypto Scams and More

⚡ THN Weekly Recap: Google Secrets Stolen, Windows Hack, New Crypto Scams and More

Feb 17, 2025 Cyber Threats / Cybersecurity
Welcome to this week's Cybersecurity News Recap. Discover how cyber attackers are using clever tricks like fake codes and sneaky emails to gain access to sensitive data. We cover everything from device code phishing to cloud exploits, breaking down the technical details into simple, easy-to-follow insights. ⚡ Threat of the Week Russian Threat Actors Leverage Device Code Phishing to Hack Microsoft Accounts — Microsoft and Volexity have revealed that threat actors with ties to Russia are leveraging a technique known as device code phishing to gain unauthorized access to victim accounts, and use that access to get hold of sensitive data and enable persistent access to the victim environment. At least three different Russia-linked clusters have been identified abusing the technique to date. The attacks entail sending phishing emails that masquerade as Microsoft Teams meeting invitations, which, when clicked, urge the message recipients to authenticate using a threat actor-generated dev...
12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training

12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training

Feb 28, 2025 Machine Learning / Data Privacy
A dataset used to train large language models (LLMs) has been found to contain nearly 12,000 live secrets, which allow for successful authentication. The findings once again highlight how hard-coded credentials pose a severe security risk to users and organizations alike, not to mention compounding the problem when LLMs end up suggesting insecure coding practices to their users. Truffle Security said it downloaded a December 2024 archive from Common Crawl , which maintains a free, open repository of web crawl data. The massive dataset contains over 250 billion pages spanning 18 years.  The archive specifically contains 400TB of compressed web data, 90,000 WARC files (Web ARChive format), and data from 47.5 million hosts across 38.3 million registered domains. The company's analysis found that there are 219 different secret types in the Common Crawl archive, including Amazon Web Services (AWS) root keys, Slack webhooks, and Mailchimp API keys. "'Live' secrets ar...
Google AI "Big Sleep" Stops Exploitation of Critical SQLite Vulnerability Before Hackers Act

Google AI "Big Sleep" Stops Exploitation of Critical SQLite Vulnerability Before Hackers Act

Jul 16, 2025 AI Security / Vulnerability
Google on Tuesday revealed that its large language model (LLM)-assisted vulnerability discovery framework identified a security flaw in the SQLite open-source database engine before it could have been exploited in the wild. The vulnerability, tracked as CVE-2025-6965 (CVSS score: 7.2), is a memory corruption flaw affecting all versions prior to 3.50.2. It was discovered by Big Sleep , an artificial intelligence (AI) agent that was launched by Google last year as part of a collaboration between DeepMind and Google Project Zero. "An attacker who can inject arbitrary SQL statements into an application might be able to cause an integer overflow resulting in read off the end of an array," SQLite project maintainers said in an advisory. The tech giant described CVE-2025-6965 as a critical security issue that was "known only to threat actors and was at risk of being exploited." Google did not reveal who the threat actors were. "Through the combination of threa...
5 Threats That Reshaped Web Security This Year [2025]

5 Threats That Reshaped Web Security This Year [2025]

Dec 04, 2025 Web Security / Data Privacy
As 2025 draws to a close, security professionals face a sobering realization: the traditional playbook for web security has become dangerously obsolete. AI-powered attacks, evolving injection techniques, and supply chain compromises affecting hundreds of thousands of websites forced a fundamental rethink of defensive strategies. Here are the five threats that reshaped web security this year, and why the lessons learned will define digital protection for years to come. 1. Vibe Coding Natural language coding, " vibe coding " , transformed from novelty to production reality in 2025, with nearly 25% of Y Combinator startups using AI to build core codebases. One developer launched a multiplayer flight simulator in under three hours, eventually scaling it to 89,000 players and generating thousands in monthly revenue. The Result Code that functions perfectly yet contains exploitable flaws, bypassing traditional security tools. AI generates what you ask for, not what you forget...
From Misuse to Abuse: AI Risks and Attacks

From Misuse to Abuse: AI Risks and Attacks

Oct 16, 2024 Artificial Intelligence / Cybercrime
AI from the attacker's perspective: See how cybercriminals are leveraging AI and exploiting its vulnerabilities to compromise systems, users, and even other AI applications Cybercriminals and AI: The Reality vs. Hype "AI will not replace humans in the near future. But humans who know how to use AI are going to replace those humans who don't know how to use AI," says Etay Maor, Chief Security Strategist at Cato Networks and founding member of Cato CTRL . "Similarly, attackers are also turning to AI to augment their own capabilities." Yet, there is a lot more hype than reality around AI's role in cybercrime. Headlines often sensationalize AI threats, with terms like "Chaos-GPT" and "Black Hat AI Tools," even claiming they seek to destroy humanity. However, these articles are more fear-inducing than descriptive of serious threats. For instance, when explored in underground forums, several of these so-called "AI cyber tools" were found to be nothing...
AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Case

AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Case

Dec 23, 2024 Machine Learning / Threat Analysis
Cybersecurity researchers have found that it's possible to use large language models (LLMs) to generate new variants of malicious JavaScript code at scale in a manner that can better evade detection. "Although LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect," Palo Alto Networks Unit 42 researchers said in a new analysis. "Criminals can prompt LLMs to perform transformations that are much more natural-looking, which makes detecting this malware more challenging." With enough transformations over time, the approach could have the advantage of degrading the performance of malware classification systems, tricking them into believing that a piece of nefarious code is actually benign. While LLM providers have increasingly enforced security guardrails to prevent them from going off the rails and producing unintended output, bad actors have advertised tools like WormGPT...
ThreatsDay Bulletin: $176M Crypto Fine, Hacking Formula 1, Chromium Vulns, AI Hijack & More

ThreatsDay Bulletin: $176M Crypto Fine, Hacking Formula 1, Chromium Vulns, AI Hijack & More

Oct 23, 2025 Cybersecurity / Hacking News
Criminals don't need to be clever all the time; they just follow the easiest path in: trick users, exploit stale components, or abuse trusted systems like OAuth and package registries. If your stack or habits make any of those easy, you're already a target. This week's ThreatsDay highlights show exactly how those weak points are being exploited — from overlooked misconfigurations to sophisticated new attack chains that turn ordinary tools into powerful entry points. Lumma Stealer Stumbles After Doxxing Drama Decline in Lumma Stealer Activity After Doxxing Campaign The activity of the Lumma Stealer (aka Water Kurita) information stealer has witnessed a "sudden drop" since last months after the identities of five alleged core group members were exposed as part of what's said to be an aggressive underground exposure campaign dubbed Lumma Rats since late August 2025. The targeted individuals are affiliated with the malware's development and administ...
Expert Insights Articles Videos
Cybersecurity Resources