OpenAI | Breaking Cybersecurity News | The Hacker News

Category — OpenAI
Google Drops Cookie Prompt in Chrome, Adds IP Protection to Incognito

Apr 23, 2025 Privacy / Artificial Intelligence
Google on Tuesday revealed that it will no longer offer a standalone prompt for third-party cookies in its Chrome browser as part of its Privacy Sandbox initiative. "We've made the decision to maintain our current approach to offering users third-party cookie choice in Chrome, and will not be rolling out a new standalone prompt for third-party cookies," Anthony Chavez, vice president of Privacy Sandbox at Google, said. "Users can continue to choose the best option for themselves in Chrome's Privacy and Security Settings." Back in July 2024, the tech giant said it had abandoned its plans to deprecate third-party tracking cookies and that it intended to roll out a new experience instead that lets users make an informed choice. Google said feedback from publishers, developers, regulators, and the ads industry has made it clear there are "divergent perspectives" on making changes that could affect the availability of third-party cookies. In its...
AkiraBot Targets 420,000 Sites with OpenAI-Generated Spam, Bypassing CAPTCHA Protections

Apr 10, 2025 Website Security / Cybercrime
Cybersecurity researchers have disclosed details of an artificial intelligence (AI)-powered platform called AkiraBot that's used to spam website chats, comment sections, and contact forms to promote dubious search engine optimization (SEO) services such as Akira and ServicewrapGO. "AkiraBot has targeted more than 400,000 websites and successfully spammed at least 80,000 websites since September 2024," SentinelOne researchers Alex Delamotte and Jim Walter said in a report shared with The Hacker News. "The bot uses OpenAI to generate custom outreach messages based on the purpose of the website." Targets of the activity include contact forms and chat widgets present in small to medium-sized business websites, with the framework sharing spam content generated using OpenAI's large language models (LLMs). What makes the "sprawling" Python-based tool stand apart is its ability to craft content such that it can bypass spam filters. It's believe...
5 Reasons Device Management Isn't Device Trust

Apr 21, 2025 Endpoint Security / Zero Trust
The problem is simple: all breaches start with initial access, and initial access comes down to two primary attack vectors – credentials and devices. This is not news; every report you can find on the threat landscape depicts the same picture. The solution is more complex. For this article, we'll focus on the device threat vector. The risk devices pose is significant, which is why device management tools like Mobile Device Management (MDM) and Endpoint Detection and Response (EDR) are essential components of an organization's security infrastructure. However, relying solely on these tools to manage device risk actually creates a false sense of security. Instead of the blunt tools of device management, organizations are looking for solutions that deliver device trust. Device trust provides a comprehensive, risk-based approach to device security enforcement, closing the large gaps left behind by traditional device management solutions. Here are 5 of those limitations and how to ov...
Leaked Black Basta Chats Suggest Russian Officials Aided Leader's Escape from Armenia

Mar 19, 2025 Cybercrime / Threat Intelligence
The recently leaked trove of internal chat logs among members of the Black Basta ransomware operation has revealed possible connections between the e-crime gang and Russian authorities. The leak, containing over 200,000 messages from September 2023 to September 2024, was published by a Telegram user @ExploitWhispers last month. According to an analysis of the messages by cybersecurity company Trellix, Black Basta's alleged leader Oleg Nefedov (aka GG or AA) may have received help from Russian officials following his arrest in Yerevan, Armenia, in June 2024, allowing him to escape three days later. In the messages, GG claimed that he contacted high-ranking officials to pass through a "green corridor" and facilitate the extraction. "This knowledge from chat leaks makes it difficult for the Black Basta gang to completely abandon the way they operate and start a new RaaS from scratch without a reference to their previous activities," Trellix researchers Ja...
Italy Fines OpenAI €15 Million for ChatGPT GDPR Data Privacy Violations

Dec 23, 2024 GDPR / Data Privacy
Italy's data protection authority has fined ChatGPT maker OpenAI €15 million ($15.66 million) over how the generative artificial intelligence application handles personal data. The fine comes nearly a year after the Garante found that ChatGPT processed users' information to train its service in violation of the European Union's General Data Protection Regulation (GDPR). The authority said OpenAI did not notify it of a security breach that took place in March 2023, and that it processed the personal information of users to train ChatGPT without having an adequate legal basis to do so. It also accused the company of going against the principle of transparency and related information obligations toward users. "Furthermore, OpenAI has not provided for mechanisms for age verification, which could lead to the risk of exposing children under 13 to inappropriate responses with respect to their degree of development and self-awareness," the Garante said. ...
Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Dec 09, 2024 Artificial Intelligence / Vulnerability
Details have emerged about a now-patched security flaw in the DeepSeek artificial intelligence (AI) chatbot that, if successfully exploited, could permit a bad actor to take control of a victim's account by means of a prompt injection attack. Security researcher Johann Rehberger, who has chronicled many a prompt injection attack targeting various AI tools, found that providing the input "Print the xss cheat sheet in a bullet list. just payloads" in the DeepSeek chat triggered the execution of JavaScript code as part of the generated response – a classic case of cross-site scripting (XSS). XSS attacks can have serious consequences as they lead to the execution of unauthorized code in the context of the victim's web browser. An attacker could take advantage of such flaws to hijack a user's session and gain access to cookies and other data associated with the chat.deepseek[.]com domain, thereby leading to an account takeover. "After some experimenting,...
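The class of bug described here comes down to rendering model output as live markup. A minimal sketch of the standard defense, output encoding, is below; `render_chat_response` is a hypothetical helper for illustration, not DeepSeek's actual fix:

```python
import html

def render_chat_response(model_output: str) -> str:
    """Escape model-generated text before it reaches the page.

    Hypothetical helper: flaws like the one above arise when generated
    markup reaches the browser unescaped, so the classic XSS defense is
    to encode the output before insertion into the DOM.
    """
    return html.escape(model_output)

# A model response carrying a classic XSS payload...
payload = '<img src=x onerror=alert(1)>'
print(render_chat_response(payload))
# → &lt;img src=x onerror=alert(1)&gt;  (inert text, not executable markup)
```

Escaped this way, the payload displays as literal text in the chat transcript instead of executing in the victim's session.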
PyPI Attack: ChatGPT, Claude Impersonators Deliver JarkaStealer via Python Libraries

Nov 22, 2024 Artificial Intelligence / Malware
Cybersecurity researchers have discovered two malicious packages uploaded to the Python Package Index (PyPI) repository that impersonated popular artificial intelligence (AI) models like OpenAI ChatGPT and Anthropic Claude to deliver an information stealer called JarkaStealer. The packages, named gptplus and claudeai-eng, were uploaded by a user named "Xeroline" in November 2023, attracting 1,748 and 1,826 downloads, respectively. Both libraries are no longer available for download from PyPI. "The malicious packages were uploaded to the repository by one author and, in fact, differed from each other only in name and description," Kaspersky said in a post. The packages purported to offer a way to access GPT-4 Turbo API and Claude AI API, but harbored malicious code that initiated the deployment of the malware upon installation. Specifically, the "__init__.py" file in these packages contained Base64-encoded data that incorporated code to download ...
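A long Base64 literal inside a package's `__init__.py`, as described above, is unusual enough to flag heuristically. The sketch below is a hypothetical triage check, not Kaspersky's actual detection logic:

```python
import base64
import re

# Hypothetical heuristic: flag unbroken Base64 runs of 80+ characters,
# the kind of embedded blob the gptplus/claudeai-eng packages carried.
B64_RUN = re.compile(r"[A-Za-z0-9+/]{80,}={0,2}")

def flag_long_base64(source: str) -> list[bytes]:
    """Return decoded bytes for every long, valid Base64 literal found."""
    findings = []
    for run in B64_RUN.findall(source):
        try:
            findings.append(base64.b64decode(run, validate=True))
        except ValueError:
            continue  # looked Base64-ish but doesn't decode cleanly
    return findings

# Simulated malicious __init__.py embedding an encoded downloader stub
stub = base64.b64encode(b"import urllib.request  # fetch second stage" * 3)
fake_init = f'PAYLOAD = "{stub.decode()}"  # decoded and run at install time'
print(flag_long_base64(fake_init))
```

Real packages hide payloads in many other ways (string concatenation, hex, compression), so a check like this is a starting point for review, not a verdict.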
ChatGPT macOS Flaw Could've Enabled Long-Term Spyware via Memory Function

Sep 25, 2024 Artificial Intelligence / Vulnerability
A now-patched security vulnerability in OpenAI's ChatGPT app for macOS could have made it possible for attackers to plant long-term persistent spyware into the artificial intelligence (AI) tool's memory. The technique, dubbed SpAIware, could be abused to facilitate "continuous data exfiltration of any information the user typed or responses received by ChatGPT, including any future chat sessions," security researcher Johann Rehberger said. The issue, at its core, abuses a feature called memory, which OpenAI introduced earlier this February before rolling it out to ChatGPT Free, Plus, Team, and Enterprise users at the start of the month. What it does is essentially allow ChatGPT to remember certain things across chats so that it saves users the effort of repeating the same information over and over again. Users also have the option to instruct the program to forget something. "ChatGPT's memories evolve with your interactions and aren't linked to s...
Apple Launches Private Cloud Compute for Privacy-Centric AI Processing

Jun 11, 2024 Cloud Computing / Artificial Intelligence
Apple has announced the launch of a "groundbreaking cloud intelligence system" called Private Cloud Compute (PCC) that's designed for processing artificial intelligence (AI) tasks in a privacy-preserving manner in the cloud. The tech giant described PCC as the "most advanced security architecture ever deployed for cloud AI compute at scale." PCC coincides with the arrival of new generative AI (GenAI) features – collectively dubbed Apple Intelligence, or AI for short – that the iPhone maker unveiled in its next generation of software, including iOS 18, iPadOS 18, and macOS Sequoia. All of the Apple Intelligence features, both the ones that run on-device and those that rely on PCC, leverage in-house generative models trained on "licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot." With PCC, the idea is to essentially offload complex requests that requir...
OpenAI, Meta, and TikTok Crack Down on Covert Influence Campaigns, Some AI-Powered

May 31, 2024 Ethical AI / Disinformation
OpenAI on Thursday disclosed that it took steps to cut off five covert influence operations (IO) originating from China, Iran, Israel, and Russia that sought to abuse its artificial intelligence (AI) tools to manipulate public discourse or political outcomes online while obscuring their true identity. These activities, which were detected over the past three months, used its AI models to generate short comments and longer articles in a range of languages, cook up names and bios for social media accounts, conduct open-source research, debug simple code, and translate and proofread texts. The AI research organization said two of the networks were linked to actors in Russia, including a previously undocumented operation codenamed Bad Grammar that primarily used at least a dozen Telegram accounts to target audiences in Ukraine, Moldova, the Baltic States and the United States (U.S.) with sloppy content in Russian and English. "The network used our models and accounts on Telegram t...
From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Mar 19, 2024 Generative AI / Incident Response
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News. The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which are already being experimented with by threat actors to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets. The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK that's associated with the APT28 hacking group, alongside its YARA rules, asking it to modify the source code to sidestep detection such that the original functionality remained intact and the gene...
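The weakness being exploited is that string-based rules match literals, not behavior. A minimal sketch of the idea, using made-up rule strings and code snippets rather than real YARA syntax or the actual STEELHOOK rules:

```python
# Illustrative sketch, not real YARA: a string-based rule is only as
# strong as the literals it matches, so a rewrite that renames every
# telltale string slips past it even though behavior is unchanged.
RULE_STRINGS = ["STEELHOOK_EXFIL", "grab_browser_history"]  # hypothetical literals

def string_rule_matches(source_code: str) -> bool:
    """Mimic string-based detection: flag if any known literal appears."""
    return any(literal in source_code for literal in RULE_STRINGS)

original = "def grab_browser_history():\n    marker = 'STEELHOOK_EXFIL'"
# Same logic after an automated rewrite renamed the telltale strings
rewritten = "def collect_visit_log():\n    marker = 'SH_X1'"

print(string_rule_matches(original))   # → True
print(string_rule_matches(rewritten))  # → False
```

This is why the report's finding matters: an LLM can perform such renames at scale while preserving functionality, which favors behavioral detection over literal matching.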
Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

Mar 05, 2024 Malware / Artificial Intelligence
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. "The number of infected devices decreased slightly in mid- and late summer but grew significantly between August and September," the Singapore-headquartered cybersecurity company said in its Hi-Tech Crime Trends 2023/2024 report published last week. Between June and October 2023, more than 130,000 unique hosts with access to OpenAI ChatGPT were infiltrated, a 36% increase over what was observed during the first five months of 2023. The breakdown by the top three stealer families is below:
- LummaC2 - 70,484 hosts
- Raccoon - 22,468 hosts
- RedLine - 15,970 hosts

"The sharp increase in the number of ChatGPT credentials for sale is due to the overal...
Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks

Feb 14, 2024 Artificial Intelligence / Cyber Attack
Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations. The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted efforts made by five state-affiliated actors that used its AI services to perform malicious cyber activities by terminating their assets and accounts. "Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets' jobs, professional networks, and other relationships," Microsoft said in a report shared with The Hacker News. While no significant or novel attacks employing the LLMs have been detected to date, adversarial exploration of AI technologies has transcended various phases of t...
Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Jan 30, 2024 Generative AI / Data Privacy
Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of supposedly violating privacy laws in the region. "The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday. It also said it will "take account of the work in progress within the ad-hoc task force set up by the European Data Protection Board (EDPB) in its final determination on the case." The development comes nearly 10 months after the watchdog imposed a temporary ban on ChatGPT in the country, weeks after which OpenAI announced a number of privacy controls, including an opt-out form to remove one's personal data from being processed by the large language model (LLM). Access to the tool was subsequently reinstated in late April 2023. The Italian DPA said the lat...
Offensive and Defensive AI: Let's Chat(GPT) About It

Nov 07, 2023 Artificial Intelligence / Data Security
ChatGPT: Productivity tool, great for writing poems, and… a security risk?! In this article, we show how threat actors can exploit ChatGPT, but also how defenders can use it for leveling up their game. ChatGPT is the most swiftly growing consumer application to date. The extremely popular generative AI chatbot has the ability to generate human-like, coherent and contextually relevant responses. This makes it very valuable for applications like content creation, coding, education, customer support, and even personal assistance. However, ChatGPT also comes with security risks. ChatGPT can be used for data exfiltration, spreading misinformation, developing cyber attacks and writing phishing emails. On the flip side, it can help defenders who can use it for identifying vulnerabilities and learning about various defenses. In this article, we show numerous ways attackers can exploit ChatGPT and the OpenAI Playground. Just as importantly, we show ways that defenders can leverage ChatGPT t...