#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News

OpenAI | Breaking Cybersecurity News | The Hacker News

Category — OpenAI
PyPI Attack: ChatGPT, Claude Impersonators Deliver JarkaStealer via Python Libraries

PyPI Attack: ChatGPT, Claude Impersonators Deliver JarkaStealer via Python Libraries

Nov 22, 2024 Artificial Intelligence / Malware
Cybersecurity researchers have discovered two malicious packages uploaded to the Python Package Index (PyPI) repository that impersonated popular artificial intelligence (AI) models like OpenAI ChatGPT and Anthropic Claude to deliver an information stealer called JarkaStealer. The packages, named gptplus and claudeai-eng , were uploaded by a user named " Xeroline " in November 2023, attracting 1,748 and 1,826 downloads, respectively. Both libraries are no longer available for download from PyPI. "The malicious packages were uploaded to the repository by one author and, in fact, differed from each other only in name and description," Kaspersky said in a post. The packages purported to offer a way to access GPT-4 Turbo API and Claude AI API, but harbored malicious code that initiated the deployment of the malware upon installation. Specifically, the "__init__.py" file in these packages contained Base64-encoded data that incorporated code to download ...
ChatGPT macOS Flaw Could've Enabled Long-Term Spyware via Memory Function

ChatGPT macOS Flaw Could've Enabled Long-Term Spyware via Memory Function

Sep 25, 2024 Artificial Intelligence / Vulnerability
A now-patched security vulnerability in OpenAI's ChatGPT app for macOS could have made it possible for attackers to plant long-term persistent spyware into the artificial intelligence (AI) tool's memory. The technique, dubbed SpAIware , could be abused to facilitate "continuous data exfiltration of any information the user typed or responses received by ChatGPT, including any future chat sessions," security researcher Johann Rehberger said . The issue, at its core, abuses a feature called memory , which OpenAI introduced earlier this February before rolling it out to ChatGPT Free, Plus, Team, and Enterprise users at the start of the month. What it does is essentially allow ChatGPT to remember certain things across chats so that it saves users the effort of repeating the same information over and over again. Users also have the option to instruct the program to forget something. "ChatGPT's memories evolve with your interactions and aren't linked to s...
How AI Is Transforming IAM and Identity Security

How AI Is Transforming IAM and Identity Security

Nov 15, 2024Machine Learning / Identity Security
In recent years, artificial intelligence (AI) has begun revolutionizing Identity Access Management (IAM), reshaping how cybersecurity is approached in this crucial field. Leveraging AI in IAM is about tapping into its analytical capabilities to monitor access patterns and identify anomalies that could signal a potential security breach. The focus has expanded beyond merely managing human identities — now, autonomous systems, APIs, and connected devices also fall within the realm of AI-driven IAM, creating a dynamic security ecosystem that adapts and evolves in response to sophisticated cyber threats. The Role of AI and Machine Learning in IAM AI and machine learning (ML) are creating a more robust, proactive IAM system that continuously learns from the environment to enhance security. Let's explore how AI impacts key IAM components: Intelligent Monitoring and Anomaly Detection AI enables continuous monitoring of both human and non-human identities , including APIs, service acc...
Apple Launches Private Cloud Compute for Privacy-Centric AI Processing

Apple Launches Private Cloud Compute for Privacy-Centric AI Processing

Jun 11, 2024 Cloud Computing / Artificial Intelligence
Apple has announced the launch of a "groundbreaking cloud intelligence system" called Private Cloud Compute (PCC) that's designed for processing artificial intelligence (AI) tasks in a privacy-preserving manner in the cloud. The tech giant described PCC as the "most advanced security architecture ever deployed for cloud AI compute at scale." PCC coincides with the arrival of new generative AI (GenAI) features – collectively dubbed Apple Intelligence , or AI for short – that the iPhone maker unveiled in its next generation of software, including iOS 18 , iPadOS 18 , and macOS Sequoia . All of the Apple Intelligence features, both the ones that run on-device and those that rely on PCC, leverage in-house generative models trained on "licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot." With PCC, the idea is to essentially offload complex requests that requir...
cyber security

Creating, Managing and Securing Non-Human Identities

websitePermisoCybersecurity / Identity Security
A new class of identities has emerged alongside traditional human users: non-human identities (NHIs). Permiso Security's new eBook details everything you need to know about managing and securing non-human identities, and strategies to unify identity security without compromising agility.
OpenAI, Meta, and TikTok Crack Down on Covert Influence Campaigns, Some AI-Powered

OpenAI, Meta, and TikTok Crack Down on Covert Influence Campaigns, Some AI-Powered

May 31, 2024 Ethical AI / Disinformation
OpenAI on Thursday disclosed that it took steps to cut off five covert influence operations (IO) originating from China, Iran, Israel, and Russia that sought to abuse its artificial intelligence (AI) tools to manipulate public discourse or political outcomes online while obscuring their true identity. These activities, which were detected over the past three months, used its AI models to generate short comments and longer articles in a range of languages, cook up names and bios for social media accounts, conduct open-source research, debug simple code, and translate and proofread texts. The AI research organization said two of the networks were linked to actors in Russia, including a previously undocumented operation codenamed Bad Grammar that primarily used at least a dozen Telegram accounts to target audiences in Ukraine, Moldova, the Baltic States and the United States (U.S.) with sloppy content in Russian and English. "The network used our models and accounts on Telegram t...
From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Mar 19, 2024 Generative AI / Incident Response
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future  said  in a new report shared with The Hacker News. The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which are  already being experimented  with by threat actors to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets. The cybersecurity firm said it submitted to an LLM a known piece of malware called  STEELHOOK  that's associated with the APT28 hacking group, alongside its YARA rules, asking it to modify the source code to sidestep detection such that the original functionality remained intact and the gene...
Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

Mar 05, 2024 Malware / Artificial Intelligence
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within  information stealer logs  associated with LummaC2, Raccoon, and RedLine stealer malware. "The number of infected devices decreased slightly in mid- and late summer but grew significantly between August and September," the Singapore-headquartered cybersecurity company  said  in its Hi-Tech Crime Trends 2023/2024 report published last week. Between June and October 2023, more than 130,000 unique hosts with access to OpenAI ChatGPT were infiltrated, a 36% increase over what was observed during the first five months of 2023. The breakdown by the top three stealer families is below - LummaC2 - 70,484 hosts Raccoon - 22,468 hosts RedLine - 15,970 hosts "The sharp increase in the number of ChatGPT credentials for sale is due to the overal...
Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks

Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks

Feb 14, 2024 Artificial Intelligence / Cyber Attack
Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations. The findings come from a report published by Microsoft in collaboration with OpenAI, both of which  said  they disrupted efforts made by five state-affiliated actors that used its AI services to perform malicious cyber activities by terminating their assets and accounts. "Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets' jobs, professional networks, and other relationships," Microsoft  said  in a report shared with The Hacker News. While no significant or novel attacks employing the LLMs have been detected to date, adversarial exploration of AI technologies has transcended various phases of t...
Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Jan 30, 2024 Generative AI / Data Privacy
Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of supposedly violating privacy laws in the region. "The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante)  said  in a statement on Monday. It also said it will "take account of the work in progress within the ad-hoc  task force  set up by the European Data Protection Framework (EDPB) in its final determination on the case." The development comes nearly 10 months after the watchdog imposed a  temporary ban  on ChatGPT in the country, weeks after which OpenAI  announced  a number of privacy controls, including an  opt-out form  to remove one's personal data from being processed by the large language model (LLM). Access to the tool was subsequently reinstated in late April 2023. The Italian DPA said the lat...
Offensive and Defensive AI: Let’s Chat(GPT) About It

Offensive and Defensive AI: Let's Chat(GPT) About It

Nov 07, 2023 Artificial Intelligence / Data Security
ChatGPT: Productivity tool, great for writing poems, and… a security risk?! In this article, we show how threat actors can exploit ChatGPT, but also how defenders can use it for leveling up their game. ChatGPT is the most swiftly growing consumer application to date. The extremely popular generative AI chatbot has the ability to generate human-like, coherent and contextually relevant responses. This makes it very valuable for applications like content creation, coding, education, customer support, and even personal assistance. However, ChatGPT also comes with security risks. ChatGPT can be used for data exfiltration, spreading misinformation, developing cyber attacks and writing phishing emails. On the flip side, it can help defenders who can use it for identifying vulnerabilities and learning about various defenses. In this article, we show numerous ways attackers can exploit ChatGPT and the OpenAI Playground. Just as importantly, we show ways that defenders can leverage ChatGPT t...
Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

Oct 27, 2023 Artificial Intelligence / Vulnerability
Google has announced that it's expanding its Vulnerability Rewards Program ( VRP ) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to  bolster AI safety and security . "Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations)," Google's Laurie Richardson and Royal Hansen  said . Some of the categories that are in scope  include  prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks that trigger misclassification, and model theft. It's worth noting that Google earlier this July instituted an  AI Red Team  to help address threats to AI systems as part of its Secure AI Framework ( SAIF ). Also announced as part of its commitment to secure AI are efforts to strengthen the AI supply...
Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces

Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces

Jun 20, 2023 Endpoint Security / Password
Over 101,100 compromised OpenAI ChatGPT account credentials have found their way on illicit dark web marketplaces between June 2022 and May 2023, with India alone accounting for 12,632 stolen credentials. The credentials were discovered within information stealer logs made available for sale on the cybercrime underground, Group-IB said in a report shared with The Hacker News. "The number of available logs containing compromised ChatGPT accounts reached a peak of 26,802 in May 2023," the Singapore-headquartered company  said . "The Asia-Pacific region has experienced the highest concentration of ChatGPT credentials being offered for sale over the past year." Other countries with the most number of compromised ChatGPT credentials include Pakistan, Brazil, Vietnam, Egypt, the U.S., France, Morocco, Indonesia, and Bangladesh. A further analysis has revealed that the majority of logs containing ChatGPT accounts have been breached by the notorious Raccoon info steal...
 Searching for AI Tools? Watch Out for Rogue Sites Distributing RedLine Malware

Searching for AI Tools? Watch Out for Rogue Sites Distributing RedLine Malware

May 19, 2023 Artificial Intelligence / Cyber Threat
Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver  RedLine Stealer  malware. "Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire said in an analysis. "This vacuum has been exploited by threat actors looking to drive AI app-seekers to imposter web pages promoting fake apps." BATLOADER is a loader malware that's propagated via drive-by downloads where users searching for certain keywords on search engines are displayed bogus ads that, when clicked, redirect them to rogue landing pages hosting malware. The installer file, per eSentire, is rigged with an executable file (ChatGPT.exe or midjourney.exe) and a PowerShell script (Chat.ps1 or Chat-Ready.ps1) that downloads and loads RedLine Stealer ...
Meta Takes Down Malware Campaign That Used ChatGPT as a Lure to Steal Accounts

Meta Takes Down Malware Campaign That Used ChatGPT as a Lure to Steal Accounts

May 04, 2023 Online Security / ChatGPT
Meta said it took steps to take down more than 1,000 malicious URLs from being shared across its services that were found to leverage OpenAI's ChatGPT as a lure to propagate about 10 malware families since March 2023. The development comes  against  the backdrop of  fake ChatGPT   web browser extensions  being increasingly used to steal users' Facebook account credentials with an aim to run unauthorized ads from hijacked business accounts. "Threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-based tools," Meta  said . "They would then promote these malicious extensions on social media and through sponsored search results to trick people into downloading malware." The social media giant said it has blocked several iterations of a multi-pronged malware campaign dubbed  Ducktail  over the years, adding it issued a cease and desist letter to individuals behind the operation who are locate...
Expert Insights / Articles Videos
Cybersecurity Resources