OpenAI | Breaking Cybersecurity News | The Hacker News

From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Mar 19, 2024 Generative AI / Incident Response
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News. The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which are already being experimented with by threat actors to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets. The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK that's associated with the APT28 hacking group, alongside its YARA rules, asking it to modify the source code to sidestep detection such that the original functionality remained intact and the generated source code was...
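To make the string-matching weakness concrete, here is a minimal sketch using the yara-python bindings. The rule name, marker string, and sample buffers are hypothetical illustrations, not taken from the STEELHOOK report or Recorded Future's methodology.

```python
# Minimal sketch of why string-based YARA rules are brittle: the hypothetical
# rule matches on a literal string, so a variant whose source was rewritten to
# rename that string slips past. Requires yara-python (pip install yara-python).
import yara

RULE = r"""
rule Demo_StringBased_Stealer
{
    strings:
        $marker = "GrabBrowserHistory"   // hypothetical marker string
    condition:
        $marker
}
"""

rules = yara.compile(source=RULE)

original_variant = b"... call GrabBrowserHistory() and POST results ..."
rewritten_variant = b"... call CollectNavData() and POST results ..."  # same behavior, new name

print(bool(rules.match(data=original_variant)))   # True  - rule fires
print(bool(rules.match(data=rewritten_variant)))  # False - trivial rewrite evades it
```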
Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

Mar 05, 2024 Malware / Artificial Intelligence
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. "The number of infected devices decreased slightly in mid- and late summer but grew significantly between August and September," the Singapore-headquartered cybersecurity company said in its Hi-Tech Crime Trends 2023/2024 report published last week. Between June and October 2023, more than 130,000 unique hosts with access to OpenAI ChatGPT were infiltrated, a 36% increase over what was observed during the first five months of 2023. The breakdown by the top three stealer families is below:

- LummaC2 - 70,484 hosts
- Raccoon - 22,468 hosts
- RedLine - 15,970 hosts

"The sharp increase in the number of ChatGPT credentials for sale is due to the overall rise in the number...
Hands-on Review: Cynomi AI-powered vCISO Platform

Apr 10, 2024 vCISO / Risk Assessment
The need for vCISO services is growing. SMBs and SMEs are dealing with more third-party risks, tighter regulatory demands, and more stringent cyber insurance requirements than ever before. However, they often lack the resources and expertise to hire an in-house security executive team. By outsourcing security and compliance leadership to a vCISO, these organizations can more easily obtain cybersecurity expertise specialized for their industry and strengthen their cybersecurity posture. MSPs and MSSPs looking to meet this growing vCISO demand are often faced with the same challenge: the demand for cybersecurity talent far exceeds the supply. This has led to a competitive market where the costs of hiring and retaining skilled professionals can be prohibitive for MSSPs/MSPs as well. The need to maintain expertise in both security and compliance further exacerbates this challenge. Cynomi, the first AI-driven vCISO platform, can help. Cynomi enables you - MSPs, MSSPs and consulting firms...
Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks

Feb 14, 2024 Artificial Intelligence / Cyber Attack
Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations. The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted the efforts of five state-affiliated actors that used their AI services to perform malicious cyber activities by terminating their assets and accounts. "Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets' jobs, professional networks, and other relationships," Microsoft said in a report shared with The Hacker News. While no significant or novel attacks employing the LLMs have been detected to date, adversarial exploration of AI technologies has transcended various phases of the attack chain...
Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Jan 30, 2024 Generative AI / Data Privacy
Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of allegedly violating privacy laws in the region. "The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday. It also said it will "take account of the work in progress within the ad-hoc task force set up by the European Data Protection Board (EDPB) in its final determination on the case." The development comes nearly 10 months after the watchdog imposed a temporary ban on ChatGPT in the country, weeks after which OpenAI announced a number of privacy controls, including an opt-out form to remove one's personal data from being processed by the large language model (LLM). Access to the tool was subsequently reinstated in late April 2023. The Italian DPA said the latest findings, which have...
Offensive and Defensive AI: Let's Chat(GPT) About It

Nov 07, 2023 Artificial Intelligence / Data Security
ChatGPT: Productivity tool, great for writing poems, and… a security risk?! In this article, we show how threat actors can exploit ChatGPT, but also how defenders can use it to level up their game. ChatGPT is the fastest-growing consumer application to date. The extremely popular generative AI chatbot has the ability to generate human-like, coherent, and contextually relevant responses. This makes it very valuable for applications like content creation, coding, education, customer support, and even personal assistance. However, ChatGPT also comes with security risks: it can be used for data exfiltration, spreading misinformation, developing cyber attacks, and writing phishing emails. On the flip side, it can help defenders, who can use it to identify vulnerabilities and learn about various defenses. In this article, we show numerous ways attackers can exploit ChatGPT and the OpenAI Playground. Just as importantly, we show ways that defenders can leverage ChatGPT to...
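As a hedged sketch of the defensive use case mentioned above (asking a model to flag likely vulnerabilities in code), the snippet below uses the openai Python SDK's chat completions interface; the model name, prompt wording, and sample function are illustrative assumptions, not part of the article.

```python
# Defensive sketch: ask a chat model to review a code snippet for vulnerabilities.
# Requires the openai package (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

snippet = """
def get_user(conn, user_id):
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM users WHERE id = " + user_id)  # string concatenation
    return cursor.fetchone()
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a secure-code reviewer. List likely vulnerabilities and fixes."},
        {"role": "user", "content": f"Review this function:\n{snippet}"},
    ],
)

print(response.choices[0].message.content)  # expected to flag the SQL injection
```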
Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

Oct 27, 2023 Artificial Intelligence / Vulnerability
Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security. "Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations)," Google's Laurie Richardson and Royal Hansen said. Some of the categories that are in scope include prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks that trigger misclassification, and model theft. It's worth noting that Google earlier this July instituted an AI Red Team to help address threats to AI systems as part of its Secure AI Framework (SAIF). Also announced as part of its commitment to secure AI are efforts to strengthen the AI supply chain...
Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces

Jun 20, 2023 Endpoint Security / Password
Over 101,100 compromised OpenAI ChatGPT account credentials have found their way onto illicit dark web marketplaces between June 2022 and May 2023, with India alone accounting for 12,632 stolen credentials. The credentials were discovered within information stealer logs made available for sale on the cybercrime underground, Group-IB said in a report shared with The Hacker News. "The number of available logs containing compromised ChatGPT accounts reached a peak of 26,802 in May 2023," the Singapore-headquartered company said. "The Asia-Pacific region has experienced the highest concentration of ChatGPT credentials being offered for sale over the past year." Other countries with the highest number of compromised ChatGPT credentials include Pakistan, Brazil, Vietnam, Egypt, the U.S., France, Morocco, Indonesia, and Bangladesh. A further analysis has revealed that the majority of logs containing ChatGPT accounts have been breached by the notorious Raccoon info stealer...
Searching for AI Tools? Watch Out for Rogue Sites Distributing RedLine Malware

May 19, 2023 Artificial Intelligence / Cyber Threat
Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware. "Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire said in an analysis. "This vacuum has been exploited by threat actors looking to drive AI app-seekers to imposter web pages promoting fake apps." BATLOADER is a loader malware that's propagated via drive-by downloads, where users searching for certain keywords on search engines are shown bogus ads that, when clicked, redirect them to rogue landing pages hosting malware. The installer file, per eSentire, is rigged with an executable file (ChatGPT.exe or midjourney.exe) and a PowerShell script (Chat.ps1 or Chat-Ready.ps1) that downloads and loads RedLine Stealer...
Meta Takes Down Malware Campaign That Used ChatGPT as a Lure to Steal Accounts

May 04, 2023 Online Security / ChatGPT
Meta said it has taken steps to block more than 1,000 malicious URLs from being shared across its services after they were found to leverage OpenAI's ChatGPT as a lure to propagate about 10 malware families since March 2023. The development comes against the backdrop of fake ChatGPT web browser extensions being increasingly used to steal users' Facebook account credentials with the aim of running unauthorized ads from hijacked business accounts. "Threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-based tools," Meta said. "They would then promote these malicious extensions on social media and through sponsored search results to trick people into downloading malware." The social media giant said it has blocked several iterations of a multi-pronged malware campaign dubbed Ducktail over the years, adding it issued a cease and desist letter to individuals behind the operation who are located in Vietnam...
ChatGPT is Back in Italy After Addressing Data Privacy Concerns

Apr 29, 2023 Data Safety / Privacy / AI
OpenAI, the company behind ChatGPT, has officially made a return to Italy after meeting the data protection authority's demands ahead of the April 30, 2023, deadline. The development was first reported by the Associated Press. OpenAI's CEO, Sam Altman, tweeted, "we're excited ChatGPT is available in [Italy] again!" The reinstatement comes following the Garante's decision to temporarily block access to the popular AI chatbot service in Italy on March 31, 2023, over concerns that its practices are in violation of data protection laws in the region. Generative AI systems like ChatGPT and Google Bard primarily rely on huge amounts of information freely available on the internet, as well as the data their users provide over the course of their interactions. OpenAI, which published a new FAQ, said it filters and removes information such as hate speech, adult content, sites that primarily aggregate personal information, and spam. It also emphasized that...
ChatGPT Security: OpenAI's Bug Bounty Program Offers Up to $20,000 Prizes

Apr 13, 2023 Software Security / Bug Hunting
OpenAI, the company behind the massively popular ChatGPT AI chatbot, has launched a bug bounty program in an attempt to ensure its systems are "safe and secure." To that end, it has partnered with the crowdsourced security platform Bugcrowd for independent researchers to report vulnerabilities discovered in its product in exchange for rewards ranging from "$200 for low-severity findings to up to $20,000 for exceptional discoveries." It's worth noting that the program does not cover model safety or hallucination issues, wherein the chatbot is prompted to generate malicious code or other faulty outputs. The company noted that "addressing these issues often involves substantial research and a broader approach." Other prohibited categories are denial-of-service (DoS) attacks, brute-forcing OpenAI APIs, and demonstrations that aim to destroy data or gain unauthorized access to sensitive information beyond what's necessary to highlight the problem...
Italian Watchdog Bans OpenAI's ChatGPT Over Data Protection Concerns

Apr 03, 2023 Artificial Intelligence / Data Safety
The Italian data protection watchdog, Garante per la Protezione dei Dati Personali (aka Garante), has imposed a temporary ban on OpenAI's ChatGPT service in the country, citing data protection concerns. To that end, it has ordered the company to stop processing users' data with immediate effect, stating it intends to investigate whether the company is unlawfully processing such data in violation of the E.U. General Data Protection Regulation (GDPR). "No information is provided to users and data subjects whose data are collected by Open AI," the Garante noted. "More importantly, there appears to be no legal basis underpinning the massive collection and processing of personal data in order to 'train' the algorithms on which the platform relies." ChatGPT, which is estimated to have reached over 100 million monthly active users since its release late last year, has not disclosed what it used to train its latest large language model...
OpenAI Reveals Redis Bug Behind ChatGPT User Data Exposure Incident

Mar 25, 2023 Artificial Intelligence / Data Security
OpenAI on Friday disclosed that a bug in the Redis open source library was responsible for exposing other users' personal information and chat titles in the company's ChatGPT service earlier this week. The glitch, which came to light on March 20, 2023, enabled certain users to view brief descriptions of other users' conversations from the chat history sidebar, prompting the company to temporarily shut down the chatbot. "It's also possible that the first message of a newly-created conversation was visible in someone else's chat history if both users were active around the same time," the company said. The bug, it further added, originated in the redis-py library, leading to a scenario where canceled requests could cause connections to be corrupted and return unexpected data from the database cache, in this case, information belonging to an unrelated user. To make matters worse, the San Francisco-based AI research company said it introduced...
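The failure class described (a canceled request leaving an unread response on a shared connection, which the next caller then consumes) can be illustrated with the simplified asyncio simulation below; this is an assumption-laden sketch, not the actual redis-py code or OpenAI's fix.

```python
# Simplified simulation of request/response misalignment after cancellation.
# Not redis-py itself: a fake connection returns responses strictly in the
# order commands were sent, so a cancelled reader leaves a stale response
# behind for the next caller.
import asyncio

class SharedConnection:
    """One connection; responses come back in the order commands were sent."""
    def __init__(self):
        self._responses = asyncio.Queue()

    async def send_command(self, key):
        # The "server" answers each command after a short delay.
        asyncio.get_running_loop().call_later(
            0.05, self._responses.put_nowait, f"cached value for {key}")

    async def read_response(self):
        return await self._responses.get()

async def get(conn, key):
    await conn.send_command(key)
    return await conn.read_response()

async def main():
    conn = SharedConnection()

    # User A's request is cancelled mid-flight: command sent, response never read.
    task_a = asyncio.create_task(get(conn, "user-A:chat-titles"))
    await asyncio.sleep(0.01)
    task_a.cancel()
    try:
        await task_a
    except asyncio.CancelledError:
        pass

    # User B reuses the same connection and receives A's stale response.
    print(await get(conn, "user-B:chat-titles"))  # -> "cached value for user-A:chat-titles"

asyncio.run(main())
```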
Fake ChatGPT Chrome Browser Extension Caught Hijacking Facebook Accounts

Mar 23, 2023 Browser Security / Artificial Intelligence
Google has stepped in to remove a bogus Chrome browser extension from the official Web Store that masqueraded as OpenAI's ChatGPT service to harvest Facebook session cookies and hijack accounts. The "ChatGPT For Google" extension, a trojanized version of a legitimate open source browser add-on, attracted over 9,000 installations since March 14, 2023, prior to its removal. It was originally uploaded to the Chrome Web Store on February 14, 2023. According to Guardio Labs researcher Nati Tal, the extension was propagated through malicious sponsored Google search results that were designed to redirect unsuspecting users searching for "Chat GPT-4" to fraudulent landing pages that point to the fake add-on. Installing the extension adds the promised functionality (enhancing search engines with ChatGPT), but it also stealthily activates the ability to capture Facebook-related cookies and exfiltrate them to a remote server in an encrypted manner...