
ChatGPT | Breaking Cybersecurity News | The Hacker News

Third-Party ChatGPT Plugins Could Lead to Account Takeovers

Mar 15, 2024 Data Privacy / Artificial Intelligence
Cybersecurity researchers have found that third-party plugins available for OpenAI ChatGPT could act as a new attack surface for threat actors looking to gain unauthorized access to sensitive data. According to new research published by Salt Labs, security flaws found directly in ChatGPT and within the ecosystem could allow attackers to install malicious plugins without users' consent and hijack accounts on third-party websites like GitHub. ChatGPT plugins, as the name implies, are tools designed to run on top of the large language model (LLM) with the aim of accessing up-to-date information, running computations, or accessing third-party services. OpenAI has since also introduced GPTs, which are bespoke versions of ChatGPT tailored for specific use cases, while reducing third-party service dependencies. As of March 19, 2024, ChatGPT users will no longer be able to install new plugins or create new conversations with existing plugins. One of the flaws unearthed by Salt…
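Salt Labs' account-takeover findings centered on weaknesses in plugin approval and OAuth flows. As a generic illustration only (not OpenAI's actual code; every name below is hypothetical), here is a minimal sketch of binding an OAuth callback to the session that started it with an unguessable `state` token, the standard defense against the class of forced-approval flaw described above:

```python
import hmac
import secrets

# Hypothetical sketch: tie each OAuth authorization request to the session
# that initiated it, so an attacker cannot splice their own authorization
# code (or malicious plugin approval) into a victim's session.
_session_states = {}  # session_id -> expected one-time state token

def start_auth(session_id: str) -> str:
    """Issue an unguessable state token and remember it for this session."""
    state = secrets.token_urlsafe(32)
    _session_states[session_id] = state
    return state

def handle_callback(session_id: str, returned_state: str) -> bool:
    """Accept the callback only if the state matches and has not been used."""
    expected = _session_states.pop(session_id, None)  # one-time use
    if expected is None:
        return False
    return hmac.compare_digest(expected.encode(), returned_state.encode())
```

A flow that skips this check, or accepts a `state` it never issued, lets an attacker complete the authorization with an account or code of their choosing, which is broadly the failure mode the researchers describe.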
Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

Mar 05, 2024 Malware / Artificial Intelligence
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. "The number of infected devices decreased slightly in mid- and late summer but grew significantly between August and September," the Singapore-headquartered cybersecurity company said in its Hi-Tech Crime Trends 2023/2024 report published last week. Between June and October 2023, more than 130,000 unique hosts with access to OpenAI ChatGPT were infiltrated, a 36% increase over what was observed during the first five months of 2023. The breakdown by the top three stealer families is below:

- LummaC2 - 70,484 hosts
- Raccoon - 22,468 hosts
- RedLine - 15,970 hosts

"The sharp increase in the number of ChatGPT credentials for sale is due to the overall rise in the numbe…
The Drop in Ransomware Attacks in 2024 and What it Means

Apr 08, 2024 Ransomware / Cybercrime
The ransomware industry surged in 2023 as it saw an alarming 55.5% increase in victims worldwide, reaching a staggering 5,070. But 2024 is starting off with a very different picture. While the numbers skyrocketed in Q4 2023 with 1,309 cases, in Q1 2024 the ransomware industry was down to 1,048 cases, a 22% decrease in ransomware attacks compared to Q4 2023.

Figure 1: Victims per quarter

There could be several reasons for this significant drop.

Reason 1: Law Enforcement Intervention

Firstly, law enforcement upped the ante in 2024 with actions against both LockBit and ALPHV.

The LockBit Arrests

In February, an international operation named "Operation Cronos" culminated in the arrest of at least three associates of the infamous LockBit ransomware syndicate in Poland and Ukraine. Law enforcement agencies from multiple countries collaborated to take down LockBit's infrastructure, including seizing its dark web domains and gaining access to its backend sys…
Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Jan 30, 2024 Generative AI / Data Privacy
Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of supposedly violating privacy laws in the region. "The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday. It also said it will "take account of the work in progress within the ad-hoc task force set up by the European Data Protection Board (EDPB) in its final determination on the case." The development comes nearly 10 months after the watchdog imposed a temporary ban on ChatGPT in the country, weeks after which OpenAI announced a number of privacy controls, including an opt-out form to remove one's personal data from being processed by the large language model (LLM). Access to the tool was subsequently reinstated in late April 2023. The Italian DPA said the latest findings, which h…
There is a Ransomware Armageddon Coming for Us All

Jan 11, 2024 Artificial Intelligence / Biometric Security
Generative AI will enable anyone to launch sophisticated phishing attacks that only next-generation MFA devices can stop. The least surprising headline from 2023 is that ransomware again set new records for the number of incidents and the damage inflicted. We saw new headlines every week, which included a who's-who of big-name organizations. If MGM, Johnson Controls, Clorox, Hanes Brands, Caesars Palace, and so many others cannot stop the attacks, how will anyone else? Phishing-driven ransomware is the cyber threat that looms larger and more dangerous than all others. CISA and Cisco report that 90% of data breaches are the result of phishing attacks, with monetary losses exceeding $10 billion in total. A report from Splunk revealed that 96 percent of companies fell victim to at least one phishing attack in the last 12 months and 83 percent suffered two or more. Protect your organization from phishing and ransomware by learning about the benefits of Next-Generation MFA. Download th…
Vietnamese Hackers Using New Delphi-Powered Malware to Target Indian Marketers

Nov 14, 2023 ChatGPT / Malware
The Vietnamese threat actors behind the Ducktail stealer malware have been linked to a new campaign that ran between March and early October 2023, targeting marketing professionals in India with an aim to hijack Facebook business accounts. "An important feature that sets it apart is that, unlike previous campaigns, which relied on .NET applications, this one used Delphi as the programming language," Kaspersky said in a report published last week. Ducktail, alongside Duckport and NodeStealer, is part of a cybercrime ecosystem operating out of Vietnam, with the attackers primarily using sponsored ads on Facebook to propagate malicious ads and deploy malware capable of plundering victims' login cookies and ultimately taking control of their accounts. Such attacks primarily single out users who may have access to a Facebook Business account. The fraudsters then use the unauthorized access to place advertisements for financial gain, perpetuating the infections fur…
Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

Oct 27, 2023 Artificial Intelligence / Vulnerability
Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security. "Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations)," Google's Laurie Richardson and Royal Hansen said. Some of the categories that are in scope include prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks that trigger misclassification, and model theft. It's worth noting that Google earlier this July instituted an AI Red Team to help address threats to AI systems as part of its Secure AI Framework (SAIF). Also announced as part of its commitment to secure AI are efforts to strengthen the AI supply chain…
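Prompt injection, one of the in-scope categories above, is often probed with simple instruction-override strings. A naive heuristic filter, purely illustrative and not Google's methodology, that flags instruction-like phrases in untrusted content before it reaches a model might look like this:

```python
import re

# Illustrative deny-list patterns for indirect prompt injection screening.
# Real systems combine many signals; a regex list alone is easily bypassed.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now",
    r"reveal (your|the) (system )?prompt",
]

def looks_like_injection(text: str) -> bool:
    """Return True if the text contains a known instruction-override phrase."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Such a screen would typically run on third-party content (web pages, retrieved documents, plugin output) before it is concatenated into a prompt, with flagged content quarantined or stripped.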
How to Guard Your Data from Exposure in ChatGPT

Oct 12, 2023 Data Security / Artificial Intelligence
ChatGPT has transformed the way businesses generate textual content, which can potentially result in a quantum leap in productivity. However, generative AI innovation also introduces a new dimension of data exposure risk, where employees inadvertently type or paste sensitive business data into ChatGPT or similar applications. DLP solutions, the go-to answer for such challenges, are ill-equipped to handle this one, since they focus on file-based data protection. A new report by LayerX, "Browser Security Platform: Guard your Data from Exposure in ChatGPT" (Download here), sheds light on the challenges and risks of ungoverned ChatGPT usage. It paints a comprehensive picture of the potential hazards for businesses and then offers a potential solution: browser security platforms. Such platforms provide real-time monitoring and governance over web sessions, effectively safeguarding sensitive data.

ChatGPT Data Exposure: By the Numbers

Employee usage of GenAI apps has surge…
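To make the risk concrete, here is a sketch of the kind of pattern matching a browser-level DLP control might apply to text before it reaches a GenAI prompt box. The patterns and names are illustrative, not LayerX's product:

```python
import re

# Illustrative sensitive-data detectors. Production DLP uses far richer
# rules (checksums, context, ML classifiers); these are minimal examples.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(text: str) -> list:
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]
```

A governance layer could block the paste, redact the match, or log the event for review when `scan_prompt` returns a non-empty list.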
"I Had a Dream" and Generative AI Jailbreaks

Oct 09, 2023 Artificial Intelligence
"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords 'MyHotKeyHandler,' 'Keylogger,' and 'macOS'": so begins a message from ChatGPT, followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by Moonlock Lab, the screenshots of ChatGPT writing code for keylogger malware are yet another example of trivial ways to hack large language models and exploit them against their policy of use. In the case of Moonlock Lab, their malware research engineer told ChatGPT about a dream in which an attacker was writing code. In the dream, he could see only three words: "MyHotKeyHandler," "Keylogger," and "macOS." The engineer asked ChatGPT to completely recreate the malicious code and help him stop the attack. After a brief conversation, the AI finally provided the answer. "At times, the code generated isn't…
Microsoft's AI-Powered Bing Chat Ads May Lead Users to Malware-Distributing Sites

Sep 29, 2023 Artificial Intelligence / Malware
Malicious ads served inside Microsoft Bing's artificial intelligence (AI) chatbot are being used to distribute malware when searching for popular tools. The findings come from Malwarebytes, which revealed that unsuspecting users can be tricked into visiting booby-trapped sites and installing malware directly from Bing Chat conversations. Introduced by Microsoft in February 2023, Bing Chat is an interactive search experience that's powered by OpenAI's large language model called GPT-4. A month later, the tech giant began exploring placing ads in the conversations. But the move has also opened the doors for threat actors who resort to malvertising tactics and propagate malware. "Ads can be inserted into a Bing Chat conversation in various ways," Jérôme Segura, director of threat intelligence at Malwarebytes, said. "One of those is when a user hovers over a link and an ad is displayed first before the organic result." In an example highligh…
How to Prevent ChatGPT From Stealing Your Content & Traffic

Aug 30, 2023 Artificial Intelligence / Cyber Threat
ChatGPT and similar large language models (LLMs) have added further complexity to the ever-growing online threat landscape. Cybercriminals no longer need advanced coding skills to execute fraud and other damaging attacks against online businesses and customers, thanks to bots-as-a-service, residential proxies, CAPTCHA farms, and other easily accessible tools. Now, the latest technology damaging businesses' bottom line is ChatGPT. Not only have ChatGPT, OpenAI, and other LLMs raised ethical issues by training their models on data scraped from across the internet; LLMs are also negatively impacting enterprises' web traffic, which can be extremely damaging to business.

3 Risks Presented by LLMs, ChatGPT, & ChatGPT Plugins

Among the threats ChatGPT and ChatGPT plugins can pose against online businesses, there are three key risks we will focus on: Content theft (or republishing data without permission from the original source) can hurt the authority, SEO rankings, and perceived…
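One concrete countermeasure against the scraping the article describes is refusing known AI-crawler user agents at the web tier. "GPTBot" is OpenAI's documented crawler token; the other strings below are commonly cited crawler agents and should be verified against current vendor documentation before deployment. A minimal sketch:

```python
# Deny-list of AI crawler User-Agent tokens. "GPTBot" is OpenAI's documented
# crawler; the others are commonly cited agents and should be re-verified.
BLOCKED_AGENTS = ("GPTBot", "CCBot", "Google-Extended", "anthropic-ai")

def should_block(user_agent: str) -> bool:
    """Return True if the request's User-Agent contains a blocked crawler token."""
    ua = (user_agent or "").lower()
    return any(token.lower() in ua for token in BLOCKED_AGENTS)
```

The robots.txt equivalent is a `User-agent: GPTBot` record with `Disallow: /`, which OpenAI documents honoring; a server-side check like the one above additionally covers scrapers that ignore robots.txt.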
Continuous Security Validation with Penetration Testing as a Service (PTaaS)

Aug 09, 2023 Penetration Testing / DevSecOps
Validate security continuously across your full stack with Pen Testing as a Service. In today's modern security operations center (SOC), it's a battle between the defenders and the cybercriminals. Both are using tools and expertise; however, the cybercriminals have the element of surprise on their side, and a host of tactics, techniques, and procedures (TTPs) that have evolved. These external threat actors have now been further emboldened in the era of AI by readily available tools like ChatGPT. With the potential of an attack leading to a breach within minutes, CISOs now are looking to prepare all systems and assets for cyber resilience and rapid response when needed. With tools and capabilities to validate security continuously, including penetration testing as a service, DevSecOps teams can remediate critical vulnerabilities fast thanks to easy access to tactical support for the teams that need it the most. This gives the SOC and DevOps teams tools that remove false po…
New AI Tool 'FraudGPT' Emerges, Tailored for Sophisticated Attacks

Jul 26, 2023 Cyber Crime / Artificial Intelligence
Following in the footsteps of WormGPT, threat actors are advertising yet another cybercrime generative artificial intelligence (AI) tool dubbed FraudGPT on various dark web marketplaces and Telegram channels. "This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc.," Netenrich security researcher Rakesh Krishnan said in a report published Tuesday. The cybersecurity firm said the offering has been circulating since at least July 22, 2023, for a subscription cost of $200 a month (or $1,000 for six months and $1,700 for a year). "If your [sic] looking for a Chat GPT alternative designed to provide a wide range of exclusive tools, features, and capabilities tailored to anyone's individuals with no boundaries then look no further!," claims the actor, who goes by the online alias CanadianKingpin. The author also states that the tool could be used to write malicious code, c…
Go Beyond the Headlines for Deeper Dives into the Cybercriminal Underground

Jul 18, 2023 Cybersecurity / Cyber Attacks
Discover stories about threat actors' latest tactics, techniques, and procedures from Cybersixgill's threat experts each month. Each story brings you details on emerging underground threats, the threat actors involved, and how you can take action to mitigate risks. Learn about the top vulnerabilities and review the latest ransomware and malware trends from the deep and dark web.

Stolen ChatGPT credentials flood dark web markets

Over the past year, 100,000 stolen credentials for ChatGPT were advertised on underground sites, being sold for as little as $5 on dark web marketplaces in addition to being offered for free. Stolen ChatGPT credentials include usernames, passwords, and other personal information associated with accounts. This is problematic because ChatGPT accounts may store sensitive information from queries, including confidential data and intellectual property. Specifically, companies increasingly incorporate ChatGPT into daily workflows, which means employees may disclose…
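For defenders checking whether credentials circulating in stealer logs are already known-compromised, one established pattern is the k-anonymity range query used by Have I Been Pwned's Pwned Passwords API: only the first five hex characters of the password's SHA-1 digest ever leave your machine. A sketch (the live lookup requires network access):

```python
import hashlib
from urllib.request import urlopen

def sha1_prefix_suffix(password: str) -> tuple:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and 35-char suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def times_pwned(password: str) -> int:
    """Query the Pwned Passwords range API; only the 5-char prefix is sent."""
    prefix, suffix = sha1_prefix_suffix(password)
    with urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0  # suffix not in the returned range: not a known-breached password
```

Because the server only ever sees the prefix, the check can be run against employee credentials without disclosing the passwords themselves.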