ChatGPT | Breaking Cybersecurity News | The Hacker News

Generative-AI apps & ChatGPT: Potential risks and mitigation strategies

Jun 22, 2023
Losing sleep over Generative-AI apps? You're not alone, and you're not wrong. According to the Astrix Security Research Group, mid-size organizations already have, on average, 54 Generative-AI integrations to core systems like Slack, GitHub, and Google Workspace, and this number is only expected to grow. Continue reading to understand the potential risks and how to minimize them. Book a Generative-AI Discovery session with Astrix Security's experts (free, no strings attached, agentless, and zero friction).

"Hey ChatGPT, review and optimize our source code."
"Hey Jasper.ai, generate a summary email of all our net new customers from this quarter."
"Hey Otter.ai, summarize our Zoom board meeting."

In this era of financial turmoil, businesses and employees alike are constantly looking for tools to automate work processes and increase efficiency and productivity by connecting third-party apps to core business systems such as Google Workspace, Slack, and GitHub.
Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces

Jun 20, 2023 Endpoint Security / Password
Over 101,100 compromised OpenAI ChatGPT account credentials have found their way onto illicit dark web marketplaces between June 2022 and May 2023, with India alone accounting for 12,632 stolen credentials. The credentials were discovered within information stealer logs made available for sale on the cybercrime underground, Group-IB said in a report shared with The Hacker News. "The number of available logs containing compromised ChatGPT accounts reached a peak of 26,802 in May 2023," the Singapore-headquartered company said. "The Asia-Pacific region has experienced the highest concentration of ChatGPT credentials being offered for sale over the past year." Other countries with the highest numbers of compromised ChatGPT credentials include Pakistan, Brazil, Vietnam, Egypt, the U.S., France, Morocco, Indonesia, and Bangladesh. Further analysis has revealed that the majority of logs containing ChatGPT accounts were breached by the notorious Raccoon info stealer.
New Research: 6% of Employees Paste Sensitive Data into GenAI Tools Such as ChatGPT

Jun 15, 2023 Browser Security / Data Security
The revolutionary technology of GenAI tools, such as ChatGPT, has brought significant risks to organizations' sensitive data. But what do we really know about this risk? New research by browser security company LayerX sheds light on the scope and nature of these risks. The report, titled "Revealing the True GenAI Data Exposure Risk," provides crucial insights for data protection stakeholders and empowers them to take proactive measures. The Numbers Behind the ChatGPT Risk: By analyzing the usage of ChatGPT and other generative AI apps among 10,000 employees, the report has identified key areas of concern. One alarming finding reveals that 6% of employees have pasted sensitive data into GenAI tools, with 4% engaging in this risky behavior on a weekly basis. This recurring action poses a severe threat of data exfiltration for organizations. The report also addresses vital risk assessment questions, including the actual scope of GenAI usage across enterprise workforces.
SHQ Response Platform and Risk Centre to Enable Management and Analysts Alike

May 13, 2024 Threat Detection / SOC / SIEM
In the last decade, there has been a growing disconnect between front-line analysts and senior management in IT and cybersecurity. Well-documented challenges facing modern analysts revolve around a high volume of alerts, false positives, poor visibility of technical environments, and analysts spending too much time on manual tasks. The Impact of Alert Fatigue and False Positives: Analysts are overwhelmed with alerts. The knock-on effect is that fatigued analysts risk missing key details in incidents, and often conduct time-consuming triage tasks manually only to end up copying and pasting a generic closing comment into a false positive alert. There will likely always be false positives, and many would argue that a false positive is better than a false negative. But for proactive action to be taken, we must move closer to the heart of an incident. That requires diving into how analysts conduct the triage and investigation process.
Searching for AI Tools? Watch Out for Rogue Sites Distributing RedLine Malware

May 19, 2023 Artificial Intelligence / Cyber Threat
Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware. "Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire said in an analysis. "This vacuum has been exploited by threat actors looking to drive AI app-seekers to imposter web pages promoting fake apps." BATLOADER is a loader malware that's propagated via drive-by downloads, where users searching for certain keywords on search engines are shown bogus ads that, when clicked, redirect them to rogue landing pages hosting malware. The installer file, per eSentire, is rigged with an executable (ChatGPT.exe or midjourney.exe) and a PowerShell script (Chat.ps1 or Chat-Ready.ps1) that downloads and loads RedLine Stealer.
Meta Takes Down Malware Campaign That Used ChatGPT as a Lure to Steal Accounts

May 04, 2023 Online Security / ChatGPT
Meta said it has blocked more than 1,000 malicious URLs from being shared across its services that were found to leverage OpenAI's ChatGPT as a lure to propagate about 10 malware families since March 2023. The development comes against the backdrop of fake ChatGPT web browser extensions being increasingly used to steal users' Facebook account credentials with an aim to run unauthorized ads from hijacked business accounts. "Threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-based tools," Meta said. "They would then promote these malicious extensions on social media and through sponsored search results to trick people into downloading malware." The social media giant said it has blocked several iterations of a multi-pronged malware campaign dubbed Ducktail over the years, adding that it issued a cease and desist letter to individuals behind the operation, who are located in Vietnam.
ChatGPT is Back in Italy After Addressing Data Privacy Concerns

Apr 29, 2023 Data Safety / Privacy / AI
OpenAI, the company behind ChatGPT, has officially made a return to Italy after meeting the data protection authority's demands ahead of the April 30, 2023, deadline. The development was first reported by the Associated Press. OpenAI's CEO, Sam Altman, tweeted, "we're excited ChatGPT is available in [Italy] again!" The reinstatement follows Garante's decision to temporarily block access to the popular AI chatbot service in Italy on March 31, 2023, over concerns that its practices violate data protection laws in the region. Generative AI systems like ChatGPT and Google Bard primarily rely on huge amounts of information freely available on the internet, as well as the data their users provide over the course of their interactions. OpenAI, which published a new FAQ, said it filters and removes information such as hate speech, adult content, sites that primarily aggregate personal information, and spam.
ChatGPT's Data Protection Blind Spots and How Security Teams Can Solve Them

Apr 20, 2023 Artificial Intelligence / Data Safety
In the short time since their inception, ChatGPT and other generative AI platforms have rightfully gained the reputation of ultimate productivity boosters. However, the very same technology that enables rapid production of high-quality text on demand can, at the same time, expose sensitive corporate data. A recent incident, in which Samsung software engineers pasted proprietary code into ChatGPT, clearly demonstrates that this tool can easily become a potential data leakage channel. This vulnerability introduces a demanding challenge for security stakeholders, since none of the existing data protection tools can ensure no sensitive data is exposed to ChatGPT. In this article, we'll explore this security challenge in detail and show how browser security solutions can address it, all while enabling organizations to fully realize ChatGPT's productivity potential without having to compromise on data security.
ChatGPT Security: OpenAI's Bug Bounty Program Offers Up to $20,000 Prizes

Apr 13, 2023 Software Security / Bug Hunting
OpenAI, the company behind the massively popular ChatGPT AI chatbot, has launched a bug bounty program in an attempt to ensure its systems are "safe and secure." To that end, it has partnered with the crowdsourced security platform Bugcrowd for independent researchers to report vulnerabilities discovered in its product in exchange for rewards ranging from "$200 for low-severity findings to up to $20,000 for exceptional discoveries." It's worth noting that the program does not cover model safety or hallucination issues, wherein the chatbot is prompted to generate malicious code or other faulty outputs. The company noted that "addressing these issues often involves substantial research and a broader approach." Other prohibited categories are denial-of-service (DoS) attacks, brute-forcing OpenAI APIs, and demonstrations that aim to destroy data or gain unauthorized access to sensitive information beyond what's necessary to highlight the problem.
Italian Watchdog Bans OpenAI's ChatGPT Over Data Protection Concerns

Apr 03, 2023 Artificial Intelligence / Data Safety
The Italian data protection watchdog, Garante per la Protezione dei Dati Personali (aka Garante), has imposed a temporary ban on OpenAI's ChatGPT service in the country, citing data protection concerns. To that end, it has ordered the company to stop processing users' data with immediate effect, stating it intends to investigate whether the company is unlawfully processing such data in violation of the E.U. General Data Protection Regulation (GDPR). "No information is provided to users and data subjects whose data are collected by Open AI," the Garante noted. "More importantly, there appears to be no legal basis underpinning the massive collection and processing of personal data in order to 'train' the algorithms on which the platform relies." ChatGPT, which is estimated to have reached over 100 million monthly active users since its release late last year, has not disclosed what it used to train its latest large language model.
OpenAI Reveals Redis Bug Behind ChatGPT User Data Exposure Incident

Mar 25, 2023 Artificial Intelligence / Data Security
OpenAI on Friday disclosed that a bug in the Redis open source library was responsible for the exposure of other users' personal information and chat titles in the upstart's ChatGPT service earlier this week. The glitch, which came to light on March 20, 2023, enabled certain users to view brief descriptions of other users' conversations from the chat history sidebar, prompting the company to temporarily shut down the chatbot. "It's also possible that the first message of a newly-created conversation was visible in someone else's chat history if both users were active around the same time," the company said. The bug, it further added, originated in the redis-py library, leading to a scenario where canceled requests could cause connections to be corrupted and return unexpected data from the database cache, in this case, information belonging to an unrelated user.
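The failure mode described above, a canceled request leaving a stale reply on a shared connection that the next caller then reads, can be illustrated with a toy asyncio model. This is a hedged sketch, not redis-py code: the class, queue-based "connection," and request names are all hypothetical, standing in for a pooled connection whose replies arrive in strict FIFO order.

```python
import asyncio

class SharedConnection:
    """Toy model of a pooled cache connection: replies come back
    in strict FIFO order, like responses on a Redis pipeline."""
    def __init__(self):
        self.replies = asyncio.Queue()

    async def send(self, request):
        async def serve():
            await asyncio.sleep(0.01)  # simulated server latency
            await self.replies.put(f"reply-for:{request}")
        asyncio.create_task(serve())   # the "server" answers later

    async def read(self):
        return await self.replies.get()

async def fetch(conn, request):
    await conn.send(request)
    return await conn.read()

async def main():
    conn = SharedConnection()

    # User A's request is canceled after it was sent but before its
    # reply was read -- the reply stays queued on the connection.
    task_a = asyncio.create_task(fetch(conn, "user-A:chat-titles"))
    await asyncio.sleep(0.005)
    task_a.cancel()

    # User B reuses the same connection and receives A's stale reply.
    return await fetch(conn, "user-B:chat-titles")

leaked = asyncio.run(main())
print(leaked)  # -> reply-for:user-A:chat-titles
```

Because the connection was not discarded after the cancellation, the request/reply pairing silently shifts by one, which is why the fix in cases like this is typically to close or drain a connection whose in-flight request was interrupted rather than return it to the pool.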
Fake ChatGPT Chrome Browser Extension Caught Hijacking Facebook Accounts

Mar 23, 2023 Browser Security / Artificial Intelligence
Google has stepped in to remove a bogus Chrome browser extension from the official Web Store that masqueraded as OpenAI's ChatGPT service to harvest Facebook session cookies and hijack accounts. The "ChatGPT For Google" extension, a trojanized version of a legitimate open source browser add-on, attracted over 9,000 installations since March 14, 2023, prior to its removal. It was originally uploaded to the Chrome Web Store on February 14, 2023. According to Guardio Labs researcher Nati Tal, the extension was propagated through malicious sponsored Google search results designed to redirect unsuspecting users searching for "Chat GPT-4" to fraudulent landing pages that point to the fake add-on. Installing the extension adds the promised functionality, i.e., enhancing search engines with ChatGPT, but it also stealthily activates the ability to capture Facebook-related cookies and exfiltrate them to a remote server in an encrypted manner.
Fake ChatGPT Chrome Extension Hijacking Facebook Accounts for Malicious Advertising

Mar 13, 2023 Browser Security / Artificial Intelligence
A fake ChatGPT-branded Chrome browser extension has been found to come with capabilities to hijack Facebook accounts and create rogue admin accounts, highlighting one of the different methods cybercriminals are using to distribute malware. "By hijacking high-profile Facebook business accounts, the threat actor creates an elite army of Facebook bots and a malicious paid media apparatus," Guardio Labs researcher Nati Tal said in a technical report. "This allows it to push Facebook paid ads at the expense of its victims in a self-propagating worm-like manner." The "Quick access to Chat GPT" extension, which is said to have attracted 2,000 installations per day since March 3, 2023, has since been pulled by Google from the Chrome Web Store as of March 9, 2023. The browser add-on is promoted through Facebook-sponsored posts, and while it offers the ability to connect to the ChatGPT service, it's also engineered to surreptitiously harvest cookies.