
Large Language Models | Breaking Cybersecurity News | The Hacker News

Category — Large Language Models
From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Mar 19, 2024 Generative AI / Incident Response
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News. The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which threat actors are already experimenting with to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets. The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK that's associated with the APT28 hacking group, alongside its YARA rules, asking it to modify the source code to sidestep detection such that the original functionality remained intact and the gene...
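The weakness being exploited here is that string-based YARA rules match literal byte sequences. A minimal sketch (not Recorded Future's actual tooling, and using a hypothetical signature string) of why trivially rewriting the strings in a variant's source defeats such a rule:

```python
# Illustration only: why string-based signatures are brittle.
# A detector that matches a hard-coded byte string (the way a simple
# string-based YARA condition does) misses a variant in which the
# matched string has merely been renamed or split up.

SIGNATURE = b"StealBrowserPasswords"  # hypothetical signature string

def string_rule_matches(sample: bytes) -> bool:
    """Mimics a one-string YARA condition: `$s` appears in the sample."""
    return SIGNATURE in sample

original = b"... StealBrowserPasswords ..."   # original variant: detected
rewritten = b"... GrabBrowserCreds ..."       # LLM-augmented variant: missed

print(string_rule_matches(original))   # True
print(string_rule_matches(rewritten))  # False: same behavior, new strings
```

The functionality of the hypothetical malware is unchanged; only the byte patterns the rule keys on have moved, which is exactly the kind of surface-level augmentation the report describes.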
Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

Mar 05, 2024 Malware / Artificial Intelligence
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. "The number of infected devices decreased slightly in mid- and late summer but grew significantly between August and September," the Singapore-headquartered cybersecurity company said in its Hi-Tech Crime Trends 2023/2024 report published last week. Between June and October 2023, more than 130,000 unique hosts with access to OpenAI ChatGPT were infiltrated, a 36% increase over what was observed during the first five months of 2023. The breakdown by the top three stealer families is below:

- LummaC2 - 70,484 hosts
- Raccoon - 22,468 hosts
- RedLine - 15,970 hosts

"The sharp increase in the number of ChatGPT credentials for sale is due to the overal...
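The figures above can be sanity-checked with a quick back-of-the-envelope calculation (ours, not part of the Group-IB report): the three named stealer families account for roughly 84% of the 130,000+ infiltrated hosts.

```python
# Back-of-the-envelope check on the Group-IB host counts quoted above.
hosts = {"LummaC2": 70_484, "Raccoon": 22_468, "RedLine": 15_970}

top_three_total = sum(hosts.values())
share_of_130k = top_three_total / 130_000  # vs. the 130,000+ unique hosts

print(top_three_total)         # 108922
print(f"{share_of_130k:.0%}")  # 84%
```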
How AI Is Transforming IAM and Identity Security

Nov 15, 2024 Machine Learning / Identity Security
In recent years, artificial intelligence (AI) has begun revolutionizing Identity Access Management (IAM), reshaping how cybersecurity is approached in this crucial field. Leveraging AI in IAM is about tapping into its analytical capabilities to monitor access patterns and identify anomalies that could signal a potential security breach. The focus has expanded beyond merely managing human identities: now, autonomous systems, APIs, and connected devices also fall within the realm of AI-driven IAM, creating a dynamic security ecosystem that adapts and evolves in response to sophisticated cyber threats.

The Role of AI and Machine Learning in IAM

AI and machine learning (ML) are creating a more robust, proactive IAM system that continuously learns from the environment to enhance security. Let's explore how AI impacts key IAM components:

Intelligent Monitoring and Anomaly Detection

AI enables continuous monitoring of both human and non-human identities, including APIs, service acc...
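The anomaly detection described above can be sketched in miniature with a simple z-score over an identity's historical access counts. This is a toy baseline model under our own assumptions (real AI-driven IAM products use far richer behavioral features), but it captures the core idea of flagging deviations from a learned norm:

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], latest: int, threshold: float = 3.0) -> bool:
    """Flag an access count that deviates more than `threshold`
    standard deviations from an identity's historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu  # no variance observed: any change is suspect
    return abs(latest - mu) / sigma > threshold

# Hypothetical baseline: a service account that normally makes ~10 calls/hour.
baseline = [9, 11, 10, 12, 8, 10, 11, 9]

print(is_anomalous(baseline, 10))   # False: within the normal range
print(is_anomalous(baseline, 250))  # True: possible compromise or abuse
```

The same check applies equally to a human user's logins or a non-human identity's API calls, which is why monitoring both through one model is attractive.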
Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks

Feb 14, 2024 Artificial Intelligence / Cyber Attack
Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations. The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted efforts made by five state-affiliated actors that used its AI services to perform malicious cyber activities by terminating their assets and accounts. "Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets' jobs, professional networks, and other relationships," Microsoft said in a report shared with The Hacker News. While no significant or novel attacks employing the LLMs have been detected to date, adversarial exploration of AI technologies has transcended various phases of t...
Creating, Managing and Securing Non-Human Identities

Permiso | Cybersecurity / Identity Security
A new class of identities has emerged alongside traditional human users: non-human identities (NHIs). Permiso Security's new eBook details everything you need to know about managing and securing non-human identities, and strategies to unify identity security without compromising agility.
How to Prevent ChatGPT From Stealing Your Content & Traffic

Aug 30, 2023 Artificial Intelligence / Cyber Threat
ChatGPT and similar large language models (LLMs) have added further complexity to the ever-growing online threat landscape. Cybercriminals no longer need advanced coding skills to execute fraud and other damaging attacks against online businesses and customers, thanks to bots-as-a-service, residential proxies, CAPTCHA farms, and other easily accessible tools. Now, the latest technology damaging businesses' bottom line is ChatGPT. Not only have ChatGPT, OpenAI, and other LLMs raised ethical issues by training their models on scraped data from across the internet; they are also negatively impacting enterprises' web traffic, which can be extremely damaging to business.

3 Risks Presented by LLMs, ChatGPT, & ChatGPT Plugins

Among the threats ChatGPT and ChatGPT plugins can pose to online businesses, there are three key risks we will focus on: Content theft (or republishing data without permission from the original source) can hurt the authority...
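One concrete mitigation for the content-theft risk is to ask OpenAI's crawler to skip your site: OpenAI publishes a `GPTBot` user agent that honors robots.txt directives. A minimal robots.txt entry blocking it site-wide looks like this (note this is a voluntary convention, not enforcement, and other scrapers may ignore it):

```
User-agent: GPTBot
Disallow: /
```

Narrower `Disallow` paths can be used instead of `/` if only part of a site should be off-limits to training crawls.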