
artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
AI-Driven Ransomware FunkSec Targets 85 Victims Using Double Extortion Tactics

Jan 10, 2025 Artificial Intelligence / Cybercrime
Cybersecurity researchers have shed light on a nascent artificial intelligence (AI) assisted ransomware family called FunkSec that sprang forth in late 2024 and has claimed more than 85 victims to date. "The group uses double extortion tactics, combining data theft with encryption to pressure victims into paying ransoms," Check Point Research said in a new report shared with The Hacker News. "Notably, FunkSec demanded unusually low ransoms, sometimes as little as $10,000, and sold stolen data to third parties at reduced prices." FunkSec launched its data leak site (DLS) in December 2024 to "centralize" its ransomware operations, highlighting breach announcements, a custom tool to conduct distributed denial-of-service (DDoS) attacks, and bespoke ransomware offered as part of a ransomware-as-a-service (RaaS) model. A majority of the victims are located in the U.S., India, Italy, Brazil, Israel, Spain, and Mongolia. Check Point's analysis of the group...
New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%

Jan 03, 2025 Machine Learning / Vulnerability
Cybersecurity researchers have shed light on a new jailbreak technique that could be used to get past a large language model's (LLM) safety guardrails and produce potentially harmful or malicious responses. The multi-turn (aka many-shot) attack strategy has been codenamed Bad Likert Judge by Palo Alto Networks Unit 42 researchers Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and Danny Tsechansky. "The technique asks the target LLM to act as a judge scoring the harmfulness of a given response using the Likert scale, a rating scale measuring a respondent's agreement or disagreement with a statement," the Unit 42 team said. "It then asks the LLM to generate responses that contain examples that align with the scales. The example that has the highest Likert scale can potentially contain the harmful content." The explosion in popularity of artificial intelligence in recent years has also led to a new class of security exploits called prompt in...
Product Walkthrough: How Reco Discovers Shadow AI in SaaS

Jan 09, 2025 AI Security / SaaS Security
As SaaS providers race to integrate AI into their product offerings to stay competitive and relevant, a new challenge has emerged in the world of AI: shadow AI. Shadow AI refers to the unauthorized use of AI tools and copilots at organizations. For example, a developer using ChatGPT to assist with writing code, a salesperson downloading an AI-powered meeting transcription tool, or a customer support person using Agentic AI to automate tasks – without going through the proper channels. When these tools are used without IT or the Security team's knowledge, they often lack sufficient security controls, putting company data at risk. Shadow AI Detection Challenges Because shadow AI tools often embed themselves in approved business applications via AI assistants, copilots, and agents, they are even trickier to discover than traditional shadow IT. While traditional shadow apps can be identified through network monitoring methodologies that scan for unauthorized connections based on...
New U.S. DoJ Rule Halts Bulk Data Transfers to Adversarial Nations to Protect Privacy

Dec 31, 2024 Data Security / Privacy
The U.S. Department of Justice (DoJ) has issued a final rule carrying out Executive Order (EO) 14117, which prevents mass transfer of citizens' personal data to countries of concern such as China (including Hong Kong and Macau), Cuba, Iran, North Korea, Russia, and Venezuela. "This final rule is a crucial step forward in addressing the extraordinary national security threat posed by our adversaries exploiting Americans' most sensitive personal data," said Assistant Attorney General Matthew G. Olsen of the Justice Department's National Security Division. "This powerful new national-security program is designed to ensure that Americans' personal data is no longer permitted to be sold to hostile foreign powers, whether through outright purchase or other means of commercial access." Back in February 2024, U.S. President Joe Biden signed an executive order to address the national security risk posed by unauthorized access to Americans' sensitiv...
Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Dec 09, 2024 Artificial Intelligence / Vulnerability
Details have emerged about a now-patched security flaw in the DeepSeek artificial intelligence (AI) chatbot that, if successfully exploited, could permit a bad actor to take control of a victim's account by means of a prompt injection attack. Security researcher Johann Rehberger, who has chronicled many a prompt injection attack targeting various AI tools, found that providing the input "Print the xss cheat sheet in a bullet list. just payloads" in the DeepSeek chat triggered the execution of JavaScript code as part of the generated response – a classic case of cross-site scripting (XSS). XSS attacks can have serious consequences as they lead to the execution of unauthorized code in the context of the victim's web browser. An attacker could take advantage of such flaws to hijack a user's session and gain access to cookies and other data associated with the chat.deepseek[.]com domain, thereby leading to an account takeover. "After some experimenting,...
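The root cause pattern described here is a chat UI rendering model output as live HTML. Below is a minimal, hypothetical sketch of the vulnerable pattern and the standard mitigation; the function names are illustrative, not DeepSeek's actual code:

```python
# Hypothetical sketch of the XSS pattern described above -- not DeepSeek's code.
# If a chat UI inserts raw model output into the page, a prompt-injected
# payload like "<img src=x onerror=...>" executes in the victim's browser.
import html

def render_chat_message_unsafe(model_output: str) -> str:
    # Vulnerable: model output is trusted as HTML.
    return f"<div class='message'>{model_output}</div>"

def render_chat_message_safe(model_output: str) -> str:
    # Mitigation: escape the output so payloads render as inert text.
    return f"<div class='message'>{html.escape(model_output)}</div>"

payload = '<img src=x onerror="alert(document.cookie)">'
print(render_chat_message_unsafe(payload))  # script-capable markup
print(render_chat_message_safe(payload))    # harmless &lt;img ...&gt; text
```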
Ultralytics AI Library Compromised: Cryptocurrency Miner Found in PyPI Versions

Dec 07, 2024 Supply Chain Attack / Cryptocurrency
In yet another software supply chain attack, it has come to light that two versions of a popular Python artificial intelligence (AI) library named ultralytics were compromised to deliver a cryptocurrency miner. The versions, 8.3.41 and 8.3.42, have since been removed from the Python Package Index (PyPI) repository. A subsequently released version has introduced a security fix that "ensures secure publication workflow for the Ultralytics package." The project maintainer, Glenn Jocher, confirmed on GitHub that the two versions were infected by malicious code injection in the PyPI deployment workflow after reports emerged that installing the library led to a drastic spike in CPU usage, a telltale sign of cryptocurrency mining. The most notable aspect of the attack is that bad actors managed to compromise the build environment related to the project to insert unauthorized modifications after the completion of the code review step, thus leading to a discrepancy in the so...
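For teams that installed the library during the exposure window, checking the installed version against the two compromised releases named above is a quick first step. A minimal sketch; the bad-version list comes from the report, while the helper itself is illustrative:

```python
# Illustrative check for the compromised ultralytics releases named above.
from importlib.metadata import version, PackageNotFoundError

COMPROMISED = {"8.3.41", "8.3.42"}  # versions pulled from PyPI per the report

def check_ultralytics() -> None:
    try:
        installed = version("ultralytics")
    except PackageNotFoundError:
        print("ultralytics is not installed")
        return
    if installed in COMPROMISED:
        print(f"WARNING: ultralytics {installed} is a known-compromised build")
    else:
        print(f"ultralytics {installed} is not one of the flagged versions")

if __name__ == "__main__":
    check_ultralytics()
```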
Hackers Using Fake Video Conferencing Apps to Steal Web3 Professionals' Data

Dec 07, 2024 Malware / Web3 Security
Cybersecurity researchers have warned of a new scam campaign that leverages fake video conferencing apps to deliver an information stealer called Realst targeting people working in Web3 under the guise of fake business meetings. "The threat actors behind the malware have set up fake companies using AI to make them increase legitimacy," Cado Security researcher Tara Gould said. "The company reaches out to targets to set up a video call, prompting the user to download the meeting application from the website, which is Realst infostealer." The activity has been codenamed Meeten by the security company, owing to the use of names such as Clusee, Cuesee, Meeten, Meetone, and Meetio for the bogus sites. The attacks entail approaching prospective targets on Telegram to discuss a potential investment opportunity, urging them to join a video call hosted on one of the dubious platforms. Users who end up on the site are prompted to download a Windows or macOS version dep...
Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks

Dec 06, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed multiple security flaws impacting open-source machine learning (ML) tools and frameworks such as MLflow, H2O, PyTorch, and MLeap that could pave the way for code execution. The vulnerabilities, discovered by JFrog, are part of a broader collection of 22 security shortcomings the supply chain security company first disclosed last month. Unlike the first set, which involved server-side flaws, the newly detailed ones allow exploitation of ML clients and reside in libraries that handle safe model formats like Safetensors. "Hijacking an ML client in an organization can allow the attackers to perform extensive lateral movement within the organization," the company said. "An ML client is very likely to have access to important ML services such as ML Model Registries or MLOps Pipelines." This, in turn, could expose sensitive information such as model registry credentials, effectively permitting a malicious actor to back...
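As background on why model files are a client-side attack surface in the first place: PyTorch's default checkpoint format is a pickle, and unpickling untrusted data can execute arbitrary code, which is exactly what formats like Safetensors were designed to rule out (though, as the research above shows, even the libraries handling safe formats can carry bugs). A minimal sketch of the two loading paths, with hypothetical file names:

```python
# Sketch: why "safe model formats" matter on the client side.
# File names are hypothetical placeholders.
import torch
from safetensors.torch import load_file

# Risky with untrusted files: torch.load deserializes a pickle, and a
# malicious checkpoint can execute arbitrary code during unpickling.
# weights_only=True restricts what gets deserialized and reduces the risk.
state = torch.load("untrusted_model.pt", weights_only=True)

# Safer by design: safetensors stores raw tensors with no code-execution
# path during deserialization.
state = load_file("untrusted_model.safetensors")
```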
A Guide to Securing AI App Development: Join This Cybersecurity Webinar

Dec 02, 2024 AI Security / Data Protection
Artificial Intelligence (AI) is no longer a far-off dream—it's here, changing the way we live. From ordering coffee to diagnosing diseases, it's everywhere. But while you're creating the next big AI-powered app, hackers are already figuring out ways to break it. Every AI app is an opportunity—and a potential risk. The stakes are huge: data leaks, downtime, and even safety threats if security isn't built in. With AI adoption moving fast, securing your projects is no longer optional—it's a must. Join Liqian Lim, Senior Product Marketing Manager at Snyk, for an exclusive webinar that's all about securing the future of AI development. Titled "Building Tomorrow, Securely: Securing the Use of AI in App Development," this session will arm you with the knowledge and tools to tackle the challenges of AI-powered innovation. What You'll Learn: Get AI-Ready: How to make your AI projects secure from the start. Spot Hidden Risks: Uncover threats you might not see coming. Understand the Ma...
AI-Powered Fake News Campaign Targets Western Support for Ukraine and U.S. Elections

Nov 29, 2024 Disinformation / Artificial Intelligence
A Moscow-based company sanctioned by the U.S. earlier this year has been linked to yet another influence operation, active since at least December 2023, designed to turn public opinion against Ukraine and erode Western support. The covert campaign undertaken by Social Design Agency (SDA) leverages videos enhanced using artificial intelligence (AI) and bogus websites impersonating reputable news sources to target audiences across Ukraine, Europe, and the U.S. It has been dubbed Operation Undercut by Recorded Future's Insikt Group. "This operation, running in tandem with other campaigns like Doppelganger, is designed to discredit Ukraine's leadership, question the effectiveness of Western aid, and stir socio-political tensions," the cybersecurity company said. "The campaign also seeks to shape narratives around the 2024 U.S. elections and geopolitical conflicts, such as the Israel-Gaza situation, to deepen divisions." Social Design Agency has been previously a...
North Korean Hackers Steal $10M with AI-Driven Scams and Malware on LinkedIn

Nov 23, 2024 Artificial Intelligence / Cryptocurrency
The North Korea-linked threat actor known as Sapphire Sleet is estimated to have stolen more than $10 million worth of cryptocurrency as part of social engineering campaigns orchestrated over a six-month period. These findings come from Microsoft, which said that multiple threat activity clusters with ties to the country have been observed creating fake profiles on LinkedIn, posing as both recruiters and job seekers to generate illicit revenue for the sanction-hit nation. Sapphire Sleet, which has been active since at least 2020, overlaps with hacking groups tracked as APT38 and BlueNoroff. In November 2023, the tech giant revealed that the threat actor had established infrastructure that impersonated skills assessment portals to carry out its social engineering campaigns. One of the main methods adopted by the group for over a year is to pose as a venture capitalist, deceptively claiming an interest in a target user's company in order to set up an online meeting. Targ...
PyPI Attack: ChatGPT, Claude Impersonators Deliver JarkaStealer via Python Libraries

Nov 22, 2024 Artificial Intelligence / Malware
Cybersecurity researchers have discovered two malicious packages uploaded to the Python Package Index (PyPI) repository that impersonated popular artificial intelligence (AI) models like OpenAI ChatGPT and Anthropic Claude to deliver an information stealer called JarkaStealer. The packages, named gptplus and claudeai-eng, were uploaded by a user named "Xeroline" in November 2023, attracting 1,748 and 1,826 downloads, respectively. Both libraries are no longer available for download from PyPI. "The malicious packages were uploaded to the repository by one author and, in fact, differed from each other only in name and description," Kaspersky said in a post. The packages purported to offer a way to access the GPT-4 Turbo API and Claude AI API, but harbored malicious code that initiated the deployment of the malware upon installation. Specifically, the "__init__.py" file in these packages contained Base64-encoded data that incorporated code to download ...
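The delivery mechanism described, a Base64 blob inside "__init__.py" that decodes into downloader code, is common enough that a crude static check can surface candidates for manual review. An illustrative heuristic sketch, not Kaspersky's tooling:

```python
# Crude heuristic: flag suspiciously long Base64-looking literals in a
# package's __init__.py, the delivery pattern described above.
# Illustrative only; real malware scanning needs far more than this.
import base64
import re
import sys

B64_LITERAL = re.compile(r"['\"]([A-Za-z0-9+/]{200,}={0,2})['\"]")

def flag_base64_blobs(path: str) -> None:
    source = open(path, encoding="utf-8", errors="replace").read()
    for match in B64_LITERAL.finditer(source):
        blob = match.group(1)
        try:
            decoded = base64.b64decode(blob)
        except ValueError:
            continue  # not valid Base64 after all
        print(f"{path}: {len(blob)}-char Base64 literal "
              f"(decodes to {len(decoded)} bytes) -- review manually")

if __name__ == "__main__":
    flag_base64_blobs(sys.argv[1])  # e.g. path/to/package/__init__.py
```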
Google's AI-Powered OSS-Fuzz Tool Finds 26 Vulnerabilities in Open-Source Projects

Nov 21, 2024 Artificial Intelligence / Software Security
Google has revealed that its AI-powered fuzzing tool, OSS-Fuzz, has been used to help identify 26 vulnerabilities in various open-source code repositories, including a medium-severity flaw in the OpenSSL cryptographic library. "These particular vulnerabilities represent a milestone for automated vulnerability finding: each was found with AI, using AI-generated and enhanced fuzz targets," Google's open-source security team said in a blog post shared with The Hacker News. The OpenSSL vulnerability in question is CVE-2024-9143 (CVSS score: 4.3), an out-of-bounds memory write bug that can result in an application crash or remote code execution. The issue has been addressed in OpenSSL versions 3.3.3, 3.2.4, 3.1.8, 3.0.16, 1.1.1zb, and 1.0.2zl. Google, which added the ability to leverage large language models (LLMs) to improve fuzzing coverage in OSS-Fuzz in August 2023, said the vulnerability has likely been present in the codebase for two decades and that it "wo...
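For readers unfamiliar with the unit of work OSS-Fuzz asks an LLM to generate and enhance, a "fuzz target" is a small entry point that feeds fuzzer-mutated bytes into the code under test. OSS-Fuzz targets for C libraries like OpenSSL are written in C/C++; the sketch below is an illustrative Python analogue using Atheris, the Python fuzzer OSS-Fuzz also supports:

```python
# Minimal fuzz target sketch: the fuzzer calls TestOneInput repeatedly
# with mutated byte strings, hunting for crashes and memory-safety issues.
import sys
import atheris

with atheris.instrument_imports():
    import json  # stand-in for the library under test

def TestOneInput(data: bytes) -> None:
    try:
        json.loads(data)   # a crash or hang here is a finding
    except ValueError:     # expected parse failures are not bugs
        pass

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```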
NHIs Are the Future of Cybersecurity: Meet NHIDR

Nov 20, 2024 Identity Security / Cyber Defense
The frequency and sophistication of modern cyberattacks are surging, making it increasingly challenging for organizations to protect sensitive data and critical infrastructure. When attackers compromise a non-human identity (NHI), they can swiftly exploit it to move laterally across systems, identifying vulnerabilities and compromising additional NHIs in minutes. While organizations often take months to detect and contain such breaches, rapid detection and response can stop an attack in its tracks. The Rise of Non-Human Identities in Cybersecurity By 2025, non-human identities are set to become the primary attack vector in cybersecurity. As businesses increasingly automate processes and adopt AI and IoT technologies, the number of NHIs grows exponentially. While these systems drive efficiency, they also create an expanded attack surface for cybercriminals. NHIs differ fundamentally from human users, making traditional security tools like multi-factor authentication and user behavior...
Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform

Nov 15, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed two security flaws in Google's Vertex machine learning (ML) platform that, if successfully exploited, could allow malicious actors to escalate privileges and exfiltrate models from the cloud. "By exploiting custom job permissions, we were able to escalate our privileges and gain unauthorized access to all data services in the project," Palo Alto Networks Unit 42 researchers Ofir Balassiano and Ofir Shaty said in an analysis published earlier this week. "Deploying a poisoned model in Vertex AI led to the exfiltration of all other fine-tuned models, posing a serious proprietary and sensitive data exfiltration attack risk." Vertex AI is Google's ML platform for training and deploying custom ML models and artificial intelligence (AI) applications at scale. It was first introduced in May 2021. Crucial to leveraging the privilege escalation flaw is a feature called Vertex AI Pipelines, which allows users to automat...
Google Warns of Rising Cloaking Scams, AI-Driven Fraud, and Crypto Schemes

Nov 14, 2024 Artificial Intelligence / Cryptocurrency
Google has revealed that bad actors are leveraging techniques like landing page cloaking to conduct scams by impersonating legitimate sites. "Cloaking is specifically designed to prevent moderation systems and teams from reviewing policy-violating content which enables them to deploy the scam directly to users," Laurie Richardson, VP and Head of Trust and Safety at Google, said. "The landing pages often mimic well-known sites and create a sense of urgency to manipulate users into purchasing counterfeit products or unrealistic products." Cloaking refers to the practice of serving different content to search engines like Google and users with the ultimate goal of manipulating search rankings and deceiving users. The tech giant said it has also observed a cloaking trend wherein users clicking on ads are redirected via tracking templates to scareware sites that claim their devices are compromised with malware and lead them to other phony customer support sites, w...
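Because cloaking keys on who appears to be asking, one naive way to probe for it is to request the same URL with a crawler-style and a browser-style User-Agent and compare the responses. An illustrative sketch; real cloaking detection also varies IP ranges and referrers and uses full browser rendering:

```python
# Naive cloaking probe: compare what a page serves to a crawler-style
# User-Agent versus a browser-style one. Illustrative only.
import hashlib
import urllib.request

def fetch(url: str, user_agent: str) -> bytes:
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

def probe(url: str) -> None:
    as_crawler = fetch(url, "Googlebot/2.1 (+http://www.google.com/bot.html)")
    as_browser = fetch(url, "Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
    if hashlib.sha256(as_crawler).digest() != hashlib.sha256(as_browser).digest():
        print(f"{url}: responses differ by User-Agent -- possible cloaking")
    else:
        print(f"{url}: identical responses for both User-Agents")

# probe("https://example.com/")  # hypothetical target
```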
Google's AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database Engine

Nov 04, 2024 Artificial Intelligence / Vulnerability
Google said it discovered a zero-day vulnerability in the SQLite open-source database engine using its large language model (LLM) assisted framework called Big Sleep (formerly Project Naptime). The tech giant described the development as the "first real-world vulnerability" uncovered using the artificial intelligence (AI) agent. "We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software," the Big Sleep team said in a blog post shared with The Hacker News. The vulnerability in question is a stack buffer underflow in SQLite, which occurs when a piece of software references a memory location prior to the beginning of the memory buffer, thereby resulting in a crash or arbitrary code execution. "This typically occurs when a pointer or its index is decremented to a position before the buffer, when pointer arithmetic results in a position before the beginning of t...
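To make the "position before the beginning of the buffer" failure mode concrete: in C, a decremented index or pointer reads out-of-bounds memory, which is undefined behavior; Python's negative indexing instead silently wraps to the end of a sequence, so the analogous off-by-one hides rather than crashes. A small illustrative sketch of the decrement pattern and an explicit lower-bound guard:

```python
# Illustrative analogy for the underflow pattern described above.
# In C, buf[idx] with idx < 0 reads memory before the buffer (undefined
# behavior); in Python, a negative index silently wraps to the end,
# so the bug hides instead of crashing.

def read_at(buf: list, idx: int) -> int:
    """Bounds-checked read that fails loudly on underflow."""
    if idx < 0:
        raise IndexError(f"index {idx} points before the start of the buffer")
    return buf[idx]

buf = [10, 20, 30]
idx = 0
idx -= 1                      # the buggy decrement
print(buf[idx])               # 30: Python silently wraps to the last element
try:
    read_at(buf, idx)
except IndexError as err:
    print(f"caught: {err}")   # the guard surfaces the underflow
```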
Microsoft Delays Windows Copilot+ Recall Release Over Privacy Concerns

Nov 01, 2024 Data Security / Artificial Intelligence
Microsoft is further delaying the release of its controversial Recall feature for Windows Copilot+ PCs, stating it's taking the time to improve the experience. The development was first reported by The Verge. The artificial intelligence-powered tool was initially slated for a preview release starting in October. "We are committed to delivering a secure and trusted experience with Recall," the company said in an updated statement released Thursday. "To ensure we deliver on these important updates, we're taking additional time to refine the experience before previewing it with Windows Insiders. Originally planned for October, Recall will now be available for preview with Windows Insiders on Copilot+ PCs by December." Microsoft unveiled Recall earlier this May, describing it as a way for users to explore a "visual timeline" of their screens over time and help find things from apps, websites, images, and documents. The search experience was meant...
Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security

Oct 25, 2024 Cloud Security / Artificial Intelligence
Apple has made its Private Cloud Compute (PCC) Virtual Research Environment (VRE) publicly available, allowing the research community to inspect and verify the privacy and security guarantees of its offering. PCC, which Apple unveiled earlier this June, has been marketed as the "most advanced security architecture ever deployed for cloud AI compute at scale." With the new technology, the idea is to offload computationally complex Apple Intelligence requests to the cloud in a manner that doesn't sacrifice user privacy. Apple said it's inviting "all security and privacy researchers — or anyone with interest and a technical curiosity — to learn more about PCC and perform their own independent verification of our claims." To further incentivize research, the iPhone maker said it's expanding the Apple Security Bounty program to include PCC by offering monetary payouts ranging from $50,000 to $1,000,000 for security vulnerabilities identified in it. Th...
Eliminating AI Deepfake Threats: Is Your Identity Security AI-Proof?

Oct 25, 2024 Artificial Intelligence / Identity Security
Artificial Intelligence (AI) has rapidly evolved from a futuristic concept to a potent weapon in the hands of bad actors. Today, AI-based attacks are not just theoretical threats—they're happening across industries and outpacing traditional defense mechanisms. The solution, however, is not futuristic: a properly designed identity security platform can deliver defenses against AI impersonation fraud. Read more about how a secure-by-design identity platform can eliminate AI deepfake fraud and serve as a critical component in this new era of cyber defense. The Growing Threat of AI Impersonation Fraud Recent incidents highlight the alarming potential of AI-powered fraud: A staggering $25 million was fraudulently transferred during a video call where deepfakes impersonated multiple executives. KnowBe4, a cybersecurity leader, was duped by deepfakes during hiring interviews, allowing a North Korean attacker to be onboarded to the organization. CEO of ...