#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
State of SaaS

artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
Product Walkthrough: How Satori Secures Sensitive Data From Production to AI

Product Walkthrough: How Satori Secures Sensitive Data From Production to AI

Jan 20, 2025 Data Security / Data Monitoring
Every week seems to bring news of another data breach, and it's no surprise why: securing sensitive data has become harder than ever. And it's not just because companies are dealing with orders of magnitude more data. Data flows and user roles are constantly shifting, and data is stored across multiple technologies and cloud environments. Not to mention, compliance requirements are only getting stricter and more elaborate.  The problem is that while the data landscape has evolved rapidly, the usual strategies for securing that data are stuck in the past. Gone are the days when data lived in predictable places, with access controlled by a chosen few. Today, practically every department in the business needs to use customer data, and AI adoption means huge datasets, and a constant flux of permissions, use cases, and tools. Security teams are struggling to implement effective strategies for securing sensitive data, and a new crop of tools, called data security platforms, have appear...
AI-Driven Ransomware FunkSec Targets 85 Victims Using Double Extortion Tactics

AI-Driven Ransomware FunkSec Targets 85 Victims Using Double Extortion Tactics

Jan 10, 2025 Artificial Intelligence / Cybercrime
Cybersecurity researchers have shed light on a nascent artificial intelligence (AI) assisted ransomware family called FunkSec that sprang forth in late 2024, and has claimed more than 85 victims to date. "The group uses double extortion tactics, combining data theft with encryption to pressure victims into paying ransoms," Check Point Research said in a new report shared with The Hacker News. "Notably, FunkSec demanded unusually low ransoms, sometimes as little as $10,000, and sold stolen data to third parties at reduced prices." FunkSec launched its data leak site (DLS) in December 2024 to "centralize" their ransomware operations, highlighting breach announcements, a custom tool to conduct distributed denial-of-service (DDoS) attacks, and a bespoke ransomware as part of a ransomware-as-a-service (RaaS) model. A majority of the victims are located in the U.S., India, Italy, Brazil, Israel, Spain, and Mongolia. Check Point's analysis of the group...
4 Reasons Your SaaS Attack Surface Can No Longer be Ignored

4 Reasons Your SaaS Attack Surface Can No Longer be Ignored

Jan 14, 2025SaaS Security / Generative AI
What do identity risks, data security risks and third-party risks all have in common? They are all made much worse by SaaS sprawl. Every new SaaS account adds a new identity to secure, a new place where sensitive data can end up, and a new source of third party risk. Learn how you can protect this sprawling attack surface in 2025. What do identity risks, data security risks and third-party risks all have in common? They are all made much worse by SaaS sprawl. Every new SaaS account adds a new identity to secure, a new place where sensitive data can end up, and a new source of third-party risk. And, this growing attack surface, much of which is unknown or unmanaged in most orgs, has become an attractive target for attackers. So, why should you prioritize securing your SaaS attack surface in 2025? Here are 4 reasons. ‍ 1. Modern work runs on SaaS. When's the last time you used something other than a cloud-based app to do your work? Can't remember? You're not alone.  Outside of ...
New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%

New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%

Jan 03, 2025 Machine Learning / Vulnerability
Cybersecurity researchers have shed light on a new jailbreak technique that could be used to get past a large language model's (LLM) safety guardrails and produce potentially harmful or malicious responses. The multi-turn (aka many-shot) attack strategy has been codenamed Bad Likert Judge by Palo Alto Networks Unit 42 researchers Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and Danny Tsechansky. "The technique asks the target LLM to act as a judge scoring the harmfulness of a given response using the Likert scale , a rating scale measuring a respondent's agreement or disagreement with a statement," the Unit 42 team said . "It then asks the LLM to generate responses that contain examples that align with the scales. The example that has the highest Likert scale can potentially contain the harmful content." The explosion in popularity of artificial intelligence in recent years has also led to a new class of security exploits called prompt in...
cyber security

2024: A Year of Identity Attacks | Get the New eBook

websitePush SecurityIdentity Security
Prepare to defend against identity attacks in 2025 by looking back at identity-based breaches in 2024.
New U.S. DoJ Rule Halts Bulk Data Transfers to Adversarial Nations to Protect Privacy

New U.S. DoJ Rule Halts Bulk Data Transfers to Adversarial Nations to Protect Privacy

Dec 31, 2024 Data Security / Privacy
The U.S. Department of Justice (DoJ) has issued a final rule carrying out Executive Order (EO) 14117, which prevents mass transfer of citizens' personal data to countries of concern such as China (including Hong Kong and Macau), Cuba, Iran, North Korea, Russia, and Venezuela. "This final rule is a crucial step forward in addressing the extraordinary national security threat posed of our adversaries exploiting Americans' most sensitive personal data," said Assistant Attorney General Matthew G. Olsen of the Justice Department's National Security Division. "This powerful new national-security program is designed to ensure that Americans' personal data is no longer permitted to be sold to hostile foreign powers, whether through outright purchase or other means of commercial access." Back in February 2024, U.S. President Joe Biden signed an executive order to address the national security risk posed by unauthorized access to Americans' sensitiv...
Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Dec 09, 2024 Artificial Intelligenc / Vulnerability
Details have emerged about a now-patched security flaw in the DeepSeek artificial intelligence (AI) chatbot that, if successfully exploited, could permit a bad actor to take control of a victim's account by means of a prompt injection attack . Security researcher Johann Rehberger, who has chronicled many a prompt injection attack targeting various AI tools, found that providing the input "Print the xss cheat sheet in a bullet list. just payloads" in the DeepSeek chat triggered the execution of JavaScript code as part of the generated response – a classic case of cross-site scripting (XSS). XSS attacks can have serious consequences as they lead to the execution of unauthorized code in the context of the victim's web browser. An attacker could take advantage of such flaws to hijack a user's session and gain access to cookies and other data associated with the chat.deepseek[.]com domain, thereby leading to an account takeover. "After some experimenting,...
Ultralytics AI Library Compromised: Cryptocurrency Miner Found in PyPI Versions

Ultralytics AI Library Compromised: Cryptocurrency Miner Found in PyPI Versions

Dec 07, 2024 Supply Chain Attack / Cryptocurrency
In yet another software supply chain attack, it has come to light that two versions of a popular Python artificial intelligence (AI) library named ultralytics were compromised to deliver a cryptocurrency miner. The versions, 8.3.41 and 8.3.42, have since been removed from the Python Package Index (PyPI) repository. A subsequently released version has introduced a security fix that "ensures secure publication workflow for the Ultralytics package." The project maintainer, Glenn Jocher, confirmed on GitHub that the two versions were infected by malicious code injection in the PyPI deployment workflow after reports emerged that installing the library led to a drastic spike in CPU usage , a telltale sign of cryptocurrency mining. The most notable aspect of the attack is that bad actors managed to compromise the build environment related to the project to insert unauthorized modifications after the completion of the code review step, thus leading to a discrepancy in the so...
Hackers Using Fake Video Conferencing Apps to Steal Web3 Professionals' Data

Hackers Using Fake Video Conferencing Apps to Steal Web3 Professionals' Data

Dec 07, 2024 Malware / Web3 Security
Cybersecurity researchers have warned of a new scam campaign that leverages fake video conferencing apps to deliver an information stealer called Realst targeting people working in Web3 under the guise of fake business meetings. "The threat actors behind the malware have set up fake companies using AI to make them increase legitimacy," Cado Security researcher Tara Gould said . "The company reaches out to targets to set up a video call, prompting the user to download the meeting application from the website, which is Realst infostealer." The activity has been codenamed Meeten by the security company, owing to the use of names such as Clusee, Cuesee, Meeten, Meetone, and Meetio for the bogus sites. The attacks entail approaching prospective targets on Telegram to discuss a potential investment opportunity, urging them to join a video call hosted on one of the dubious platforms. Users who end up on the site are prompted to download a Windows or macOS version dep...
Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks

Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks

Dec 06, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed multiple security flaws impacting open-source machine learning (ML) tools and frameworks such as MLflow, H2O, PyTorch, and MLeap that could pave the way for code execution. The vulnerabilities, discovered by JFrog, are part of a broader collection of 22 security shortcomings the supply chain security company first disclosed last month. Unlike the first set that involved flaws on the server-side, the newly detailed ones allow exploitation of ML clients and reside in libraries that handle safe model formats like Safetensors . "Hijacking an ML client in an organization can allow the attackers to perform extensive lateral movement within the organization," the company said . "An ML client is very likely to have access to important ML services such as ML Model Registries or MLOps Pipelines." This, in turn, could expose sensitive information such as model registry credentials, effectively permitting a malicious actor to back...
A Guide to Securing AI App Development: Join This Cybersecurity Webinar

A Guide to Securing AI App Development: Join This Cybersecurity Webinar

Dec 02, 2024 AI Security / Data Protection
Artificial Intelligence (AI) is no longer a far-off dream—it's here, changing the way we live. From ordering coffee to diagnosing diseases, it's everywhere. But while you're creating the next big AI-powered app, hackers are already figuring out ways to break it. Every AI app is an opportunity—and a potential risk. The stakes are huge: data leaks, downtime, and even safety threats if security isn't built in. With AI adoption moving fast, securing your projects is no longer optional—it's a must. Join Liqian Lim, Senior Product Marketing Manager at Snyk, for an exclusive webinar that's all about securing the future of AI development. Titled " Building Tomorrow, Securely: Securing the Use of AI in App Development ," this session will arm you with the knowledge and tools to tackle the challenges of AI-powered innovation. What You'll Learn: Get AI-Ready: How to make your AI projects secure from the start. Spot Hidden Risks: Uncover threats you might not see coming. Understand the Ma...
AI-Powered Fake News Campaign Targets Western Support for Ukraine and U.S. Elections

AI-Powered Fake News Campaign Targets Western Support for Ukraine and U.S. Elections

Nov 29, 2024 Disinformation / Artificial Intelligence
A Moscow-based company sanctioned by the U.S. earlier this year has been linked to yet another influence operation designed to turn public opinion against Ukraine and erode Western support since at least December 2023. The covert campaign undertaken by Social Design Agency (SDA) leverages videos enhanced using artificial intelligence (AI) and bogus websites impersonating reputable news sources to target audiences across Ukraine, Europe, and the U.S. It has been dubbed Operation Undercut by Recorded Future's Insikt Group. "This operation, running in tandem with other campaigns like Doppelganger , is designed to discredit Ukraine's leadership, question the effectiveness of Western aid, and stir socio-political tensions," the cybersecurity company said . "The campaign also seeks to shape narratives around the 2024 U.S. elections and geopolitical conflicts, such as the Israel-Gaza situation, to deepen divisions." Social Design Agency has been previously a...
North Korean Hackers Steal $10M with AI-Driven Scams and Malware on LinkedIn

North Korean Hackers Steal $10M with AI-Driven Scams and Malware on LinkedIn

Nov 23, 2024 Artificial Intelligence / Cryptocurrency
The North Korea-linked threat actor known as Sapphire Sleet is estimated to have stolen more than $10 million worth of cryptocurrency as part of social engineering campaigns orchestrated over a six-month period. These findings come from Microsoft, which said that multiple threat activity clusters with ties to the country have been observed creating fake profiles on LinkedIn, posing as both recruiters and job seekers to generate illicit revenue for the sanction-hit nation. Sapphire Sleet, which is known to be active since at least 2020, overlaps with hacking groups tracked as APT38 and BlueNoroff. In November 2023, the tech giant revealed that the threat actor had established infrastructure that impersonated skills assessment portals to carry out its social engineering campaigns. One of the main methods adopted by the group for over a year is to pose as a venture capitalist, deceptively claiming an interest in a target user's company in order to set up an online meeting. Targ...
PyPI Attack: ChatGPT, Claude Impersonators Deliver JarkaStealer via Python Libraries

PyPI Attack: ChatGPT, Claude Impersonators Deliver JarkaStealer via Python Libraries

Nov 22, 2024 Artificial Intelligence / Malware
Cybersecurity researchers have discovered two malicious packages uploaded to the Python Package Index (PyPI) repository that impersonated popular artificial intelligence (AI) models like OpenAI ChatGPT and Anthropic Claude to deliver an information stealer called JarkaStealer. The packages, named gptplus and claudeai-eng , were uploaded by a user named " Xeroline " in November 2023, attracting 1,748 and 1,826 downloads, respectively. Both libraries are no longer available for download from PyPI. "The malicious packages were uploaded to the repository by one author and, in fact, differed from each other only in name and description," Kaspersky said in a post. The packages purported to offer a way to access GPT-4 Turbo API and Claude AI API, but harbored malicious code that initiated the deployment of the malware upon installation. Specifically, the "__init__.py" file in these packages contained Base64-encoded data that incorporated code to download ...
Google's AI-Powered OSS-Fuzz Tool Finds 26 Vulnerabilities in Open-Source Projects

Google's AI-Powered OSS-Fuzz Tool Finds 26 Vulnerabilities in Open-Source Projects

Nov 21, 2024 Artificial Intelligence / Software Security
Google has revealed that its AI-powered fuzzing tool, OSS-Fuzz, has been used to help identify 26 vulnerabilities in various open-source code repositories, including a medium-severity flaw in the OpenSSL cryptographic library. "These particular vulnerabilities represent a milestone for automated vulnerability finding: each was found with AI, using AI-generated and enhanced fuzz targets," Google's open-source security team said in a blog post shared with The Hacker News. The OpenSSL vulnerability in question is CVE-2024-9143 (CVSS score: 4.3), an out-of-bounds memory write bug that can result in an application crash or remote code execution. The issue has been addressed in OpenSSL versions 3.3.3, 3.2.4, 3.1.8, 3.0.16, 1.1.1zb, and 1.0.2zl. Google, which added the ability to leverage large language models (LLMs) to improve fuzzing coverage in OSS-Fuzz in August 2023, said the vulnerability has likely been present in the codebase for two decades and that it "wo...
NHIs Are the Future of Cybersecurity: Meet NHIDR

NHIs Are the Future of Cybersecurity: Meet NHIDR

Nov 20, 2024 Identity Security / Cyber Defense
The frequency and sophistication of modern cyberattacks are surging, making it increasingly challenging for organizations to protect sensitive data and critical infrastructure. When attackers compromise a non-human identity (NHI), they can swiftly exploit it to move laterally across systems, identifying vulnerabilities and compromising additional NHIs in minutes. While organizations often take months to detect and contain such breaches, rapid detection and response can stop an attack in its tracks. The Rise of Non-Human Identities in Cybersecurity By 2025, non-human identities will rise to be the primary attack vector in cybersecurity. As businesses increasingly automate processes and adopt AI and IoT technologies, the number of NHIs grows exponentially. While these systems drive efficiency, they also create an expanded attack surface for cybercriminals. NHIs differ fundamentally from human users, making traditional security tools like multi-factor authentication and user behavior...
Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform

Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform

Nov 15, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed two security flaws in Google's Vertex machine learning (ML) platform that, if successfully exploited, could allow malicious actors to escalate privileges and exfiltrate models from the cloud. "By exploiting custom job permissions, we were able to escalate our privileges and gain unauthorized access to all data services in the project," Palo Alto Networks Unit 42 researchers Ofir Balassiano and Ofir Shaty said in an analysis published earlier this week. "Deploying a poisoned model in Vertex AI led to the exfiltration of all other fine-tuned models, posing a serious proprietary and sensitive data exfiltration attack risk." Vertex AI is Google's ML platform for training and deploying custom ML models and artificial intelligence (AI) applications at scale. It was first introduced in May 2021. Crucial to leveraging the privilege escalation flaw is a feature called Vertex AI Pipelines , which allows users to automat...
Google Warns of Rising Cloaking Scams, AI-Driven Fraud, and Crypto Schemes

Google Warns of Rising Cloaking Scams, AI-Driven Fraud, and Crypto Schemes

Nov 14, 2024 Artificial Intelligence / Cryptocurrency
Google has revealed that bad actors are leveraging techniques like landing page cloaking to conduct scams by impersonating legitimate sites. "Cloaking is specifically designed to prevent moderation systems and teams from reviewing policy-violating content which enables them to deploy the scam directly to users," Laurie Richardson, VP and Head of Trust and Safety at Google, said . "The landing pages often mimic well-known sites and create a sense of urgency to manipulate users into purchasing counterfeit products or unrealistic products." Cloaking refers to the practice of serving different content to search engines like Google and users with the ultimate goal of manipulating search rankings and deceiving users. The tech giant said it has also observed a cloaking trend wherein users clicking on ads are redirected via tracking templates to scareware sites that claim their devices are compromised with malware and lead them to other phony customer support sites, w...
Expert Insights / Articles Videos
Cybersecurity Resources