
artificial intelligence | Breaking Cybersecurity News | The Hacker News

Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks

Dec 06, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed multiple security flaws impacting open-source machine learning (ML) tools and frameworks such as MLflow, H2O, PyTorch, and MLeap that could pave the way for code execution. The vulnerabilities, discovered by JFrog, are part of a broader collection of 22 security shortcomings the supply chain security company first disclosed last month. Unlike the first set, which involved server-side flaws, the newly detailed ones allow exploitation of ML clients and reside in libraries that handle safe model formats like Safetensors. "Hijacking an ML client in an organization can allow the attackers to perform extensive lateral movement within the organization," the company said. "An ML client is very likely to have access to important ML services such as ML Model Registries or MLOps Pipelines." This, in turn, could expose sensitive information such as model registry credentials, effectively permitting a malicious actor to back...
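The client-side risk described here is easiest to see with Python's pickle, the serialization format many older model files are built on. The sketch below is a generic illustration of why pickle-based model loading can execute attacker-controlled code, not a reproduction of JFrog's findings; the class name and payload are invented, and the payload is deliberately benign.

```python
import pickle

# Illustration only (not JFrog's actual findings): why pickle-based model
# files can run attacker code at load time. The payload here is benign --
# it just calls str.upper -- where a real attacker would reference
# os.system or a downloader instead.

class PoisonedModel:
    def __reduce__(self):
        # pickle invokes this callable with these args during deserialization
        return (str.upper, ("model loaded, attacker code ran",))

blob = pickle.dumps(PoisonedModel())
result = pickle.loads(blob)   # executes str.upper(...) while "loading"
print(result)                 # MODEL LOADED, ATTACKER CODE RAN
```

Formats like safetensors avoid this class of issue by storing only raw tensor bytes plus a JSON header, with no code-execution step at load time, which is why the client-side bugs reported here had to live in the parsing logic itself rather than in an "execute on load" feature.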
A Guide to Securing AI App Development: Join This Cybersecurity Webinar

Dec 02, 2024 AI Security / Data Protection
Artificial Intelligence (AI) is no longer a far-off dream—it's here, changing the way we live. From ordering coffee to diagnosing diseases, it's everywhere. But while you're creating the next big AI-powered app, hackers are already figuring out ways to break it. Every AI app is an opportunity—and a potential risk. The stakes are huge: data leaks, downtime, and even safety threats if security isn't built in. With AI adoption moving fast, securing your projects is no longer optional—it's a must. Join Liqian Lim, Senior Product Marketing Manager at Snyk, for an exclusive webinar that's all about securing the future of AI development. Titled "Building Tomorrow, Securely: Securing the Use of AI in App Development," this session will arm you with the knowledge and tools to tackle the challenges of AI-powered innovation. What You'll Learn: Get AI-Ready: How to make your AI projects secure from the start. Spot Hidden Risks: Uncover threats you might not see coming. Understand the Ma...
How AI Is Transforming IAM and Identity Security

Nov 15, 2024 Machine Learning / Identity Security
In recent years, artificial intelligence (AI) has begun revolutionizing Identity Access Management (IAM), reshaping how cybersecurity is approached in this crucial field. Leveraging AI in IAM is about tapping into its analytical capabilities to monitor access patterns and identify anomalies that could signal a potential security breach. The focus has expanded beyond merely managing human identities — now, autonomous systems, APIs, and connected devices also fall within the realm of AI-driven IAM, creating a dynamic security ecosystem that adapts and evolves in response to sophisticated cyber threats. The Role of AI and Machine Learning in IAM AI and machine learning (ML) are creating a more robust, proactive IAM system that continuously learns from the environment to enhance security. Let's explore how AI impacts key IAM components: Intelligent Monitoring and Anomaly Detection AI enables continuous monitoring of both human and non-human identities, including APIs, service acc...
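The anomaly-detection idea above can be sketched in a few lines, assuming nothing about any vendor's implementation: baseline an identity's access pattern and flag sharp deviations. The daily-login-count feature and the 3.5 modified-z-score threshold are arbitrary assumptions for illustration; production AI-driven IAM systems use far richer behavioral features.

```python
import statistics

# Toy sketch of access-pattern anomaly detection for a non-human identity:
# baseline daily login counts and flag days that deviate sharply, using a
# median-based "modified z-score" (robust to a single large outlier).

def flag_anomalies(daily_logins, threshold=3.5):
    med = statistics.median(daily_logins)
    mad = statistics.median(abs(x - med) for x in daily_logins)
    if mad == 0:
        return []  # no variation in the baseline, nothing to score against
    return [
        (day, count)
        for day, count in enumerate(daily_logins)
        if 0.6745 * abs(count - med) / mad > threshold
    ]

# A service account that normally logs in about 10 times a day, then spikes:
history = [9, 11, 10, 10, 12, 9, 10, 11, 10, 250]
print(flag_anomalies(history))  # -> [(9, 250)]
```

The median-based score is used here instead of a plain mean/standard-deviation z-score because a single extreme spike inflates the standard deviation enough to mask itself in short histories.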
AI-Powered Fake News Campaign Targets Western Support for Ukraine and U.S. Elections

Nov 29, 2024 Disinformation / Artificial Intelligence
A Moscow-based company sanctioned by the U.S. earlier this year has been linked to yet another influence operation, active since at least December 2023, designed to turn public opinion against Ukraine and erode Western support. The covert campaign undertaken by Social Design Agency (SDA) leverages videos enhanced using artificial intelligence (AI) and bogus websites impersonating reputable news sources to target audiences across Ukraine, Europe, and the U.S. It has been dubbed Operation Undercut by Recorded Future's Insikt Group. "This operation, running in tandem with other campaigns like Doppelganger, is designed to discredit Ukraine's leadership, question the effectiveness of Western aid, and stir socio-political tensions," the cybersecurity company said. "The campaign also seeks to shape narratives around the 2024 U.S. elections and geopolitical conflicts, such as the Israel-Gaza situation, to deepen divisions." Social Design Agency has previously been a...
North Korean Hackers Steal $10M with AI-Driven Scams and Malware on LinkedIn

Nov 23, 2024 Artificial Intelligence / Cryptocurrency
The North Korea-linked threat actor known as Sapphire Sleet is estimated to have stolen more than $10 million worth of cryptocurrency as part of social engineering campaigns orchestrated over a six-month period. These findings come from Microsoft, which said that multiple threat activity clusters with ties to the country have been observed creating fake profiles on LinkedIn, posing as both recruiters and job seekers to generate illicit revenue for the sanction-hit nation. Sapphire Sleet, which is known to be active since at least 2020, overlaps with hacking groups tracked as APT38 and BlueNoroff. In November 2023, the tech giant revealed that the threat actor had established infrastructure that impersonated skills assessment portals to carry out its social engineering campaigns. One of the main methods adopted by the group for over a year is to pose as a venture capitalist, deceptively claiming an interest in a target user's company in order to set up an online meeting. Targ...
PyPI Attack: ChatGPT, Claude Impersonators Deliver JarkaStealer via Python Libraries

Nov 22, 2024 Artificial Intelligence / Malware
Cybersecurity researchers have discovered two malicious packages uploaded to the Python Package Index (PyPI) repository that impersonated popular artificial intelligence (AI) models like OpenAI ChatGPT and Anthropic Claude to deliver an information stealer called JarkaStealer. The packages, named gptplus and claudeai-eng, were uploaded by a user named "Xeroline" in November 2023, attracting 1,748 and 1,826 downloads, respectively. Both libraries are no longer available for download from PyPI. "The malicious packages were uploaded to the repository by one author and, in fact, differed from each other only in name and description," Kaspersky said in a post. The packages purported to offer a way to access GPT-4 Turbo API and Claude AI API, but harbored malicious code that initiated the deployment of the malware upon installation. Specifically, the "__init__.py" file in these packages contained Base64-encoded data that incorporated code to download ...
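A defensive sketch inspired by the pattern described here: flag suspiciously long Base64 literals in a package's source before installing it. The regex, the 40-character cutoff, and the scanner itself are illustrative assumptions, not Kaspersky's detection logic; the toy "malicious" source below hides only a harmless string.

```python
import base64
import binascii
import re

# Flag long quoted runs of Base64 alphabet characters in Python source --
# the hiding spot described above, where an __init__.py carried an encoded
# downloader. The 40-char minimum is an arbitrary noise cutoff.
B64_LITERAL = re.compile(r'["\']([A-Za-z0-9+/=]{40,})["\']')

def find_base64_blobs(source: str):
    hits = []
    for match in B64_LITERAL.finditer(source):
        candidate = match.group(1)
        try:
            decoded = base64.b64decode(candidate, validate=True)
        except binascii.Error:
            continue  # long string, but not actually valid Base64
        hits.append((candidate[:20] + "...", len(decoded)))
    return hits

# A toy __init__.py hiding an encoded payload (a harmless placeholder here):
payload = base64.b64encode(b"import urllib.request  # attacker code here").decode()
suspicious_source = f'DATA = "{payload}"\nexec(__import__("base64").b64decode(DATA))'
print(find_base64_blobs(suspicious_source))  # one hit, 43 decoded bytes
```

A scanner like this is a heuristic, not a verdict: legitimate packages embed Base64 too (certificates, test fixtures), so hits warrant inspection rather than automatic rejection.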
Google's AI-Powered OSS-Fuzz Tool Finds 26 Vulnerabilities in Open-Source Projects

Nov 21, 2024 Artificial Intelligence / Software Security
Google has revealed that its AI-powered fuzzing tool, OSS-Fuzz, has been used to help identify 26 vulnerabilities in various open-source code repositories, including a medium-severity flaw in the OpenSSL cryptographic library. "These particular vulnerabilities represent a milestone for automated vulnerability finding: each was found with AI, using AI-generated and enhanced fuzz targets," Google's open-source security team said in a blog post shared with The Hacker News. The OpenSSL vulnerability in question is CVE-2024-9143 (CVSS score: 4.3), an out-of-bounds memory write bug that can result in an application crash or remote code execution. The issue has been addressed in OpenSSL versions 3.3.3, 3.2.4, 3.1.8, 3.0.16, 1.1.1zb, and 1.0.2zl. Google, which added the ability to leverage large language models (LLMs) to improve fuzzing coverage in OSS-Fuzz in August 2023, said the vulnerability has likely been present in the codebase for two decades and that it "wo...
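To make the idea of a "fuzz target" concrete: real OSS-Fuzz targets are typically C/C++ `LLVMFuzzerTestOneInput` entry points with coverage feedback, and the AI angle above is that the targets themselves were LLM-generated. The stdlib-only toy below shows just the core loop, with a deliberately buggy invented parser; none of it is Google's code.

```python
import random

# Toy fuzz harness: hammer a routine with random inputs and record crashes.
# Real fuzzers (libFuzzer/OSS-Fuzz) add coverage-guided mutation; this
# sketch only illustrates the shape of a target and harness.

def parse_record(data: bytes) -> bytes:
    # Deliberately buggy toy parser: trusts the leading length byte, then
    # reads a "checksum" at data[length + 1] with no bounds check.
    length = data[0]              # IndexError on empty input
    _checksum = data[length + 1]  # IndexError when the buffer is too short
    return data[1:length + 1]

def fuzz(target, runs=5_000, seed=1234):
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

print(parse_record(bytes([3, 65, 66, 67, 0])))  # b'ABC' on well-formed input
print(f"{len(fuzz(parse_record))} crashing inputs out of 5000")
```

The fixed seed makes runs reproducible, mirroring how fuzzing infrastructure stores crashing inputs as regression test cases.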
NHIs Are the Future of Cybersecurity: Meet NHIDR

Nov 20, 2024 Identity Security / Cyber Defense
The frequency and sophistication of modern cyberattacks are surging, making it increasingly challenging for organizations to protect sensitive data and critical infrastructure. When attackers compromise a non-human identity (NHI), they can swiftly exploit it to move laterally across systems, identifying vulnerabilities and compromising additional NHIs in minutes. While organizations often take months to detect and contain such breaches, rapid detection and response can stop an attack in its tracks. The Rise of Non-Human Identities in Cybersecurity By 2025, non-human identities will rise to be the primary attack vector in cybersecurity. As businesses increasingly automate processes and adopt AI and IoT technologies, the number of NHIs grows exponentially. While these systems drive efficiency, they also create an expanded attack surface for cybercriminals. NHIs differ fundamentally from human users, making traditional security tools like multi-factor authentication and user behavior...
Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform

Nov 15, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed two security flaws in Google's Vertex machine learning (ML) platform that, if successfully exploited, could allow malicious actors to escalate privileges and exfiltrate models from the cloud. "By exploiting custom job permissions, we were able to escalate our privileges and gain unauthorized access to all data services in the project," Palo Alto Networks Unit 42 researchers Ofir Balassiano and Ofir Shaty said in an analysis published earlier this week. "Deploying a poisoned model in Vertex AI led to the exfiltration of all other fine-tuned models, posing a serious proprietary and sensitive data exfiltration attack risk." Vertex AI is Google's ML platform for training and deploying custom ML models and artificial intelligence (AI) applications at scale. It was first introduced in May 2021. Crucial to leveraging the privilege escalation flaw is a feature called Vertex AI Pipelines, which allows users to automat...
Google Warns of Rising Cloaking Scams, AI-Driven Fraud, and Crypto Schemes

Nov 14, 2024 Artificial Intelligence / Cryptocurrency
Google has revealed that bad actors are leveraging techniques like landing page cloaking to conduct scams by impersonating legitimate sites. "Cloaking is specifically designed to prevent moderation systems and teams from reviewing policy-violating content which enables them to deploy the scam directly to users," Laurie Richardson, VP and Head of Trust and Safety at Google, said. "The landing pages often mimic well-known sites and create a sense of urgency to manipulate users into purchasing counterfeit products or unrealistic products." Cloaking refers to the practice of serving different content to search engines like Google and users with the ultimate goal of manipulating search rankings and deceiving users. The tech giant said it has also observed a cloaking trend wherein users clicking on ads are redirected via tracking templates to scareware sites that claim their devices are compromised with malware and lead them to other phony customer support sites, w...
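The cloaking mechanism described above reduces to a simple branch: the same URL returns a clean page to crawlers and moderation systems, and the scam page to everyone else, commonly keyed on the User-Agent header. The bot names and page bodies in this sketch are invented for illustration; they are not Google's detection criteria, and real cloaking also keys on IP ranges and referrers.

```python
# Minimal sketch of User-Agent-based landing page cloaking. Illustrative
# only: the crawler list and page contents are invented placeholders.

KNOWN_CRAWLERS = ("Googlebot", "AdsBot-Google", "bingbot")

def serve(user_agent: str) -> str:
    if any(bot in user_agent for bot in KNOWN_CRAWLERS):
        # Reviewers and moderation systems see an innocuous page...
        return "<h1>Harmless product review blog</h1>"
    # ...while real visitors get the scareware/scam funnel.
    return "<h1>URGENT: your device is infected! Call support now</h1>"

print(serve("Mozilla/5.0 ... Googlebot/2.1"))
print(serve("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))
```

This asymmetry is why detection teams crawl from residential-looking vantage points: fetching only with declared crawler identities shows the clean page every time.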
Google’s AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database Engine

Nov 04, 2024 Artificial Intelligence / Vulnerability
Google said it discovered a zero-day vulnerability in the SQLite open-source database engine using its large language model (LLM) assisted framework called Big Sleep (formerly Project Naptime). The tech giant described the development as the "first real-world vulnerability" uncovered using the artificial intelligence (AI) agent. "We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software," the Big Sleep team said in a blog post shared with The Hacker News. The vulnerability in question is a stack buffer underflow in SQLite, which occurs when a piece of software references a memory location prior to the beginning of the memory buffer, thereby resulting in a crash or arbitrary code execution. "This typically occurs when a pointer or its index is decremented to a position before the buffer, when pointer arithmetic results in a position before the beginning of t...
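The bug class described above, an index or pointer decremented past the start of a buffer, is easiest to show in C, but to stay in one language here is a Python analogue. This is an invented example of the pattern, not SQLite's actual code; Python surfaces the logic error differently, since negative indices silently wrap to the end of the buffer instead of reading out of bounds.

```python
# Sketch of a stack-buffer-underflow-style bug: a backwards scan whose
# index can step before position 0. In C this reads/writes memory before
# the allocation (undefined behavior); in Python the same missing guard
# shows up as wrapped negative indexing and eventually an IndexError.

def rstrip_spaces_buggy(buf: bytearray) -> int:
    i = len(buf) - 1
    while buf[i] == ord(" "):   # missing "i >= 0" guard
        i -= 1                  # can decrement to -1, -2, ... "before" the buffer
    return i + 1                # intended: length with trailing spaces removed

def rstrip_spaces_fixed(buf: bytearray) -> int:
    i = len(buf) - 1
    while i >= 0 and buf[i] == ord(" "):
        i -= 1
    return i + 1

print(rstrip_spaces_fixed(bytearray(b"   ")))   # 0, as intended
# On all-spaces input the buggy version wraps: buf[-1] is a space again,
# then buf[-2], buf[-3], and buf[-4] finally raises IndexError. The C
# equivalent would silently read before the buffer with no error at all.
```

The one-token fix (an `i >= 0` bounds guard) mirrors how such underflows are typically patched in C: add the missing lower-bound check before the decremented access.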
Microsoft Delays Windows Copilot+ Recall Release Over Privacy Concerns

Nov 01, 2024 Data Security / Artificial Intelligence
Microsoft is further delaying the release of its controversial Recall feature for Windows Copilot+ PCs, stating it's taking the time to improve the experience. The development was first reported by The Verge. The artificial intelligence-powered tool was initially slated for a preview release starting in October. "We are committed to delivering a secure and trusted experience with Recall," the company said in an updated statement released Thursday. "To ensure we deliver on these important updates, we're taking additional time to refine the experience before previewing it with Windows Insiders. Originally planned for October, Recall will now be available for preview with Windows Insiders on Copilot+ PCs by December." Microsoft unveiled Recall earlier this May, describing it as a way for users to explore a "visual timeline" of their screens over time and help find things from apps, websites, images, and documents. The search experience was meant...
Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security

Oct 25, 2024 Cloud Security / Artificial Intelligence
Apple has publicly made available its Private Cloud Compute (PCC) Virtual Research Environment (VRE), allowing the research community to inspect and verify the privacy and security guarantees of its offering. PCC, which Apple unveiled earlier this June, has been marketed as the "most advanced security architecture ever deployed for cloud AI compute at scale." With the new technology, the idea is to offload computationally complex Apple Intelligence requests to the cloud in a manner that doesn't sacrifice user privacy. Apple said it's inviting "all security and privacy researchers — or anyone with interest and a technical curiosity — to learn more about PCC and perform their own independent verification of our claims." To further incentivize research, the iPhone maker said it's expanding the Apple Security Bounty program to include PCC by offering monetary payouts ranging from $50,000 to $1,000,000 for security vulnerabilities identified in it. Th...
Eliminating AI Deepfake Threats: Is Your Identity Security AI-Proof?

Oct 25, 2024 Artificial Intelligence / Identity Security
Artificial Intelligence (AI) has rapidly evolved from a futuristic concept to a potent weapon in the hands of bad actors. Today, AI-based attacks are not just theoretical threats—they're happening across industries and outpacing traditional defense mechanisms.  The solution, however, is not futuristic. It turns out a properly designed identity security platform is able to deliver defenses against AI impersonation fraud. Read more about how a secure-by-design identity platform can eliminate AI deepfake fraud and serve as a critical component in this new era of cyber defense.  The Growing Threat of AI Impersonation Fraud Recent incidents highlight the alarming potential of AI-powered fraud: A staggering $25 million was fraudulently transferred during a video call where deepfakes impersonated multiple executives. KnowBe4, a cybersecurity leader, was duped by deepfakes during hiring interviews, allowing a North Korean attacker to be onboarded to the organization. CEO of ...
Researchers Reveal 'Deceptive Delight' Method to Jailbreak AI Models

Oct 23, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have shed light on a new adversarial technique that could be used to jailbreak large language models (LLMs) during the course of an interactive conversation by sneaking in an undesirable instruction between benign ones. The approach has been codenamed Deceptive Delight by Palo Alto Networks Unit 42, which described it as both simple and effective, achieving an average attack success rate (ASR) of 64.6% within three interaction turns. "Deceptive Delight is a multi-turn technique that engages large language models (LLM) in an interactive conversation, gradually bypassing their safety guardrails and eliciting them to generate unsafe or harmful content," Unit 42's Jay Chen and Royce Lu said. It's also a little different from multi-turn jailbreak (aka many-shot jailbreak) methods like Crescendo, wherein unsafe or restricted topics are sandwiched between innocuous instructions, as opposed to gradually leading the model to produce harmful outpu...
From Misuse to Abuse: AI Risks and Attacks

Oct 16, 2024 Artificial Intelligence / Cybercrime
AI from the attacker's perspective: See how cybercriminals are leveraging AI and exploiting its vulnerabilities to compromise systems, users, and even other AI applications Cybercriminals and AI: The Reality vs. Hype "AI will not replace humans in the near future. But humans who know how to use AI are going to replace those humans who don't know how to use AI," says Etay Maor, Chief Security Strategist at Cato Networks and founding member of Cato CTRL. "Similarly, attackers are also turning to AI to augment their own capabilities." Yet, there is a lot more hype than reality around AI's role in cybercrime. Headlines often sensationalize AI threats, with terms like "Chaos-GPT" and "Black Hat AI Tools," even claiming they seek to destroy humanity. However, these articles are more fear-inducing than descriptive of serious threats. For instance, when explored in underground forums, several of these so-called "AI cyber tools" were found to be nothing...
FBI Creates Fake Cryptocurrency to Expose Widespread Crypto Market Manipulation

Oct 12, 2024 Cryptocurrency / Cybercrime
The U.S. Department of Justice (DoJ) has announced arrests and charges against several individuals and entities in connection with allegedly manipulating digital asset markets as part of a widespread fraud operation. The law enforcement action – codenamed Operation Token Mirrors – is the result of the U.S. Federal Bureau of Investigation (FBI) taking the "unprecedented step" of creating its own cryptocurrency token and company called NexFundAI. NexFundAI, as per information on the website, was marketed as redefining the "intersection between finance and artificial intelligence" and that its aim was to "create a cryptocurrency token that not only serves as a secure store of value but also acts as a catalyst for positive change in the world of AI." "Three market makers — ZM Quant, CLS Global, and MyTrade — along with their employees are charged with allegedly wash trading and/or conspiring to wash trade on behalf of NexFundAI, a cryptocurrency co...
The Value of AI-Powered Identity

Oct 08, 2024 Machine Learning / Data Security
Introduction Artificial intelligence (AI) deepfakes and misinformation may cause worry in the world of technology and investment, but this powerful, foundational technology has the potential to benefit organizations of all kinds when harnessed appropriately. In the world of cybersecurity, one of the most important areas of application of AI is augmenting and enhancing identity management systems. AI-powered identity lifecycle management is at the vanguard of digital identity and is used to enhance security, streamline governance and improve the UX of an identity system. Benefits of an AI-powered identity AI is a technology that crosses barriers between traditionally opposing business area drivers, bringing previously conflicting areas together: AI enables better operational efficiency by reducing risk and improving security AI enables businesses to achieve goals by securing cyber-resilience AI facilitates agile and secure access by ensuring regulatory compliance AI and unifi...