
AI Security | Breaking Cybersecurity News | The Hacker News

The Evolving Role of PAM in Cybersecurity Leadership Agendas for 2025

Feb 06, 2025 AI Security / Cybersecurity
Privileged Access Management (PAM) has emerged as a cornerstone of modern cybersecurity strategies, shifting from a technical necessity to a critical pillar on leadership agendas. With the PAM market projected to reach $42.96 billion by 2037 (according to Research Nester), organizations are investing heavily in PAM solutions. Why is PAM climbing the ranks of leadership priorities? While Gartner highlights key drivers such as enhanced security, regulatory compliance readiness, and insurance requirements, the impact of PAM extends across multiple strategic areas. PAM can help organizations improve overall operational efficiency and tackle many of the challenges they face today. To explore PAM's transformative impact on businesses, read The Cyber Guardian: PAM's Role in Shaping Leadership Agendas for 2025 by renowned cybersecurity expert and former Gartner lead analyst Jonathan Care. What cybersecurity challenges may organizations face in 2025? The cybersecurity landsca...
Watch Out For These 8 Cloud Security Shifts in 2025

Feb 04, 2025 Threat Detection / Cloud Security
As cloud security evolves in 2025 and beyond, organizations must adapt to both new and shifting realities, including the increasing reliance on cloud infrastructure for AI-driven workflows and the vast quantities of data being migrated to the cloud. But there are other developments that could impact your organization and drive the need for an even more robust security strategy. Let's take a look… #1: Increased Threat Landscape Encourages Market Consolidation Cyberattacks targeting cloud environments are becoming more sophisticated, emphasizing the need for security solutions that go beyond detection. Organizations will need proactive defense mechanisms to prevent risks from reaching production. Because of this need, the market will favor vendors offering comprehensive, end-to-end security platforms that streamline risk mitigation and enhance operational efficiency. #2: Cloud Security Unifies with SOC Priorities Security operations centers (SOCs) and cloud security functions are c...
SOC Analysts - Reimagining Their Role Using AI

Jan 30, 2025 AI Security / SOC Automation
The job of a SOC analyst has never been easy. Faced with an overwhelming flood of daily alerts, analysts (and sometimes IT teams doubling as SecOps) must try to triage thousands of security alerts—often false positives—just to identify a handful of real threats. This relentless, 24/7 work leads to alert fatigue, desensitization, and an increased risk of missing critical security incidents. Studies show that 70% of SOC analysts experience severe stress, and 65% consider leaving their jobs within a year. This makes retention a major challenge for security teams, especially in light of the existing shortage of skilled security analysts. On the operational side, analysts spend more time on repetitive, manual tasks like investigating alerts and resolving and documenting incidents than they do on proactive security measures. Security teams struggle with configuring and maintaining SOAR playbooks as the cyber landscape rapidly changes. To top it all off, tool overload and siloed ...
Italy Bans Chinese DeepSeek AI Over Data Privacy and Ethical Concerns

Jan 31, 2025 AI Ethics / Machine Learning
Italy's data protection watchdog has blocked Chinese artificial intelligence (AI) firm DeepSeek's service within the country, citing a lack of information on its use of users' personal data. The development comes days after the authority, the Garante, sent a series of questions to DeepSeek, asking about its data handling practices and where it obtained its training data. In particular, it wanted to know what personal data is collected by its web platform and mobile app, from which sources, for what purposes, on what legal basis, and whether it is stored in China. In a statement issued January 30, 2025, the Garante said it arrived at the decision after DeepSeek provided information that it said was "completely insufficient." The entities behind the service, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, have "declared that they do not operate in Italy and that European legislation does not apply to them," it...
Lightning AI Studio Vulnerability Could've Allowed RCE via Hidden URL Parameter

Jan 30, 2025 Vulnerability / Cloud Security
Cybersecurity researchers have disclosed a critical security flaw in the Lightning AI Studio development platform that, if successfully exploited, could have allowed for remote code execution. The vulnerability, which carries a CVSS score of 9.4, enables "attackers to potentially execute arbitrary commands with root privileges" by exploiting a hidden URL parameter, application security firm Noma said in a report shared with The Hacker News. "This level of access could hypothetically be leveraged for a range of malicious activities, including the extraction of sensitive keys from targeted accounts," researchers Sasi Levi, Alon Tron, and Gal Moyal said. The issue is embedded in a piece of JavaScript code that could facilitate unfettered access to a victim's development environment, as well as run arbitrary commands on an authenticated target in a privileged context. Noma said it found a hidden parameter called "command" in user-specific URLs – e.g., ...
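The excerpt describes a well-known flaw class: a hidden query parameter whose value reaches command execution with root privileges. The sketch below is a hypothetical Python illustration of that class and of the allow-list mitigation; the URL, parameter handling, and allowed values are illustrative assumptions, not Noma's actual proof of concept or Lightning AI's code.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical allow-list of values the server actually expects.
ALLOWED = {"status", "logs"}

def handle(url: str) -> str:
    # Extract the hidden "command" query parameter from the URL.
    params = parse_qs(urlparse(url).query)
    cmd = params.get("command", ["status"])[0]
    # VULNERABLE pattern (do not do this): passing the raw value to a
    # privileged shell, e.g. subprocess.run(cmd, shell=True).
    # Safer pattern: validate against a strict allow-list first.
    if cmd not in ALLOWED:
        raise ValueError(f"rejected command parameter: {cmd!r}")
    return cmd

print(handle("https://studio.example/view?command=logs"))  # logs
```

Any value outside the allow-list (for instance `?command=id`) is rejected instead of reaching a shell.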
AI SOC Analysts: Propelling SecOps into the future

Jan 28, 2025 Threat Hunting / SecOps
Triaging and investigating alerts is central to security operations. As SOC teams strive to keep up with ever-increasing alert volumes and complexity, modernizing SOC automation strategies with AI has emerged as a critical solution. This blog explores how an AI SOC Analyst transforms alert management, addressing key SOC challenges while enabling faster investigations and responses. Security operations teams are under constant pressure to manage the relentless flow of security alerts from an expanding array of tools. Every alert carries the risk of serious consequences if ignored, yet the majority are false positives. This flood of alerts bogs down teams in a cycle of tedious, repetitive tasks, consuming valuable time and resources. The result? Overstretched teams are struggling to balance reactive alert "whack-a-mole" chasing with proactive threat hunting and other strategic security initiatives.  Core challenges High alert volumes: Security operations teams receive hundreds t...
Meta's Llama Framework Flaw Exposes AI Systems to Remote Code Execution Risks

Jan 26, 2025 AI Security / Vulnerability
A high-severity security flaw has been disclosed in Meta's Llama large language model (LLM) framework that, if successfully exploited, could allow an attacker to execute arbitrary code on the llama-stack inference server. The vulnerability, tracked as CVE-2024-50050, has been assigned a CVSS score of 6.3 out of 10.0. Supply chain security firm Snyk, on the other hand, has assigned it a critical severity rating of 9.3. "Affected versions of meta-llama are vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data that is deserialized," Oligo Security researcher Avi Lumelsky said in an analysis earlier this week. The shortcoming, per the cloud security company, resides in a component called Llama Stack, which defines a set of API interfaces for artificial intelligence (AI) application development, including using Meta's own Llama models. Specifically, it has to do with a remote code execution ...
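Deserialization of untrusted data, the weakness class quoted above, is easy to demonstrate with Python's pickle module. This generic sketch illustrates the class of flaw, not Meta's llama-stack code: pickle consults an object's `__reduce__` method to rebuild it, so attacker-controlled bytes can make the loader invoke any callable.

```python
import pickle

class Payload:
    # pickle calls __reduce__ to learn how to reconstruct the object,
    # so an attacker crafting the bytes can name any callable and args.
    def __reduce__(self):
        return (eval, ("21 * 2",))

malicious_bytes = pickle.dumps(Payload())

# The victim merely deserializes untrusted input...
result = pickle.loads(malicious_bytes)
print(result)  # 42 — the attacker-chosen expression ran during loading
```

This is why deserializing untrusted input with formats like pickle is treated as equivalent to executing untrusted code.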
Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content Creation

Jan 11, 2025 AI Security / Cybersecurity
Microsoft has revealed that it's pursuing legal action against a "foreign-based threat actor group" for operating a hacking-as-a-service infrastructure to intentionally get around the safety controls of its generative artificial intelligence (AI) services and produce offensive and harmful content. The tech giant's Digital Crimes Unit (DCU) said it has observed the threat actors "develop sophisticated software that exploited exposed customer credentials scraped from public websites," and "sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services." The adversaries then used these services, such as Azure OpenAI Service, and monetized the access by selling them to other malicious actors, providing them with detailed instructions as to how to use these custom tools to generate harmful content. Microsoft said it discovered the activity in July 2024. The Windows maker...
Product Walkthrough: How Reco Discovers Shadow AI in SaaS

Jan 09, 2025 AI Security / SaaS Security
As SaaS providers race to integrate AI into their product offerings to stay competitive and relevant, a new challenge has emerged in the world of AI: shadow AI. Shadow AI refers to the unauthorized use of AI tools and copilots at organizations. For example, a developer using ChatGPT to assist with writing code, a salesperson downloading an AI-powered meeting transcription tool, or a customer support person using agentic AI to automate tasks – without going through the proper channels. When these tools are used without the IT or security team's knowledge, they often lack sufficient security controls, putting company data at risk. Shadow AI Detection Challenges Because shadow AI tools often embed themselves in approved business applications via AI assistants, copilots, and agents, they are even trickier to discover than traditional shadow IT. While traditional shadow apps can be identified through network monitoring methodologies that scan for unauthorized connections based on...
A Guide to Securing AI App Development: Join This Cybersecurity Webinar

Dec 02, 2024 AI Security / Data Protection
Artificial Intelligence (AI) is no longer a far-off dream—it's here, changing the way we live. From ordering coffee to diagnosing diseases, it's everywhere. But while you're creating the next big AI-powered app, hackers are already figuring out ways to break it. Every AI app is an opportunity—and a potential risk. The stakes are huge: data leaks, downtime, and even safety threats if security isn't built in. With AI adoption moving fast, securing your projects is no longer optional—it's a must. Join Liqian Lim, Senior Product Marketing Manager at Snyk, for an exclusive webinar that's all about securing the future of AI development. Titled "Building Tomorrow, Securely: Securing the Use of AI in App Development," this session will arm you with the knowledge and tools to tackle the challenges of AI-powered innovation. What You'll Learn: Get AI-Ready: How to make your AI projects secure from the start. Spot Hidden Risks: Uncover threats you might not see coming. Understand the Ma...
Microsoft Fixes AI, Cloud, and ERP Security Flaws; One Exploited in Active Attacks

Nov 29, 2024 AI Security / Cloud Security
Microsoft has addressed four security flaws impacting its artificial intelligence (AI), cloud, enterprise resource planning, and Partner Center offerings, including one that it said has been exploited in the wild. The vulnerability that has been tagged with an "Exploitation Detected" assessment is CVE-2024-49035 (CVSS score: 8.7), a privilege escalation flaw in partner.microsoft[.]com. "An improper access control vulnerability in partner.microsoft[.]com allows an unauthenticated attacker to elevate privileges over a network," the tech giant said in an advisory released this week. Microsoft credited Gautam Peri, Apoorv Wadhwa, and an anonymous researcher for reporting the flaw, but did not reveal any specifics on how it's being exploited in real-world attacks. Fixes for the shortcomings are being rolled out automatically as part of updates to the online version of Microsoft Power Apps. Also addressed by Redmond are three other vulnerabilities, two of which...
Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation

Nov 11, 2024 Machine Learning / Vulnerability
Cybersecurity researchers have uncovered nearly two dozen security flaws spanning 15 different machine learning (ML) related open-source projects. These comprise vulnerabilities discovered on both the server- and client-side, software supply chain security firm JFrog said in an analysis published last week. The server-side weaknesses "allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipelines," it said. The vulnerabilities, discovered in Weave, ZenML, Deep Lake, Vanna.AI, and Mage AI, have been broken down into broader sub-categories covering the remote hijacking of model registries and ML database frameworks, and the takeover of ML pipelines. A brief description of the identified flaws is below - CVE-2024-7340 (CVSS score: 8.8) - A directory traversal vulnerability in the Weave ML toolkit that allows for reading files across the whole filesystem, effectively allowing a low-privileged authenticated user to es...
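The first flaw listed, CVE-2024-7340, is a classic directory traversal. As a generic illustration of the standard defense (this is illustrative code, not JFrog's finding or Weave's implementation), resolving a requested path and checking that it stays inside the intended base directory blocks `../` escapes:

```python
import os

def safe_join(base_dir: str, user_path: str) -> str:
    # Resolve the requested path, then verify it is still contained
    # in base_dir; otherwise input like "../../etc/passwd" would let
    # a low-privileged user read files across the whole filesystem.
    full = os.path.realpath(os.path.join(base_dir, user_path))
    base = os.path.realpath(base_dir)
    if not (full == base or full.startswith(base + os.sep)):
        raise ValueError("path traversal attempt blocked")
    return full
```

A request for `a.txt` resolves normally, while `../../etc/passwd` is rejected before any file is opened.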