
Generative AI | Breaking Cybersecurity News | The Hacker News

Category — Generative AI

Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content Creation

Jan 11, 2025 AI Security / Cybersecurity
Microsoft has revealed that it's pursuing legal action against a "foreign-based threat–actor group" for operating a hacking-as-a-service infrastructure to intentionally get around the safety controls of its generative artificial intelligence (AI) services and produce offensive and harmful content. The tech giant's Digital Crimes Unit (DCU) said it has observed the threat actors "develop sophisticated software that exploited exposed customer credentials scraped from public websites," and "sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services." The adversaries then used these services, such as Azure OpenAI Service, and monetized the access by selling them to other malicious actors, providing them with detailed instructions as to how to use these custom tools to generate harmful content. Microsoft said it discovered the activity in July 2024. The Windows maker...
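A practical takeaway for defenders here is credential hygiene, since the actors reportedly started from customer API keys scraped off public websites. As a minimal sketch of that idea (not Microsoft's tooling; the patterns below are illustrative heuristics, not a documented key format), a short script can flag files that pair an Azure OpenAI endpoint URL with something that looks like a hard-coded key:

```python
# Illustrative credential-hygiene sketch: flag files that pair an Azure OpenAI
# endpoint URL with something that looks like a hard-coded key. The patterns are
# heuristic assumptions for demonstration, not Microsoft's documented key format.
import re
import sys
from pathlib import Path

KEY_PATTERN = re.compile(
    r"(?i)\b(api[_-]?key|subscription[_-]?key)\b\s*[:=]\s*[\"']?([A-Za-z0-9]{32,})[\"']?"
)
ENDPOINT_PATTERN = re.compile(r"https://[a-z0-9-]+\.openai\.azure\.com", re.IGNORECASE)
SCAN_SUFFIXES = {".py", ".js", ".ts", ".json", ".yaml", ".yml", ".txt"}

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for a single file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []
    findings = []
    for match in KEY_PATTERN.finditer(text):
        findings.append(f"{path}: possible hard-coded secret assigned to '{match.group(1)}'")
    if ENDPOINT_PATTERN.search(text):
        findings.append(f"{path}: Azure OpenAI endpoint URL present (check for paired secrets)")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for file in root.rglob("*"):
        if file.is_file() and (file.suffix in SCAN_SUFFIXES or file.name == ".env"):
            for finding in scan_file(file):
                print(finding)
```

Purpose-built secret scanners go much further, but even a heuristic sweep like this surfaces the low-hanging credentials that scraping operations rely on.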

Iranian and Russian Entities Sanctioned for Election Interference Using AI and Cyber Tactics

Jan 01, 2025 Generative AI / Election Interference
The U.S. Treasury Department's Office of Foreign Assets Control (OFAC) on Tuesday leveled sanctions against two entities in Iran and Russia for their attempts to interfere with the November 2024 presidential election. The federal agency said the entities – a subordinate organization of Iran's Islamic Revolutionary Guard Corps and a Moscow-based affiliate of Russia's Main Intelligence Directorate (GRU) – sought to influence the electoral outcome and divide the American people through targeted disinformation campaigns. "As affiliates of the IRGC and GRU, these actors aimed to stoke socio-political tensions and influence the U.S. electorate during the 2024 U.S. election," it noted in a press release. In August 2024, the Office of the Director of National Intelligence (ODNI), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) jointly accused Iran of attempting to undermine democratic processes, including b...

From $22M in Ransom to +100M Stolen Records: 2025's All-Star SaaS Threat Actors to Watch

Jan 06, 2025 SaaS Security / Threat Detection
In 2024, cyber threats targeting SaaS surged, with 7,000 password attacks blocked per second (just in Entra ID)—a 75% increase from last year—and phishing attempts up by 58%, causing $3.5 billion in losses (source: Microsoft Digital Defense Report 2024). SaaS attacks are increasing, with hackers often evading detection through legitimate usage patterns. The cyber threat arena saw standout players, unexpected underdogs, and relentless scorers leaving their mark on the SaaS security playing field. As we enter 2025, security teams must prioritize SaaS security risk assessments to uncover vulnerabilities, adopt SSPM tools for continuous monitoring, and proactively defend their systems. Here are the Cyber Threat All-Stars to watch out for—the MVPs, rising stars, and master strategists who shaped the game. 1. ShinyHunters: The Most Valuable Player Playstyle: Precision Shots (Cybercriminal Organization) Biggest Wins: Snowflake, Ticketmaster and Authy Notable Drama: Exploited on...

AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Cases

Dec 23, 2024 Machine Learning / Threat Analysis
Cybersecurity researchers have found that it's possible to use large language models (LLMs) to generate new variants of malicious JavaScript code at scale in a manner that can better evade detection. "Although LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect," Palo Alto Networks Unit 42 researchers said in a new analysis. "Criminals can prompt LLMs to perform transformations that are much more natural-looking, which makes detecting this malware more challenging." With enough transformations over time, the approach could have the advantage of degrading the performance of malware classification systems, tricking them into believing that a piece of nefarious code is actually benign. While LLM providers have increasingly enforced security guardrails to prevent them from going off the rails and producing unintended output, bad actors have advertised tools like WormGPT...
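The drift Unit 42 describes is easy to picture with a toy experiment: a superficial, natural-looking rewrite (renamed identifiers, restructured control flow) moves a sample in the kind of simple static feature space many lightweight classifiers rely on. The sketch below uses hashed character n-grams and benign code only; it illustrates the effect, not the researchers' pipeline:

```python
# Toy illustration (not Unit 42's pipeline): show how a superficial rewrite shifts
# a snippet in a simple static feature space, the effect described as degrading
# ML-based detectors over many transformations. Benign code only.
import hashlib
import math

def char_ngram_features(code: str, n: int = 3, dims: int = 256) -> list[float]:
    """Hash character n-grams into a fixed-size frequency vector (toy static features)."""
    vec = [0.0] * dims
    for i in range(len(code) - n + 1):
        gram = code[i:i + n]
        idx = int(hashlib.md5(gram.encode()).hexdigest(), 16) % dims
        vec[idx] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# A benign snippet and a "rewritten" variant with renamed identifiers and
# restructured logic, standing in for the kind of transformation described above.
original = "function checkUser(name){ if(name === 'admin'){ return true; } return false; }"
rewritten = "const verifyAccount = (accountName) => accountName === 'admin' ? true : false;"

similarity = cosine(char_ngram_features(original), char_ngram_features(rewritten))
print(f"static-feature similarity after rewrite: {similarity:.2f}")  # well below 1.0
```

Repeat that kind of transformation enough times and, as the researchers note, a classifier trained on the original family sees something that no longer resembles its training data.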

Why Phishing-Resistant MFA Is No Longer Optional: The Hidden Risks of Legacy MFA

Oct 24, 2024 Ransomware / Generative AI
Sometimes, it turns out that the answers we struggled so hard to find were sitting right in front of us for so long that we somehow overlooked them. When the Department of Homeland Security, through the Cybersecurity and Infrastructure Security Agency (CISA), in coordination with the FBI, issues a cybersecurity warning and prescribes specific action, it's a pretty good idea to at least read the joint advisory. In their advisory AA24-242A, DHS/CISA and the FBI told the entire cybercriminal-stopping world that to stop ransomware attacks, organizations needed to implement phishing-resistant MFA and ditch SMS-based OTP MFA.  The Best Advice I Never Followed  This year, we have experienced an astonishing surge in ransomware payments, with the average payment increasing by a staggering 500%. Per the "State of Ransomware 2024" report from cybersecurity leader Sophos, the average ransom has jumped 5X, reaching $2 million, up from $400,000 last year. Even more troubling, RISK &...
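The "phishing-resistant" property the advisory calls for comes down to origin binding: a FIDO2/WebAuthn authenticator signs over the origin the browser actually connected to, so the relying party can reject assertions harvested on a look-alike domain, whereas a relayed SMS or TOTP code verifies just fine. Below is a simplified sketch of that check (the field names mirror WebAuthn's clientDataJSON, but this is an illustration, not a full verifier):

```python
# Conceptual sketch of why FIDO2/WebAuthn resists phishing while OTP codes do not:
# the signed client data includes the origin the browser actually talked to, so a
# relying party can reject assertions collected on a look-alike domain. Simplified
# illustration only; a real verifier also checks the challenge and signature.
import json

EXPECTED_ORIGIN = "https://login.example.com"   # the genuine relying party (assumed)

def verify_assertion_origin(client_data_json: bytes) -> bool:
    """Reject assertions whose signed client data was produced for another origin."""
    client_data = json.loads(client_data_json)
    return (
        client_data.get("type") == "webauthn.get"
        and client_data.get("origin") == EXPECTED_ORIGIN
    )

# Legitimate sign-in: the browser reports the genuine origin inside the signed payload.
good = json.dumps(
    {"type": "webauthn.get", "challenge": "example-challenge", "origin": "https://login.example.com"}
).encode()
# Phishing attempt relayed through a look-alike domain: the origin no longer matches.
phished = json.dumps(
    {"type": "webauthn.get", "challenge": "example-challenge", "origin": "https://login.examp1e.com"}
).encode()

print(verify_assertion_origin(good))     # True
print(verify_assertion_origin(phished))  # False; a relayed SMS or TOTP code would still pass
```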

5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage

Oct 01, 2024 Generative AI / Data Protection
Since its emergence, Generative AI has revolutionized enterprise productivity. GenAI tools enable faster and more effective software development, financial analysis, business planning, and customer engagement. However, this business agility comes with significant risks, particularly the potential for sensitive data leakage. As organizations attempt to balance productivity gains with security concerns, many have been forced to choose between allowing unrestricted GenAI usage and banning it altogether. A new e-guide by LayerX titled 5 Actionable Measures to Prevent Data Leakage Through Generative AI Tools is designed to help organizations navigate the challenges of GenAI usage in the workplace. The guide offers practical steps for security managers to protect sensitive corporate data while still reaping the productivity benefits of GenAI tools like ChatGPT. This approach is intended to allow companies to strike the right balance between innovation and security. Why Worry About ChatGPT? The e...
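One recurring control in this space is a pre-submission filter that masks sensitive values before a prompt ever reaches a GenAI tool. The sketch below is a minimal illustration of that idea rather than anything taken from the LayerX e-guide, and the detection patterns are assumptions for demonstration:

```python
# Minimal illustration (not from the LayerX guide): redact obviously sensitive
# patterns from text before it is submitted to a GenAI tool. The patterns are
# assumptions for demonstration; real DLP policies go considerably further.
import re

REDACTIONS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact_for_genai(text: str) -> tuple[str, list[str]]:
    """Return the redacted text plus the categories that were masked."""
    hits = []
    for label, pattern in REDACTIONS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, hits

prompt = "Summarize this ticket from jane.doe@acme.example, card 4111 1111 1111 1111 was declined."
clean, flagged = redact_for_genai(prompt)
print(clean)    # sensitive values masked before the prompt leaves the endpoint
print(flagged)  # ['email', 'credit_card']
```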

The AI Hangover is Here – The End of the Beginning

Aug 12, 2024 AI Technology / Machine Learning
After a good year of sustained exuberance, the hangover is finally here. It's a gentle one (for now), as the market corrects the share price of the major players (like Nvidia, Microsoft, and Google), while other players reassess the market and adjust priorities. Gartner calls it the trough of disillusionment, when interest wanes and implementations fail to deliver the promised breakthroughs. Producers of the technology shake out or fail. Investment continues only if the surviving providers improve their products to the satisfaction of early adopters.  Let's be clear, this was always going to be the case: the post-human revolution promised by the AI cheerleaders was never a realistic goal, and the incredible excitement triggered by the early LLMs was not based on market success.  AI is here to stay  What's next for AI then? Well, if it follows the Gartner hype cycle, the deep crash is followed by the slope of enlightenment where the maturing technology regains its foo...

Offensive AI: The Sine Qua Non of Cybersecurity

Jul 26, 2024 Digital Warfare / Cybersecurity Training
"Peace is the virtue of civilization. War is its crime. Yet it is often in the furnace of war that the sharpest tools of peace are forged." - Victor Hugo. In 1971, an unsettling message started appearing on several computers that comprised ARPANET, the precursor to what we now know as the Internet. The message, which read "I'm the Creeper: catch me if you can." was the output of a program named Creeper, which was developed by the famous programmer Bob Thomas while he worked at BBN Technologies. While Thomas's intentions were not malicious, the Creeper program represents the advent of what we now call a computer virus. The appearance of Creeper on ARPANET set the stage for the emergence of the first Antivirus software. While unconfirmed, it is believed that Ray Thomlinson, famously known for inventing email, developed Reaper, a program designed to remove Creeper from Infected Machines. The development of this tool used to defensively chase down and remove ...
Summary of "AI Leaders Spill Their Secrets" Webinar

Summary of "AI Leaders Spill Their Secrets" Webinar

Jul 19, 2024 Technology / Artificial Intelligence
Event Overview The "AI Leaders Spill Their Secrets" webinar, hosted by Sigma Computing, featured prominent AI experts sharing their experiences and strategies for success in the AI industry. The panel included Michael Ward from Sardine, Damon Bryan from Hyperfinity, and Stephen Hillion from Astronomer, moderated by Zalak Trivedi, Sigma Computing's Product Manager. Key Speakers and Their Backgrounds 1. Michael Ward Senior Risk Data Analyst at Sardine. Over 25 years of experience in software engineering. Focuses on data science, analytics, and machine learning to prevent fraud and money laundering. 2. Damon Bryan Co-founder and CTO at Hyperfinity. Specializes in decision intelligence software for retailers and brands. Background in data science, AI, and analytics, transitioning from consultancy to a full-fledged software company. 3. Stephen Hillion SVP of Data and AI at Astronomer. Manages data science teams and focuses on the development and scaling of...

How MFA Failures are Fueling a 500% Surge in Ransomware Losses

Jul 02, 2024 Multi-Factor Authentication
The cybersecurity threat landscape has witnessed a dramatic and alarming rise in the average ransomware payment, an increase exceeding 500%. Sophos, a global leader in cybersecurity, revealed in its annual "State of Ransomware 2024" report that the average ransom payment has increased 500% in the last year, with organizations that paid a ransom reporting an average payment of $2 million, up from $400,000 in 2023. Separately, RISK & INSURANCE, a leading media source for the insurance industry, recently reported that the median ransom demand soared to $20 million in 2023 from $1.4 million in 2022, and payments skyrocketed to $6.5 million in 2023 from $335,000 in 2022, much more than 500%. This shocking surge is a testament to the increasing sophistication of cyberattacks and the significant vulnerabilities inherent in outdated security methods. The most significant factor contributing to this trend is a broad reliance on twenty-year-old, legacy Multi-Factor Authentic...

Apple Launches Private Cloud Compute for Privacy-Centric AI Processing

Jun 11, 2024 Cloud Computing / Artificial Intelligence
Apple has announced the launch of a "groundbreaking cloud intelligence system" called Private Cloud Compute (PCC) that's designed for processing artificial intelligence (AI) tasks in a privacy-preserving manner in the cloud. The tech giant described PCC as the "most advanced security architecture ever deployed for cloud AI compute at scale." PCC coincides with the arrival of new generative AI (GenAI) features – collectively dubbed Apple Intelligence, or AI for short – that the iPhone maker unveiled in its next generation of software, including iOS 18, iPadOS 18, and macOS Sequoia. All of the Apple Intelligence features, both the ones that run on-device and those that rely on PCC, leverage in-house generative models trained on "licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot." With PCC, the idea is to essentially offload complex requests that requir...

The AI Debate: Google's Guidelines, Meta's GDPR Dispute, Microsoft's Recall Backlash

Jun 07, 2024 Artificial Intelligence / Privacy
Google is urging third-party Android app developers to incorporate generative artificial intelligence (GenAI) features in a responsible manner. The new guidance from the search and advertising giant is an effort to combat problematic content, including sexual content and hate speech, created through such tools. To that end, apps that generate content using AI must ensure they don't create Restricted Content, have a mechanism for users to report or flag offensive information, and market them in a manner that accurately represents the app's capabilities. App developers are also advised to rigorously test their AI models to ensure they respect user safety and privacy. "Be sure to test your apps across various user scenarios and safeguard them against prompts that could manipulate your generative AI feature to create harmful or offensive content," Prabhat Sharma, director of trust and safety for Google Play, Android, and Chrome, said. The development com...

How to Build Your Autonomous SOC Strategy

May 30, 2024 Endpoint Security / Threat Detection
Security leaders are in a tricky position trying to discern how much new AI-driven cybersecurity tools could actually benefit a security operations center (SOC). The hype about generative AI is still everywhere, but security teams have to live in reality. They face constantly incoming alerts from endpoint security platforms, SIEM tools, and phishing emails reported by internal users. Security teams also face an acute talent shortage. In this guide, we'll lay out practical steps organizations can take to automate more of their processes and build an autonomous SOC strategy. This should help address the acute talent shortage on security teams: by employing artificial intelligence and machine learning with a variety of techniques, these systems simulate the decision-making and investigative processes of human analysts. First, we'll define objectives for an autonomous SOC strategy and then consider key processes that could be automated. Next, we'll consider different AI and automation ...
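To make "automate more of their processes" concrete, one of the first automatable steps is alert triage: enriching incoming alerts and scoring them so analysts work the riskiest items first. The sketch below is a hedged illustration with invented fields, weights, and watchlists, not a reference to any particular SIEM or EDR product:

```python
# Hedged sketch of one automatable SOC step: enriching and scoring incoming alerts
# so analysts see the highest-risk items first. Fields, weights, and watchlists are
# illustrative assumptions, not a specific vendor's schema.
from dataclasses import dataclass

KNOWN_BAD_IPS = {"203.0.113.7"}           # e.g., fed from threat intel (assumed)
CRITICAL_ASSETS = {"dc01", "payroll-db"}  # crown-jewel hosts (assumed)

@dataclass
class Alert:
    source: str        # "edr", "siem", "phishing_report", ...
    host: str
    remote_ip: str
    severity: int      # vendor-reported severity, 1-5

def triage_score(alert: Alert) -> int:
    """Combine vendor severity with simple enrichment into a single priority score."""
    score = alert.severity * 10
    if alert.remote_ip in KNOWN_BAD_IPS:
        score += 40                       # known-bad infrastructure
    if alert.host in CRITICAL_ASSETS:
        score += 30                       # blast radius matters as much as raw severity
    if alert.source == "phishing_report":
        score += 10                       # user-reported phish often precedes intrusion
    return score

queue = [
    Alert("siem", "dc01", "203.0.113.7", 3),
    Alert("edr", "laptop-42", "198.51.100.9", 4),
    Alert("phishing_report", "laptop-17", "192.0.2.1", 2),
]
for alert in sorted(queue, key=triage_score, reverse=True):
    print(triage_score(alert), alert.host, alert.source)
```

The point is not the particular weights but the pattern: codify the routine enrichment and prioritization an analyst would do by hand, and reserve human attention for investigation and response.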