
Generative AI | Breaking Cybersecurity News | The Hacker News

Category — Generative AI
Artificial Intelligence – What's all the fuss?

Apr 17, 2025 Artificial Intelligence / Threat Intelligence
Talking about AI: Definitions

Artificial Intelligence (AI) — AI refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human intelligence, such as decision-making and problem-solving. AI is the broadest concept in this field, encompassing various technologies and methodologies, including Machine Learning (ML) and Deep Learning.

Machine Learning (ML) — ML is a subset of AI that focuses on developing algorithms and statistical models that allow machines to learn from and make predictions or decisions based on data. ML is a specific approach within AI, emphasizing data-driven learning and improvement over time.

Deep Learning (DL) — Deep Learning is a specialized subset of ML that uses neural networks with multiple layers to analyze and interpret complex data patterns. This advanced form of ML is particularly effective for tasks such as image and speech recognition, making it a crucial component of many AI applications.

Larg...
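To make the nesting concrete, here is a minimal sketch (not from the article): a tiny two-layer neural network trained in NumPy on the XOR toy problem. The data-driven weight updates are the machine-learning part; stacking more than one layer of weights is what deep learning adds on top.

```python
# Minimal sketch: a two-layer neural network in NumPy on the XOR toy problem,
# illustrating the "multiple layers" that distinguish deep learning from
# single-layer models. Hyperparameters are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, which a single linear layer cannot separate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights = one hidden layer between input and output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared-error loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]] as training converges
```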
Microsoft Adds Inline Data Protection to Edge for Business to Block GenAI Data Leaks

Mar 24, 2025 Enterprise Security / Browser Security
Microsoft on Monday announced a new feature called inline data protection for its enterprise-focused Edge for Business web browser. The native data security control is designed to prevent employees from sharing sensitive company-related data with consumer generative artificial intelligence (GenAI) apps like OpenAI ChatGPT, Google Gemini, and DeepSeek. The list will be expanded over time to include other GenAI, email, collaboration, and social media apps. "With the new inline protection capability for Edge for Business, you can prevent data leakage across the various ways that users interact with sensitive data in the browser, including typing of text directly into a web application or generative AI prompt," the tech giant said. The Microsoft Purview browser data loss prevention (DLP) controls come as the company announced the General Availability of collaboration security for Microsoft Teams in an effort to tackle phishing attacks against users of the enterprise com...
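Microsoft has not detailed the internals of this control, but the general idea of inline, pattern-based data loss prevention can be sketched. The snippet below is an illustrative analogue only; the pattern names and blocking logic are hypothetical and are not Purview's implementation.

```python
# Rough sketch of inline DLP on a GenAI prompt: before text is submitted,
# scan it for patterns that look like sensitive data and block or warn.
# Patterns and policy names here are illustrative placeholders.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key_like": re.compile(r"\b[A-Za-z0-9_\-]{32,}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this contract. Customer SSN: 123-45-6789."
violations = check_prompt(prompt)
if violations:
    print(f"Blocked: prompt appears to contain {', '.join(violations)}")
else:
    print("Allowed")
```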
10 Steps to Microsoft 365 Cyber Resilience

Veeam | Cyber Resilience / Data Security
75% of organizations get hit by cyberattacks, and most report getting hit more than once. Read this ebook to learn 10 steps to take to build a more proactive approach to securing your organization's Microsoft 365 data from cyberattacks and ensuring cyber resilience.
Microsoft Exposes LLMjacking Cybercriminals Behind Azure AI Abuse Scheme

Feb 28, 2025 API Security / AI Security
Microsoft on Thursday unmasked four of the individuals that it said were behind an Azure Abuse Enterprise scheme that involves leveraging unauthorized access to generative artificial intelligence (GenAI) services in order to produce offensive and harmful content. The campaign, called LLMjacking, has targeted various AI offerings, including Microsoft's Azure OpenAI Service. The tech giant is tracking the cybercrime network as Storm-2139. The individuals named are: Arian Yadegarnia aka "Fiz" of Iran, Alan Krysiak aka "Drago" of the United Kingdom, Ricky Yuen aka "cg-dot" of Hong Kong, China, and Phát Phùng Tấn aka "Asakuri" of Vietnam. "Members of Storm-2139 exploited exposed customer credentials scraped from public sources to unlawfully access accounts with certain generative AI services," Steven Masada, assistant general counsel for Microsoft's Digital Crimes Unit (DCU), said. "They then altered the capabilities of ...
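The scheme hinged on credentials that were already sitting in public sources. As an illustrative, defender-side counterpart, the sketch below scans a local file tree for key-like strings before they can be scraped; the regexes and paths are hypothetical examples, not a vendor rule set.

```python
# Hypothetical sketch: scan your own files for strings that look like API keys
# or secrets before they end up in public repositories or pastes.
import re
from pathlib import Path

KEY_LIKE = [
    re.compile(r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9\-_]{20,}['\"]", re.IGNORECASE),
    re.compile(r"secret\s*[:=]\s*['\"][A-Za-z0-9+/=]{20,}['\"]", re.IGNORECASE),
]

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line number, matched text) pairs for key-like strings in a file."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        for pattern in KEY_LIKE:
            match = pattern.search(line)
            if match:
                hits.append((lineno, match.group(0)))
    return hits

# Scan a local tree before publishing it anywhere public.
for file in Path(".").rglob("*.py"):
    for lineno, snippet in scan_file(file):
        print(f"{file}:{lineno}: possible exposed credential: {snippet[:60]}")
```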
The Ultimate Guide to SaaS Identity Security in 2025

Wing Security | SaaS Security / Identity Threat Detection
Discover how to protect your SaaS apps from identity-based breaches with this expert 2025 guide, and learn practical steps to secure every account and keep your data safe.
4 Reasons Your SaaS Attack Surface Can No Longer be Ignored

Jan 14, 2025 SaaS Security / Generative AI
What do identity risks, data security risks, and third-party risks all have in common? They are all made much worse by SaaS sprawl. Every new SaaS account adds a new identity to secure, a new place where sensitive data can end up, and a new source of third-party risk. And this growing attack surface, much of which is unknown or unmanaged in most organizations, has become an attractive target for attackers. So, why should you prioritize securing your SaaS attack surface in 2025? Here are 4 reasons. 1. Modern work runs on SaaS. When's the last time you used something other than a cloud-based app to do your work? Can't remember? You're not alone. Outside of ...
Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content Creation

Jan 11, 2025 AI Security / Cybersecurity
Microsoft has revealed that it's pursuing legal action against a "foreign-based threat actor group" for operating a hacking-as-a-service infrastructure to intentionally get around the safety controls of its generative artificial intelligence (AI) services and produce offensive and harmful content. The tech giant's Digital Crimes Unit (DCU) said it has observed the threat actors "develop sophisticated software that exploited exposed customer credentials scraped from public websites," and "sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services." The adversaries then used these services, such as Azure OpenAI Service, and monetized the access by selling it to other malicious actors, providing them with detailed instructions on how to use these custom tools to generate harmful content. Microsoft said it discovered the activity in July 2024. The Windows maker...
Iranian and Russian Entities Sanctioned for Election Interference Using AI and Cyber Tactics

Jan 01, 2025 Generative AI / Election Interference
The U.S. Treasury Department's Office of Foreign Assets Control (OFAC) on Tuesday leveled sanctions against two entities in Iran and Russia for their attempts to interfere with the November 2024 presidential election. The federal agency said the entities – a subordinate organization of Iran's Islamic Revolutionary Guard Corps and a Moscow-based affiliate of Russia's Main Intelligence Directorate (GRU) – sought to influence the electoral outcome and divide the American people through targeted disinformation campaigns. "As affiliates of the IRGC and GRU, these actors aimed to stoke socio-political tensions and influence the U.S. electorate during the 2024 U.S. election," it noted in a press release. In August 2024, the Office of the Director of National Intelligence (ODNI), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) jointly accused Iran of attempting to undermine democratic processes, including b...
AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Cases

Dec 23, 2024 Machine Learning / Threat Analysis
Cybersecurity researchers have found that it's possible to use large language models (LLMs) to generate new variants of malicious JavaScript code at scale in a manner that can better evade detection. "Although LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect," Palo Alto Networks Unit 42 researchers said in a new analysis. "Criminals can prompt LLMs to perform transformations that are much more natural-looking, which makes detecting this malware more challenging." With enough transformations over time, the approach could have the advantage of degrading the performance of malware classification systems, tricking them into believing that a piece of nefarious code is actually benign. While LLM providers have increasingly enforced security guardrails to prevent them from going off the rails and producing unintended output, bad actors have advertised tools like WormGPT...
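The evasion works because lightweight detection often keys on the exact form of the code rather than its behavior. The toy example below (not Unit 42's tooling, and using a harmless snippet) shows how a purely cosmetic rewrite yields an entirely different fingerprint while the logic stays the same.

```python
# Illustration of why rewriting code defeats hash- or signature-based detection:
# a cosmetic transformation of a benign JavaScript snippet keeps its behavior
# but produces a completely different fingerprint.
import hashlib

original = "function add(a, b) { return a + b; }"

# Rename identifiers and reflow whitespace -- the kind of "natural-looking"
# transformation an LLM can apply at scale.
rewritten = original.replace("add", "sum").replace("a, b", "x, y") \
                    .replace("a + b", "x + y").replace("{ ", "{\n  ").replace(" }", "\n}")

for label, code in (("original", original), ("rewritten", rewritten)):
    digest = hashlib.sha256(code.encode()).hexdigest()
    print(f"{label:>9}: {digest[:16]}...")
# The two digests share nothing, so a detector keyed on the original's hash
# (or a brittle string signature) no longer matches the rewritten code.
```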
Why Phishing-Resistant MFA Is No Longer Optional: The Hidden Risks of Legacy MFA

Oct 24, 2024 Ransomware / Generative AI
Sometimes, it turns out that the answers we struggled so hard to find were sitting right in front of us for so long that we somehow overlooked them. When the Department of Homeland Security, through the Cybersecurity and Infrastructure Security Agency (CISA), in coordination with the FBI, issues a cybersecurity warning and prescribes specific action, it's a pretty good idea to at least read the joint advisory. In their advisory AA24-242A, DHS/CISA and the FBI told the entire cybercriminal-stopping world that to stop ransomware attacks, organizations needed to implement phishing-resistant MFA and ditch SMS-based OTP MFA. The Best Advice I Never Followed: This year, we have experienced an astonishing surge in ransomware payments, with the average payment increasing by a staggering 500%. Per the "State of Ransomware 2024" report from cybersecurity leader Sophos, the average ransom has jumped 5X, reaching $2 million, up from $400,000 last year. Even more troubling, RISK &...
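The core difference is origin binding: an SMS or app OTP verifies the same way no matter which site captured it, while a phishing-resistant credential signs the server's challenge together with the origin it actually sees. The sketch below is a deliberately simplified toy model of that idea, not the real WebAuthn protocol.

```python
# Toy model of origin binding (a simplification, not WebAuthn itself): the
# authenticator signs the challenge plus the origin it is talking to, so a
# response captured on a look-alike phishing domain never verifies for the
# real one -- unlike a relayed SMS OTP.
import hashlib
import hmac
import os

device_key = os.urandom(32)          # secret held by the user's authenticator

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    """Authenticator response: bound to both the challenge and the origin."""
    return hmac.new(device_key, challenge + origin.encode(), hashlib.sha256).digest()

def verify(challenge: bytes, expected_origin: str, assertion: bytes) -> bool:
    expected = hmac.new(device_key, challenge + expected_origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

challenge = os.urandom(16)

# Legitimate login: the browser reports the genuine origin.
good = sign_assertion(challenge, "https://login.example.com")
print(verify(challenge, "https://login.example.com", good))     # True

# Phishing proxy: the victim signs on the attacker's domain; replaying that
# response against the real origin fails.
phished = sign_assertion(challenge, "https://login.examp1e.com")
print(verify(challenge, "https://login.example.com", phished))  # False
```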
5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage

Oct 01, 2024 Generative AI / Data Protection
Since its emergence, Generative AI has revolutionized enterprise productivity. GenAI tools enable faster and more effective software development, financial analysis, business planning, and customer engagement. However, this business agility comes with significant risks, particularly the potential for sensitive data leakage. As organizations attempt to balance productivity gains with security concerns, many have been forced to choose between allowing unrestricted GenAI usage and banning it altogether. A new e-guide by LayerX titled 5 Actionable Measures to Prevent Data Leakage Through Generative AI Tools is designed to help organizations navigate the challenges of GenAI usage in the workplace. The guide offers practical steps for security managers to protect sensitive corporate data while still reaping the productivity benefits of GenAI tools like ChatGPT. This approach is intended to allow companies to strike the right balance between innovation and security. Why Worry About ChatGPT? The e...
The AI Hangover is Here – The End of the Beginning

Aug 12, 2024 AI Technology / Machine Learning
After a good year of sustained exuberance, the hangover is finally here. It's a gentle one (for now), as the market corrects the share price of the major players (like Nvidia, Microsoft, and Google), while other players reassess the market and adjust priorities. Gartner calls it the trough of disillusionment , when interest wanes and implementations fail to deliver the promised breakthroughs. Producers of the technology shake out or fail. Investment continues only if the surviving providers improve their products to the satisfaction of early adopters.  Let's be clear, this was always going to be the case: the post-human revolution promised by the AI cheerleaders was never a realistic goal, and the incredible excitement triggered by the early LLMs was not based on market success.  AI is here to stay  What's next for AI then? Well, if it follows the Gartner hype cycle, the deep crash is followed by the slope of enlightenment where the maturing technology regains its foo...