
Search results for DLP + LLM use | Breaking Cybersecurity News | The Hacker News

Securing GenAI in the Browser: Policy, Isolation, and Data Controls That Actually Work


Dec 12, 2025 Data Protection / Browser Security
The browser has become the main interface to GenAI for most enterprises: from web-based LLMs and copilots to GenAI-powered extensions and agentic browsers like ChatGPT Atlas. Employees are leveraging the power of GenAI to draft emails, summarize documents, work on code, and analyze data, often by copying and pasting sensitive information directly into prompts or by uploading files. Traditional security controls were not designed to understand this new prompt-driven interaction pattern, leaving a critical blind spot where risk is highest. At the same time, security teams are under pressure to enable more GenAI platforms because they clearly boost productivity. Simply blocking AI is unrealistic. The more sustainable approach is to secure GenAI platforms where users actually access them: inside the browser session. The GenAI browser threat model: the GenAI-in-the-browser threat model must be approached differently from traditional web browsing due to several key factors. Us...
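As a rough illustration of the prompt-level data controls the article describes, the sketch below checks prompt text for obviously sensitive patterns before it is submitted, as a browser extension content script or inline control might. The rule names, regexes, and function names are illustrative assumptions, not taken from any particular product.

```typescript
// Toy prompt-level DLP check; rule names and patterns are illustrative only.
interface DlpFinding {
  rule: string;
  match: string;
}

const RULES: { name: string; pattern: RegExp }[] = [
  // Payment card numbers (rough check; real DLP would also validate with Luhn)
  { name: "payment_card", pattern: /\b(?:\d[ -]?){13,16}\b/g },
  // Private key material pasted into a prompt
  { name: "private_key", pattern: /-----BEGIN [A-Z ]*PRIVATE KEY-----/g },
  // Common API token prefixes
  { name: "api_token", pattern: /\b(?:sk|ghp|xoxb)-[A-Za-z0-9_-]{16,}\b/g },
];

function scanPrompt(text: string): DlpFinding[] {
  const findings: DlpFinding[] = [];
  for (const rule of RULES) {
    for (const m of Array.from(text.matchAll(rule.pattern))) {
      findings.push({ rule: rule.name, match: m[0] });
    }
  }
  return findings;
}

// Example: run before the prompt leaves the browser session.
const findings = scanPrompt("Please debug: -----BEGIN RSA PRIVATE KEY----- ...");
if (findings.length > 0) {
  console.warn("Sensitive data in prompt:", findings.map(f => f.rule));
  // A real control would block, redact, or require a justification instead.
}
```

A production control would pair pattern matching with context (which GenAI tool, which user, which data classification) and offer redaction or justification flows rather than a hard block.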
The Hidden Weaknesses in AI SOC Tools that No One Talks About


Jul 03, 2025 Security Operations / Machine Learning
If you're evaluating AI-powered SOC platforms, you've likely seen bold claims: faster triage, smarter remediation, and less noise. But under the hood, not all AI is created equal. Many solutions rely on pre-trained AI models that are hardwired for a handful of specific use cases. While that might work for yesterday's SOC, today's reality is different. Modern security operations teams face a sprawling and ever-changing landscape of alerts: cloud, endpoint, identity, OT, insider threats, phishing, network, DLP, and more, and the list keeps growing. CISOs and SOC managers are rightly skeptical: can this AI actually handle all of my alerts, or is it just another rules engine in disguise? In this post, we'll examine the divide between two types of AI SOC platforms: those built on adaptive AI, which learns to triage and respond to any alert type, and those that rely on pre-trained AI, limited to handling predefined use cases only. Understanding t...
⚡ Weekly Recap: AI Automation Exploits, Telecom Espionage, Prompt Poaching & More


Jan 12, 2026 Hacking News / Cybersecurity
This week made one thing clear: small oversights can spiral fast. Tools meant to save time and reduce friction turned into easy entry points once basic safeguards were ignored. Attackers didn’t need novel tricks. They used what was already exposed and moved in without resistance. Scale amplified the damage. A single weak configuration rippled out to millions. A repeatable flaw worked again and again. Phishing crept into apps people rely on daily, while malware blended into routine system behavior. Different victims, same playbook: look normal, move quickly, spread before alarms go off. For defenders, the pressure keeps rising. Vulnerabilities are exploited almost as soon as they surface. Claims and counterclaims appear before the facts settle. Criminal groups adapt faster each cycle. The stories that follow show where things failed—and why those failures matter going forward. ⚡ Threat of the Week Maximum Severity Security Flaw Disclosed in n8n — A maximum-severity vulnerability ...

5 Cloud Security Risks You Can’t Afford to Ignore

SentinelOne | Enterprise Security / Cloud Security
Get expert analysis, attacker insights, and case studies in our 2025 risk report.

Red Report 2026: Analysis of 1.1M Malicious Files and 15.5M Actions

Picus Security | Attack Surface / Cloud Security
New research shows 80% of top ATT&CK techniques now target evasion to remain undetected. Get your copy now.
Generative AI Security: Preventing Microsoft Copilot Data Exposure


Dec 05, 2023 Data Security / Generative AI
Microsoft Copilot has been called one of the most powerful productivity tools on the planet. Copilot is an AI assistant that lives inside each of your Microsoft 365 apps — Word, Excel, PowerPoint, Teams, Outlook, and so on. Microsoft's dream is to take the drudgery out of daily work and let humans focus on being creative problem-solvers. What makes Copilot a different beast than ChatGPT and other AI tools is that it has access to everything you've ever worked on in 365. Copilot can instantly search and compile data from across your documents, presentations, email, calendar, notes, and contacts. And therein lies the problem for information security teams. Copilot can access all the sensitive data that a user can access, which is often far too much. On average, 10% of a company's M365 data is open to all employees. Copilot can also rapidly generate net new sensitive data that must be protected. Prior to the AI revolution, humans' ability to create and share data ...
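Getting ahead of that exposure usually starts with finding what is already shared too broadly. The sketch below is a minimal illustration, not a remediation tool: it walks a OneDrive/SharePoint drive via Microsoft Graph and flags items carrying organization-wide or anonymous sharing links. It assumes an access token with Files.Read.All and omits paging and error handling.

```typescript
// Sketch: flag broadly shared drive items via Microsoft Graph, since Copilot
// can surface anything the signed-in user can already reach.
// Assumes GRAPH_TOKEN has Files.Read.All; paging and error handling omitted.
const GRAPH = "https://graph.microsoft.com/v1.0";
const TOKEN = process.env.GRAPH_TOKEN ?? "";

async function graphGet(path: string): Promise<any> {
  const res = await fetch(`${GRAPH}${path}`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  if (!res.ok) throw new Error(`Graph call failed: ${res.status}`);
  return res.json();
}

async function flagBroadlyShared(driveId: string): Promise<void> {
  const children = await graphGet(`/drives/${driveId}/root/children`);
  for (const item of children.value) {
    const perms = await graphGet(`/drives/${driveId}/items/${item.id}/permissions`);
    for (const p of perms.value) {
      // Sharing links scoped to the whole organization (or anonymous links)
      // reach far beyond the people who actually need the file.
      if (p.link && ["organization", "anonymous"].includes(p.link.scope)) {
        console.log(`Overshared: ${item.name} (${p.link.scope} link)`);
      }
    }
  }
}

flagBroadlyShared(process.argv[2] ?? "");
```

In practice, a report like this would feed a labeling and access-review workflow so that Copilot only surfaces what each user genuinely needs.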
Why Data Security and Privacy Need to Start in Code


Dec 16, 2025 AI Governance / Application Security
AI-assisted coding and AI app generation platforms have created an unprecedented surge in software development. Companies are now facing rapid growth in both the number of applications and the pace of change within those applications. Security and privacy teams are under significant pressure as the surface area they must cover expands quickly while their staffing levels remain largely unchanged. Existing data security and privacy solutions are too reactive for this new era. Many begin with data already collected in production, which is often too late. These solutions frequently miss hidden data flows to third-party and AI integrations, and for the data sinks they do cover, they help detect risks but do not prevent them. The question is whether many of these issues can instead be prevented early. The answer is yes: prevention is possible by embedding detection and governance controls directly into development. HoundDog.ai provides a privacy code scanner built for exactly this p...
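To make the shift-left idea concrete, here is a toy sketch of the kind of check a privacy code scanner can run during development: flag lines where a field that looks like PII reaches a logging sink. It is a deliberately naive regex pass for illustration only and does not represent how HoundDog.ai's scanner actually works.

```typescript
// Toy shift-left privacy scan: flag lines where a PII-looking field is passed
// to a logging call. Regex-based and illustrative only; a real scanner uses
// type information and data-flow analysis.
import { readFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

const PII_FIELDS = /\b(ssn|email|phone|dateOfBirth|creditCard)\b/i;
const LOG_SINK = /\b(console\.(log|info|warn|error)|logger\.\w+)\s*\(/;

function scanFile(path: string): void {
  const lines = readFileSync(path, "utf8").split("\n");
  lines.forEach((line, i) => {
    if (LOG_SINK.test(line) && PII_FIELDS.test(line)) {
      console.log(`${path}:${i + 1}: possible PII in log statement -> ${line.trim()}`);
    }
  });
}

function scanDir(dir: string): void {
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name);
    if (entry.isDirectory()) scanDir(full);
    else if (/\.(ts|js)$/.test(entry.name)) scanFile(full);
  }
}

scanDir(process.argv[2] ?? "src");
```

Even this toy version shows why catching the flow in a pull request is cheaper than discovering it in production logs after the data has already been collected.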
Perfecting the Defense-in-Depth Strategy with Automation


Jan 26, 2024 Cyber Threat Intelligence
Medieval castles stood as impregnable fortresses for centuries, thanks to their meticulous design. Fast forward to the digital age, and this medieval wisdom still echoes in cybersecurity. Like castles with strategic layouts to withstand attacks, the Defense-in-Depth strategy is the modern counterpart — a multi-layered approach with strategic redundancy and a blend of passive and active security controls. However, the evolving cyber threat landscape can challenge even the most fortified defenses. Despite the widespread adoption of the Defense-in-Depth strategy, cyber threats persist. Fortunately, the Defense-in-Depth strategy can be augmented using Breach and Attack Simulation (BAS), an automated tool that assesses and improves every security control in each layer. Defense-in-Depth: False Sense of Security with Layers. Also known as multi-layered defense, the defense-in-depth strategy has been widely adopted by organizations since the early 2000s. It's based on the assumption th...
Rethinking AI Data Security: A Buyer's Guide 


Sep 17, 2025 AI Security / Shadow IT
Generative AI has gone from a curiosity to a cornerstone of enterprise productivity in just a few short years. From copilots embedded in office suites to dedicated large language model (LLM) platforms, employees now rely on these tools to code, analyze, draft, and decide. But for CISOs and security architects, the very speed of adoption has created a paradox: the more powerful the tools, the more porous the enterprise boundary becomes. And here's the counterintuitive part: the biggest risk isn't that employees are careless with prompts. It's that organizations are applying the wrong mental model when evaluating solutions, trying to retrofit legacy controls for a risk surface they were never designed to cover. A new guide (download here) tries to bridge that gap. The Hidden Challenge in Today's Vendor Landscape: The AI data security market is already crowded. Every vendor, from traditional DLP to next-gen SSE platforms, is rebranding around "AI security." On paper, this seems to of...