
AI Security | Breaking Cybersecurity News | The Hacker News

Three Tips to Protect Your Secrets from AI Accidents

Feb 26, 2024 Data Privacy / Machine Learning
Last year, the Open Worldwide Application Security Project (OWASP) published multiple versions of the "OWASP Top 10 for Large Language Models," reaching a 1.0 document in August and a 1.1 document in October. These documents demonstrate not only the rapidly evolving nature of large language models, but also the evolving ways in which they can be attacked and defended. In this article, we'll discuss four items in that top 10 that are most likely to contribute to the accidental disclosure of secrets such as passwords, API keys, and more.

We already know that LLMs can reveal secrets, because it has happened. In early 2023, GitGuardian reported that it had found over 10 million secrets in public GitHub commits. GitHub's Copilot AI coding tool was trained on public commits, and in September 2023, researchers at the University of Hong Kong published a paper on an algorithm that generated 900 prompts designed to get Copilot to reveal secrets from
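To make the secret-leakage problem concrete, here is a minimal sketch of pattern-based secret scanning of the kind that tools like GitGuardian perform at much larger scale. The pattern names and regexes below are illustrative assumptions, not any vendor's actual detection rules; production scanners combine hundreds of detectors with entropy analysis and validity checks.

```python
import re

# Illustrative detectors only (assumed patterns, not a real scanner's rules).
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in the given text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

if __name__ == "__main__":
    sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\napi_key = "abcd1234efgh5678ijkl"'
    for name, value in find_secrets(sample):
        print(name, value)
```

If a commit containing strings like these reaches a public repository, anything trained on (or prompted against) that repository can potentially reproduce them, which is why scanning before pushing matters.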
NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

Jan 08, 2024 Artificial Intelligence / Cyber Security
The U.S. National Institute of Standards and Technology (NIST) is calling attention to the privacy and security challenges that arise as a result of the increased deployment of artificial intelligence (AI) systems in recent years. "These security and privacy challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to adversely affect the performance of the AI system, and even malicious manipulations, modifications or mere interaction with models to exfiltrate sensitive information about people represented in the data, about the model itself, or proprietary enterprise data," NIST said. As AI systems become integrated into online services at a rapid pace, in part driven by the emergence of generative AI systems like OpenAI ChatGPT and Google Bard, the models powering these technologies face a number of threats at various stages of the machine learning pipeline. These include corrupted training data, security flaw
How to Achieve the Best Risk-Based Alerting (Bye-Bye SIEM)

Feb 19, 2024 Network Detection and Response
Did you know that Network Detection and Response (NDR) has become the most effective technology to detect cyber threats? In contrast to SIEM, NDR offers adaptive cybersecurity with fewer false positives and more efficient threat response. NDR massively upgrades your security through risk-based alerting, prioritizing alerts based on the potential risk to your organization's systems and data. How? NDR's real-time analysis, machine learning, and threat intelligence provide immediate detection, reducing alert fatigue and enabling better decision-making.

Why Use Risk-Based Alerting?

Risk-based alerting is an approach where security alerts and responses are prioritized based on the level of risk they pose to an organization's system
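The prioritization idea described above can be sketched in a few lines: score each alert by combining detection severity with asset criticality and threat-intelligence context, then sort by score. The field names and weights here are illustrative assumptions, not any NDR vendor's actual risk model.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: float            # 0..1, confidence/impact from the detection engine
    asset_criticality: float   # 0..1, importance of the affected system
    threat_intel_match: bool   # indicator seen in threat-intelligence feeds?

def risk_score(alert: Alert) -> float:
    """Toy risk model: severity weighted by asset value, boosted by intel hits."""
    score = alert.severity * alert.asset_criticality
    if alert.threat_intel_match:
        score = min(1.0, score + 0.3)  # assumed boost, capped at 1.0
    return round(score, 2)

if __name__ == "__main__":
    alerts = [
        Alert("port scan on test VM", 0.4, 0.2, False),
        Alert("C2 beacon from database server", 0.9, 1.0, True),
    ]
    # Highest-risk alerts surface first, reducing alert fatigue.
    for a in sorted(alerts, key=risk_score, reverse=True):
        print(f"{risk_score(a):.2f}  {a.name}")
```

In this toy model, the beacon from a critical database server scores far above noisy scans against low-value assets, which is the essence of risk-based alerting: analysts see the riskiest events first.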
AI Solutions Are the New Shadow IT

Nov 22, 2023 AI Security / SaaS Security
Ambitious Employees Tout New AI Tools, Ignore Serious SaaS Security Risks. Like the SaaS shadow IT of the past, AI is placing CISOs and cybersecurity teams in a tough but familiar spot. Employees are covertly using AI with little regard for established IT and cybersecurity review procedures. Considering ChatGPT's meteoric rise to 100 million users within 60 days of launch, especially with little sales and marketing fanfare, employee-driven demand for AI tools will only escalate. As new studies show some workers boost productivity by 40% using generative AI, the pressure for CISOs and their teams to fast-track AI adoption, and turn a blind eye to unsanctioned AI tool usage, is intensifying. But succumbing to these pressures can introduce serious SaaS data leakage and breach risks, particularly as employees flock to AI tools developed by small businesses, solopreneurs, and indie developers.