#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News
Cloud Security

Data Poisoning | Breaking Cybersecurity News | The Hacker News

Category — Data Poisoning
Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

Mar 13, 2024 Large Language Model / AI Security
Google's  Gemini  large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API. The first vulnerability involves getting around security guardrails to leak the system prompts (or a system message), which are designed to set conversation-wide instructions to the LLM to help it generate more useful responses, by asking the model to output its "foundational instructions" in a markdown block. "A system message can be used to inform the LLM about the context," Microsoft  notes  in its documentation about LLM prompt engineering. "The context may be the type of conversation it is engaging in, or the function it is supposed to perform. It helps the LLM generate more appropriate responses....
U.S., U.K., and Global Partners Release Secure AI System Development Guidelines

U.S., U.K., and Global Partners Release Secure AI System Development Guidelines

Nov 27, 2023 Artificial Intelligence / Privacy
The U.K. and U.S., along with international partners from 16 other countries, have released new guidelines for the development of secure artificial intelligence (AI) systems. "The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority," the U.S. Cybersecurity and Infrastructure Security Agency (CISA)  said . The goal is to  increase cyber security levels of AI  and help ensure that the technology is designed, developed, and deployed in a secure manner, the National Cyber Security Centre (NCSC)  added . The guidelines also build upon the U.S. government's  ongoing   efforts  to manage the risks posed by AI by ensuring that new tools are tested adequately before public release, there are guardrails in place to address societal harms, such as bias and discrimination, and privacy concerns, and setting up robust ...
5 Reasons Device Management Isn't Device Trust​

5 Reasons Device Management Isn't Device Trust​

Apr 21, 2025Endpoint Security / Zero Trust
The problem is simple: all breaches start with initial access, and initial access comes down to two primary attack vectors – credentials and devices. This is not news; every report you can find on the threat landscape depicts the same picture.  The solution is more complex. For this article, we'll focus on the device threat vector. The risk they pose is significant, which is why device management tools like Mobile Device Management (MDM) and Endpoint Detection and Response (EDR) are essential components of an organization's security infrastructure. However, relying solely on these tools to manage device risk actually creates a false sense of security. Instead of the blunt tools of device management, organizations are looking for solutions that deliver device trust . Device trust provides a comprehensive, risk-based approach to device security enforcement, closing the large gaps left behind by traditional device management solutions. Here are 5 of those limitations and how to ov...
Expert Insights / Articles Videos
Cybersecurity Resources