#1 Trusted Cybersecurity News Platform
Followed by 4.50+ million
The Hacker News Logo
Subscribe – Get Latest News
Cybersecurity

Data Poisoning | Breaking Cybersecurity News | The Hacker News

Category — Data Poisoning
Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

Mar 13, 2024 Large Language Model / AI Security
Google's  Gemini  large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API. The first vulnerability involves getting around security guardrails to leak the system prompts (or a system message), which are designed to set conversation-wide instructions to the LLM to help it generate more useful responses, by asking the model to output its "foundational instructions" in a markdown block. "A system message can be used to inform the LLM about the context," Microsoft  notes  in its documentation about LLM prompt engineering. "The context may be the type of conversation it is engaging in, or the function it is supposed to perform. It helps the LLM generate more appropriate responses.&qu
U.S., U.K., and Global Partners Release Secure AI System Development Guidelines

U.S., U.K., and Global Partners Release Secure AI System Development Guidelines

Nov 27, 2023 Artificial Intelligence / Privacy
The U.K. and U.S., along with international partners from 16 other countries, have released new guidelines for the development of secure artificial intelligence (AI) systems. "The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority," the U.S. Cybersecurity and Infrastructure Security Agency (CISA)  said . The goal is to  increase cyber security levels of AI  and help ensure that the technology is designed, developed, and deployed in a secure manner, the National Cyber Security Centre (NCSC)  added . The guidelines also build upon the U.S. government's  ongoing   efforts  to manage the risks posed by AI by ensuring that new tools are tested adequately before public release, there are guardrails in place to address societal harms, such as bias and discrimination, and privacy concerns, and setting up robust methods for consumer
NIST Cybersecurity Framework (CSF) and CTEM – Better Together

NIST Cybersecurity Framework (CSF) and CTEM – Better Together

Sep 05, 2024Threat Detection / Vulnerability Management
It's been a decade since the National Institute of Standards and Technology (NIST) introduced its Cybersecurity Framework (CSF) 1.0. Created following a 2013 Executive Order, NIST was tasked with designing a voluntary cybersecurity framework that would help organizations manage cyber risk, providing guidance based on established standards and best practices. While this version was originally tailored for Critical infrastructure, 2018's version 1.1 was designed for any organization looking to address cybersecurity risk management.  CSF is a valuable tool for organizations looking to evaluate and enhance their security posture. The framework helps security stakeholders understand and assess their current security measures, organize and prioritize actions to manage risks, and improve communication within and outside organizations using a common language. It's a comprehensive collection of guidelines, best practices, and recommendations, divided into five core functions: Identify, Protec
Expert Insights / Articles Videos
Cybersecurity Resources