#1 Trusted Cybersecurity News Platform Followed by 4.50+ million
The Hacker News Logo
Subscribe – Get Latest News
Cloud Security

Data Poisoning | Breaking Cybersecurity News | The Hacker News

Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats
Mar 13, 2024 Large Language Model / AI Security
Google's  Gemini  large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API. The first vulnerability involves getting around security guardrails to leak the system prompts (or a system message), which are designed to set conversation-wide instructions to the LLM to help it generate more useful responses, by asking the model to output its "foundational instructions" in a markdown block. "A system message can be used to inform the LLM about the context," Microsoft  notes  in its documentation about LLM prompt engineering. "The context may be the type of conversation it is engaging in, or the function it is supposed to perform. It helps the LLM generate more appropriate responses.&qu

U.S., U.K., and Global Partners Release Secure AI System Development Guidelines

U.S., U.K., and Global Partners Release Secure AI System Development Guidelines
Nov 27, 2023 Artificial Intelligence / Privacy
The U.K. and U.S., along with international partners from 16 other countries, have released new guidelines for the development of secure artificial intelligence (AI) systems. "The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority," the U.S. Cybersecurity and Infrastructure Security Agency (CISA)  said . The goal is to  increase cyber security levels of AI  and help ensure that the technology is designed, developed, and deployed in a secure manner, the National Cyber Security Centre (NCSC)  added . The guidelines also build upon the U.S. government's  ongoing   efforts  to manage the risks posed by AI by ensuring that new tools are tested adequately before public release, there are guardrails in place to address societal harms, such as bias and discrimination, and privacy concerns, and setting up robust methods for consumer

Hands-on Review: Cynomi AI-powered vCISO Platform

Hands-on Review: Cynomi AI-powered vCISO Platform
Apr 10, 2024vCISO / Risk Assessment
The need for vCISO services is growing. SMBs and SMEs are dealing with more third-party risks, tightening regulatory demands and stringent cyber insurance requirements than ever before. However, they often lack the resources and expertise to hire an in-house security executive team. By outsourcing security and compliance leadership to a vCISO, these organizations can more easily obtain cybersecurity expertise specialized for their industry and strengthen their cybersecurity posture. MSPs and MSSPs looking to meet this growing vCISO demand are often faced with the same challenge. The demand for cybersecurity talent far exceeds the supply. This has led to a competitive market where the costs of hiring and retaining skilled professionals can be prohibitive for MSSPs/MSPs as well. The need to maintain expertise of both security and compliance further exacerbates this challenge. Cynomi, the first AI-driven vCISO platform , can help. Cynomi enables you - MSPs, MSSPs and consulting firms
Cybersecurity Resources