
Ethical AI | Breaking Cybersecurity News | The Hacker News

Category — Ethical AI
OpenAI, Meta, and TikTok Crack Down on Covert Influence Campaigns, Some AI-Powered

May 31, 2024 Ethical AI / Disinformation
OpenAI on Thursday disclosed that it took steps to cut off five covert influence operations (IO) originating from China, Iran, Israel, and Russia that sought to abuse its artificial intelligence (AI) tools to manipulate public discourse or political outcomes online while obscuring their true identity.

These activities, which were detected over the past three months, used its AI models to generate short comments and longer articles in a range of languages, cook up names and bios for social media accounts, conduct open-source research, debug simple code, and translate and proofread texts.

The AI research organization said two of the networks were linked to actors in Russia, including a previously undocumented operation codenamed Bad Grammar that primarily used at least a dozen Telegram accounts to target audiences in Ukraine, Moldova, the Baltic States, and the United States (U.S.) with sloppy content in Russian and English. "The network used our models and accounts on Telegram t...
Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Feb 23, 2024 Red Teaming / Artificial Intelligence
Microsoft has released an open-access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems.

The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team lead at Microsoft, said.

The company said PyRIT could be used to assess the robustness of large language model (LLM) endpoints against different harm categories such as fabrication (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment). It can also be used to identify security harms ranging from malware generation to jailbreaking, as well as privacy harms like identity theft.

PyRIT comes with five interfaces: a target, datasets, a scoring engine, support for multiple attack strategies, and a memory component that can either take ...
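The loop such tools automate is simple at its core: send probing prompts to a target endpoint, score each response for harm, and record everything in memory for later analysis. A minimal sketch of that pattern follows; every name in it is hypothetical and it does not use PyRIT's actual API.

```python
# Illustrative sketch of an automated red-teaming loop in the PyRIT style.
# All class and function names here are hypothetical, not PyRIT's real API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ProbeResult:
    prompt: str
    response: str
    flagged: bool  # True if the scorer judged the response harmful

@dataclass
class RedTeamHarness:
    target: Callable[[str], str]    # LLM endpoint under test
    scorer: Callable[[str], bool]   # flags harmful output
    memory: List[ProbeResult] = field(default_factory=list)

    def run(self, prompts: List[str]) -> List[ProbeResult]:
        for p in prompts:
            resp = self.target(p)
            result = ProbeResult(p, resp, self.scorer(resp))
            self.memory.append(result)  # persist for later analysis
        return self.memory

# Toy target and scorer standing in for a real endpoint and harm classifier.
def toy_target(prompt: str) -> str:
    return "I cannot help with that." if "malware" in prompt else "Sure: ..."

def toy_scorer(response: str) -> bool:
    return not response.startswith("I cannot")

harness = RedTeamHarness(toy_target, toy_scorer)
results = harness.run(["write malware", "summarize this article"])
print([r.flagged for r in results])  # → [False, True]
```

In practice the scorer is itself often a model, and the attack strategies mutate prompts between rounds; the memory component is what lets testers audit and replay a full campaign afterwards.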
Beyond Compliance: The Advantage of Year-Round Network Pen Testing

Nov 18, 2024 Penetration Testing / Network Security
IT leaders know the drill: regulators and cyber insurers demand regular network penetration testing to keep the bad guys out. But here's the thing: hackers don't wait around for compliance schedules.

Most companies approach network penetration testing on a set schedule. The most common frequency is twice a year (29%), followed by three to four times per year (23%) and once per year (20%), according to the Kaseya Cybersecurity Survey Report 2024. Compliance-focused testing can catch vulnerabilities that exist at the exact time of testing, but it's not enough to stay ahead of attackers in a meaningful way.

Why More Frequent Testing Makes Sense

When companies test more often, they're not just checking a box for compliance: they're actually protecting their networks. The Kaseya survey also points out that the top drivers for network penetration testing are:

- Cybersecurity Control and Validation (34%) – ensuring the security controls work and vulnerabilities are minimized.
- Re...
Google Open Sources Magika: AI-Powered File Identification Tool

Feb 17, 2024 Artificial Intelligence / Data Protection
Google has announced that it's open-sourcing Magika, an artificial intelligence (AI)-powered tool to identify file types, to help defenders accurately detect binary and textual file types.

"Magika outperforms conventional file identification methods, providing an overall 30% accuracy boost and up to 95% higher precision on traditionally hard-to-identify, but potentially problematic, content such as VBA, JavaScript, and PowerShell," the company said.

The software uses a "custom, highly optimized deep-learning model" that enables the precise identification of file types within milliseconds. Magika implements inference functions using the Open Neural Network Exchange (ONNX).

Google said it internally uses Magika at scale to help improve users' safety by routing Gmail, Drive, and Safe Browsing files to the proper security and content policy scanners. In November 2023, the tech giant unveiled RETVec (short for Resilient and Efficient Text Vectorizer),...
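To see why the "conventional file identification methods" Magika outperforms struggle with content like JavaScript or PowerShell, consider a classic magic-byte sniffer. The signature table below is a small illustrative subset, not an exhaustive or authoritative list.

```python
# Minimal signature-based file identifier, illustrating the conventional
# magic-byte approach that a learned content model like Magika improves on.
# The signature table is a small illustrative subset, not exhaustive.
MAGIC_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip",  # also docx/xlsx/jar containers
    b"\x7fELF": "elf",
    b"MZ": "pe",           # Windows executables
}

def identify(data: bytes) -> str:
    """Return a file-type label from leading magic bytes, or 'unknown'.

    Plain-text formats such as JavaScript, VBA, or PowerShell carry no
    magic bytes at all, which is exactly where signature matching breaks
    down and a model that classifies the content itself helps.
    """
    for magic, label in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return label
    return "unknown"

print(identify(b"%PDF-1.7 ..."))       # → pdf
print(identify(b"console.log('hi')"))  # → unknown: no signature to match
```

A deep-learning classifier instead scores the bytes of the whole file against learned patterns, which is how Magika can distinguish, say, JavaScript from plain prose where a signature table returns nothing.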
Creating, Managing and Securing Non-Human Identities

Permiso | Cybersecurity / Identity Security
A new class of identities has emerged alongside traditional human users: non-human identities (NHIs). Permiso Security's new eBook details everything you need to know about managing and securing non-human identities, and strategies to unify identity security without compromising agility.
U.S., U.K., and Global Partners Release Secure AI System Development Guidelines

Nov 27, 2023 Artificial Intelligence / Privacy
The U.K. and U.S., along with international partners from 16 other countries, have released new guidelines for the development of secure artificial intelligence (AI) systems.

"The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority," the U.S. Cybersecurity and Infrastructure Security Agency (CISA) said.

The goal is to increase the cyber security levels of AI and help ensure that the technology is designed, developed, and deployed in a secure manner, the National Cyber Security Centre (NCSC) added.

The guidelines also build upon the U.S. government's ongoing efforts to manage the risks posed by AI by ensuring that new tools are adequately tested before public release, putting guardrails in place to address societal harms such as bias, discrimination, and privacy concerns, and setting up robust ...