
AI Ethics | Breaking Cybersecurity News | The Hacker News

Data Governance in DevOps: Ensuring Compliance in the AI Era

Dec 16, 2024 DevOps / Data Governance
With the evolution of modern software development, CI/CD pipeline governance has emerged as a critical factor in maintaining both agility and compliance. As we enter the age of artificial intelligence (AI), the importance of robust pipeline governance has only intensified. With that said, we'll explore the concept of CI/CD pipeline governance and why it's vital, especially as AI becomes increasingly prevalent in our software pipelines.

What is CI/CD Pipeline Governance?

CI/CD pipeline governance refers to the framework of policies, practices, and controls that oversee the entire software delivery process. It ensures that every step, from the moment code is committed to when it's deployed in production, adheres to organizational standards, security protocols, and regulatory requirements. In DevOps, this governance acts as a guardrail, allowing teams to move fast without compromising on quality, security, or compliance. It's about striking the delicate balance betwee...
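The policy-and-controls idea described above can be sketched as an automated gate that validates a pipeline definition before it runs. This is a minimal illustration under assumed conventions, not any particular vendor's tool: the required stage names and the `require_approval_for_prod` flag are hypothetical.

```python
# Hypothetical sketch of a CI/CD policy gate. The stage names and the
# require_approval_for_prod flag are illustrative assumptions, not drawn
# from any real governance product or CI system.

REQUIRED_STAGES = {"lint", "test", "security_scan"}

def check_pipeline(pipeline: dict) -> list[str]:
    """Return a list of policy violations for a pipeline definition."""
    violations = []
    stages = {stage["name"] for stage in pipeline.get("stages", [])}
    missing = REQUIRED_STAGES - stages
    if missing:
        violations.append(f"missing required stages: {sorted(missing)}")
    if not pipeline.get("require_approval_for_prod", False):
        violations.append("production deploys must require manual approval")
    return violations

# A pipeline that skips the security scan gets flagged before it runs.
pipeline = {
    "stages": [{"name": "lint"}, {"name": "test"}, {"name": "deploy"}],
    "require_approval_for_prod": True,
}
print(check_pipeline(pipeline))
# → ["missing required stages: ['security_scan']"]
```

A gate like this would typically run as an early pipeline step itself, failing the build when violations are non-empty, so policy is enforced uniformly rather than by manual review.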
Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security

Oct 25, 2024 Cloud Security / Artificial Intelligence
Apple has publicly made available its Private Cloud Compute (PCC) Virtual Research Environment (VRE), allowing the research community to inspect and verify the privacy and security guarantees of its offering. PCC, which Apple unveiled earlier this June, has been marketed as the "most advanced security architecture ever deployed for cloud AI compute at scale." With the new technology, the idea is to offload computationally complex Apple Intelligence requests to the cloud in a manner that doesn't sacrifice user privacy. Apple said it's inviting "all security and privacy researchers — or anyone with interest and a technical curiosity — to learn more about PCC and perform their own independent verification of our claims." To further incentivize research, the iPhone maker said it's expanding the Apple Security Bounty program to include PCC by offering monetary payouts ranging from $50,000 to $1,000,000 for security vulnerabilities identified in it. Th...
OpenAI Blocks Iranian Influence Operation Using ChatGPT for U.S. Election Propaganda

Aug 17, 2024 National Security / AI Ethics
OpenAI on Friday said it banned a set of accounts linked to what it said was an Iranian covert influence operation that leveraged ChatGPT to generate content that, among other things, focused on the upcoming U.S. presidential election. "This week we identified and took down a cluster of ChatGPT accounts that were generating content for a covert Iranian influence operation identified as Storm-2035," OpenAI said. "The operation used ChatGPT to generate content focused on a number of topics — including commentary on candidates on both sides in the U.S. presidential election — which it then shared via social media accounts and websites." The artificial intelligence (AI) company said the content did not achieve any meaningful engagement, with a majority of the social media posts receiving negligible to no likes, shares, and comments. It further noted it had found little evidence that the long-form articles created using ChatGPT were shared on social media platforms....
Meta Halts AI Use in Brazil Following Data Protection Authority's Ban

Jul 18, 2024 Artificial Intelligence / Data Protection
Meta has suspended the use of generative artificial intelligence (GenAI) in Brazil after the country's data protection authority issued a preliminary ban objecting to its new privacy policy. The development was first reported by news agency Reuters. The company said it has decided to suspend the tools while it is in talks with Brazil's National Data Protection Authority (ANPD) to address the agency's concerns over its use of GenAI technology. Earlier this month, ANPD halted with immediate effect the social media giant's new privacy policy that granted the company access to users' personal data to train its GenAI systems. The decision stems from "the imminent risk of serious and irreparable damage or difficult-to-repair damage to the fundamental rights of the affected data subjects," the agency said. It further set a daily fine of 50,000 reais (about $9,100 as of July 18) in case of non-compliance. Last week, it gave Meta "five more days to p...
From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Mar 19, 2024 Generative AI / Incident Response
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News. The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which are already being experimented with by threat actors to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets. The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK that's associated with the APT28 hacking group, alongside its YARA rules, asking it to modify the source code to sidestep detection such that the original functionality remained intact and the gene...
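To see why string-based signatures are brittle in the way the report describes, consider a minimal, benign illustration (the signature string and samples below are invented for this sketch, not real YARA rules or malware): a matcher keyed on a fixed byte string misses a rewritten variant whose behavior is unchanged but whose identifier was renamed.

```python
# Illustrative only: demonstrates the brittleness of fixed-string
# signatures. The "signature" and both samples are made up; no real
# YARA rule or malware is reproduced here.

SIGNATURE = b"exfil_browser_data"  # hypothetical string a rule keys on

def string_rule_matches(sample: bytes) -> bool:
    """Naive string-based detection: flag any sample containing SIGNATURE."""
    return SIGNATURE in sample

original = b"... call exfil_browser_data() ..."
# A rewritten variant keeps the same behavior but renames the identifier,
# which is exactly the kind of source-level augmentation the report covers:
variant = b"... call collect_profile_info() ..."

print(string_rule_matches(original))  # → True
print(string_rule_matches(variant))   # → False: same behavior, no match
```

This is why detection engineering generally layers behavioral and structural signals on top of string matches, rather than relying on literal byte patterns alone.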