
artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
Summary of "AI Leaders Spill Their Secrets" Webinar

Summary of "AI Leaders Spill Their Secrets" Webinar

Jul 19, 2024 Technology / Artificial Intelligence
Event Overview

The "AI Leaders Spill Their Secrets" webinar, hosted by Sigma Computing, featured prominent AI experts sharing their experiences and strategies for success in the AI industry. The panel included Michael Ward from Sardine, Damon Bryan from Hyperfinity, and Stephen Hillion from Astronomer, moderated by Zalak Trivedi, Sigma Computing's Product Manager.

Key Speakers and Their Backgrounds

1. Michael Ward: Senior Risk Data Analyst at Sardine. Over 25 years of experience in software engineering. Focuses on data science, analytics, and machine learning to prevent fraud and money laundering.
2. Damon Bryan: Co-founder and CTO at Hyperfinity. Specializes in decision intelligence software for retailers and brands. Background in data science, AI, and analytics, transitioning from consultancy to a full-fledged software company.
3. Stephen Hillion: SVP of Data and AI at Astronomer. Manages data science teams and focuses on the development and scaling of...
Meta Halts AI Use in Brazil Following Data Protection Authority's Ban

Jul 18, 2024 Artificial Intelligence / Data Protection
Meta has suspended the use of generative artificial intelligence (GenAI) in Brazil after the country's data protection authority issued a preliminary ban objecting to its new privacy policy. The development was first reported by the news agency Reuters. The company said it has decided to suspend the tools while it is in talks with Brazil's National Data Protection Authority (ANPD) to address the agency's concerns over its use of GenAI technology. Earlier this month, the ANPD halted, with immediate effect, the social media giant's new privacy policy that granted the company access to users' personal data to train its GenAI systems. The decision stems from "the imminent risk of serious and irreparable damage or difficult-to-repair damage to the fundamental rights of the affected data subjects," the agency said. It further set a daily fine of 50,000 reais (about $9,100 as of July 18) for non-compliance. Last week, it gave Meta "five more days to p...
How AI Is Transforming IAM and Identity Security

Nov 15, 2024 Machine Learning / Identity Security
In recent years, artificial intelligence (AI) has begun revolutionizing Identity Access Management (IAM), reshaping how cybersecurity is approached in this crucial field. Leveraging AI in IAM is about tapping into its analytical capabilities to monitor access patterns and identify anomalies that could signal a potential security breach. The focus has expanded beyond merely managing human identities — now, autonomous systems, APIs, and connected devices also fall within the realm of AI-driven IAM, creating a dynamic security ecosystem that adapts and evolves in response to sophisticated cyber threats.

The Role of AI and Machine Learning in IAM

AI and machine learning (ML) are creating a more robust, proactive IAM system that continuously learns from the environment to enhance security. Let's explore how AI impacts key IAM components:

Intelligent Monitoring and Anomaly Detection

AI enables continuous monitoring of both human and non-human identities, including APIs, service acc...
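To make the anomaly-detection idea concrete, here is a minimal sketch. It is not from the article: the feature set, the contamination rate, and the choice of scikit-learn's IsolationForest are all illustrative assumptions about how such access-pattern monitoring could be built.

```python
# Illustrative sketch only: a toy anomaly detector for identity access
# patterns. The features (hour of day, bytes transferred, resources
# touched) and the model choice are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" sessions for one service account.
normal_sessions = np.column_stack([
    rng.normal(13, 2, 500),     # access hour, clustered around business hours
    rng.normal(5e4, 1e4, 500),  # bytes transferred per session
    rng.normal(3, 1, 500),      # distinct resources touched
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A 3 a.m. session pulling 10x the usual data across 40 resources is
# flagged as anomalous (predict returns -1 for outliers, 1 otherwise).
print(model.predict([[3, 5e5, 40]]))
```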
U.S. Seizes Domains Used by AI-Powered Russian Bot Farm for Disinformation

Jul 12, 2024 Disinformation / Artificial Intelligence
The U.S. Department of Justice (DoJ) said it seized two internet domains and searched nearly 1,000 social media accounts that Russian threat actors allegedly used to covertly spread pro-Kremlin disinformation in the country and abroad on a large scale. "The social media bot farm used elements of AI to create fictitious social media profiles — often purporting to belong to individuals in the United States — which the operators then used to promote messages in support of Russian government objectives," the DoJ said. The bot network, comprising 968 accounts on X, is said to be part of an elaborate scheme hatched by an employee of Russian state-owned media outlet RT (formerly Russia Today), sponsored by the Kremlin, and aided by an officer of Russia's Federal Security Service (FSB), who created and led an unnamed private intelligence organization. The developmental efforts for the bot farm began in April 2022 when the individuals procured online infrastructure while anon...

Creating, Managing and Securing Non-Human Identities

Website | Permiso | Cybersecurity / Identity Security
A new class of identities has emerged alongside traditional human users: non-human identities (NHIs). Permiso Security's new eBook details everything you need to know about managing and securing non-human identities, and strategies to unify identity security without compromising agility.
Brazil Halts Meta's AI Data Processing Amid Privacy Concerns

Jul 04, 2024 Artificial Intelligence / Data Privacy
Brazil's data protection authority, Autoridade Nacional de Proteção de Dados (ANPD), has temporarily banned Meta from processing users' personal data to train the company's artificial intelligence (AI) algorithms. The ANPD said it found "evidence of processing of personal data based on inadequate legal hypothesis, lack of transparency, limitation of the rights of data subjects, and risks to children and adolescents." The decision follows the social media giant's update to its terms that allow it to use public content from Facebook, Messenger, and Instagram for AI training purposes. A recent report published by Human Rights Watch found that LAION-5B, one of the largest image-text datasets used to train AI models, contained links to identifiable photos of Brazilian children, putting them at risk of malicious deepfakes that could expose them to even more exploitation and harm. Brazil has about 102 million active users, making it one of the largest ma...
The Emerging Role of AI in Open-Source Intelligence

Jul 03, 2024 OSINT / Artificial Intelligence
Recently, the Office of the Director of National Intelligence (ODNI) unveiled a new strategy for open-source intelligence (OSINT) and referred to OSINT as the "INT of first resort." Public and private sector organizations are realizing the value that the discipline can provide, but are also finding that the exponential growth of digital data in recent years has overwhelmed many traditional OSINT methods. Thankfully, Artificial Intelligence (AI) and Machine Learning (ML) are starting to have a transformative impact on the future of information gathering and analysis.

What is Open-Source Intelligence (OSINT)?

Open-Source Intelligence refers to the collection and analysis of information from publicly available sources. These sources can include traditional media, social media platforms, academic publications, government reports, and any other data that is openly accessible. The key characteristic of OSINT is that it does not involve covert or clandestine methods of information ga...
The Secrets of Hidden AI Training on Your Data

Jun 27, 2024 Artificial Intelligence / SaaS Security
While some SaaS threats are clear and visible, others are hidden in plain sight; both pose significant risks to your organization. Wing's research indicates that an astounding 99.7% of organizations utilize applications embedded with AI functionalities. These AI-driven tools are indispensable, providing seamless experiences from collaboration and communication to work management and decision-making. However, beneath these conveniences lies a largely unrecognized risk: the potential for AI capabilities in these SaaS tools to compromise sensitive business data and intellectual property (IP). Wing's recent findings reveal a surprising statistic: 70% of the top 10 most commonly used AI applications may use your data for training their models. This practice can go beyond mere data learning and storage. It can involve retraining on your data, having human reviewers analyze it, and even sharing it with third parties. Often, these threats are buried deep in the fine print of Term...
Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Jun 27, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a high-severity security flaw in the Vanna.AI library that could be exploited to achieve remote code execution via prompt injection techniques. The vulnerability, tracked as CVE-2024-5565 (CVSS score: 8.1), relates to a case of prompt injection in the "ask" function that could be exploited to trick the library into executing arbitrary commands, supply chain security firm JFrog said. Vanna is a Python-based machine learning library that allows users to chat with their SQL database to glean insights by "just asking questions" (aka prompts) that are translated into an equivalent SQL query using a large language model (LLM). The rapid rollout of generative artificial intelligence (AI) models in recent years has brought to the fore the risks of exploitation by malicious actors, who can weaponize the tools by providing adversarial inputs that bypass the safety mechanisms built into them. One such prominent clas...
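The underlying hazard is a general one: model-generated output being executed as code. The sketch below is a hedged illustration of that pattern, not Vanna's actual source; both function names are invented for the example.

```python
# Generic illustration of the prompt-injection-to-code-execution pattern
# described above. This is NOT Vanna's source code; names are invented.

def llm_generate_plot_code(question: str) -> str:
    # Stand-in for an LLM call that is supposed to return plotting code
    # for a query result. A hostile question can steer the model into
    # emitting arbitrary Python instead.
    return f"print('pretend chart for', {question!r})"

def unsafe_ask(question: str) -> None:
    code = llm_generate_plot_code(question)
    # The core flaw in this class of bug: model output flows straight
    # into exec(), so whoever controls the prompt can ultimately control
    # the process. Sandboxing or refusing to execute model output is
    # the usual mitigation.
    exec(code)

unsafe_ask("Which region had the highest sales last quarter?")
```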
Google Introduces Project Naptime for AI-Powered Vulnerability Research

Jun 24, 2024 Vulnerability / Artificial Intelligence
Google has developed a new framework called Project Naptime that it says enables a large language model (LLM) to carry out vulnerability research with the aim of improving automated discovery approaches. "The Naptime architecture is centered around the interaction between an AI agent and a target codebase," Google Project Zero researchers Sergei Glazunov and Mark Brand said. "The agent is provided with a set of specialized tools designed to mimic the workflow of a human security researcher." The initiative is so named because it allows humans to "take regular naps" while it assists with vulnerability research and automates variant analysis. The approach, at its core, seeks to take advantage of advances in code comprehension and the general reasoning ability of LLMs, allowing them to replicate human behavior when it comes to identifying and demonstrating security vulnerabilities. It encompasses several components such as a Code Browser tool...
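As a rough illustration of the agent-plus-tools architecture the researchers describe, here is a toy loop; every name in it (CodeBrowser, ask_model, run_agent) is invented for the sketch and does not reflect Google's implementation.

```python
# Toy agent/tool loop illustrating the architecture described above.
# All names are invented; this is not Project Naptime's code.
from dataclasses import dataclass

@dataclass
class CodeBrowser:
    """Stand-in for a tool that lets the agent read the target codebase."""
    source: dict

    def show(self, filename: str) -> str:
        return self.source.get(filename, "<no such file>")

def ask_model(transcript: list) -> str:
    # Stand-in for the LLM: a real agent would send the transcript to a
    # model and parse a tool call out of its reply. Here we script two
    # turns: browse a file, then report a finding.
    if len(transcript) < 2:
        return "browse:target.c"
    return "report:possible stack buffer overflow in parse()"

def run_agent(browser: CodeBrowser) -> str:
    transcript: list = []
    while True:
        action = ask_model(transcript)
        transcript.append(action)
        if action.startswith("browse:"):
            transcript.append(browser.show(action.split(":", 1)[1]))
        elif action.startswith("report:"):
            return action.split(":", 1)[1]

code = {"target.c": "void parse(char *s) { char buf[8]; strcpy(buf, s); }"}
print(run_agent(CodeBrowser(code)))
```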
Critical RCE Vulnerability Discovered in Ollama AI Infrastructure Tool

Jun 24, 2024 Artificial Intelligence / Cloud Security
Cybersecurity researchers have detailed a now-patched security flaw affecting the Ollama open-source artificial intelligence (AI) infrastructure platform that could be exploited to achieve remote code execution. Tracked as CVE-2024-37032, the vulnerability has been codenamed Probllama by cloud security firm Wiz. Following responsible disclosure on May 5, 2024, the issue was addressed in version 0.1.34, released on May 7, 2024. Ollama is a service for packaging, deploying, and running large language models (LLMs) locally on Windows, Linux, and macOS devices. At its core, the issue relates to a case of insufficient input validation that results in a path traversal flaw an attacker could exploit to overwrite arbitrary files on the server and ultimately achieve remote code execution. The shortcoming requires the threat actor to send specially crafted HTTP requests to the Ollama API server for successful exploitation. It specifically takes advantage of the API endpoint "/api/pull...
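The flaw belongs to a familiar class: attacker-controlled input used to build a filesystem path. The sketch below illustrates that class and the usual shape of the fix; the paths, regex, and function names are assumptions for the example, not Ollama's actual (Go) implementation.

```python
# Illustration of the path traversal class described above, plus the
# standard fix: validate attacker-supplied identifiers before building
# filesystem paths. Names and paths here are invented.
import os
import re

BLOB_DIR = "/var/lib/models/blobs"  # hypothetical storage root
DIGEST_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def blob_path_unsafe(digest: str) -> str:
    # Vulnerable pattern: the digest comes from a remote manifest, so a
    # value like "../../../../etc/crontab" escapes the blob directory.
    return os.path.join(BLOB_DIR, digest)

def blob_path_safe(digest: str) -> str:
    # Fix: accept only well-formed digests, then confirm the resolved
    # path still lives under BLOB_DIR.
    if not DIGEST_RE.fullmatch(digest):
        raise ValueError(f"malformed digest: {digest!r}")
    path = os.path.realpath(os.path.join(BLOB_DIR, digest))
    if not path.startswith(BLOB_DIR + os.sep):
        raise ValueError("path escapes blob directory")
    return path

print(blob_path_unsafe("../../../../etc/crontab"))  # traverses out
print(blob_path_safe("sha256:" + "a" * 64))         # accepted
```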
Meta Pauses AI Training on EU User Data Amid Privacy Concerns

Jun 15, 2024 Artificial Intelligence / Privacy
Meta on Friday said it's delaying its efforts to train the company's large language models (LLMs) using public content shared by adult users on Facebook and Instagram in the European Union, following a request from the Irish Data Protection Commission (DPC). The company expressed disappointment at having to put its AI plans on pause, stating it had taken into account feedback from regulators and data protection authorities in the region. At issue is Meta's plan to use personal data to train its artificial intelligence (AI) models without seeking users' explicit consent, instead relying on the legal basis of 'Legitimate Interests' for processing first- and third-party data in the region. These changes were expected to come into effect on June 26, before which the company said users could opt out of having their data used by submitting a request "if they wish." Meta is already utilizing user-generated content to train its AI in other markets such ...
Google's Privacy Sandbox Accused of User Tracking by Austrian Non-Profit

Jun 14, 2024 Privacy / Ad Tracking
Google's plans to deprecate third-party tracking cookies in its Chrome web browser with Privacy Sandbox have run into fresh trouble after the Austrian privacy non-profit noyb (none of your business) said the feature can still be used to track users. "While the so-called 'Privacy Sandbox' is advertised as an improvement over extremely invasive third-party tracking, the tracking is now simply done within the browser by Google itself," noyb said. "To do this, the company theoretically needs the same informed consent from users. Instead, Google is tricking people by pretending to 'Turn on an ad privacy feature.'" In other words, by agreeing to enable a privacy feature, users are still being tracked through their consent to Google's first-party ad tracking, the Vienna-based non-profit founded by activist Max Schrems alleged in a complaint filed with the Austrian data protection authority. Privacy Sandbox is a set of proposals put forth by the i...
Microsoft Delays AI-Powered Recall Feature for Copilot+ PCs Amid Security Concerns

Jun 14, 2024 Artificial Intelligence / Data Protection
Microsoft on Thursday revealed that it's delaying the rollout of the controversial artificial intelligence (AI)-powered Recall feature for Copilot+ PCs. To that end, the company said it intends to shift from general availability to a preview available first in the Windows Insider Program (WIP) in the coming weeks. "We are adjusting the release model for Recall to leverage the expertise of the Windows Insider community to ensure the experience meets our high standards for quality and security," it said in an update. "This decision is rooted in our commitment to providing a trusted, secure and robust experience for all customers and to seek additional feedback prior to making the feature available to all Copilot+ PC users." First unveiled last month, Recall was originally slated for a broad release on June 18, 2024, but has since waded into controversial waters after it was widely panned as a privacy and security risk and an alluring target for threat ac...
Apple Launches Private Cloud Compute for Privacy-Centric AI Processing

Jun 11, 2024 Cloud Computing / Artificial Intelligence
Apple has announced the launch of a "groundbreaking cloud intelligence system" called Private Cloud Compute (PCC) that's designed for processing artificial intelligence (AI) tasks in a privacy-preserving manner in the cloud. The tech giant described PCC as the "most advanced security architecture ever deployed for cloud AI compute at scale." PCC coincides with the arrival of new generative AI (GenAI) features – collectively dubbed Apple Intelligence, or AI for short – that the iPhone maker unveiled in its next generation of software, including iOS 18, iPadOS 18, and macOS Sequoia. All of the Apple Intelligence features, both the ones that run on-device and those that rely on PCC, leverage in-house generative models trained on "licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot." With PCC, the idea is to essentially offload complex requests that requir...
Microsoft Revamps Controversial AI-Powered Recall Feature Amid Privacy Concerns

Jun 08, 2024 Artificial Intelligence / Privacy
Microsoft on Friday said it will disable its much-criticized artificial intelligence (AI)-powered Recall feature by default and make it opt-in. Recall, currently in preview and coming exclusively to Copilot+ PCs on June 18, 2024, functions as an "explorable visual timeline" by capturing screenshots of what appears on users' screens every five seconds, which are subsequently analyzed and parsed to surface relevant information. But the feature, meant to serve as a sort of AI-enabled photographic memory, was met with instantaneous backlash from the security and privacy community, which excoriated the company for not having thought it through enough or implemented adequate safeguards to prevent malicious actors from easily gaining a window into a victim's digital life. The recorded information could include screenshots of documents, emails, or messages containing sensitive details that may have been deleted or shared temporarily using disappearing ...
The AI Debate: Google's Guidelines, Meta's GDPR Dispute, Microsoft's Recall Backlash

Jun 07, 2024 Artificial Intelligence / Privacy
Google is urging third-party Android app developers to incorporate generative artificial intelligence (GenAI) features in a responsible manner. The new guidance from the search and advertising giant is an effort to combat problematic content, including sexual content and hate speech, created through such tools. To that end, apps that generate content using AI must ensure they don't create Restricted Content, must have a mechanism for users to report or flag offensive information, and must be marketed in a manner that accurately represents the app's capabilities. App developers are also advised to rigorously test their AI models to ensure they respect user safety and privacy. "Be sure to test your apps across various user scenarios and safeguard them against prompts that could manipulate your generative AI feature to create harmful or offensive content," Prabhat Sharma, director of trust and safety for Google Play, Android, and Chrome, said. The development com...