AI Security


How to Set up an Automated SMS Analysis Service with AI in Tines

Jul 22, 2024 Threat Detection / Employee Security
The opportunities to use AI in workflow automation are many and varied, but one of the simplest ways to use AI to save time and enhance your organization's security posture is by building an automated SMS analysis service. Workflow automation platform Tines provides a good example of how to do it. The vendor recently released their first native AI features, and security teams have already started sharing the AI-enhanced workflows they've built using the platform. Tines' library of pre-built workflows includes AI-enhanced options for normalizing alerts, creating cases, and determining which phishing emails require escalation. Let's take a closer look at their SMS analysis workflow, which, like all of their pre-built workflows, is free to access and import, and can be used with a free Community Edition account. Here, we'll share an overview of the workflow and a step-by-step guide for getting it up and running.
The problem - SMS scam messages targeted at employees
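To make the underlying idea concrete, here is a minimal, hypothetical Python sketch of the analysis step such a workflow performs: wrap a reported SMS in a prompt, send it to an LLM, and parse the verdict into a structured result. The prompt wording and the call_llm stub are illustrative assumptions, not Tines' actual workflow or API.

```python
import json

def call_llm(prompt: str) -> str:
    """Illustrative stub; a real workflow would call an LLM provider here."""
    # Canned response so the sketch runs end to end without external services.
    return json.dumps({
        "verdict": "suspicious",
        "confidence": 0.87,
        "reasons": ["urgent payment request", "shortened link", "unknown sender"],
    })

def analyze_sms(sender: str, body: str) -> dict:
    """Classify a reported SMS and return a structured verdict."""
    prompt = (
        "You are a security analyst. Classify the SMS below as benign, suspicious, "
        "or malicious. Respond only with JSON containing 'verdict', 'confidence', "
        f"and 'reasons'.\n\nSender: {sender}\nMessage: {body}"
    )
    return json.loads(call_llm(prompt))

if __name__ == "__main__":
    result = analyze_sms("+1-555-0100", "Your parcel is on hold, pay the fee at hxxp://example.test/fee")
    print(result["verdict"], result["reasons"])
```

In the Tines workflow itself, this shape would be built from connected actions (receive the reported message, call the AI action, route on the verdict) rather than hand-written code.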
Summary of "AI Leaders Spill Their Secrets" Webinar

Summary of "AI Leaders Spill Their Secrets" Webinar

Jul 19, 2024 Technology / Artificial Intelligence
Event Overview: The "AI Leaders Spill Their Secrets" webinar, hosted by Sigma Computing, featured prominent AI experts sharing their experiences and strategies for success in the AI industry. The panel included Michael Ward from Sardine, Damon Bryan from Hyperfinity, and Stephen Hillion from Astronomer, moderated by Zalak Trivedi, Sigma Computing's Product Manager.
Key Speakers and Their Backgrounds
1. Michael Ward, Senior Risk Data Analyst at Sardine. Over 25 years of experience in software engineering. Focuses on data science, analytics, and machine learning to prevent fraud and money laundering.
2. Damon Bryan, Co-founder and CTO at Hyperfinity. Specializes in decision intelligence software for retailers and brands. Background in data science, AI, and analytics, transitioning from consultancy to a full-fledged software company.
3. Stephen Hillion, SVP of Data and AI at Astronomer. Manages data science teams and focuses on the development and scaling of
How to Increase Engagement with Your Cybersecurity Clients Through vCISO Reporting

Jul 22, 2024 vCISO / Business Security
As a vCISO, you are responsible for your client's cybersecurity strategy and risk governance. This incorporates multiple disciplines, from research to execution to reporting. Recently, we published a comprehensive playbook for vCISOs, "Your First 100 Days as a vCISO – 5 Steps to Success", which covers all the phases entailed in launching a successful vCISO engagement, along with recommended actions to take and step-by-step examples. Following the success of the playbook and the requests that have come in from the MSP/MSSP community, we decided to drill down into specific parts of vCISO reporting and provide more color and examples. In this article, we focus on how to create compelling narratives within a report, which has a significant impact on the overall MSP/MSSP value proposition. This article presents the highlights of a recent guided workshop we held, covering what makes a successful report and how it can be used to enhance engagement with your cybersecurity clients.
Meta Halts AI Use in Brazil Following Data Protection Authority's Ban

Jul 18, 2024 Artificial Intelligence / Data Protection
Meta has suspended the use of generative artificial intelligence (GenAI) in Brazil after the country's data protection authority issued a preliminary ban objecting to its new privacy policy. The development was first reported by news agency Reuters. The company said it has decided to suspend the tools while it is in talks with Brazil's National Data Protection Authority (ANPD) to address the agency's concerns over its use of GenAI technology. Earlier this month, ANPD halted with immediate effect the social media giant's new privacy policy that granted the company access to users' personal data to train its GenAI systems. The decision stems from "the imminent risk of serious and irreparable damage or difficult-to-repair damage to the fundamental rights of the affected data subjects," the agency said. It further set a daily fine of 50,000 reais (about $9,100 as of July 18) in case of non-compliance. Last week, it gave Meta "five more days to p

Free OAuth Investigation Checklist - How to Uncover Risky or Malicious Grants

Nudge Security | SaaS Security / Supply Chain
OAuth grants provide yet another way for attackers to compromise identities. Download our free checklist to learn what to look for and where when reviewing OAuth grants for potential risks.
U.S. Seizes Domains Used by AI-Powered Russian Bot Farm for Disinformation

Jul 12, 2024 Disinformation / Artificial Intelligence
The U.S. Department of Justice (DoJ) said it seized two internet domains and searched nearly 1,000 social media accounts that Russian threat actors allegedly used to covertly spread pro-Kremlin disinformation in the country and abroad on a large scale. "The social media bot farm used elements of AI to create fictitious social media profiles — often purporting to belong to individuals in the United States — which the operators then used to promote messages in support of Russian government objectives," the DoJ said. The bot network, comprising 968 accounts on X, is said to be part of an elaborate scheme hatched by an employee of Russian state-owned media outlet RT (formerly Russia Today), sponsored by the Kremlin, and aided by an officer of Russia's Federal Security Service (FSB), who created and led an unnamed private intelligence organization. The developmental efforts for the bot farm began in April 2022 when the individuals procured online infrastructure while anon
Brazil Halts Meta's AI Data Processing Amid Privacy Concerns

Jul 04, 2024 Artificial Intelligence / Data Privacy
Brazil's data protection authority, Autoridade Nacional de Proteção de Dados (ANPD), has temporarily banned Meta from processing users' personal data to train the company's artificial intelligence (AI) algorithms. The ANPD said it found "evidence of processing of personal data based on inadequate legal hypothesis, lack of transparency, limitation of the rights of data subjects, and risks to children and adolescents." The decision follows the social media giant's update to its terms that allow it to use public content from Facebook, Messenger, and Instagram for AI training purposes. A recent report published by Human Rights Watch found that LAION-5B, one of the largest image-text datasets used to train AI models, contained links to identifiable photos of Brazilian children, putting them at risk of malicious deepfakes that could place them under even more exploitation and harm. Brazil has about 102 million active users, making it one of the largest ma
The Emerging Role of AI in Open-Source Intelligence

Jul 03, 2024 OSINT / Artificial Intelligence
Recently, the Office of the Director of National Intelligence (ODNI) unveiled a new strategy for open-source intelligence (OSINT) and referred to OSINT as the "INT of first resort". Public and private sector organizations are realizing the value that the discipline can provide but are also finding that the exponential growth of digital data in recent years has overwhelmed many traditional OSINT methods. Thankfully, Artificial Intelligence (AI) and Machine Learning (ML) are starting to have a transformative impact on information gathering and analysis.
What is Open-Source Intelligence (OSINT)?
Open-Source Intelligence refers to the collection and analysis of information from publicly available sources. These sources can include traditional media, social media platforms, academic publications, government reports, and any other data that is openly accessible. The key characteristic of OSINT is that it does not involve covert or clandestine methods of information gather
The Secrets of Hidden AI Training on Your Data

Jun 27, 2024 Artificial Intelligence / SaaS Security
While some SaaS threats are clear and visible, others are hidden in plain sight, both posing significant risks to your organization. Wing's research indicates that an astounding 99.7% of organizations utilize applications embedded with AI functionalities. These AI-driven tools are indispensable, providing seamless experiences from collaboration and communication to work management and decision-making. However, beneath these conveniences lies a largely unrecognized risk: the potential for AI capabilities in these SaaS tools to compromise sensitive business data and intellectual property (IP). Wing's recent findings reveal a surprising statistic: 70% of the top 10 most commonly used AI applications may use your data for training their models. This practice can go beyond mere data learning and storage. It can involve retraining on your data, having human reviewers analyze it, and even sharing it with third parties. Often, these threats are buried deep in the fine print of Term
Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Jun 27, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a high-severity security flaw in the Vanna.AI library that could be exploited to achieve remote code execution via prompt injection techniques. The vulnerability, tracked as CVE-2024-5565 (CVSS score: 8.1), relates to a case of prompt injection in the "ask" function that could be exploited to trick the library into executing arbitrary commands, supply chain security firm JFrog said. Vanna is a Python-based machine learning library that allows users to chat with their SQL database to glean insights by "just asking questions" (aka prompts) that are translated into an equivalent SQL query using a large language model (LLM). The rapid rollout of generative artificial intelligence (AI) models in recent years has brought to the fore the risks of exploitation by malicious actors, who can weaponize the tools by providing adversarial inputs that bypass the safety mechanisms built into them. One such prominent clas
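The vulnerability class is easier to picture with a small, hypothetical sketch (this is not Vanna's actual code): when an application executes LLM-generated code directly, a crafted question can smuggle attacker-controlled instructions into that generated output.

```python
def call_llm(prompt: str) -> str:
    """Illustrative stub for a model that turns a question into Python code."""
    # A compliant model follows whatever instructions appear in the prompt,
    # including instructions injected through the user's "question".
    return "import os; os.system('id')"

def ask(question: str) -> None:
    generated_code = call_llm(f"Write Python code to chart the answer to: {question}")
    # VULNERABLE pattern: executing model output turns prompt injection into code execution.
    exec(generated_code)

# Mitigations include never exec()-ing model output, sandboxing any generated code,
# and validating or parameterizing generated SQL before it reaches the database.
```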
Google Introduces Project Naptime for AI-Powered Vulnerability Research

Jun 24, 2024 Vulnerability / Artificial Intelligence
Google has developed a new framework called Project Naptime that it says enables a large language model (LLM) to carry out vulnerability research with the aim of improving automated discovery approaches. "The Naptime architecture is centered around the interaction between an AI agent and a target codebase," Google Project Zero researchers Sergei Glazunov and Mark Brand said. "The agent is provided with a set of specialized tools designed to mimic the workflow of a human security researcher." The initiative is so named for the fact that it allows humans to "take regular naps" while it assists with vulnerability research and automates variant analysis. The approach, at its core, seeks to take advantage of advances in the code comprehension and general reasoning abilities of LLMs, thus allowing them to replicate human behavior when it comes to identifying and demonstrating security vulnerabilities. It encompasses several components such as a Code Browser tool
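As a rough mental model of the agent-with-tools pattern described here, a minimal, hypothetical sketch might look like the following. The tool names and the call_llm stub are assumptions for illustration only, not Project Naptime's actual interfaces.

```python
import json

def call_llm(transcript: str) -> str:
    """Illustrative stub; a real agent would send the transcript to an LLM here."""
    return json.dumps({"tool": "read_source", "args": {"path": "src/parser.c"}})

# Hypothetical tools mimicking parts of a human researcher's workflow.
TOOLS = {
    "read_source": lambda path: f"<contents of {path}>",
    "run_target": lambda data: f"<program output for {data!r}>",
}

def research_loop(task: str, max_steps: int = 5) -> list:
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):
        decision = json.loads(call_llm("\n".join(transcript)))
        tool = TOOLS.get(decision.get("tool"))
        if tool is None:  # unknown tool name, or the model decided to stop
            break
        observation = tool(*decision["args"].values())
        transcript.append(f"{decision['tool']} -> {observation}")
    return transcript

print(research_loop("Look for memory-safety issues in the parser"))
```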
Critical RCE Vulnerability Discovered in Ollama AI Infrastructure Tool

Jun 24, 2024 Artificial Intelligence / Cloud Security
Cybersecurity researchers have detailed a now-patched security flaw affecting the Ollama open-source artificial intelligence (AI) infrastructure platform that could be exploited to achieve remote code execution. Tracked as CVE-2024-37032, the vulnerability has been codenamed Probllama by cloud security firm Wiz. Following responsible disclosure on May 5, 2024, the issue was addressed in version 0.1.34 released on May 7, 2024. Ollama is a service for packaging, deploying, and running large language models (LLMs) locally on Windows, Linux, and macOS devices. At its core, the issue relates to a case of insufficient input validation that results in a path traversal flaw that an attacker could exploit to overwrite arbitrary files on the server and ultimately achieve remote code execution. The shortcoming requires the threat actor to send specially crafted HTTP requests to the Ollama API server for successful exploitation. It specifically takes advantage of the API endpoint "/api/pull"
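The flaw class at issue, insufficient validation leading to path traversal, is easy to illustrate with a short, hypothetical sketch (not Ollama's actual code): if a name taken from a request is joined to a base directory without checks, "../" sequences let the caller write outside that directory.

```python
from pathlib import Path

BASE_DIR = Path("/var/lib/models")

def save_blob_unsafe(name: str, data: bytes) -> None:
    # VULNERABLE: a name like "../../etc/cron.d/job" escapes BASE_DIR.
    (BASE_DIR / name).write_bytes(data)

def save_blob_safe(name: str, data: bytes) -> None:
    target = (BASE_DIR / name).resolve()
    # Reject any resolved path that falls outside the base directory.
    if not target.is_relative_to(BASE_DIR.resolve()):
        raise ValueError(f"path traversal attempt: {name!r}")
    target.write_bytes(data)
```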
Meta Pauses AI Training on EU User Data Amid Privacy Concerns

Jun 15, 2024 Artificial Intelligence / Privacy
Meta on Friday said it's delaying its efforts to train the company's large language models (LLMs) using public content shared by adult users on Facebook and Instagram in the European Union following a request from the Irish Data Protection Commission (DPC). The company expressed disappointment at having to put its AI plans on pause, stating it had taken into account feedback from regulators and data protection authorities in the region. At issue is Meta's plan to use personal data to train its artificial intelligence (AI) models without seeking users' explicit consent, instead relying on the legal basis of 'Legitimate Interests' for processing first and third-party data in the region. These changes were expected to come into effect on June 26, before which the company said users could opt out of having their data used by submitting a request "if they wish." Meta is already utilizing user-generated content to train its AI in other markets such
Google's Privacy Sandbox Accused of User Tracking by Austrian Non-Profit

Jun 14, 2024 Privacy / Ad Tracking
Google's plans to deprecate third-party tracking cookies in its Chrome web browser with Privacy Sandbox have run into fresh trouble after Austrian privacy non-profit noyb (none of your business) said the feature can still be used to track users. "While the so-called 'Privacy Sandbox' is advertised as an improvement over extremely invasive third-party tracking, the tracking is now simply done within the browser by Google itself," noyb said. "To do this, the company theoretically needs the same informed consent from users. Instead, Google is tricking people by pretending to 'Turn on an ad privacy feature.'" In other words, users who agree to enable what is presented as a privacy feature are still being tracked, having consented to Google's first-party ad tracking, the Vienna-based non-profit founded by activist Max Schrems alleged in a complaint filed with the Austrian data protection authority. Privacy Sandbox is a set of proposals put forth by the i
Microsoft Delays AI-Powered Recall Feature for Copilot+ PCs Amid Security Concerns

Jun 14, 2024 Artificial Intelligence / Data Protection
Microsoft on Thursday revealed that it's delaying the rollout of the controversial artificial intelligence (AI)-powered Recall feature for Copilot+ PCs. To that end, the company said it intends to shift from general availability to a preview available first in the Windows Insider Program (WIP) in the coming weeks. "We are adjusting the release model for Recall to leverage the expertise of the Windows Insider community to ensure the experience meets our high standards for quality and security," it said in an update. "This decision is rooted in our commitment to providing a trusted, secure and robust experience for all customers and to seek additional feedback prior to making the feature available to all Copilot+ PC users." First unveiled last month, Recall was originally slated for a broad release on June 18, 2024, but has since waded into controversial waters after it was widely panned as a privacy and security risk and an alluring target for threat ac
Apple Launches Private Cloud Compute for Privacy-Centric AI Processing

Jun 11, 2024 Cloud Computing / Artificial Intelligence
Apple has announced the launch of a "groundbreaking cloud intelligence system" called Private Cloud Compute (PCC) that's designed for processing artificial intelligence (AI) tasks in a privacy-preserving manner in the cloud. The tech giant described PCC as the "most advanced security architecture ever deployed for cloud AI compute at scale." PCC coincides with the arrival of new generative AI (GenAI) features – collectively dubbed Apple Intelligence, or AI for short – that the iPhone maker unveiled in its next generation of software, including iOS 18, iPadOS 18, and macOS Sequoia. All of the Apple Intelligence features, both the ones that run on-device and those that rely on PCC, leverage in-house generative models trained on "licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot." With PCC, the idea is to essentially offload complex requests that requir