
machine learning | Breaking Cybersecurity News | The Hacker News

Safeguard Personal and Corporate Identities with Identity Intelligence

Jul 19, 2024 Machine Learning / Corporate Security
Learn about critical threats that can impact your organization and the bad actors behind them from Cybersixgill's threat experts. Each story shines a light on underground activities, the threat actors involved, and why you should care, along with what you can do to mitigate risk. In the current cyber threat landscape, the protection of personal and corporate identities has become vital. Once in the hands of cybercriminals, compromised credentials and accounts provide unauthorized access to corporations' sensitive information and an entry point to launch costly ransomware and other malware attacks. To properly mitigate threats stemming from compromised credentials and accounts, organizations need identity intelligence. Understanding the significance of identity intelligence and the benefits it delivers is foundational to maintaining a secure posture and minimizing risk. There is a perception that security teams and threat analysts are already overloaded by too much data. By these
Summary of "AI Leaders Spill Their Secrets" Webinar

Jul 19, 2024 Technology / Artificial Intelligence
Event Overview The "AI Leaders Spill Their Secrets" webinar, hosted by Sigma Computing, featured prominent AI experts sharing their experiences and strategies for success in the AI industry. The panel included Michael Ward from Sardine, Damon Bryan from Hyperfinity, and Stephen Hillion from Astronomer, moderated by Zalak Trivedi, Sigma Computing's Product Manager. Key Speakers and Their Backgrounds 1. Michael Ward, Senior Risk Data Analyst at Sardine, with over 25 years of experience in software engineering; focuses on data science, analytics, and machine learning to prevent fraud and money laundering. 2. Damon Bryan, Co-founder and CTO at Hyperfinity; specializes in decision intelligence software for retailers and brands, with a background in data science, AI, and analytics, transitioning from consultancy to a full-fledged software company. 3. Stephen Hillion, SVP of Data and AI at Astronomer; manages data science teams and focuses on the development and scaling of
How to Increase Engagement with Your Cybersecurity Clients Through vCISO Reporting

Jul 22, 2024 vCISO / Business Security
As a vCISO, you are responsible for your client's cybersecurity strategy and risk governance. This incorporates multiple disciplines, from research to execution to reporting. Recently, we published a comprehensive playbook for vCISOs, "Your First 100 Days as a vCISO – 5 Steps to Success", which covers all the phases entailed in launching a successful vCISO engagement, along with recommended actions to take and step-by-step examples. Following the success of the playbook and the requests that have come in from the MSP/MSSP community, we decided to drill down into specific parts of vCISO reporting and provide more color and examples. In this article, we focus on how to create compelling narratives within a report, which has a significant impact on the overall MSP/MSSP value proposition. This article brings the highlights of a recent guided workshop we held, covering what makes a successful report and how it can be used to enhance engagement with your cybersecurity clients.
The Emerging Role of AI in Open-Source Intelligence

Jul 03, 2024 OSINT / Artificial Intelligence
Recently the Office of the Director of National Intelligence (ODNI) unveiled a new strategy for open-source intelligence (OSINT) and referred to OSINT as the "INT of first resort". Public and private sector organizations are realizing the value that the discipline can provide but are also finding that the exponential growth of digital data in recent years has overwhelmed many traditional OSINT methods. Thankfully, Artificial Intelligence (AI) and Machine Learning (ML) are starting to provide a transformative impact on the future of information gathering and analysis. What is Open-Source Intelligence (OSINT)? Open-Source Intelligence refers to the collection and analysis of information from publicly available sources. These sources can include traditional media, social media platforms, academic publications, government reports, and any other data that is openly accessible. The key characteristic of OSINT is that it does not involve covert or clandestine methods of information gather

Free OAuth Investigation Checklist - How to Uncover Risky or Malicious Grants

Nudge Security | SaaS Security / Supply Chain
OAuth grants provide yet another way for attackers to compromise identities. Download our free checklist to learn what to look for and where when reviewing OAuth grants for potential risks.
Google Introduces Project Naptime for AI-Powered Vulnerability Research

Jun 24, 2024 Vulnerability / Artificial Intelligence
Google has developed a new framework called Project Naptime that it says enables a large language model (LLM) to carry out vulnerability research, with the aim of improving automated discovery approaches. "The Naptime architecture is centered around the interaction between an AI agent and a target codebase," Google Project Zero researchers Sergei Glazunov and Mark Brand said. "The agent is provided with a set of specialized tools designed to mimic the workflow of a human security researcher." The initiative is so named for the fact that it allows humans to "take regular naps" while it assists with vulnerability research and automating variant analysis. The approach, at its core, seeks to take advantage of advances in the code comprehension and general reasoning ability of LLMs, thus allowing them to replicate human behavior when it comes to identifying and demonstrating security vulnerabilities. It encompasses several components such as a Code Browser tool
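The "agent plus specialized tools" pattern the researchers describe can be sketched in a few lines. This is a hypothetical illustration of the general pattern, not Project Naptime's actual interfaces; the tool names, registry, and return strings here are invented for the example.

```python
from typing import Callable

# Registry mapping tool names to callables the agent may invoke.
TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Decorator that registers a function as an agent-callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("code_browser")
def code_browser(query: str) -> str:
    # Stand-in: a real tool would surface source code for the model to read.
    return f"source listing for {query}"

@tool("debugger")
def debugger(query: str) -> str:
    # Stand-in: a real tool would report runtime state at a breakpoint.
    return f"runtime state at {query}"

def agent_step(tool_name: str, arg: str) -> str:
    """One agent turn: the model names a tool and an argument; the
    harness dispatches the call and feeds the result back to the model."""
    return TOOLS[tool_name](arg)

print(agent_step("code_browser", "parse_header()"))
```

In the real system, the LLM's output selects the tool and argument on each turn; the fixed call above stands in for that choice.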
Critical RCE Vulnerability Discovered in Ollama AI Infrastructure Tool

Jun 24, 2024 Artificial Intelligence / Cloud Security
Cybersecurity researchers have detailed a now-patched security flaw affecting the Ollama open-source artificial intelligence (AI) infrastructure platform that could be exploited to achieve remote code execution. Tracked as CVE-2024-37032, the vulnerability has been codenamed Probllama by cloud security firm Wiz. Following responsible disclosure on May 5, 2024, the issue was addressed in version 0.1.34 released on May 7, 2024. Ollama is a service for packaging, deploying, and running large language models (LLMs) locally on Windows, Linux, and macOS devices. At its core, the issue relates to a case of insufficient input validation that results in a path traversal flaw an attacker could exploit to overwrite arbitrary files on the server and ultimately lead to remote code execution. The shortcoming requires the threat actor to send specially crafted HTTP requests to the Ollama API server for successful exploitation. It specifically takes advantage of the API endpoint "/api/pull"
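The flaw class here is generic: a server joins an attacker-controlled name onto a storage directory without checking where the result lands. A minimal sketch of the check that closes this class of bug (the directory name and function are illustrative, not Ollama's actual code):

```python
import os

BASE_DIR = "/var/lib/models"  # hypothetical storage root

def safe_join(base: str, user_supplied: str) -> str:
    """Join a user-supplied name onto a base directory, rejecting any
    value that, after normalization, resolves outside it."""
    candidate = os.path.normpath(os.path.join(base, user_supplied))
    # The normalized result must still live under the base directory.
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"path traversal attempt: {user_supplied!r}")
    return candidate

# A benign name stays inside the root...
print(safe_join(BASE_DIR, "llama3/manifest.json"))
# ...while a crafted value that climbs out of the root is rejected.
try:
    safe_join(BASE_DIR, "../../etc/ld.so.preload")
except ValueError as exc:
    print(exc)
```

Without the `commonpath` check, the second call would resolve to a path outside `/var/lib/models`, which is exactly the arbitrary-file-overwrite primitive the teaser describes.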
New Attack Technique 'Sleepy Pickle' Targets Machine Learning Models

Jun 13, 2024 Vulnerability / Software Security
The security risks posed by the Pickle format have once again come to the fore with the discovery of a new "hybrid machine learning (ML) model exploitation technique" dubbed Sleepy Pickle. The attack method, per Trail of Bits, weaponizes the ubiquitous format used to package and distribute machine learning (ML) models to corrupt the model itself, posing a severe supply chain risk to an organization's downstream customers. "Sleepy Pickle is a stealthy and novel attack technique that targets the ML model itself rather than the underlying system," security researcher Boyan Milanov said. While pickle is a serialization format widely used by ML libraries like PyTorch, it can be used to carry out arbitrary code execution attacks simply by loading a pickle file (i.e., during deserialization). "We suggest loading models from users and organizations you trust, relying on signed commits, and/or loading models from [TensorFlow] or Jax formats with the from_
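The underlying pickle behavior is easy to demonstrate. Any object can define `__reduce__`, which names a callable for the *loader* to invoke, so merely deserializing untrusted bytes runs attacker-chosen code. This is a generic illustration of that mechanism with a deliberately benign payload, not the Sleepy Pickle technique itself:

```python
import builtins
import pickle

class NotAModel:
    def __reduce__(self):
        # Benign stand-in payload; a real attack could run anything here,
        # including code that silently tampers with model weights.
        return (exec, ("import builtins; builtins.PAYLOAD_RAN = True",))

blob = pickle.dumps(NotAModel())

# Merely loading the bytes executes the embedded call -- the consumer
# never invokes any method of the "model".
pickle.loads(blob)
print(getattr(builtins, "PAYLOAD_RAN", False))  # → True
```

This is why the researchers recommend formats that store only tensors and metadata rather than arbitrary serialized objects.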
AI Company Hugging Face Detects Unauthorized Access to Its Spaces Platform

Jun 01, 2024 AI-as-a-Service / Data Breach
Artificial Intelligence (AI) company Hugging Face on Friday disclosed that it detected unauthorized access to its Spaces platform earlier this week. "We have suspicions that a subset of Spaces' secrets could have been accessed without authorization," it said in an advisory. Spaces offers a way for users to create, host, and share AI and machine learning (ML) applications. It also functions as a discovery service to look up AI apps made by other users on the platform. In response to the security event, Hugging Face said it is taking the step of revoking a number of HF tokens present in those secrets and that it's notifying users who had their tokens revoked via email. "We recommend you refresh any key or token and consider switching your HF tokens to fine-grained access tokens which are the new default," it added. Hugging Face, however, did not disclose how many users are impacted by the incident, which is currently under further investigation. It has
How to Build Your Autonomous SOC Strategy

May 30, 2024 Endpoint Security / Threat Detection
Security leaders are in a tricky position trying to discern how much new AI-driven cybersecurity tools could actually benefit a security operations center (SOC). The hype about generative AI is still everywhere, but security teams have to live in reality. They face constantly incoming alerts from endpoint security platforms, SIEM tools, and phishing emails reported by internal users. Security teams also face an acute talent shortage. In this guide, we'll lay out practical steps organizations can take to automate more of their processes and build an autonomous SOC strategy. This should help address the acute talent shortage in security teams: by employing artificial intelligence and machine learning with a variety of techniques, these systems simulate the decision-making and investigative processes of human analysts. First, we'll define objectives for an autonomous SOC strategy and then consider key processes that could be automated. Next, we'll consider different AI and automation
Experts Find Flaw in Replicate AI Service Exposing Customers' Models and Data

May 25, 2024 Machine Learning / Data Breach
Cybersecurity researchers have discovered a critical security flaw in an artificial intelligence (AI)-as-a-service provider Replicate that could have allowed threat actors to gain access to proprietary AI models and sensitive information. "Exploitation of this vulnerability would have allowed unauthorized access to the AI prompts and results of all Replicate's platform customers," cloud security firm Wiz said in a report published this week. The issue stems from the fact that AI models are typically packaged in formats that allow arbitrary code execution, which an attacker could weaponize to perform cross-tenant attacks by means of a malicious model. Replicate makes use of an open-source tool called Cog to containerize and package machine learning models that could then be deployed either in a self-hosted environment or to Replicate. Wiz said that it created a rogue Cog container and uploaded it to Replicate, ultimately employing it to achieve remote code exec
(Cyber) Risk = Probability of Occurrence x Damage

May 15, 2024 Threat Detection / Cybersecurity
Here's How to Enhance Your Cyber Resilience with CVSS In late 2023, the Common Vulnerability Scoring System (CVSS) v4.0 was unveiled, succeeding the eight-year-old CVSS v3.0, with the aim of enhancing vulnerability assessment for both industry and the public. This latest version introduces additional metrics like safety and automation to address criticism that it lacked granularity, while presenting a revised scoring system for a more comprehensive evaluation. It further emphasizes the importance of considering environmental and threat metrics alongside the base score to assess vulnerabilities accurately. Why Does It Matter? The primary purpose of the CVSS is to evaluate the risk associated with a vulnerability. Some vulnerabilities, particularly those found in network products, present a clear and significant risk as unauthenticated attackers can easily exploit them to gain remote control over affected systems. These vulnerabilities have frequently been exploited over the years, often ser
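The risk equation in the headline is the classic expected-loss formula, and a toy prioritization based on it looks like this. The probability and damage figures below are invented for illustration; this is not the CVSS v4.0 scoring formula, which is considerably more involved.

```python
def risk(probability: float, damage: float) -> float:
    """Risk = probability of occurrence x damage (expected loss)."""
    return probability * damage

vulns = {
    # name: (estimated yearly probability of exploitation, cost in $)
    "unauthenticated RCE on edge device": (0.30, 500_000),
    "local priv-esc, auth required":      (0.05, 200_000),
}

# Rank remediation work by expected loss, highest first.
for name, (p, d) in sorted(vulns.items(), key=lambda kv: -risk(*kv[1])):
    print(f"{name}: expected loss ${risk(p, d):,.0f}")
```

The point the article makes is that a base severity score alone misses this: two vulnerabilities with similar CVSS base scores can have very different probabilities of exploitation, which is why threat and environmental metrics matter.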
6 Mistakes Organizations Make When Deploying Advanced Authentication

May 14, 2024 Cyber Threat / Machine Learning
Deploying advanced authentication measures is key to helping organizations address their weakest cybersecurity link: their human users. Having some form of 2-factor authentication in place is a great start, but many organizations may not yet be in that spot or have the needed level of authentication sophistication to adequately safeguard organizational data. When deploying advanced authentication measures, organizations can make mistakes, and it is crucial to be aware of these potential pitfalls. 1. Failing to conduct a risk assessment A comprehensive risk assessment is a vital first step to any authentication implementation. An organization leaves itself open to risk if it fails to assess current threats and vulnerabilities, systems and processes, or the level of protection required for different applications and data. Not all applications demand the same levels of security. For example, an application that handles sensitive customer information or financials may require stro
Bitcoin Forensic Analysis Uncovers Money Laundering Clusters and Criminal Proceeds

May 01, 2024 Financial Crime / Forensic Analysis
A forensic analysis of a graph dataset containing transactions on the Bitcoin blockchain has revealed clusters associated with illicit activity and money laundering, including detecting criminal proceeds sent to a crypto exchange and previously unknown wallets belonging to a Russian darknet market. The findings come from Elliptic in collaboration with researchers from the MIT-IBM Watson AI Lab. The 26 GB dataset, dubbed Elliptic2, is a "large graph dataset containing 122K labeled subgraphs of Bitcoin clusters within a background graph consisting of 49M node clusters and 196M edge transactions," the co-authors said in a paper shared with The Hacker News. Elliptic2 builds on the Elliptic Data Set (aka Elliptic1), a transaction graph that was made public in July 2019 with the goal of combating financial crime using graph convolutional neural networks (GCNs). The idea, in a nutshell, is to uncover unlawful activity and money laundering patterns by taking advanta
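The data structure involved is simply a directed graph of wallets and transactions, with labels attached to whole subgraphs (clusters) rather than individual nodes. A minimal sketch of that representation, with made-up toy wallets rather than the real Elliptic2 data:

```python
from collections import defaultdict

# (from_wallet, to_wallet) edges -- toy data, not the actual dataset.
edges = [
    ("w1", "w2"), ("w2", "w3"), ("w3", "exchange"),
    ("w9", "w10"),
]

graph = defaultdict(set)
for src, dst in edges:
    graph[src].add(dst)

def reachable(start: str) -> set:
    """All wallets reachable from `start`, i.e. where its funds can flow."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

# Funds from w1 can reach the exchange -- the kind of laundering path
# a subgraph classifier is trained to flag at scale.
print("exchange" in reachable("w1"))  # → True
```

A GCN-based approach like the one in the paper learns such patterns automatically over millions of nodes instead of tracing paths one wallet at a time.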
U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

Apr 30, 2024 Machine Learning / National Security
The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats. "These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats both to and from, and involving AI systems," the Department of Homeland Security (DHS) said Monday. In addition, the agency said it's working to facilitate safe, responsible, and trustworthy use of the technology in a manner that does not infringe on individuals' privacy, civil rights, and civil liberties. The new guidance concerns the use of AI to augment and scale attacks on critical infrastructure, adversarial manipulation of AI systems, and shortcomings in such tools that could result in unintended consequences, underscoring the need for transparency and secure-by-design practices to evaluate and mitigate AI risks. Specifically, this spans four diffe
Google Prevented 2.28 Million Malicious Apps from Reaching Play Store in 2023

Apr 29, 2024 Mobile Security / Hacking
Google on Monday revealed that almost 200,000 app submissions to its Play Store for Android were either rejected or remediated to address issues with access to sensitive data such as location or SMS messages over the past year. The tech giant also said it blocked 333,000 bad accounts from the app storefront in 2023 for attempting to distribute malware or for repeated policy violations. "In 2023, we prevented 2.28 million policy-violating apps from being published on Google Play in part thanks to our investment in new and improved security features, policy updates, and advanced machine learning and app review processes," Google's Steve Kafka, Khawaja Shams, and Mohet Saxena said. "To help safeguard user privacy at scale, we partnered with SDK providers to limit sensitive data access and sharing, enhancing the privacy posture for over 31 SDKs impacting 790K+ apps." In comparison, Google fended off 1.43 million bad apps from being published to the Play Sto