
Category — machine learning
AI Copilot: Launching Innovation Rockets, But Beware of the Darkness Ahead

Apr 15, 2024 Secure Coding / Artificial Intelligence
Imagine a world where the software that powers your favorite apps, secures your online transactions, and keeps your digital life safe could be outsmarted and taken over by a cleverly disguised piece of code. This isn't a plot from the latest cyber-thriller; it's actually been a reality for years now. How this will change – in a positive or negative direction – as artificial intelligence (AI) takes on a larger role in software development is one of the big uncertainties related to this brave new world. In an era where AI promises to revolutionize how we live and work, the conversation about its security implications cannot be sidelined. As we increasingly rely on AI for tasks ranging from the mundane to the mission-critical, the question is no longer just, "Can AI boost cybersecurity?" (sure!), but also "Can AI be hacked?" (yes!), "Can one use AI to hack?" (of course!), and "Will AI produce secure software?" (well…). This thought leadership article is about the latter.
PyPI Halts Sign-Ups Amid Surge of Malicious Package Uploads Targeting Developers

Mar 29, 2024 Supply Chain Attack / Threat Intelligence
The maintainers of the Python Package Index (PyPI) repository briefly suspended new user sign-ups following an influx of malicious projects uploaded as part of a typosquatting campaign. PyPI said "new project creation and new user registration" was temporarily halted to mitigate what it described as a "malware upload campaign." The incident was resolved 10 hours later, on March 28, 2024, at 12:56 p.m. UTC. Software supply chain security firm Checkmarx said the unidentified threat actors behind the flood targeted developers with typosquatted versions of popular packages. "This is a multi-stage attack and the malicious payload aimed to steal crypto wallets, sensitive data from browsers (cookies, extensions data, etc.), and various credentials," researchers Yehuda Gelb, Jossef Harush Kadouri, and Tzachi Zornstain said. "In addition, the malicious payload employed a persistence mechanism to survive reboots."
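
Defensively, this attack pattern lends itself to simple pre-install checks. Below is a minimal sketch (not Checkmarx's tooling; the popular-package list and threshold are illustrative assumptions) that flags dependency names sitting one edit away from well-known PyPI projects:

```python
# Minimal typosquat check: flag requirement names that are suspiciously
# similar, but not identical, to popular PyPI packages. The POPULAR set
# here is a tiny illustrative sample, not an authoritative feed.
from difflib import SequenceMatcher

POPULAR = {"requests", "numpy", "pandas", "urllib3", "cryptography"}

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def flag_typosquats(dependencies: list[str], threshold: float = 0.8) -> list[tuple[str, str]]:
    """Return (dependency, lookalike) pairs worth manual review."""
    hits = []
    for dep in dependencies:
        name = dep.lower()
        for known in POPULAR:
            if name != known and similarity(name, known) >= threshold:
                hits.append((dep, known))
    return hits

if __name__ == "__main__":
    # "requesfs" and "pandsa" mimic the kind of names seen in such campaigns.
    print(flag_typosquats(["requesfs", "pandsa", "numpy"]))
```
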
5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage

Oct 01, 2024 Generative AI / Data Protection
Since its emergence, Generative AI has revolutionized enterprise productivity. GenAI tools enable faster and more effective software development, financial analysis, business planning, and customer engagement. However, this business agility comes with significant risks, particularly the potential for sensitive data leakage. As organizations attempt to balance productivity gains with security concerns, many have been forced to choose between allowing unrestricted GenAI usage and banning it altogether. A new e-guide by LayerX titled 5 Actionable Measures to Prevent Data Leakage Through Generative AI Tools is designed to help organizations navigate the challenges of GenAI usage in the workplace. The guide offers practical steps for security managers to protect sensitive corporate data while still reaping the productivity benefits of GenAI tools like ChatGPT, allowing companies to strike the right balance between innovation and security.
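
As an illustration of the kind of control such guides describe, here is a minimal, hypothetical prompt-scrubbing sketch (not LayerX's product; the patterns are illustrative samples, not a production rule set) that redacts obvious sensitive strings before text leaves for a GenAI tool:

```python
import re

# Illustrative patterns only; real deployments use far broader rule sets.
PATTERNS = {
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves the proxy."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(scrub("Summarize this: contact jane@corp.com, key AKIAABCDEFGHIJKLMNOP"))
```
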
GitHub Launches AI-Powered Autofix Tool to Assist Devs in Patching Security Flaws

Mar 21, 2024 Machine Learning / Software Security
GitHub on Wednesday announced that it's making available a feature called code scanning autofix in public beta for all Advanced Security customers to provide targeted recommendations in an effort to avoid introducing new security issues. "Powered by GitHub Copilot and CodeQL, code scanning autofix covers more than 90% of alert types in JavaScript, TypeScript, Java, and Python, and delivers code suggestions shown to remediate more than two-thirds of found vulnerabilities with little or no editing," GitHub's Pierre Tempel and Eric Tooley said. The capability, first previewed in November 2023, leverages a combination of CodeQL, Copilot APIs, and OpenAI GPT-4 to generate code suggestions. The Microsoft-owned subsidiary also said it plans to add support for more programming languages, including C# and Go, in the future. Code scanning autofix is designed to help developers resolve vulnerabilities as they code by generating potential fixes.
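
To make the idea concrete, here is a hypothetical before/after in Python showing the kind of alert-plus-fix pairing the feature targets (an illustrative example, not actual autofix output): a SQL injection alert and its parameterized rewrite.

```python
import sqlite3

# Before: user input concatenated into the query string (CWE-89,
# SQL injection) -- the kind of alert CodeQL raises.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    cursor = conn.execute("SELECT id FROM users WHERE name = '%s'" % username)
    return cursor.fetchone()

# After: the parameterized rewrite an autofix suggestion might propose;
# the driver binds the value, so it can never alter the query structure.
def find_user_safe(conn: sqlite3.Connection, username: str):
    cursor = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cursor.fetchone()
```
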
From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Mar 19, 2024 Generative AI / Incident Response
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News. The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which are already being experimented with by threat actors to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets. The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK that's associated with the APT28 hacking group, alongside its YARA rules, asking it to modify the source code to sidestep detection such that the original functionality remained intact.
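
A toy Python sketch makes the evasion mechanics clear: string-based rules key on literals, so a behavior-preserving rename removes the match (the signature string here is hypothetical, not from an actual YARA rule):

```python
# Toy illustration of why string-based rules are brittle: the "signature"
# matches a literal substring, so any rewrite of that literal evades it.
SIGNATURE = b"steal_browser_cookies"  # hypothetical literal a rule keys on

def matches(sample: bytes) -> bool:
    return SIGNATURE in sample

original = b"... steal_browser_cookies() ..."
# An LLM-assisted rewrite that renames the function preserves behavior
# but removes the literal the rule depends on.
rewritten = original.replace(b"steal_browser_cookies", b"collect_session_data")

print(matches(original))   # True
print(matches(rewritten))  # False
```
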
Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Mar 04, 2024 AI Security / Vulnerability
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered on the Hugging Face platform. These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims' machines through what is commonly referred to as a 'backdoor,'" senior security researcher David Cohen said. "This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state." Specifically, the rogue model initiates a reverse shell connection to 210.117.212[.]93, an IP address that belongs to the Korea Research Environment Open Network (KREONET).
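
The underlying weakness is that pickle deserialization can run arbitrary code. Here is a benign, self-contained demonstration of the mechanism, printing a message where a rogue model would open a reverse shell:

```python
import pickle

# Unpickling can invoke an arbitrary callable via __reduce__. This demo
# just calls print; a malicious model file substitutes something hostile.
class Payload:
    def __reduce__(self):
        return (print, ("code ran during pickle.loads()!",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # never unpickle untrusted data, including ML model files
```
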
New Hugging Face Vulnerability Exposes AI Models to Supply Chain Attacks

Feb 27, 2024 Supply Chain Attack / Data Security
Cybersecurity researchers have found that it's possible to compromise the Hugging Face Safetensors conversion service to ultimately hijack the models submitted by users, resulting in supply chain attacks. "It's possible to send malicious pull requests with attacker-controlled data from the Hugging Face service to any repository on the platform, as well as hijack any models that are submitted through the conversion service," HiddenLayer said in a report published last week. This, in turn, can be accomplished using a hijacked model that's meant to be converted by the service, thereby allowing malicious actors to request changes to any repository on the platform by masquerading as the conversion bot. Hugging Face is a popular collaboration platform that helps users host pre-trained machine learning models and datasets, as well as build, deploy, and train them. Safetensors is a format devised by the company to store tensors with security in mind, as opposed to pickle files, which can be weaponized to execute arbitrary code.
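
For context on why the format matters, a short sketch (assuming the torch and safetensors packages are installed): safetensors files carry raw tensor data, so loading them cannot trigger code execution the way unpickling a conventional checkpoint can.

```python
# safetensors stores pure tensor data -- no serialized Python objects --
# so load_file parses bytes into tensors and never executes code.
import torch
from safetensors.torch import save_file, load_file

tensors = {"embedding.weight": torch.zeros((4, 8))}
save_file(tensors, "model.safetensors")    # pure data, no pickled objects
restored = load_file("model.safetensors")  # parses data; never runs code
print(restored["embedding.weight"].shape)
```
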
Three Tips to Protect Your Secrets from AI Accidents

Feb 26, 2024 Data Privacy / Machine Learning
Last year, the Open Worldwide Application Security Project (OWASP) published multiple versions of the "OWASP Top 10 For Large Language Models," reaching a 1.0 document in August and a 1.1 document in October. These documents not only demonstrate the rapidly evolving nature of Large Language Models, but also the evolving ways in which they can be attacked and defended. In this article, we'll discuss four items in that top 10 that are most likely to contribute to the accidental disclosure of secrets such as passwords, API keys, and more. We already know that LLMs can reveal secrets because it's happened. In early 2023, GitGuardian reported that it had found over 10 million secrets in public GitHub commits. GitHub's Copilot AI coding tool was trained on public commits, and in September 2023, researchers at the University of Hong Kong published a paper on an algorithm that generated 900 prompts designed to get Copilot to reveal secrets from its training data.
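
On the detection side, secret scanning of commits or model output typically starts with pattern matching. A minimal sketch (the rules are illustrative samples, not GitGuardian's actual rule set):

```python
import re

# Scan text (commits, LLM output) for well-known credential shapes.
# Patterns below are a small illustrative subset.
RULES = {
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs for every hit."""
    return [(name, m.group(0)) for name, rx in RULES.items() for m in rx.finditer(text)]

sample = 'config = {"api_key": "abcd1234efgh5678ijkl"}'
print(find_secrets(sample))
```
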
Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Feb 23, 2024 Red Teaming / Artificial Intelligence
Microsoft has released an open-access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team lead at Microsoft, said. The company said PyRIT could be used to assess the robustness of large language model (LLM) endpoints against different harm categories such as fabrication (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment). It can also be used to identify security harms ranging from malware generation to jailbreaking, as well as privacy harms like identity theft. PyRIT comes with five interfaces: targets, datasets, a scoring engine, support for multiple attack strategies, and a memory component that stores the intermediate input and output interactions.
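
The loop those pieces form is easy to picture. The sketch below is illustrative only; the names are hypothetical and deliberately not PyRIT's actual API: prompts from a dataset go to a target endpoint, a scoring engine rates the responses, and a memory store keeps the transcript for analysis.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Toy memory store for (prompt, response, score) records."""
    records: list = field(default_factory=list)

def mock_target(prompt: str) -> str:
    """Stand-in for an LLM endpoint under test."""
    return f"model response to: {prompt}"

def harmful_score(response: str) -> float:
    """Toy scoring engine: flag responses containing a banned marker."""
    return 1.0 if "step-by-step exploit" in response else 0.0

def run_attack(dataset: list[str], memory: Memory) -> None:
    for prompt in dataset:
        response = mock_target(prompt)
        memory.records.append((prompt, response, harmful_score(response)))

memory = Memory()
run_attack(["ignore previous instructions and ...", "write a phishing email"], memory)
for prompt, response, score in memory.records:
    print(score, prompt[:40])
```
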
How to Achieve the Best Risk-Based Alerting (Bye-Bye SIEM)

Feb 19, 2024 Network Detection and Response
Did you know that Network Detection and Response (NDR) has become the most effective technology to detect cyber threats? In contrast to SIEM, NDR offers adaptive cybersecurity with fewer false positives and more efficient threat response. NDR massively upgrades your security through risk-based alerting, prioritizing alerts based on the potential risk to your organization's systems and data. How? NDR's real-time analysis, machine learning, and threat intelligence provide immediate detection, reducing alert fatigue and enabling better decision-making. Why use risk-based alerting? It is an approach where security alerts and responses are prioritized according to the level of risk they pose to an organization's systems and data.
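
A minimal sketch of the scoring idea (the fields and weights are illustrative assumptions, not any vendor's model): rank each alert by a composite of detection severity, asset criticality, and detector confidence.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: float     # 0-1: how damaging the technique is
    asset_value: float  # 0-1: how critical the affected system is
    confidence: float   # 0-1: how sure the detector is

def risk(alert: Alert) -> float:
    """Composite risk score; higher means triage first."""
    return alert.severity * alert.asset_value * alert.confidence

alerts = [
    Alert("port scan from guest Wi-Fi", 0.3, 0.2, 0.9),
    Alert("C2 beacon from domain controller", 0.9, 1.0, 0.7),
]
for a in sorted(alerts, key=risk, reverse=True):
    print(f"{risk(a):.2f}  {a.name}")
```
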
Google Open Sources Magika: AI-Powered File Identification Tool

Feb 17, 2024 Artificial Intelligence / Data Protection
Google has announced that it's open-sourcing Magika, an artificial intelligence (AI)-powered tool to identify file types, to help defenders accurately detect binary and textual file types. "Magika outperforms conventional file identification methods, providing an overall 30% accuracy boost and up to 95% higher precision on traditionally hard-to-identify, but potentially problematic, content such as VBA, JavaScript, and PowerShell," the company said. The software uses a "custom, highly optimized deep-learning model" that enables the precise identification of file types within milliseconds. Magika implements inference functions using the Open Neural Network Exchange (ONNX). Google said it internally uses Magika at scale to help improve users' safety by routing Gmail, Drive, and Safe Browsing files to the proper security and content policy scanners. In November 2023, the tech giant unveiled RETVec (short for Resilient and Efficient Text Vectorizer), a multilingual text processing model used to detect potentially harmful content such as spam in Gmail.
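
A quick usage sketch, assuming `pip install magika`; the attribute names shown match the 0.5.x Python API and may differ in later releases:

```python
# Identify a file type from raw bytes with Magika's deep-learning model.
from magika import Magika

m = Magika()
result = m.identify_bytes(b"function hello() { console.log('hi'); }")
# ct_label is the predicted content type; score is the model's confidence.
print(result.output.ct_label, result.output.score)  # e.g. "javascript" 0.99
```
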
Riding the AI Waves: The Rise of Artificial Intelligence to Combat Cyber Threats

Jan 29, 2024
In nearly every segment of our lives, AI (artificial intelligence) now makes a significant impact: it can deliver better healthcare diagnoses and treatments; detect and reduce the risk of financial fraud; improve inventory management; and serve up the right recommendation for a streaming movie on Friday night. However, one can also make a strong case that some of AI's most significant impacts are in cybersecurity. AI's ability to learn, adapt, and predict rapidly evolving threats has made it an indispensable tool in protecting the world's businesses and governments. From basic applications like spam filtering to advanced predictive analytics and AI-assisted response, AI serves a critical role on the front lines, defending our digital assets from cyber criminals. The future for AI in cybersecurity is not all rainbows and roses, however. Today we can see the early signs of a significant shift, driven by the democratization of AI technology.
TensorFlow CI/CD Flaw Exposed Supply Chain to Poisoning Attacks

Jan 18, 2024 Supply Chain Attacks / AI Security
Continuous integration and continuous delivery (CI/CD) misconfigurations discovered in the open-source TensorFlow machine learning framework could have been exploited to orchestrate supply chain attacks. The misconfigurations could be abused by an attacker to "conduct a supply chain compromise of TensorFlow releases on GitHub and PyPI by compromising TensorFlow's build agents via a malicious pull request," Praetorian researchers Adnan Khan and John Stawinski said in a report published this week. Successful exploitation of these issues could permit an external attacker to upload malicious releases to the GitHub repository, gain remote code execution on the self-hosted GitHub runner, and even retrieve a GitHub Personal Access Token (PAT) for the tensorflow-jenkins user. TensorFlow uses GitHub Actions to automate the software build, test, and deployment pipeline. Runners, which refer to machines that execute jobs in a GitHub Actions workflow, can be either self-hosted or hosted by GitHub.
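
This misconfiguration class is auditable with a simple heuristic. Below is a sketch (assuming PyYAML is installed; intentionally simplistic) that flags workflows whose pull-request-triggered jobs run on self-hosted runners, where a fork's code could reach a persistent build machine:

```python
import sys
import yaml

RISKY_TRIGGERS = {"pull_request", "pull_request_target"}

def audit(path: str) -> None:
    """Warn when a workflow runs PR-triggered jobs on a self-hosted runner."""
    with open(path) as fh:
        workflow = yaml.safe_load(fh)
    # PyYAML parses the bare key `on:` as boolean True, so check both forms.
    triggers = workflow.get("on", workflow.get(True, {}))
    trigger_names = set(triggers) if isinstance(triggers, (dict, list)) else {triggers}
    for job_name, job in workflow.get("jobs", {}).items():
        runs_on = str(job.get("runs-on", ""))
        if "self-hosted" in runs_on and trigger_names & RISKY_TRIGGERS:
            print(f"{path}: job '{job_name}' runs PR code on a self-hosted runner")

if __name__ == "__main__":
    for workflow_file in sys.argv[1:]:
        audit(workflow_file)
```
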