machine learning | Breaking Cybersecurity News | The Hacker News

AI Copilot: Launching Innovation Rockets, But Beware of the Darkness Ahead

Apr 15, 2024 Secure Coding / Artificial Intelligence
Imagine a world where the software that powers your favorite apps, secures your online transactions, and keeps your digital life in order could be outsmarted and taken over by a cleverly disguised piece of code. This isn't a plot from the latest cyber-thriller; it has been a reality for years. How this will change – in a positive or negative direction – as artificial intelligence (AI) takes on a larger role in software development is one of the big uncertainties related to this brave new world. In an era where AI promises to revolutionize how we live and work, the conversation about its security implications cannot be sidelined. As we increasingly rely on AI for tasks ranging from the mundane to the mission-critical, the question is no longer just "Can AI boost cybersecurity?" (sure!), but also "Can AI be hacked?" (yes!), "Can one use AI to hack?" (of course!), and "Will AI produce secure software?" (well…). This thought leadership article is about the latter.
PyPI Halts Sign-Ups Amid Surge of Malicious Package Uploads Targeting Developers

Mar 29, 2024 Supply Chain Attack / Threat Intelligence
The maintainers of the Python Package Index (PyPI) repository briefly suspended new user sign-ups following an influx of malicious projects uploaded as part of a typosquatting campaign. PyPI said "new project creation and new user registration" was temporarily halted to mitigate what it described as a "malware upload campaign." The incident was resolved 10 hours later, on March 28, 2024, at 12:56 p.m. UTC. Software supply chain security firm Checkmarx said the unidentified threat actors behind the flood targeted developers with typosquatted versions of popular packages. "This is a multi-stage attack and the malicious payload aimed to steal crypto wallets, sensitive data from browsers (cookies, extensions data, etc.), and various credentials," researchers Yehuda Gelb, Jossef Harush Kadouri, and Tzachi Zornstain said. "In addition, the malicious payload employed a persistence mechanism to survive reboots."
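
Typosquatting works because a one- or two-character edit in a package name is easy to miss at install time. A minimal sketch of how a registry gate or CI step might flag lookalike names; the package list is an illustrative assumption (the real campaign targeted far more packages):

```python
import difflib

# Illustrative subset; the actual campaign typosquatted many popular packages.
POPULAR_PACKAGES = {"requests", "numpy", "pandas", "pillow", "matplotlib"}

def looks_like_typosquat(name: str, threshold: float = 0.85) -> str | None:
    """Return the popular package `name` most resembles, if any."""
    name = name.lower()
    if name in POPULAR_PACKAGES:
        return None  # exact match is the legitimate package
    for popular in POPULAR_PACKAGES:
        ratio = difflib.SequenceMatcher(None, name, popular).ratio()
        if ratio >= threshold:
            return popular
    return None

print(looks_like_typosquat("reqeusts"))  # -> requests (suspicious lookalike)
print(looks_like_typosquat("numpy"))     # -> None (legitimate name)
```
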
Code Keepers: Mastering Non-Human Identity Management

Apr 12, 2024 DevSecOps / Identity Management
Identities now transcend human boundaries. Within each line of code and every API call lies a non-human identity. These entities act as programmatic access keys, enabling authentication and facilitating interactions among systems and services, and they are essential for every API call, database query, or storage account access. Just as we depend on multi-factor authentication and passwords to safeguard human identities, a pressing question arises: how do we guarantee the security and integrity of these non-human counterparts? How do we authenticate, authorize, and regulate access for entities devoid of life but crucial for the functioning of critical systems? Let's break it down. The challenge: imagine a cloud-native application as a bustling metropolis of tiny neighborhoods known as microservices, all neatly packed into containers. These microservices function like diligent worker bees, each performing its designated task, be it processing data or verifying credentials.
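
One common mitigation is to replace long-lived static keys with short-lived, scoped credentials that a workload requests when it needs them. A minimal sketch of the idea; the signing scheme and names are illustrative assumptions, not any particular vendor's API:

```python
import base64
import hashlib
import hmac
import json
import time

# Assumption: in practice this key lives in a secrets manager, never in code.
SIGNING_KEY = b"replace-with-a-managed-secret"

def mint_service_token(service: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Issue a short-lived, scoped token for a non-human identity."""
    claims = {"sub": service, "scopes": scopes, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_service_token(token: str) -> dict:
    """Reject tampered or expired tokens before granting access."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

token = mint_service_token("billing-service", ["db:read"])
print(verify_service_token(token)["scopes"])  # -> ['db:read']
```

A compromised token of this shape expires in minutes and only grants the scopes it names, which bounds the blast radius compared with a static key.
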
GitHub Launches AI-Powered Autofix Tool to Assist Devs in Patching Security Flaws

Mar 21, 2024 Machine Learning / Software Security
GitHub on Wednesday announced that it's making available a feature called code scanning autofix in public beta for all Advanced Security customers to provide targeted recommendations in an effort to avoid introducing new security issues. "Powered by GitHub Copilot and CodeQL, code scanning autofix covers more than 90% of alert types in JavaScript, TypeScript, Java, and Python, and delivers code suggestions shown to remediate more than two-thirds of found vulnerabilities with little or no editing," GitHub's Pierre Tempel and Eric Tooley said. The capability, first previewed in November 2023, leverages a combination of CodeQL, Copilot APIs, and OpenAI GPT-4 to generate code suggestions. The Microsoft-owned subsidiary also said it plans to add support for more programming languages, including C# and Go, in the future. Code scanning autofix is designed to help developers resolve vulnerabilities as they code by generating potential fixes.
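
To make the idea concrete, here is the shape of remediation such a tool typically proposes; this is an illustrative before/after for a Python SQL injection alert, not actual autofix output:

```python
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # Flagged pattern: user input interpolated into the query text (SQL injection).
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchone()

def get_user_fixed(conn: sqlite3.Connection, username: str):
    # Suggested fix: a parameterized query keeps the input out of the SQL text.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchone()
```
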
From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Mar 19, 2024 Generative AI / Incident Response
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News. The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which are already being experimented with by threat actors to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets. The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK, associated with the APT28 hacking group, alongside its YARA rules, asking it to modify the source code to sidestep detection while keeping the original functionality intact.
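
String-based rules are brittle precisely because they anchor on fixed byte sequences. A toy Python stand-in for a string-based YARA rule (illustrative only, not actual YARA syntax) shows how renaming a few literals in a functionally identical variant defeats the match:

```python
# Toy stand-in for a string-based YARA rule: fire on fixed indicator strings.
INDICATORS = [b"STEELHOOK", b"powershell -enc"]

def string_rule_matches(sample: bytes) -> bool:
    return any(indicator in sample for indicator in INDICATORS)

original = b"... STEELHOOK payload ... powershell -enc QWxhZGRpbg=="
# Same behavior, rewritten identifiers and an equivalent flag spelling.
variant = original.replace(b"STEELHOOK", b"STLHK").replace(b"-enc", b"-EncodedCommand")

print(string_rule_matches(original))  # True  - the rule fires
print(string_rule_matches(variant))   # False - detection evaded, behavior unchanged
```
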
Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Mar 04, 2024 AI Security / Vulnerability
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered on the Hugging Face platform. These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims' machines through what is commonly referred to as a 'backdoor,'" senior security researcher David Cohen said. "This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state." Specifically, the rogue model initiates a reverse shell connection to 210.117.212[.]93.
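
The underlying issue is that pickle is effectively a small program, not just data: deserializing attacker-controlled bytes can run arbitrary code. A minimal, harmless demonstration of the mechanism:

```python
import os
import pickle

class Payload:
    # __reduce__ tells pickle how to "rebuild" the object at load time;
    # a malicious model file abuses it to call any function with any arguments.
    def __reduce__(self):
        return (os.system, ("echo code runs as soon as the file is loaded",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # the command executes here, before the "model" is ever used
```

A real payload swaps the echo for a reverse shell, which is exactly what JFrog observed.
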
New Hugging Face Vulnerability Exposes AI Models to Supply Chain Attacks

Feb 27, 2024 Supply Chain Attack / Data Security
Cybersecurity researchers have found that it's possible to compromise the Hugging Face Safetensors conversion service to ultimately hijack the models submitted by users and mount supply chain attacks. "It's possible to send malicious pull requests with attacker-controlled data from the Hugging Face service to any repository on the platform, as well as hijack any models that are submitted through the conversion service," HiddenLayer said in a report published last week. This, in turn, can be accomplished using a hijacked model that's meant to be converted by the service, thereby allowing malicious actors to request changes to any repository on the platform by masquerading as the conversion bot. Hugging Face is a popular collaboration platform that helps users host pre-trained machine learning models and datasets, as well as build, deploy, and train them. Safetensors is a format devised by the company to store tensors with security in mind, as opposed to pickle-based formats, which can execute arbitrary code on load.
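
For contrast with the pickle behavior above: safetensors stores raw tensor data plus a JSON header, so loading it is parsing, not execution. A minimal sketch assuming the `safetensors` and `torch` packages (the file name is a placeholder):

```python
import torch
from safetensors.torch import load_file, save_file

# Weights are written as plain tensor data: no executable payload can hide here.
weights = {"linear.weight": torch.randn(4, 4), "linear.bias": torch.zeros(4)}
save_file(weights, "model.safetensors")

state = load_file("model.safetensors")  # parses bytes, never runs them
print(state["linear.weight"].shape)     # torch.Size([4, 4])
```

The HiddenLayer finding targets the hosted conversion *service* around the format, not the format's load path itself, which is why the format's safety guarantee alone was not enough.
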
Three Tips to Protect Your Secrets from AI Accidents

Feb 26, 2024 Data Privacy / Machine Learning
Last year, the Open Worldwide Application Security Project (OWASP) published multiple versions of the "OWASP Top 10 For Large Language Models," reaching a 1.0 document in August and a 1.1 document in October. These documents demonstrate not only the rapidly evolving nature of large language models, but also the evolving ways in which they can be attacked and defended. In this article we're going to talk about four items in that top 10 that are most able to contribute to the accidental disclosure of secrets such as passwords, API keys, and more. We're already aware that LLMs can reveal secrets because it's happened. In early 2023, GitGuardian reported it found over 10 million secrets in public GitHub commits. GitHub's Copilot AI coding tool was trained on public commits, and in September of 2023, researchers at the University of Hong Kong published a paper on how they created an algorithm that generated 900 prompts designed to get Copilot to reveal secrets from its training data.
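
One practical guardrail is to scan text for obvious credential patterns before it ever reaches an LLM as a prompt, training data, or a commit. A minimal sketch with a deliberately tiny pattern set; real scanners such as GitGuardian's cover hundreds of secret types:

```python
import re

# Illustrative subset of well-known secret formats; real scanners know many more.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token":   re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(text: str) -> list[str]:
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

prompt = "Debug this: boto3.client('s3', aws_access_key_id='AKIAIOSFODNN7EXAMPLE')"
hits = find_secrets(prompt)
if hits:
    raise ValueError(f"refusing to send prompt: possible {hits} detected")
```
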
Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Feb 23, 2024 Red Teaming / Artificial Intelligence
Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team lead at Microsoft, said. The company said PyRIT could be used to assess the robustness of large language model (LLM) endpoints against different harm categories such as fabrication (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment). It can also be used to identify security harms ranging from malware generation to jailbreaking, as well as privacy harms like identity theft. PyRIT comes with five interfaces: target, datasets, scoring engine, support for multiple attack strategies, and a memory component.
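
PyRIT's own API is still evolving, so rather than guess at its signatures, here is a minimal stand-in for the loop it automates — send adversarial prompts from a dataset at a target, score the responses, and record everything in memory. All names here are illustrative, not PyRIT's actual classes:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class RedTeamHarness:
    # The same roles PyRIT's interfaces cover: target, dataset, scorer, memory.
    target: Callable[[str], str]   # the LLM endpoint under test
    scorer: Callable[[str], bool]  # True if the response is harmful
    memory: list[dict] = field(default_factory=list)

    def run(self, dataset: list[str]) -> list[dict]:
        for prompt in dataset:
            response = self.target(prompt)
            failed = self.scorer(response)
            self.memory.append({"prompt": prompt, "response": response, "failed": failed})
        return [entry for entry in self.memory if entry["failed"]]

# Toy target and scorer purely for illustration.
harness = RedTeamHarness(
    target=lambda p: "I cannot help with that.",
    scorer=lambda r: "cannot" not in r,  # refusal counts as a pass
)
print(harness.run(["How do I build malware?"]))  # -> [] (no failures recorded)
```
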
How to Achieve the Best Risk-Based Alerting (Bye-Bye SIEM)

Feb 19, 2024 Network Detection and Response
Did you know that Network Detection and Response (NDR) has become the most effective technology to detect cyber threats? In contrast to SIEM, NDR offers adaptive cybersecurity with reduced false positives and efficient threat response. NDR massively upgrades your security through risk-based alerting, prioritizing alerts based on the potential risk to your organization's systems and data. How? NDR's real-time analysis, machine learning, and threat intelligence provide immediate detection, reducing alert fatigue and enabling better decision-making. Why use risk-based alerting? Risk-based alerting is an approach where security alerts and responses are prioritized based on the level of risk they pose to an organization's systems and data.
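
The core of risk-based alerting is simple to express: weight each alert by detection severity, detection confidence, and the criticality of the asset involved, then triage from the top. A minimal sketch; the weights and asset names are illustrative assumptions:

```python
# Illustrative weights; a real NDR derives these from threat intel and asset inventory.
ASSET_CRITICALITY = {"domain-controller": 1.0, "payment-db": 0.9, "dev-laptop": 0.4}

def risk_score(severity: float, asset: str, confidence: float) -> float:
    """severity and confidence in [0, 1]; unknown assets get a middling weight."""
    return severity * confidence * ASSET_CRITICALITY.get(asset, 0.5)

alerts = [
    {"rule": "beaconing to rare domain", "severity": 0.7, "asset": "dev-laptop", "confidence": 0.6},
    {"rule": "DCSync-like replication", "severity": 0.9, "asset": "domain-controller", "confidence": 0.8},
]
alerts.sort(key=lambda a: risk_score(a["severity"], a["asset"], a["confidence"]), reverse=True)
print(alerts[0]["rule"])  # -> DCSync-like replication (highest-risk alert first)
```
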
Google Open Sources Magika: AI-Powered File Identification Tool

Feb 17, 2024 Artificial Intelligence / Data Protection
Google has announced that it's open-sourcing Magika, an artificial intelligence (AI)-powered tool to identify file types, to help defenders accurately detect binary and textual file types. "Magika outperforms conventional file identification methods, providing an overall 30% accuracy boost and up to 95% higher precision on traditionally hard to identify, but potentially problematic, content such as VBA, JavaScript, and PowerShell," the company said. The software uses a "custom, highly optimized deep-learning model" that enables the precise identification of file types within milliseconds. Magika implements inference functions using the Open Neural Network Exchange (ONNX). Google said it internally uses Magika at scale to help improve users' safety by routing Gmail, Drive, and Safe Browsing files to the proper security and content policy scanners. In November 2023, the tech giant unveiled RETVec (short for Resilient and Efficient Text Vectorizer).
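
Magika ships with a Python package alongside its CLI; assuming its `identify_bytes` helper (installable via `pip install magika`), basic usage looks roughly like this:

```python
from magika import Magika  # pip install magika

m = Magika()
# Classify raw content rather than trusting the file extension.
result = m.identify_bytes(b'#!/usr/bin/env python3\nprint("hello")\n')

# The model returns a content-type label plus a confidence score.
print(result.output.ct_label, result.output.score)  # e.g. "python", 0.99
```
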
Riding the AI Waves: The Rise of Artificial Intelligence to Combat Cyber Threats

Jan 29, 2024
In nearly every segment of our lives, AI (artificial intelligence) now makes a significant impact: it can deliver better healthcare diagnoses and treatments; detect and reduce the risk of financial fraud; improve inventory management; and serve up the right recommendation for a streaming movie on Friday night. However, one can also make a strong case that some of AI's most significant impacts are in cybersecurity. AI's ability to learn, adapt, and predict rapidly evolving threats has made it an indispensable tool in protecting the world's businesses and governments. From basic applications like spam filtering to advanced predictive analytics and AI-assisted response, AI serves a critical role on the front lines, defending our digital assets from cyber criminals. The future for AI in cybersecurity is not all rainbows and roses, however. Today we can see the early signs of a significant shift, driven by the democratization of AI technology.
TensorFlow CI/CD Flaw Exposed Supply Chain to Poisoning Attacks

Jan 18, 2024 Supply Chain Attacks / AI Security
Continuous integration and continuous delivery (CI/CD) misconfigurations discovered in the open-source TensorFlow machine learning framework could have been exploited to orchestrate supply chain attacks. The misconfigurations could be abused by an attacker to "conduct a supply chain compromise of TensorFlow releases on GitHub and PyPi by compromising TensorFlow's build agents via a malicious pull request," Praetorian researchers Adnan Khan and John Stawinski said in a report published this week. Successful exploitation of these issues could permit an external attacker to upload malicious releases to the GitHub repository, gain remote code execution on the self-hosted GitHub runner, and even retrieve a GitHub Personal Access Token (PAT) for the tensorflow-jenkins user. TensorFlow uses GitHub Actions to automate the software build, test, and deployment pipeline. Runners, which refer to machines that execute jobs in a GitHub Actions workflow, can be either self-hosted or GitHub-hosted.
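
The dangerous pattern in this class of bug is a workflow that runs code from fork pull requests on a persistent self-hosted runner with elevated permissions. A hedged YAML sketch of that misconfiguration class, with hypothetical names, not TensorFlow's actual configuration:

```yaml
# Hypothetical workflow illustrating the risky pattern described by Praetorian.
name: ci
on:
  pull_request_target:        # runs with the base repository's privileges...
jobs:
  build:
    runs-on: [self-hosted]    # ...on a persistent, shared self-hosted runner
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}  # ...checking out untrusted fork code
      - run: ./configure && make  # attacker-controlled scripts execute on the runner
```

Because the runner persists between jobs and the workflow token carries write access, a single malicious pull request can pivot from build-script execution to release tampering.
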