
artificial intelligence | Breaking Cybersecurity News | The Hacker News

Microsoft Warns: North Korean Hackers Turn to AI-Fueled Cyber Espionage

Apr 22, 2024 Cryptocurrency / Artificial Intelligence
Microsoft has revealed that North Korea-linked state-sponsored cyber actors have begun to use artificial intelligence (AI) to make their operations more effective and efficient. "They are learning to use tools powered by AI large language models (LLM) to make their operations more efficient and effective," the tech giant said in its latest report on East Asia hacking groups. The company specifically highlighted a group named Emerald Sleet (aka Kimsuky or TA427), which has been observed using LLMs to bolster spear-phishing efforts aimed at Korean Peninsula experts. The adversary is also said to have relied on the latest advancements in AI to research vulnerabilities and conduct reconnaissance on organizations and experts focused on North Korea, joining hacking crews from China, who have turned to AI-generated content for influence operations. It further employed LLMs to troubleshoot technical issues, conduct basic scripting tasks, and draft content for spear-phishing.
AI Copilot: Launching Innovation Rockets, But Beware of the Darkness Ahead

Apr 15, 2024 Secure Coding / Artificial Intelligence
Imagine a world where the software that powers your favorite apps, secures your online transactions, and keeps your digital life safe could be outsmarted and taken over by a cleverly disguised piece of code. This isn't a plot from the latest cyber-thriller; it has been a reality for years. How this will change – in a positive or negative direction – as artificial intelligence (AI) takes on a larger role in software development is one of the big uncertainties related to this brave new world. In an era where AI promises to revolutionize how we live and work, the conversation about its security implications cannot be sidelined. As we increasingly rely on AI for tasks ranging from mundane to mission-critical, the question is no longer just, "Can AI boost cybersecurity?" (sure!), but also "Can AI be hacked?" (yes!), "Can one use AI to hack?" (of course!), and "Will AI produce secure software?" (well…). This thought leadership article is about the latter.
Recover from Ransomware in 5 Minutes—We will Teach You How!

Apr 18, 2024 Cyber Resilience / Data Protection
Super Low RPO with Continuous Data Protection: Dial Back to Just Seconds Before an Attack. Zerto, a Hewlett Packard Enterprise company, can help you detect and recover from ransomware in near real-time. This solution leverages continuous data protection (CDP) to ensure all workloads have the lowest recovery point objective (RPO) possible. The most valuable thing about CDP is that it does not use snapshots, agents, or any other periodic data protection methodology. Zerto has no impact on production workloads and can achieve RPOs in the region of 5-15 seconds across thousands of virtual machines simultaneously. For example, the environment in the image below has nearly 1,000 VMs being protected with an average RPO of just six seconds! Application-Centric Protection: Group Your VMs to Gain Application-Level Control. You can protect your VMs with the Zerto application-centric approach using Virtual Protection Groups (VPGs).
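To make the metric concrete (this is an illustrative sketch of the RPO concept, not Zerto code; the journal contents and timestamps are hypothetical): RPO exposure is simply the age of the newest recoverable point, so a journal of near-continuous checkpoints keeps it to seconds rather than the hours a snapshot schedule allows.

```python
from datetime import datetime, timedelta

# Hypothetical CDP journal: write checkpoints captured every few seconds.
# A snapshot-based scheme would space these minutes or hours apart.
journal = [
    datetime(2024, 4, 18, 12, 0, 0) + timedelta(seconds=5 * i)
    for i in range(100)
]

def current_rpo(now: datetime, checkpoints: list[datetime]) -> timedelta:
    """RPO exposure = time elapsed since the newest recoverable point."""
    latest = max(cp for cp in checkpoints if cp <= now)
    return now - latest

now = datetime(2024, 4, 18, 12, 8, 18)
print(current_rpo(now, journal))  # 0:00:03 -> seconds of data at risk
```

With five-second checkpoints the worst-case data loss is bounded by the checkpoint interval, which is the property the teaser's "average RPO of just six seconds" figure describes.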
AI-as-a-Service Providers Vulnerable to PrivEsc and Cross-Tenant Attacks

Apr 05, 2024 Artificial Intelligence / Supply Chain Attack
New research has found that artificial intelligence (AI)-as-a-service providers such as Hugging Face are susceptible to two critical risks that could allow threat actors to escalate privileges, gain cross-tenant access to other customers' models, and even take over the continuous integration and continuous deployment (CI/CD) pipelines. "Malicious models represent a major risk to AI systems, especially for AI-as-a-service providers because potential attackers may leverage these models to perform cross-tenant attacks," Wiz researchers Shir Tamari and Sagi Tzadik said. "The potential impact is devastating, as attackers may be able to access the millions of private AI models and apps stored within AI-as-a-service providers." The development comes as machine learning pipelines have emerged as a brand new supply chain attack vector, with repositories like Hugging Face becoming an attractive target for staging adversarial attacks designed to glean sensitive information.
N. Korea-linked Kimsuky Shifts to Compiled HTML Help Files in Ongoing Cyberattacks

Mar 24, 2024 Artificial Intelligence / Cyber Espionage
The North Korea-linked threat actor known as Kimsuky (aka Black Banshee, Emerald Sleet, or Springtail) has been observed shifting its tactics, leveraging Compiled HTML Help (CHM) files as vectors to deliver malware for harvesting sensitive data. Kimsuky, active since at least 2012, is known to target entities located in South Korea as well as North America, Asia, and Europe. According to Rapid7, attack chains have leveraged weaponized Microsoft Office documents, ISO files, and Windows shortcut (LNK) files, with the group also employing CHM files to deploy malware on compromised hosts. The cybersecurity firm has attributed the activity to Kimsuky with moderate confidence, citing similar tradecraft observed in the past. "While originally designed for help documentation, CHM files have also been exploited for malicious purposes, such as distributing malware, because they can execute JavaScript when opened," the company said.
Generative AI Security - Secure Your Business in a World Powered by LLMs

Mar 20, 2024 Artificial Intelligence / Webinar
Did you know that 79% of organizations are already leveraging Generative AI technologies? Much like the internet defined the 90s and the cloud revolutionized the 2010s, we are now in the era of Large Language Models (LLMs) and Generative AI. The potential of Generative AI is immense, yet it brings significant challenges, especially in security integration. Despite their powerful capabilities, LLMs must be approached with caution. A breach in an LLM's security could expose the data it was trained on, along with sensitive organizational and user information, presenting a considerable risk. Join us for an enlightening session with Elad Schulman, CEO & Co-Founder of Lasso Security, and Nir Chervoni, Booking.com's Head of Data Security. They will share their real-world experiences and insights into securing Generative AI technologies. Why Attend? This webinar is a must for IT professionals, security experts, business leaders, and anyone fascinated by the future of Generative AI.
Crafting and Communicating Your Cybersecurity Strategy for Board Buy-In

Mar 19, 2024 Regulatory Compliance / Cloud Security
In an era where digital transformation drives business across sectors, cybersecurity has transcended its traditional operational role to become a cornerstone of corporate strategy and risk management. This evolution demands a shift in how cybersecurity leaders—particularly Chief Information Security Officers (CISOs)—articulate the value and urgency of cybersecurity investments to their boards. The Strategic Importance of Cybersecurity: Cybersecurity is no longer a backroom IT concern but a pivotal agenda item in boardroom discussions. The surge in cyber threats, coupled with their capacity to disrupt business operations, erode customer trust, and incur significant financial losses, underscores the strategic value of robust cybersecurity measures. Moreover, as companies increasingly integrate digital technologies into their core operations, the significance of cybersecurity in safeguarding corporate assets and reputation continues to rise.
Ex-Google Engineer Arrested for Stealing AI Technology Secrets for China

Mar 07, 2024 Artificial Intelligence / Corporate Espionage
The U.S. Department of Justice (DoJ) announced the indictment of a 38-year-old Chinese national and California resident for allegedly stealing proprietary information from Google while covertly working for two China-based tech companies. Linwei Ding (aka Leon Ding), a former Google engineer who was arrested on March 6, 2024, "transferred sensitive Google trade secrets and other confidential information from Google's network to his personal account while secretly affiliating himself with PRC-based companies in the AI industry," the DoJ said. The defendant is said to have pilfered from Google over 500 confidential files containing artificial intelligence (AI) trade secrets with the goal of passing them on to two unnamed Chinese companies looking to gain an edge in the ongoing AI race. "While Linwei Ding was employed as a software engineer at Google, he was secretly working to enrich himself and two companies based in the People's Republic of China."
Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

Mar 05, 2024 Malware / Artificial Intelligence
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. "The number of infected devices decreased slightly in mid- and late summer but grew significantly between August and September," the Singapore-headquartered cybersecurity company said in its Hi-Tech Crime Trends 2023/2024 report published last week. Between June and October 2023, more than 130,000 unique hosts with access to OpenAI ChatGPT were infiltrated, a 36% increase over what was observed during the first five months of 2023. The breakdown by the top three stealer families is below:
LummaC2 - 70,484 hosts
Raccoon - 22,468 hosts
RedLine - 15,970 hosts
From 500 to 5000 Employees - Securing 3rd Party App-Usage in Mid-Market Companies

Mar 04, 2024 SaaS Security / Vulnerability Assessment
A company's lifecycle stage, size, and state have a significant impact on its security needs, policies, and priorities. This is particularly true for modern mid-market companies that are either experiencing or have experienced rapid growth. As requirements and tasks continue to accumulate and malicious actors remain active around the clock, budgets are often stagnant at best. Yet, it is crucial to keep track of the tools and solutions that employees are introducing, the data and know-how shared through these tools, and to ensure that these processes are secure. This need is even more pronounced in today's dynamic and interconnected world, where third-party applications and solutions can be easily accessed and onboarded. The potential damage of losing control over the numerous applications with access and permissions to your data requires no explanation. Security leaders in mid-market companies face a unique set of challenges that demand a distinct approach to overcome.
Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Mar 04, 2024 AI Security / Vulnerability
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered on the Hugging Face platform. These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims' machines through what is commonly referred to as a 'backdoor,'" senior security researcher David Cohen said. "This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state." Specifically, the rogue model initiates a reverse shell connection to 210.117.212[.]93.
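Why does merely loading a pickle file run code? Pickle's `__reduce__` protocol lets a serialized object name any callable to be invoked at load time. The benign sketch below (class name and payload are illustrative, not JFrog's sample) swaps in `str.upper` where a malicious model would use `os.system` or a reverse-shell routine:

```python
import pickle

class NotAModel:
    # pickle consults __reduce__ when serializing; returning (callable, args)
    # makes deserialization CALL that callable. A malicious "model" file
    # replaces the harmless str.upper here with os.system or a shell payload.
    def __reduce__(self):
        return (str.upper, ("arbitrary code ran at load time",))

blob = pickle.dumps(NotAModel())

# Loading does not rebuild a NotAModel instance; it executes the callable.
result = pickle.loads(blob)
print(result)  # ARBITRARY CODE RAN AT LOAD TIME
```

This is why loading untrusted pickle-based model files is equivalent to running untrusted code, and why safer serialization formats such as safetensors avoid an executable deserialization step.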
Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Feb 23, 2024 Red Teaming / Artificial Intelligence
Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team lead at Microsoft, said. The company said PyRIT could be used to assess the robustness of large language model (LLM) endpoints against different harm categories such as fabrication (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment). It can also be used to identify security harms ranging from malware generation to jailbreaking, as well as privacy harms like identity theft. PyRIT comes with five interfaces: target, datasets, scoring engine, support for multiple attack strategies, and a memory component.
Google Open Sources Magika: AI-Powered File Identification Tool

Feb 17, 2024 Artificial Intelligence / Data Protection
Google has announced that it's open-sourcing Magika, an artificial intelligence (AI)-powered tool to identify file types, to help defenders accurately detect binary and textual file types. "Magika outperforms conventional file identification methods, providing an overall 30% accuracy boost and up to 95% higher precision on traditionally hard to identify, but potentially problematic content such as VBA, JavaScript, and PowerShell," the company said. The software uses a "custom, highly optimized deep-learning model" that enables the precise identification of file types within milliseconds. Magika implements inference functions using the Open Neural Network Exchange (ONNX). Google said it internally uses Magika at scale to help improve users' safety by routing Gmail, Drive, and Safe Browsing files to the proper security and content policy scanners. In November 2023, the tech giant unveiled RETVec (short for Resilient and Efficient Text Vectorizer).
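For context on the "conventional file identification methods" Magika is measured against: the traditional approach matches leading "magic bytes" against a signature table. The stdlib sketch below is not Magika's code, and its four-entry table is a tiny illustrative subset of what real tools ship; it also shows why textual formats like VBA, JavaScript, and PowerShell defeat this approach:

```python
# Conventional content-type detection keys on leading "magic bytes".
# This signature table is a small illustrative subset, not Magika's model.
MAGIC_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"%PDF-": "pdf",
    b"MZ": "windows-executable",
    b"PK\x03\x04": "zip",
}

def identify_by_magic(data: bytes) -> str:
    """Return a coarse label from leading bytes, or 'unknown'."""
    for magic, label in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return label
    # Textual formats (VBA, JavaScript, PowerShell) carry no magic bytes,
    # which is exactly the gap a learned classifier like Magika targets.
    return "unknown"

print(identify_by_magic(b"%PDF-1.7 ..."))                 # pdf
print(identify_by_magic(b"Set shell = CreateObject(1)"))  # unknown
```

Where byte signatures fall through to "unknown", Magika's deep-learning model classifies the content itself, which is the source of the precision gains Google reports on script-like formats.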
Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks

Feb 14, 2024 Artificial Intelligence / Cyber Attack
Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations. The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted efforts by five state-affiliated actors that used their AI services to perform malicious cyber activities, terminating those actors' assets and accounts. "Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets' jobs, professional networks, and other relationships," Microsoft said in a report shared with The Hacker News. While no significant or novel attacks employing the LLMs have been detected to date, adversarial exploration of AI technologies has transcended various phases of the attack chain.