Generative AI | Breaking Cybersecurity News | The Hacker News
Apple Launches Private Cloud Compute for Privacy-Centric AI Processing

Jun 11, 2024 Cloud Computing / Artificial Intelligence
Apple has announced the launch of a "groundbreaking cloud intelligence system" called Private Cloud Compute (PCC) that's designed for processing artificial intelligence (AI) tasks in a privacy-preserving manner in the cloud. The tech giant described PCC as the "most advanced security architecture ever deployed for cloud AI compute at scale." PCC coincides with the arrival of new generative AI (GenAI) features – collectively dubbed Apple Intelligence , or AI for short – that the iPhone maker unveiled in its next generation of software, including iOS 18 , iPadOS 18 , and macOS Sequoia . All of the Apple Intelligence features, both the ones that run on-device and those that rely on PCC, leverage in-house generative models trained on "licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot." With PCC, the idea is to essentially offload complex requests that requir
The AI Debate: Google's Guidelines, Meta's GDPR Dispute, Microsoft's Recall Backlash

Jun 07, 2024 Artificial Intelligence / Privacy
Google is urging third-party Android app developers to incorporate generative artificial intelligence (GenAI) features in a responsible manner. The new guidance from the search and advertising giant is an effort to combat problematic content, including sexual content and hate speech, created through such tools. To that end, apps that generate content using AI must ensure they don't create Restricted Content, must have a mechanism for users to report or flag offensive information, and must be marketed in a manner that accurately represents the app's capabilities. App developers are also advised to rigorously test their AI models to ensure they respect user safety and privacy. "Be sure to test your apps across various user scenarios and safeguard them against prompts that could manipulate your generative AI feature to create harmful or offensive content," Prabhat Sharma, director of trust and safety for Google Play, Android, and Chrome, said . The development com
Why Regulated Industries are Turning to Military-Grade Cyber Defenses

Jun 14, 2024 Cybersecurity / Regulatory Compliance
As cyber threats loom large and data breaches continue to pose increasingly significant risks, organizations and industries that handle sensitive information and valuable assets make prime targets for cybercriminals seeking financial gain or strategic advantage. That is why many highly regulated sectors, from finance to utilities, are turning to military-grade cyber defenses to safeguard their operations. Regulatory Pressures Impacting Cyber Decisions Industries such as finance, healthcare, and government are subject to strict regulatory standards governing data privacy, security, and compliance. Non-compliance with these regulations can result in severe penalties, legal repercussions, and damage to reputation. To meet regulatory requirements and mitigate the ever-increasing risk, organizations are shifting to adopt more robust cybersecurity measures. Understanding the Increase of Threats Attacks on regulated industries have increased dramatically over the past 5 years, with o
How to Build Your Autonomous SOC Strategy

May 30, 2024 Endpoint Security / Threat Detection
Security leaders are in a tricky position trying to discern how much new AI-driven cybersecurity tools could actually benefit a security operations center (SOC). The hype about generative AI is still everywhere, but security teams have to live in reality. They face constantly incoming alerts from endpoint security platforms, SIEM tools, and phishing emails reported by internal users. Security teams also face an acute talent shortage. In this guide, we'll lay out practical steps organizations can take to automate more of their processes and build an autonomous SOC strategy. This should help address the acute talent shortage in security teams: by employing artificial intelligence and machine learning across a variety of techniques, these systems simulate the decision-making and investigative processes of human analysts. First, we'll define objectives for an autonomous SOC strategy and then consider key processes that could be automated. Next, we'll consider different AI and automation
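As a rough illustration of the decision logic such a system automates, here is a minimal, hypothetical triage sketch. The alert sources, scoring formula, and thresholds below are invented for illustration; they are not drawn from any particular product or from the guide itself:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str              # e.g. "edr", "siem", "phishing_report" (hypothetical labels)
    severity: int            # 1 (low) .. 5 (critical)
    asset_criticality: int   # 1 .. 5, importance of the affected host

def triage_score(alert: Alert) -> int:
    """Combine signal severity with asset value into a single queue priority."""
    return alert.severity * alert.asset_criticality

def route(alert: Alert) -> str:
    """Auto-close the low end, auto-enrich the middle, escalate the top."""
    score = triage_score(alert)
    if score >= 15:
        return "escalate_to_analyst"
    if score >= 6:
        return "auto_enrich_and_queue"
    return "auto_close_with_audit_log"

print(route(Alert("edr", 5, 4)))   # escalate_to_analyst
print(route(Alert("siem", 2, 2)))  # auto_close_with_audit_log
```

In practice the scoring step is where machine learning replaces the hand-tuned multiplication above, but the routing pattern of automating the bottom tiers so analysts only see the top one is the core of most autonomous SOC designs.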
CensysGPT: AI-Powered Threat Hunting for Cybersecurity Pros (Webinar)

May 10, 2024 Artificial Intelligence / Threat Hunting
Artificial intelligence (AI) is transforming cybersecurity, and those leading the charge are using it to outsmart increasingly advanced cyber threats. Join us for an exciting webinar, " The Future of Threat Hunting is Powered by Generative AI ," where you'll explore how AI tools are shaping the future of cybersecurity defenses. During the session, Censys Security Researcher Aidan Holland will introduce you to CensysGPT, a cutting-edge tool revolutionizing threat hunting and cybersecurity research. With CensysGPT, you can ask questions in plain language, understand competitor searches, and uncover insights from network data like never before. Here's why you should attend: Discover the latest:  Learn how generative AI is changing the game in cybersecurity and threat intelligence. Hunt smarter and faster:  See how CensysGPT helps you find threats quicker, reducing your organization's risk. Strengthen your defenses:  Find out how to incorporate AI into your se
N. Korea-linked Kimsuky Shifts to Compiled HTML Help Files in Ongoing Cyberattacks

Mar 24, 2024 Artificial Intelligence / Cyber Espionage
The North Korea-linked threat actor known as  Kimsuky  (aka Black Banshee, Emerald Sleet, or Springtail) has been observed shifting its tactics, leveraging Compiled HTML Help (CHM) files as vectors to deliver malware for harvesting sensitive data. Kimsuky, active since at least 2012, is known to target entities located in South Korea as well as North America, Asia, and Europe. According to Rapid7, attack chains have leveraged weaponized Microsoft Office documents, ISO files, and Windows shortcut (LNK) files, with the group also employing CHM files to  deploy malware  on  compromised hosts . The cybersecurity firm has attributed the activity to Kimsuky with moderate confidence, citing similar tradecraft observed in the past. "While originally designed for help documentation, CHM files have also been exploited for malicious purposes, such as distributing malware, because they can execute JavaScript when opened," the company  said . The CHM file is propagated within an IS
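Because attackers can rename a .chm payload to disguise it, defenders often inspect file content rather than extensions. A minimal sketch in Python, assuming only the well-known "ITSF" signature with which Compiled HTML Help files begin (the filtering pipeline itself is hypothetical):

```python
def looks_like_chm(data: bytes) -> bool:
    # Compiled HTML Help (.chm) files start with the ASCII signature "ITSF".
    return data[:4] == b"ITSF"

# Checking content rather than the file extension catches renamed attachments.
chm_sample = b"ITSF\x03\x00\x00\x00" + b"\x00" * 64
zip_sample = b"PK\x03\x04" + b"\x00" * 64   # a ZIP/Office document instead

print(looks_like_chm(chm_sample))  # True
print(looks_like_chm(zip_sample))  # False
```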
U.S. Sanctions Russians Behind 'Doppelganger' Cyber Influence Campaign

Mar 21, 2024 National Security / Data Privacy
The U.S. Treasury Department's Office of Foreign Assets Control (OFAC) on Wednesday announced sanctions against two 46-year-old Russian nationals and the respective companies they own for engaging in cyber influence operations. Ilya Andreevich Gambashidze (Gambashidze), the founder of the Moscow-based company Social Design Agency (SDA), and Nikolai Aleksandrovich Tupikin (Tupikin), the CEO and current owner of Russia-based Company Group Structura LLC (Structura), have been accused of providing services to the Russian government in connection with a "foreign malign influence campaign." The disinformation campaign is tracked by the broader cybersecurity community under the name  Doppelganger , which is known to target audiences in Europe and the U.S. using inauthentic news sites and social media accounts. "SDA and Structura have been identified as key actors of the campaign, responsible for providing [the Government of the Russian Federation] with a variety of servic
Generative AI Security - Secure Your Business in a World Powered by LLMs

Mar 20, 2024 Artificial intelligence / Webinar
Did you know that 79% of organizations are already leveraging Generative AI technologies? Much like the internet defined the 90s and the cloud revolutionized the 2010s, we are now in the era of Large Language Models (LLMs) and Generative AI. The potential of Generative AI is immense, yet it brings significant challenges, especially in security integration. Despite their powerful capabilities, LLMs must be approached with caution. A breach in an LLM's security could expose the data it was trained on, along with sensitive organizational and user information, presenting a considerable risk. Join us for an enlightening session with Elad Schulman, CEO & Co-Founder of Lasso Security, and Nir Chervoni, Booking.com's Head of Data Security. They will share their real-world experiences and insights into securing Generative AI technologies. Why Attend? This webinar is a must for IT professionals, security experts, business leaders, and anyone fascinated by the future of Generati
From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Mar 19, 2024 Generative AI / Incident Response
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future  said  in a new report shared with The Hacker News. The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which are  already being experimented  with by threat actors to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets. The cybersecurity firm said it submitted to an LLM a known piece of malware called  STEELHOOK  that's associated with the APT28 hacking group, alongside its YARA rules, asking it to modify the source code to sidestep detection such that the original functionality remained intact and the generated source code wa
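The weakness being exploited is easy to see in miniature. The toy rule and samples below are invented for illustration (they are not Recorded Future's rules or STEELHOOK code); they simply show why a purely string-based match fails once the source literals are rewritten while behavior is preserved:

```python
# A string-based rule fires only if its literal byte patterns appear in the sample.
RULE_STRINGS = [b"DownloadString", b"steelhook_exfil"]  # hypothetical indicators

def rule_matches(sample: bytes) -> bool:
    return any(s in sample for s in RULE_STRINGS)

original = b"... DownloadString('http://c2.example'); steelhook_exfil(creds) ..."

# An LLM-assisted rewrite can rename identifiers and split string literals
# while keeping the malware's logic intact, so the rule's literals never appear.
rewritten = b"... ('Download' + 'String') via reflection; sh_exf(creds) ..."

print(rule_matches(original))   # True
print(rule_matches(rewritten))  # False
```

Real YARA rules can also key on byte sequences, imports, and structural features, which is why the report frames this as lowering detection rates for string-based rules specifically rather than defeating YARA outright.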
Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Mar 04, 2024 AI Security / Vulnerability
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform. These include instances where loading a  pickle file  leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims' machines through what is commonly referred to as a 'backdoor,'" senior security researcher David Cohen  said . "This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state." Specifically, the rogue model initiates a reverse shell connection to 210.117.212[.]93, an IP address that belongs to the Korea Research
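The underlying pickle risk is straightforward to demonstrate with a benign payload: `pickle` stores whatever callable a class's `__reduce__` method returns, and invokes it at load time, so deserializing an untrusted model file is equivalent to executing its author's code. A minimal sketch (the class name is illustrative):

```python
import pickle

class Payload:
    # pickle records the (callable, args) pair returned by __reduce__; on
    # pickle.loads(), that callable is invoked with those arguments --
    # arbitrary code execution. A real attack would spawn a reverse shell
    # here; this demo just calls print().
    def __reduce__(self):
        return (print, ("code ran during pickle.loads()",))

blob = pickle.dumps(Payload())
obj = pickle.loads(blob)   # prints the message: deserializing == executing
```

This is why model hubs and serialization formats such as safetensors avoid pickle for untrusted weights: nothing in the bytes distinguishes a tensor archive from a loader-triggered payload until it has already run.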
Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Feb 23, 2024 Red Teaming / Artificial Intelligence
Microsoft has released an open access automation framework called  PyRIT  (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team lead at Microsoft,  said . The company said PyRIT could be used to assess the robustness of large language model (LLM) endpoints against different harm categories such as fabrication (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment). It can also be used to identify security harms ranging from malware generation to jailbreaking, as well as privacy harms like identity theft. PyRIT comes with five interfaces: target, datasets, scoring engine, the ability to support multiple attack strategies, and incorporating a memory component that can either take the
Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Jan 30, 2024 Generative AI / Data Privacy
Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of supposedly violating privacy laws in the region. "The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante)  said  in a statement on Monday. It also said it will "take account of the work in progress within the ad-hoc  task force  set up by the European Data Protection Board (EDPB) in its final determination on the case." The development comes nearly 10 months after the watchdog imposed a  temporary ban  on ChatGPT in the country, weeks after which OpenAI  announced  a number of privacy controls, including an  opt-out form  to remove one's personal data from being processed by the large language model (LLM). Access to the tool was subsequently reinstated in late April 2023. The Italian DPA said the latest findings, which h
Riding the AI Waves: The Rise of Artificial Intelligence to Combat Cyber Threats

Jan 29, 2024
In nearly every segment of our lives, AI (artificial intelligence) now makes a significant impact: It can deliver better healthcare diagnoses and treatments; detect and reduce the risk of financial fraud; improve inventory management; and serve up the right recommendation for a streaming movie on Friday night. However, one can also make a strong case that some of AI's most significant impacts are in cybersecurity. AI's ability to learn, adapt, and predict rapidly evolving threats has made it an indispensable tool in protecting the world's businesses and governments. From basic applications like spam filtering to advanced predictive analytics and AI-assisted response, AI serves a critical role on the front lines, defending our digital assets from cyber criminals. The future for AI in cybersecurity is not all rainbows and roses, however. Today we can see the early signs of a significant shift, driven by the democratization of AI technology. While AI continues to empower organizations
There is a Ransomware Armageddon Coming for Us All

Jan 11, 2024 Artificial Intelligence / Biometric Security
Generative AI will enable anyone to launch sophisticated phishing attacks that only next-generation MFA devices can stop. The least surprising headline from 2023 is that ransomware again set new records for the number of incidents and the damage inflicted. We saw new headlines every week, which included a who's-who of big-name organizations. If MGM, Johnson Controls, Clorox, Hanes Brands, Caesars Palace, and so many others cannot stop the attacks, how will anyone else? Phishing-driven ransomware is the cyber threat that looms larger and more dangerous than all others. CISA and Cisco report that 90% of data breaches are the result of phishing attacks, with monetary losses exceeding $10 billion in total. A report from Splunk revealed that 96 percent of companies fell victim to at least one phishing attack in the last 12 months and 83 percent suffered two or more. Protect your organization from phishing and ransomware by learning about the benefits of Next-Generation MFA. Download th