
Generative AI | Breaking Cybersecurity News | The Hacker News

Offensive AI: The Sine Qua Non of Cybersecurity

Jul 26, 2024 Digital Warfare / Cybersecurity Training
"Peace is the virtue of civilization. War is its crime. Yet it is often in the furnace of war that the sharpest tools of peace are forged." - Victor Hugo. In 1971, an unsettling message started appearing on several computers that comprised ARPANET, the precursor to what we now know as the Internet. The message, which read "I'm the Creeper: catch me if you can." was the output of a program named Creeper, which was developed by the famous programmer Bob Thomas while he worked at BBN Technologies. While Thomas's intentions were not malicious, the Creeper program represents the advent of what we now call a computer virus. The appearance of Creeper on ARPANET set the stage for the emergence of the first Antivirus software. While unconfirmed, it is believed that Ray Thomlinson, famously known for inventing email, developed Reaper, a program designed to remove Creeper from Infected Machines. The development of this tool used to defensively chase down and remove
Summary of "AI Leaders Spill Their Secrets" Webinar

Jul 19, 2024 Technology / Artificial Intelligence
Event Overview: The "AI Leaders Spill Their Secrets" webinar, hosted by Sigma Computing, featured prominent AI experts sharing their experiences and strategies for success in the AI industry. The panel included Michael Ward from Sardine, Damon Bryan from Hyperfinity, and Stephen Hillion from Astronomer, moderated by Zalak Trivedi, Sigma Computing's Product Manager.
Key Speakers and Their Backgrounds
1. Michael Ward: Senior Risk Data Analyst at Sardine. Over 25 years of experience in software engineering. Focuses on data science, analytics, and machine learning to prevent fraud and money laundering.
2. Damon Bryan: Co-founder and CTO at Hyperfinity. Specializes in decision intelligence software for retailers and brands. Background in data science, AI, and analytics, transitioning from consultancy to a full-fledged software company.
3. Stephen Hillion: SVP of Data and AI at Astronomer. Manages data science teams and focuses on the development and scaling of…
How to Increase Engagement with Your Cybersecurity Clients Through vCISO Reporting

Jul 22, 2024 vCISO / Business Security
As a vCISO, you are responsible for your client's cybersecurity strategy and risk governance. This incorporates multiple disciplines, from research to execution to reporting. Recently, we published a comprehensive playbook for vCISOs, "Your First 100 Days as a vCISO – 5 Steps to Success", which covers all the phases entailed in launching a successful vCISO engagement, along with recommended actions to take and step-by-step examples. Following the success of the playbook and the requests that have come in from the MSP/MSSP community, we decided to drill down into specific parts of vCISO reporting and provide more color and examples. In this article, we focus on how to create compelling narratives within a report, which has a significant impact on the overall MSP/MSSP value proposition. This article brings the highlights of a recent guided workshop we held, covering what makes a successful report and how it can be used to enhance engagement with your cybersecurity clients.
How MFA Failures are Fueling a 500% Surge in Ransomware Losses

Jul 02, 2024 Multi-Factor Authentication
The cybersecurity threat landscape has witnessed a dramatic and alarming rise in the average ransomware payment, an increase exceeding 500%. Sophos, a global leader in cybersecurity, revealed in its annual "State of Ransomware 2024" report that the average ransom payment has increased 500% in the last year, with organizations that paid a ransom reporting an average payment of $2 million, up from $400,000 in 2023. Separately, RISK & INSURANCE, a leading media source for the insurance industry, recently reported that the median ransom demand soared to $20 million in 2023 from $1.4 million in 2022, and the median payment skyrocketed to $6.5 million in 2023 from $335,000 in 2022, much more than 500%. This shocking surge is a testament to the increasing sophistication of cyberattacks and the significant vulnerabilities inherent in outdated security methods. The most significant factor contributing to this trend is a broad reliance on twenty-year-old, legacy Multi-Factor Authentication…
Free OAuth Investigation Checklist - How to Uncover Risky or Malicious Grants

Nudge Security | SaaS Security / Supply Chain
OAuth grants provide yet another way for attackers to compromise identities. Download our free checklist to learn what to look for and where when reviewing OAuth grants for potential risks.
Apple Launches Private Cloud Compute for Privacy-Centric AI Processing

Jun 11, 2024 Cloud Computing / Artificial Intelligence
Apple has announced the launch of a "groundbreaking cloud intelligence system" called Private Cloud Compute (PCC) that's designed for processing artificial intelligence (AI) tasks in a privacy-preserving manner in the cloud. The tech giant described PCC as the "most advanced security architecture ever deployed for cloud AI compute at scale." PCC coincides with the arrival of new generative AI (GenAI) features – collectively dubbed Apple Intelligence, or AI for short – that the iPhone maker unveiled in its next generation of software, including iOS 18, iPadOS 18, and macOS Sequoia. All of the Apple Intelligence features, both the ones that run on-device and those that rely on PCC, leverage in-house generative models trained on "licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot." With PCC, the idea is to essentially offload complex requests that require…
The AI Debate: Google's Guidelines, Meta's GDPR Dispute, Microsoft's Recall Backlash

Jun 07, 2024 Artificial Intelligence / Privacy
Google is urging third-party Android app developers to incorporate generative artificial intelligence (GenAI) features in a responsible manner. The new guidance from the search and advertising giant is an effort to combat problematic content, including sexual content and hate speech, created through such tools. To that end, apps that generate content using AI must ensure they don't create Restricted Content, have a mechanism for users to report or flag offensive information, and market them in a manner that accurately represents the app's capabilities. App developers are also being recommended to rigorously test their AI models to ensure they respect user safety and privacy. "Be sure to test your apps across various user scenarios and safeguard them against prompts that could manipulate your generative AI feature to create harmful or offensive content," Prabhat Sharma, director of trust and safety for Google Play, Android, and Chrome, said. The development comes…
How to Build Your Autonomous SOC Strategy

May 30, 2024 Endpoint Security / Threat Detection
Security leaders are in a tricky position trying to discern how much new AI-driven cybersecurity tools could actually benefit a security operations center (SOC). The hype about generative AI is still everywhere, but security teams have to live in reality. They face constantly incoming alerts from endpoint security platforms, SIEM tools, and phishing emails reported by internal users. Security teams also face an acute talent shortage. In this guide, we'll lay out practical steps organizations can take to automate more of their processes and build an autonomous SOC strategy. This should help address the acute talent shortage in security teams: by employing artificial intelligence and machine learning with a variety of techniques, these systems simulate the decision-making and investigative processes of human analysts. First, we'll define objectives for an autonomous SOC strategy and then consider key processes that could be automated. Next, we'll consider different AI and automation…
CensysGPT: AI-Powered Threat Hunting for Cybersecurity Pros (Webinar)

May 10, 2024 Artificial Intelligence / Threat Hunting
Artificial intelligence (AI) is transforming cybersecurity, and those leading the charge are using it to outsmart increasingly advanced cyber threats. Join us for an exciting webinar, "The Future of Threat Hunting is Powered by Generative AI," where you'll explore how AI tools are shaping the future of cybersecurity defenses. During the session, Censys Security Researcher Aidan Holland will introduce you to CensysGPT, a cutting-edge tool revolutionizing threat hunting and cybersecurity research. With CensysGPT, you can ask questions in plain language, understand competitor searches, and uncover insights from network data like never before. Here's why you should attend: Discover the latest: Learn how generative AI is changing the game in cybersecurity and threat intelligence. Hunt smarter and faster: See how CensysGPT helps you find threats quicker, reducing your organization's risk. Strengthen your defenses: Find out how to incorporate AI into your security…
N. Korea-linked Kimsuky Shifts to Compiled HTML Help Files in Ongoing Cyberattacks

Mar 24, 2024 Artificial Intelligence / Cyber Espionage
The North Korea-linked threat actor known as Kimsuky (aka Black Banshee, Emerald Sleet, or Springtail) has been observed shifting its tactics, leveraging Compiled HTML Help (CHM) files as vectors to deliver malware for harvesting sensitive data. Kimsuky, active since at least 2012, is known to target entities located in South Korea as well as North America, Asia, and Europe. According to Rapid7, attack chains have leveraged weaponized Microsoft Office documents, ISO files, and Windows shortcut (LNK) files, with the group also employing CHM files to deploy malware on compromised hosts. The cybersecurity firm has attributed the activity to Kimsuky with moderate confidence, citing similar tradecraft observed in the past. "While originally designed for help documentation, CHM files have also been exploited for malicious purposes, such as distributing malware, because they can execute JavaScript when opened," the company said. The CHM file is propagated within an ISO…
U.S. Sanctions Russians Behind 'Doppelganger' Cyber Influence Campaign

Mar 21, 2024 National Security / Data Privacy
The U.S. Treasury Department's Office of Foreign Assets Control (OFAC) on Wednesday announced sanctions against two 46-year-old Russian nationals and the respective companies they own for engaging in cyber influence operations. Ilya Andreevich Gambashidze (Gambashidze), the founder of the Moscow-based company Social Design Agency (SDA), and Nikolai Aleksandrovich Tupikin (Tupikin), the CEO and current owner of Russia-based Company Group Structura LLC (Structura), have been accused of providing services to the Russian government in connection to a "foreign malign influence campaign." The disinformation campaign is tracked by the broader cybersecurity community under the name Doppelganger, which is known to target audiences in Europe and the U.S. using inauthentic news sites and social media accounts. "SDA and Structura have been identified as key actors of the campaign, responsible for providing [the Government of the Russian Federation] with a variety of services…
Generative AI Security - Secure Your Business in a World Powered by LLMs

Mar 20, 2024 Artificial Intelligence / Webinar
Did you know that 79% of organizations are already leveraging Generative AI technologies? Much like the internet defined the 90s and the cloud revolutionized the 2010s, we are now in the era of Large Language Models (LLMs) and Generative AI. The potential of Generative AI is immense, yet it brings significant challenges, especially in security integration. Despite their powerful capabilities, LLMs must be approached with caution. A breach in an LLM's security could expose the data it was trained on, along with sensitive organizational and user information, presenting a considerable risk. Join us for an enlightening session with Elad Schulman, CEO & Co-Founder of Lasso Security, and Nir Chervoni, Booking.com's Head of Data Security. They will share their real-world experiences and insights into securing Generative AI technologies. Why Attend? This webinar is a must for IT professionals, security experts, business leaders, and anyone fascinated by the future of Generative AI…
From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Mar 19, 2024 Generative AI / Incident Response
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News. The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which are already being experimented with by threat actors to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets. The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK that's associated with the APT28 hacking group, alongside its YARA rules, asking it to modify the source code to sidestep detection such that the original functionality remained intact and the generated source code was…
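The evasion described above targets literal string matches. A toy Python sketch (not Recorded Future's actual experiment; the signature string and identifiers here are invented for illustration) shows why rewriting source strings defeats that style of rule:

```python
# Toy illustration of string-based detection and its weakness.
# SIGNATURE stands in for a literal that a string-based YARA rule
# would match on; the identifier names are hypothetical.
SIGNATURE = "stealer_upload_url"

def string_rule_matches(source: str) -> bool:
    # A string-based rule fires only if the exact literal is present.
    return SIGNATURE in source

original = 'url = "stealer_upload_url"  # exfiltration endpoint'
# An LLM-assisted rewrite renames the string while preserving behavior:
rewritten = original.replace("stealer_upload_url", "sync_endpoint")

assert string_rule_matches(original) is True   # rule fires
assert string_rule_matches(rewritten) is False  # same logic, no match
```

Behavioral and structural rules are harder to evade this way, which is why the report frames string rewriting as lowering, not eliminating, detection rates.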
Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Mar 04, 2024 AI Security / Vulnerability
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform. These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims' machines through what is commonly referred to as a 'backdoor,'" senior security researcher David Cohen said. "This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state." Specifically, the rogue model initiates a reverse shell connection to 210.117.212[.]93, an IP address that belongs to the Korea Research…
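The root cause here is Python's pickle format, which lets a serialized object name an arbitrary callable to run at load time. A minimal, harmless sketch (not JFrog's actual payload; the class name and message are invented, and a benign print() stands in for the reverse shell):

```python
import pickle

# A pickled object can specify, via __reduce__, a callable to invoke
# during deserialization. Malicious model files abuse this to spawn
# shells; this benign stand-in just calls print().
class EvilPayload:  # hypothetical class name for illustration
    def __reduce__(self):
        return (print, ("payload ran during pickle.load()",))

blob = pickle.dumps(EvilPayload())
obj = pickle.loads(blob)  # merely loading the blob executes the payload
# obj is print()'s return value (None), not an EvilPayload instance
```

Weights-only formats such as safetensors were designed to avoid exactly this class of issue, which is why scanners flag pickle-based model files as risky by default.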
Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Feb 23, 2024 Red Teaming / Artificial Intelligence
Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team lead at Microsoft, said. The company said PyRIT could be used to assess the robustness of large language model (LLM) endpoints against different harm categories such as fabrication (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment). It can also be used to identify security harms ranging from malware generation to jailbreaking, as well as privacy harms like identity theft. PyRIT comes with five interfaces: target, datasets, a scoring engine, support for multiple attack strategies, and a memory component that can either take the…
Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Jan 30, 2024 Generative AI / Data Privacy
Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of supposedly violating privacy laws in the region. "The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday. It also said it will "take account of the work in progress within the ad-hoc task force set up by the European Data Protection Board (EDPB) in its final determination on the case." The development comes nearly 10 months after the watchdog imposed a temporary ban on ChatGPT in the country, weeks after which OpenAI announced a number of privacy controls, including an opt-out form to remove one's personal data from being processed by the large language model (LLM). Access to the tool was subsequently reinstated in late April 2023. The Italian DPA said the latest findings, which…