#1 Trusted Cybersecurity News Platform
Followed by 5.20+ million
The Hacker News Logo
Subscribe – Get Latest News

Generative AI | Breaking Cybersecurity News | The Hacker News

Category — Generative AI
Why Phishing-Resistant MFA Is No Longer Optional: The Hidden Risks of Legacy MFA

Why Phishing-Resistant MFA Is No Longer Optional: The Hidden Risks of Legacy MFA

Oct 24, 2024 Ransomware / Generative AI
Sometimes, it turns out that the answers we struggled so hard to find were sitting right in front of us for so long that we somehow overlooked them. When the Department of Homeland Security, through the Cybersecurity and Infrastructure Security Agency (CISA), in coordination with the FBI, issues a cybersecurity warning and prescribes specific action, it's a pretty good idea to at least read the joint advisory. In their advisory AA24-242A, DHS/CISA and the FBI told the entire cybercriminal-stopping world that to stop ransomware attacks, organizations needed to implement phishing-resistant MFA and ditch SMS-based OTP MFA.  The Best Advice I Never Followed  This year, we have experienced an astonishing surge in ransomware payments, with the average payment increasing by a staggering 500%. Per the "State of Ransomware 2024" report from cybersecurity leader Sophos, the average ransom has jumped by 5X reaching $2 million from $400,000 last year. Even more troubling, RISK &...
5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage

5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage

Oct 01, 2024 Generative AI / Data Protection
Since its emergence, Generative AI has revolutionized enterprise productivity. GenAI tools enable faster and more effective software development, financial analysis, business planning, and customer engagement. However, this business agility comes with significant risks, particularly the potential for sensitive data leakage. As organizations attempt to balance productivity gains with security concerns, many have been forced to choose between unrestricted GenAI usage to banning it altogether. A new e-guide by LayerX titled 5 Actionable Measures to Prevent Data Leakage Through Generative AI Tools is designed to help organizations navigate the challenges of GenAI usage in the workplace. The guide offers practical steps for security managers to protect sensitive corporate data while still reaping the productivity benefits of GenAI tools like ChatGPT. This approach is intended to allow companies to strike the right balance between innovation and security. Why Worry About ChatGPT? The e...
7 PAM Best Practices to Secure Hybrid and Multi-Cloud Environments

7 PAM Best Practices to Secure Hybrid and Multi-Cloud Environments

Dec 04, 2024Risk Management / Zero Trust
Are you using the cloud or thinking about transitioning? Undoubtedly, multi-cloud and hybrid environments offer numerous benefits for organizations. However, the cloud's flexibility, scalability, and efficiency come with significant risk — an expanded attack surface. The decentralization that comes with utilizing multi-cloud environments can also lead to limited visibility into user activity and poor access management.  Privileged accounts with access to your critical systems and sensitive data are among the most vulnerable elements in cloud setups. When mismanaged, these accounts open the doors to unauthorized access, potential malicious activity, and data breaches. That's why strong privileged access management (PAM) is indispensable. PAM plays an essential role in addressing the security challenges of complex infrastructures by enforcing strict access controls and managing the life cycle of privileged accounts. By employing PAM in hybrid and cloud environments, you're not...
The AI Hangover is Here – The End of the Beginning

The AI Hangover is Here – The End of the Beginning

Aug 12, 2024 AI Technology / Machine Learning
After a good year of sustained exuberance, the hangover is finally here. It's a gentle one (for now), as the market corrects the share price of the major players (like Nvidia, Microsoft, and Google), while other players reassess the market and adjust priorities. Gartner calls it the trough of disillusionment , when interest wanes and implementations fail to deliver the promised breakthroughs. Producers of the technology shake out or fail. Investment continues only if the surviving providers improve their products to the satisfaction of early adopters.  Let's be clear, this was always going to be the case: the post-human revolution promised by the AI cheerleaders was never a realistic goal, and the incredible excitement triggered by the early LLMs was not based on market success.  AI is here to stay  What's next for AI then? Well, if it follows the Gartner hype cycle, the deep crash is followed by the slope of enlightenment where the maturing technology regains its foo...
cyber security

Breaking Barriers: Strategies to Unite AppSec and R&D for Success

websiteBackslashApplication Security
Tackle common challenges to make security and innovation work seamlessly.
Offensive AI: The Sine Qua Non of Cybersecurity

Offensive AI: The Sine Qua Non of Cybersecurity

Jul 26, 2024 Digital Warfare / Cybersecurity Training
"Peace is the virtue of civilization. War is its crime. Yet it is often in the furnace of war that the sharpest tools of peace are forged." - Victor Hugo. In 1971, an unsettling message started appearing on several computers that comprised ARPANET, the precursor to what we now know as the Internet. The message, which read "I'm the Creeper: catch me if you can." was the output of a program named Creeper, which was developed by the famous programmer Bob Thomas while he worked at BBN Technologies. While Thomas's intentions were not malicious, the Creeper program represents the advent of what we now call a computer virus. The appearance of Creeper on ARPANET set the stage for the emergence of the first Antivirus software. While unconfirmed, it is believed that Ray Thomlinson, famously known for inventing email, developed Reaper, a program designed to remove Creeper from Infected Machines. The development of this tool used to defensively chase down and remove ...
Summary of "AI Leaders Spill Their Secrets" Webinar

Summary of "AI Leaders Spill Their Secrets" Webinar

Jul 19, 2024 Technology / Artificial Intelligence
Event Overview The " AI Leaders Spill Their Secrets " webinar, hosted by Sigma Computing, featured prominent AI experts sharing their experiences and strategies for success in the AI industry. The panel included Michael Ward from Sardine, Damon Bryan from Hyperfinity, and Stephen Hillian from Astronomer, moderated by Zalak Trivedi, Sigma Computing's Product Manager. Key Speakers and Their Backgrounds 1. Michael Ward Senior Risk Data Analyst at Sardine. Over 25 years of experience in software engineering. Focuses on data science, analytics, and machine learning to prevent fraud and money laundering. 2. Damon Bryan Co-founder and CTO at Hyperfinity. Specializes in decision intelligence software for retailers and brands. Background in data science, AI, and analytics, transitioning from consultancy to a full-fledged software company. 3. Stephen Hillion SVP of Data and AI at Astronomer. Manages data science teams and focuses on the development and scaling of...
How MFA Failures are Fueling a 500% Surge in Ransomware Losses

How MFA Failures are Fueling a 500% Surge in Ransomware Losses

Jul 02, 2024 Multi-Factor Authentication
The cybersecurity threat landscape has witnessed a dramatic and alarming rise in the average ransomware payment, an increase exceeding 500%. Sophos, a global leader in cybersecurity, revealed in its annual "State of Ransomware 2024" report that the average ransom payment has increased 500% in the last year with organizations that paid a ransom reporting an average payment of $2 million, up from $400,000 in 2023. Separately, RISK & INSURANCE, a leading media source for the insurance industry reported recently that in 2023 the median ransom demand soared to $20 million in 2023 from $1.4 million in 2022, and payment skyrocketed to $6.5 million in 2023 from $335,000 in 2022, much more than 500%. This shocking surge is a testament to the increasing sophistication of cyberattacks and the significant vulnerabilities inherent in outdated security methods. The most significant factor contributing to this trend is a broad reliance on twenty-year-old, legacy Multi-Factor Authentic...
Apple Launches Private Cloud Compute for Privacy-Centric AI Processing

Apple Launches Private Cloud Compute for Privacy-Centric AI Processing

Jun 11, 2024 Cloud Computing / Artificial Intelligence
Apple has announced the launch of a "groundbreaking cloud intelligence system" called Private Cloud Compute (PCC) that's designed for processing artificial intelligence (AI) tasks in a privacy-preserving manner in the cloud. The tech giant described PCC as the "most advanced security architecture ever deployed for cloud AI compute at scale." PCC coincides with the arrival of new generative AI (GenAI) features – collectively dubbed Apple Intelligence , or AI for short – that the iPhone maker unveiled in its next generation of software, including iOS 18 , iPadOS 18 , and macOS Sequoia . All of the Apple Intelligence features, both the ones that run on-device and those that rely on PCC, leverage in-house generative models trained on "licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot." With PCC, the idea is to essentially offload complex requests that requir...
The AI Debate: Google's Guidelines, Meta's GDPR Dispute, Microsoft's Recall Backlash

The AI Debate: Google's Guidelines, Meta's GDPR Dispute, Microsoft's Recall Backlash

Jun 07, 2024 Artificial Intelligence / Privacy
Google is urging third-party Android app developers to incorporate generative artificial intelligence (GenAI) features in a responsible manner. The new guidance from the search and advertising giant is an effort to combat problematic content, including sexual content and hate speech, created through such tools. To that end, apps that generate content using AI must ensure they don't create Restricted Content , have a mechanism for users to report or flag offensive information , and market them in a manner that accurately represents the app's capabilities. App developers are also being recommended to rigorously test their AI models to ensure they respect user safety and privacy. "Be sure to test your apps across various user scenarios and safeguard them against prompts that could manipulate your generative AI feature to create harmful or offensive content," Prabhat Sharma, director of trust and safety for Google Play, Android, and Chrome, said . The development com...
How to Build Your Autonomous SOC Strategy

How to Build Your Autonomous SOC Strategy

May 30, 2024 Endpoint Security / Threat Detection
Security leaders are in a tricky position trying to discern how much new AI-driven cybersecurity tools could actually benefit a security operations center (SOC). The hype about generative AI is still everywhere, but security teams have to live in reality. They face constantly incoming alerts from endpoint security platforms, SIEM tools, and phishing emails reported by internal users. Security teams also face an acute talent shortage.  In this guide, we'll lay out practical steps organizations can take to automate more of their processes and build an autonomous SOC strategy . This should address the acute talent shortage in security teams, by employing artificial intelligence and machine learning with a variety of techniques, these systems simulate the decision-making and investigative processes of human analysts. First, we'll define objectives for an autonomous SOC strategy and then consider key processes that could be automated. Next, we'll consider different AI and automation ...
CensysGPT: AI-Powered Threat Hunting for Cybersecurity Pros (Webinar)

CensysGPT: AI-Powered Threat Hunting for Cybersecurity Pros (Webinar)

May 10, 2024 Artificial Intelligence / Threat Hunting
Artificial intelligence (AI) is transforming cybersecurity, and those leading the charge are using it to outsmart increasingly advanced cyber threats. Join us for an exciting webinar, " The Future of Threat Hunting is Powered by Generative AI ," where you'll explore how AI tools are shaping the future of cybersecurity defenses. During the session, Censys Security Researcher Aidan Holland will introduce you to CensysGPT, a cutting-edge tool revolutionizing threat hunting and cybersecurity research. With CensysGPT, you can ask questions in plain language, understand competitor searches, and uncover insights from network data like never before. Here's why you should attend: Discover the latest:  Learn how generative AI is changing the game in cybersecurity and threat intelligence. Hunt smarter and faster:  See how CensysGPT helps you find threats quicker, reducing your organization's risk. Strengthen your defenses:  Find out how to incorporate AI into your se...
N. Korea-linked Kimsuky Shifts to Compiled HTML Help Files in Ongoing Cyberattacks

N. Korea-linked Kimsuky Shifts to Compiled HTML Help Files in Ongoing Cyberattacks

Mar 24, 2024 Artificial Intelligence / Cyber Espionage
The North Korea-linked threat actor known as  Kimsuky  (aka Black Banshee, Emerald Sleet, or Springtail) has been observed shifting its tactics, leveraging Compiled HTML Help (CHM) files as vectors to deliver malware for harvesting sensitive data. Kimsuky, active since at least 2012, is known to target entities located in South Korea as well as North America, Asia, and Europe. According to Rapid7, attack chains have leveraged weaponized Microsoft Office documents, ISO files, and Windows shortcut (LNK) files, with the group also employing CHM files to  deploy malware  on  compromised hosts . The cybersecurity firm has attributed the activity to Kimsuky with moderate confidence, citing similar tradecraft observed in the past. "While originally designed for help documentation, CHM files have also been exploited for malicious purposes, such as distributing malware, because they can execute JavaScript when opened," the company  said . The CHM file is prop...
U.S. Sanctions Russians Behind 'Doppelganger' Cyber Influence Campaign

U.S. Sanctions Russians Behind 'Doppelganger' Cyber Influence Campaign

Mar 21, 2024 National Security / Data Privacy
The U.S. Treasury Department's Office of Foreign Assets Control (OFAC) on Wednesday announced sanctions against two 46-year-old Russian nationals and the respective companies they own for engaging in cyber influence operations. Ilya Andreevich Gambashidze (Gambashidze), the founder of the Moscow-based company Social Design Agency (SDA), and Nikolai Aleksandrovich Tupikin (Tupikin), the CEO and current owner of Russia-based Company Group Structura LLC (Structura), have been accused of providing services to the Russian government in connection to a "foreign malign influence campaign." The disinformation campaign is tracked by the broader cybersecurity community under the name  Doppelganger , which is known to target audiences in Europe and the U.S. using inauthentic news sites and social media accounts. "SDA and Structura have been identified as key actors of the campaign, responsible for providing [the Government of the Russian Federation] with a variety of servic...
Expert Insights / Articles Videos
Cybersecurity Resources