#1 Trusted Cybersecurity News Platform Followed by 4.50+ million
The Hacker News Logo
Get the Free Newsletter
SaaS Security

Generative AI | Breaking Cybersecurity News | The Hacker News

Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Mar 04, 2024 AI Security / Vulnerability
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform. These include instances where loading a  pickle file  leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims' machines through what is commonly referred to as a 'backdoor,'" senior security researcher David Cohen  said . "This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state." Specifically, the rogue model initiates a reverse shell connection to 210.117.212[.]93, an IP address that belongs to the Korea Research
Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Feb 23, 2024 Red Teaming / Artificial Intelligence
Microsoft has released an open access automation framework called  PyRIT  (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team lead at Microsoft,  said . The company said PyRIT could be used to assess the robustness of large language model (LLM) endpoints against different harm categories such as fabrication (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment). It can also be used to identify security harms ranging from malware generation to jailbreaking, as well as privacy harms like identity theft. PyRIT comes with five interfaces: target, datasets, scoring engine, the ability to support multiple attack strategies, and incorporating a memory component that can either take the
6 Ways to Simplify SaaS Identity Governance

6 Ways to Simplify SaaS Identity Governance

Feb 21, 2024SaaS Security / Identity Management
With SaaS applications now making up the vast majority of technology used by employees in most organizations, tasks related to identity governance need to happen across a myriad of individual SaaS apps. This presents a huge challenge for centralized IT teams who are ultimately held responsible for managing and securing app access, but can't possibly become experts in the nuances of the native security settings and access controls for hundreds (or thousands) of apps. And, even if they could, the sheer volume of tasks would easily bury them. Modern IT teams need a way to orchestrate and govern SaaS identity governance by engaging the application owners in the business who are most familiar with how the tool is used, and who needs what type of access.  Nudge Security is a  SaaS security and governance solution  that can help you do just that, with automated workflows to save time and make the process manageable at scale. Read on to learn how it works. 1 . Discover all SaaS apps used b
Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Jan 30, 2024 Generative AI / Data Privacy
Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of supposedly violating privacy laws in the region. "The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante)  said  in a statement on Monday. It also said it will "take account of the work in progress within the ad-hoc  task force  set up by the European Data Protection Framework (EDPB) in its final determination on the case." The development comes nearly 10 months after the watchdog imposed a  temporary ban  on ChatGPT in the country, weeks after which OpenAI  announced  a number of privacy controls, including an  opt-out form  to remove one's personal data from being processed by the large language model (LLM). Access to the tool was subsequently reinstated in late April 2023. The Italian DPA said the latest findings, which h
cyber security

NIST Cybersecurity Framework: Your Go-To Cybersecurity Standard is Changing

websiteArmorPointCybersecurity / Risk Management
Find everything you need to know to prepare for NIST CSF 2.0's impending release in this guide.
Riding the AI Waves: The Rise of Artificial Intelligence to Combat Cyber Threats

Riding the AI Waves: The Rise of Artificial Intelligence to Combat Cyber Threats

Jan 29, 2024
In nearly every segment of our lives, AI (artificial intelligence) now makes a significant impact: It can deliver better healthcare diagnoses and treatments; detect and reduce the risk of financial fraud; improve inventory management; and serve up the right recommendation for a streaming movie on Friday night. However, one can also make a strong case that some of AI's most significant impacts are in cybersecurity. AI's ability to learn, adapt, and predict rapidly evolving threats has made it an indispensable tool in protecting the world's businesses and governments. From basic applications like spam filtering to advanced predictive analytics and AI-assisted response, AI serves a critical role on the front lines, defending our digital assets from cyber criminals. The future for AI in cybersecurity is not all rainbows and roses, however. Today we can see the early signs of a significant shift, driven by the democratization of AI technology. While AI continues to empower organizations
There is a Ransomware Armageddon Coming for Us All

There is a Ransomware Armageddon Coming for Us All

Jan 11, 2024 Artificial Intelligence / Biometric Security
Generative AI will enable anyone to launch sophisticated phishing attacks that only Next-generation MFA devices can stop The least surprising headline from 2023 is that ransomware again set new records for a number of incidents and the damage inflicted. We saw new headlines every week, which included a who's-who of big-name organizations. If MGM, Johnson Controls, Chlorox, Hanes Brands, Caesars Palace, and so many others cannot stop the attacks, how will anyone else? Phishing-driven ransomware is the cyber threat that looms larger and more dangerous than all others. CISA and Cisco report that 90% of data breaches are the result of phishing attacks and monetary losses that exceed $10 billion in total. A report from Splunk revealed that 96 percent of companies fell victim to at least one phishing attack in the last 12 months and 83 percent suffered two or more. Protect your organization from phishing and ransomware by learning about the benefits of Next-Generation MFA. Download th
Russia's AI-Powered Disinformation Operation Targeting Ukraine, U.S., and Germany

Russia's AI-Powered Disinformation Operation Targeting Ukraine, U.S., and Germany

Dec 05, 2023 Brandjacking / Artificial Intelligence
The Russia-linked influence operation called Doppelganger has targeted Ukrainian, U.S., and German audiences through a combination of inauthentic news sites and social media accounts. These campaigns are designed to amplify content designed to undermine Ukraine as well as propagate anti-LGBTQ+ sentiment, U.S. military competence, and Germany's economic and social issues, according to a new Recorded Future report shared with The Hacker News. Doppelganger ,  described  by Meta as the "largest and the most aggressively-persistent Russian-origin operation," is a  pro-Russian network  known for spreading anti-Ukrainian propaganda. Active since at least February 2022, it has been linked to two companies named Structura National Technologies and Social Design Agency. Activities associated with the influence operation are known to leverage manufactured websites as well as those impersonating authentic media – a technique called brandjacking – to disseminate adversarial narrat
Generative AI Security: Preventing Microsoft Copilot Data Exposure

Generative AI Security: Preventing Microsoft Copilot Data Exposure

Dec 05, 2023 Data Security / Generative AI
Microsoft Copilot has been called one of the most powerful productivity tools on the planet. Copilot is an AI assistant that lives inside each of your Microsoft 365 apps — Word, Excel, PowerPoint, Teams, Outlook, and so on. Microsoft's dream is to take the drudgery out of daily work and let humans focus on being creative problem-solvers. What makes Copilot a different beast than ChatGPT and other AI tools is that it has access to everything you've ever worked on in 365. Copilot can instantly search and compile data from across your documents, presentations, email, calendar, notes, and contacts. And therein lies the problem for information security teams. Copilot can access all the sensitive data that a user can access, which is often far too much. On average, 10% of a company's M365 data is open to all employees. Copilot can also rapidly generate  net new  sensitive data that must be protected. Prior to the AI revolution, humans' ability to create and share data
7 Uses for Generative AI to Enhance Security Operations

7 Uses for Generative AI to Enhance Security Operations

Nov 30, 2023 Generative AI / Threat Intelligence
Welcome to a world where Generative AI revolutionizes the field of cybersecurity. Generative AI refers to the use of artificial intelligence (AI) techniques to generate or create new data, such as images, text, or sounds. It has gained significant attention in recent years due to its ability to generate realistic and diverse outputs. When it comes to security operations,  Generative AI can play a significant role . It can be used to detect and prevent various threats, including malware, phishing attempts, and data breaches. Analyzing patterns and behaviors in large amounts of data allows it to identify suspicious activities and alert security teams in real-time. Here are seven practical use cases that demonstrate the power of Generative AI. There are more possibilities out there of how you can achieve objectives and fortify security operations, but this list should get your creative juices flowing. 1) Information Management Information security deals with a breadth of data that
Predictive AI in Cybersecurity: Outcomes Demonstrate All AI is Not Created Equally

Predictive AI in Cybersecurity: Outcomes Demonstrate All AI is Not Created Equally

Nov 03, 2023 Artificial Intelligence / Cyber Threat
Here is what matters most when it comes to artificial intelligence (AI) in cybersecurity: Outcomes.  As the threat landscape evolves and  generative AI is added  to the toolsets available to defenders and attackers alike, evaluating the relative effectiveness of various  AI-based security  offerings is increasingly important — and difficult. Asking the right questions can help you spot solutions that deliver value and ROI, instead of just marketing hype. Questions like, "Can your predictive AI tools sufficiently block what's new?" and, "What actually signals success in a cybersecurity platform powered by artificial intelligence?" As BlackBerry's AI and ML (machine learning) patent portfolio attests, BlackBerry is a leader in this space and has developed an exceptionally well-informed point of view on what works and why. Let's explore this timely topic. Evolution of AI in Cybersecurity Some of the earliest uses of ML and AI in cybersecurity date back to the de
Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

Oct 27, 2023 Artificial Intelligence / Vulnerability
Google has announced that it's expanding its Vulnerability Rewards Program ( VRP ) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to  bolster AI safety and security . "Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations)," Google's Laurie Richardson and Royal Hansen  said . Some of the categories that are in scope  include  prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks that trigger misclassification, and model theft. It's worth noting that Google earlier this July instituted an  AI Red Team  to help address threats to AI systems as part of its Secure AI Framework ( SAIF ). Also announced as part of its commitment to secure AI are efforts to strengthen the AI supply chain
Exploring the Realm of Malicious Generative AI: A New Digital Security Challenge

Exploring the Realm of Malicious Generative AI: A New Digital Security Challenge

Oct 17, 2023 Cyber Threat / Artificial Intelligence
Recently, the cybersecurity landscape has been confronted with a daunting new reality – the rise of malicious Generative AI, like FraudGPT and WormGPT. These rogue creations, lurking in the dark corners of the internet, pose a distinctive threat to the world of digital security. In this article, we will look at the nature of Generative AI fraud, analyze the messaging surrounding these creations, and evaluate their potential impact on cybersecurity. While it's crucial to maintain a watchful eye, it's equally important to avoid widespread panic, as the situation, though disconcerting, is not yet a cause for alarm. Interested in how your organization can protect against generative AI attacks with an advanced email security solution?  Get an IRONSCALES demo .  Meet FraudGPT and WormGPT FraudGPT  represents a subscription-based malicious Generative AI that harnesses sophisticated machine learning algorithms to generate deceptive content. In stark contrast to ethical AI models, Fr
"I Had a Dream" and Generative AI Jailbreaks

"I Had a Dream" and Generative AI Jailbreaks

Oct 09, 2023 Artificial Intelligence /
"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS," this is a message from ChatGPT followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by  Moonlock Lab , the screenshots of ChatGPT writing code for a keylogger malware is yet another example of trivial ways to hack large language models and exploit them against their policy of use. In the case of Moonlock Lab, their malware research engineer told ChatGPT about a dream where an attacker was writing code. In the dream, he could only see the three words: "MyHotKeyHandler," "Keylogger," and "macOS." The engineer asked ChatGPT to completely recreate the malicious code and help him stop the attack. After a brief conversation, the AI finally provided the answer. "At times, the code generated isn&
Cybersecurity Resources