
Generative AI | Breaking Cybersecurity News | The Hacker News

Russia's AI-Powered Disinformation Operation Targeting Ukraine, U.S., and Germany

Dec 05, 2023 Brandjacking / Artificial Intelligence
The Russia-linked influence operation called Doppelganger has targeted Ukrainian, U.S., and German audiences through a combination of inauthentic news sites and social media accounts. These campaigns amplify content designed to undermine Ukraine, propagate anti-LGBTQ+ sentiment, cast doubt on U.S. military competence, and play up Germany's economic and social issues, according to a new Recorded Future report shared with The Hacker News. Doppelganger, described by Meta as the "largest and the most aggressively-persistent Russian-origin operation," is a pro-Russian network known for spreading anti-Ukrainian propaganda. Active since at least February 2022, it has been linked to two companies named Structura National Technologies and Social Design Agency. Activities associated with the influence operation are known to leverage manufactured websites as well as those impersonating authentic media – a technique called brandjacking – to disseminate adversarial narratives…
Generative AI Security: Preventing Microsoft Copilot Data Exposure

Dec 05, 2023 Data Security / Generative AI
Microsoft Copilot has been called one of the most powerful productivity tools on the planet. Copilot is an AI assistant that lives inside each of your Microsoft 365 apps — Word, Excel, PowerPoint, Teams, Outlook, and so on. Microsoft's dream is to take the drudgery out of daily work and let humans focus on being creative problem-solvers. What makes Copilot a different beast than ChatGPT and other AI tools is that it has access to everything you've ever worked on in 365. Copilot can instantly search and compile data from across your documents, presentations, email, calendar, notes, and contacts. And therein lies the problem for information security teams. Copilot can access all the sensitive data that a user can access, which is often far too much. On average, 10% of a company's M365 data is open to all employees. Copilot can also rapidly generate net new sensitive data that must be protected. Prior to the AI revolution, humans' ability to create and share data…
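As a back-of-the-envelope illustration of that 10% figure, here is a hypothetical sketch (the tenant size and function name are invented, not from the article):

```python
def copilot_exposure_estimate(total_items: int, open_share: float = 0.10) -> int:
    """Rough count of M365 items reachable from any employee's Copilot
    prompts via org-wide sharing alone, using the ~10% open-to-all figure
    cited above. Purely illustrative arithmetic."""
    return round(total_items * open_share)

# A hypothetical 10-million-item tenant leaves about 1 million items
# searchable from any Copilot session.
print(copilot_exposure_estimate(10_000_000))  # 1000000
```

The point of the arithmetic is that Copilot inherits the user's entire effective permission set, so over-sharing that was once merely theoretical becomes instantly searchable.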
HUMINT: Diving Deep into the Dark Web

Jul 09, 2024 Cybercrime / Dark Web
Discover how cybercriminals behave in Dark Web forums: what services they buy and sell, what motivates them, and even how they scam each other. Clear Web vs. Deep Web vs. Dark Web: threat intelligence professionals divide the internet into three main components.

Clear Web – Web assets that can be viewed through public search engines, including media, blogs, and other pages and sites.

Deep Web – Websites and forums that are unindexed by search engines, such as webmail, online banking, corporate intranets, and walled gardens. Some hacker forums exist in the Deep Web and require credentials to enter.

Dark Web – Web sources that require specific software to gain access. These sources are anonymous and closed, and include Telegram groups and invite-only forums. The Dark Web spans Tor, P2P networks, hacker forums, criminal marketplaces, etc.

According to Etay Maor, Chief Security Strategist at Cato Networks, "We've been seeing a shift in how criminals communicate and co…
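The three-tier split above can be encoded as a coarse classification rule when labeling collected sources. This is a minimal hypothetical sketch (the function and table are illustrative, not from the article):

```python
# The Clear/Deep/Dark taxonomy as a lookup table for source labeling.
WEB_TIERS = {
    "clear": "Indexed by public search engines (media, blogs, public sites)",
    "deep": "Unindexed, credentialed access (webmail, intranets, walled gardens)",
    "dark": "Requires special software or invites (Tor services, closed forums)",
}

def classify_source(indexed: bool, needs_special_software: bool) -> str:
    """Coarse tier assignment following the definitions above:
    special software wins, then indexing decides clear vs. deep."""
    if needs_special_software:
        return "dark"
    return "clear" if indexed else "deep"

print(classify_source(indexed=False, needs_special_software=True))   # dark
print(classify_source(indexed=False, needs_special_software=False))  # deep
```

In practice access method (Tor, invite tokens) is the deciding signal, which is why the check for special software comes first.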
7 Uses for Generative AI to Enhance Security Operations

Nov 30, 2023 Generative AI / Threat Intelligence
Welcome to a world where Generative AI revolutionizes the field of cybersecurity. Generative AI refers to the use of artificial intelligence (AI) techniques to generate or create new data, such as images, text, or sounds. It has gained significant attention in recent years due to its ability to generate realistic and diverse outputs. When it comes to security operations, Generative AI can play a significant role. It can be used to detect and prevent various threats, including malware, phishing attempts, and data breaches. By analyzing patterns and behaviors in large amounts of data, it can identify suspicious activities and alert security teams in real time. Here are seven practical use cases that demonstrate the power of Generative AI. There are more ways to achieve your objectives and fortify security operations, but this list should get your creative juices flowing. 1) Information Management: Information security deals with a breadth of data that…
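The pattern-and-behavior analysis described above ultimately reduces to flagging outliers in telemetry. Here is a deliberately non-AI, hypothetical sketch (data and threshold are invented) of the kind of real-time alerting signal such systems automate:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices of data points whose z-score exceeds the threshold —
    a toy stand-in for the behavioral analysis the article attributes to
    GenAI-assisted security operations."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly flat series: nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

logins_per_hour = [12, 11, 13, 12, 10, 11, 95, 12]  # hour 6 is a spike
print(flag_anomalies(logins_per_hour))  # [6]
```

A real pipeline would score streams continuously and route flagged indices to an alerting queue; the statistics are the same, only the scale differs.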
Top 4 Security Risks of GenAI

Website: Wiz | GenAI Security / Technology
Gain a competitive edge and unlock the top 4 major emerging risks within GenAI. This report from Gartner provides insights and recommended actions for security and product leaders.
Predictive AI in Cybersecurity: Outcomes Demonstrate All AI is Not Created Equally

Nov 03, 2023 Artificial Intelligence / Cyber Threat
Here is what matters most when it comes to artificial intelligence (AI) in cybersecurity: Outcomes. As the threat landscape evolves and generative AI is added to the toolsets available to defenders and attackers alike, evaluating the relative effectiveness of various AI-based security offerings is increasingly important — and difficult. Asking the right questions can help you spot solutions that deliver value and ROI, instead of just marketing hype. Questions like, "Can your predictive AI tools sufficiently block what's new?" and, "What actually signals success in a cybersecurity platform powered by artificial intelligence?" As BlackBerry's AI and ML (machine learning) patent portfolio attests, BlackBerry is a leader in this space and has developed an exceptionally well-informed point of view on what works and why. Let's explore this timely topic. Evolution of AI in Cybersecurity: Some of the earliest uses of ML and AI in cybersecurity date back to the de…
Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

Oct 27, 2023 Artificial Intelligence / Vulnerability
Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security. "Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations)," Google's Laurie Richardson and Royal Hansen said. Some of the categories that are in scope include prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks that trigger misclassification, and model theft. It's worth noting that Google earlier this July instituted an AI Red Team to help address threats to AI systems as part of its Secure AI Framework (SAIF). Also announced as part of its commitment to secure AI are efforts to strengthen the AI supply chain…
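One in-scope category, leakage of sensitive data from training datasets, is commonly probed with planted "canary" strings. The sketch below is a hypothetical illustration of that check, not Google's methodology (names and strings are invented):

```python
def leaks_canary(model_output: str, canaries: list[str]) -> list[str]:
    """Return the canary strings that appear verbatim in a model's output.
    A non-empty result suggests the model is regurgitating planted secrets
    from its training or context data."""
    return [c for c in canaries if c in model_output]

canaries = ["CANARY-7f3a-DO-NOT-EMIT"]
output = "Sure! The internal token is CANARY-7f3a-DO-NOT-EMIT."
print(leaks_canary(output, canaries))  # ['CANARY-7f3a-DO-NOT-EMIT']
```

Real evaluations use many unique canaries and fuzzy matching, since models often paraphrase rather than emit secrets verbatim.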
Exploring the Realm of Malicious Generative AI: A New Digital Security Challenge

Oct 17, 2023 Cyber Threat / Artificial Intelligence
Recently, the cybersecurity landscape has been confronted with a daunting new reality – the rise of malicious Generative AI, like FraudGPT and WormGPT. These rogue creations, lurking in the dark corners of the internet, pose a distinctive threat to the world of digital security. In this article, we will look at the nature of Generative AI fraud, analyze the messaging surrounding these creations, and evaluate their potential impact on cybersecurity. While it's crucial to maintain a watchful eye, it's equally important to avoid widespread panic, as the situation, though disconcerting, is not yet a cause for alarm. Interested in how your organization can protect against generative AI attacks with an advanced email security solution? Get an IRONSCALES demo. Meet FraudGPT and WormGPT: FraudGPT represents a subscription-based malicious Generative AI that harnesses sophisticated machine learning algorithms to generate deceptive content. In stark contrast to ethical AI models, FraudGPT…
"I Had a Dream" and Generative AI Jailbreaks

Oct 09, 2023 Artificial Intelligence
"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords 'MyHotKeyHandler,' 'Keylogger,' and 'macOS.'" So reads a message from ChatGPT, followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by Moonlock Lab, the screenshots of ChatGPT writing code for a keylogger malware are yet another example of trivial ways to hack large language models and exploit them against their policy of use. In the case of Moonlock Lab, their malware research engineer told ChatGPT about a dream in which an attacker was writing code. In the dream, he could only see three words: "MyHotKeyHandler," "Keylogger," and "macOS." The engineer asked ChatGPT to completely recreate the malicious code and help him stop the attack. After a brief conversation, the AI finally provided the answer. "At times, the code generated isn't…
Live Webinar: Overcoming Generative AI Data Leakage Risks

Sep 19, 2023 Artificial Intelligence / Browser Security
As the adoption of generative AI tools, like ChatGPT, continues to surge, so does the risk of data exposure. According to Gartner's "Emerging Tech: Top 4 Security Risks of GenAI" report, privacy and data security is one of the four major emerging risks within generative AI. A new webinar featuring a multi-time Fortune 100 CISO and the CEO of LayerX, a browser extension solution, delves into this critical risk. Throughout the webinar, the speakers will explain why data security is a risk and explore the ability, or lack thereof, of DLP solutions to protect against it. Then, they will delineate the capabilities required of DLP solutions to ensure businesses benefit from the productivity GenAI applications have to offer without compromising security. The Business and Security Risks of Generative AI Applications: GenAI security risks occur when employees insert sensitive texts into these applications. These actions warrant careful consideration, because the inserted data b…
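A DLP-style control of the kind discussed typically starts with pattern matching on text before it leaves the browser. The following is a much-simplified hypothetical sketch (the pattern names and regexes are illustrative, not any vendor's rules):

```python
import re

# Toy patterns for data classes commonly blocked before GenAI submission.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scan_for_sensitive(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in text that a
    user is about to paste into a GenAI application."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(scan_for_sensitive("Summarize: customer SSN 123-45-6789"))  # ['ssn']
```

Production DLP adds context (file labels, destinations, user roles) on top of such matching, which is exactly the capability gap the webinar examines.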
WormGPT: New AI Tool Allows Cybercriminals to Launch Sophisticated Cyber Attacks

Jul 15, 2023 Artificial Intelligence / Cyber Crime
With generative artificial intelligence (AI) becoming all the rage these days, it's perhaps not surprising that the technology has been repurposed by malicious actors to their own advantage, enabling avenues for accelerated cybercrime. According to findings from SlashNext, a new generative AI cybercrime tool called WormGPT has been advertised on underground forums as a way for adversaries to launch sophisticated phishing and business email compromise (BEC) attacks. "This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities," security researcher Daniel Kelley said. "Cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalized to the recipient, thus increasing the chances of success for the attack." The author of the software has described it as the "biggest enemy of the well-known ChatGPT" that "lets you do all sorts of illegal stuff."
How Generative AI Can Dupe SaaS Authentication Protocols — And Effective Ways To Prevent Other Key AI Risks in SaaS

Jun 26, 2023 SaaS Security / Artificial Intelligence
Security and IT teams are routinely forced to adopt software before fully understanding the security risks. And AI tools are no exception. Employees and business leaders alike are flocking to generative AI software and similar programs, often unaware of the major SaaS security vulnerabilities they're introducing into the enterprise. A February 2023 generative AI survey of 1,000 executives revealed that 49% of respondents use ChatGPT now, and 30% plan to tap into the ubiquitous generative AI tool soon. Ninety-nine percent of those using ChatGPT claimed some form of cost savings, and 25% attested to reducing expenses by $75,000 or more. As the researchers conducted this survey a mere three months after ChatGPT's general availability, today's ChatGPT and AI tool usage is undoubtedly higher. Security and risk teams are already overwhelmed protecting their SaaS estate (which has now become the operating system of business) from common vulnerabilities such as misconfigurations…