
artificial intelligence | Breaking Cybersecurity News | The Hacker News

Microsoft's Top Execs' Emails Breached in Sophisticated Russia-Linked APT Attack

Jan 20, 2024 Cyber Espionage / Email Security
Microsoft on Friday revealed that it was the target of a nation-state attack on its corporate systems that resulted in the theft of emails and attachments from senior executives and other individuals in the company's cybersecurity and legal departments. The Windows maker attributed the attack to a Russian advanced persistent threat (APT) group it tracks as Midnight Blizzard (formerly Nobelium), which is also known as APT29, BlueBravo, Cloaked Ursa, Cozy Bear, and The Dukes. It further said that it immediately took steps to investigate, disrupt, and mitigate the malicious activity upon discovery on January 12, 2024. The campaign is estimated to have commenced in late November 2023. "The threat actor used a password spray attack to compromise a legacy non-production test tenant account and gain a foothold, and then used the account's permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team…"
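Password spraying tries one or a few common passwords across many accounts, staying under per-account lockout thresholds. As a minimal illustration of how the pattern can be spotted in authentication logs (a hypothetical sketch, not Microsoft's detection logic; the field names and thresholds are invented), a defender can count distinct target accounts per source IP within a sliding window:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def detect_password_spray(events, window=timedelta(hours=1), threshold=10):
    """Flag source IPs whose failed logins hit many *distinct* accounts
    within a time window -- the signature of a spray, as opposed to a
    brute-force attack (many attempts against one account).

    events: iterable of (timestamp, source_ip, username, success) tuples.
    """
    by_source = defaultdict(list)
    for ts, ip, user, success in events:
        if not success:
            by_source[ip].append((ts, user))

    flagged = set()
    for ip, attempts in by_source.items():
        attempts.sort()
        for i, (start, _) in enumerate(attempts):
            # Distinct usernames targeted within `window` of this attempt.
            users = {u for ts, u in attempts[i:] if ts - start <= window}
            if len(users) >= threshold:
                flagged.add(ip)
                break
    return flagged
```

A source that fails against a dozen different accounts in an hour gets flagged, while a source hammering a single account does not; the latter is caught by ordinary per-account lockout policies instead.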
There is a Ransomware Armageddon Coming for Us All

Jan 11, 2024 Artificial Intelligence / Biometric Security
Generative AI will enable anyone to launch sophisticated phishing attacks that only Next-generation MFA devices can stop. The least surprising headline from 2023 is that ransomware again set new records for the number of incidents and the damage inflicted. We saw new headlines every week, which included a who's-who of big-name organizations. If MGM, Johnson Controls, Clorox, Hanes Brands, Caesars Palace, and so many others cannot stop the attacks, how will anyone else? Phishing-driven ransomware is the cyber threat that looms larger and more dangerous than all others. CISA and Cisco report that 90% of data breaches are the result of phishing attacks, with monetary losses exceeding $10 billion in total. A report from Splunk revealed that 96 percent of companies fell victim to at least one phishing attack in the last 12 months, and 83 percent suffered two or more. Protect your organization from phishing and ransomware by learning about the benefits of Next-Generation MFA…
Recover from Ransomware in 5 Minutes—We will Teach You How!

Apr 18, 2024 Cyber Resilience / Data Protection
Super Low RPO with Continuous Data Protection: Dial Back to Just Seconds Before an Attack. Zerto, a Hewlett Packard Enterprise company, can help you detect and recover from ransomware in near real-time. This solution leverages continuous data protection (CDP) to ensure all workloads have the lowest recovery point objective (RPO) possible. The most valuable thing about CDP is that it does not use snapshots, agents, or any other periodic data protection methodology. Zerto has no impact on production workloads and can achieve RPOs in the region of 5-15 seconds across thousands of virtual machines simultaneously. For example, the environment in the image below has nearly 1,000 VMs being protected with an average RPO of just six seconds! Application-Centric Protection: Group Your VMs to Gain Application-Level Control. You can protect your VMs with the Zerto application-centric approach using Virtual Protection Groups (VPGs). This logical grouping of VMs ensures that your whole application…
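To make the RPO arithmetic concrete, here is a minimal sketch (not Zerto's implementation; the checkpoint data is invented) that treats a VM's current RPO as the age of its newest journal checkpoint and averages it across a fleet:

```python
from datetime import datetime, timedelta

def current_rpo(last_checkpoint: datetime, now: datetime) -> timedelta:
    """The RPO at any instant is the age of the newest recoverable
    checkpoint: data written after it would be lost in a failover."""
    return now - last_checkpoint

def fleet_average_rpo(checkpoints: dict, now: datetime) -> timedelta:
    """Average current RPO across protected VMs.

    checkpoints: mapping of VM name -> timestamp of its newest checkpoint
    (hypothetical journal data for illustration).
    """
    ages = [current_rpo(ts, now) for ts in checkpoints.values()]
    return sum(ages, timedelta()) / len(ages)
```

With CDP writing checkpoints every few seconds, the newest checkpoint is never more than a few seconds old, which is why the per-VM RPO stays in the single-digit-seconds range rather than the minutes or hours typical of snapshot-based protection.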
NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

Jan 08, 2024 Artificial Intelligence / Cyber Security
The U.S. National Institute of Standards and Technology (NIST) is calling attention to the privacy and security challenges that arise as a result of increased deployment of artificial intelligence (AI) systems in recent years. "These security and privacy challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to adversely affect the performance of the AI system, and even malicious manipulations, modifications or mere interaction with models to exfiltrate sensitive information about people represented in the data, about the model itself, or proprietary enterprise data," NIST said. As AI systems become integrated into online services at a rapid pace, in part driven by the emergence of generative AI systems like OpenAI ChatGPT and Google Bard, models powering these technologies face a number of threats at various stages of the machine learning operations. These include corrupted training data, security flaws…
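One of the threats NIST names, adversarial manipulation of training data, can be illustrated with a toy example. The sketch below (a deliberately simplified stand-in for a real model, not anything from the NIST report) shows how flipping a few labels in a tiny 1-D nearest-centroid classifier shifts a class centroid and changes predictions:

```python
def centroid_classifier(points):
    """Train a 1-D nearest-centroid classifier from (value, label) pairs."""
    sums, counts = {}, {}
    for x, label in points:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    centroids = {lbl: sums[lbl] / counts[lbl] for lbl in sums}
    # Predict the label whose centroid is closest to x.
    return lambda x: min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

# Clean training data: benign samples cluster low, malicious high.
clean = [(0.0, "benign"), (1.0, "benign"), (2.0, "benign"),
         (8.0, "malicious"), (9.0, "malicious"), (10.0, "malicious")]

# Poisoned data: an attacker flips two "malicious" labels to "benign",
# dragging the benign centroid from 1.0 up to 4.0.
poisoned = [(0.0, "benign"), (1.0, "benign"), (2.0, "benign"),
            (8.0, "benign"), (9.0, "benign"), (10.0, "malicious")]
```

A borderline sample at 6.0 is classified as malicious by the clean model but as benign by the poisoned one, which is exactly the kind of performance degradation through tampered training data that the NIST report warns about.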
Russia's AI-Powered Disinformation Operation Targeting Ukraine, U.S., and Germany

Dec 05, 2023 Brandjacking / Artificial Intelligence
The Russia-linked influence operation called Doppelganger has targeted Ukrainian, U.S., and German audiences through a combination of inauthentic news sites and social media accounts. These campaigns amplify content designed to undermine Ukraine, as well as narratives that propagate anti-LGBTQ+ sentiment, question U.S. military competence, and play up Germany's economic and social issues, according to a new Recorded Future report shared with The Hacker News. Doppelganger, described by Meta as the "largest and the most aggressively-persistent Russian-origin operation," is a pro-Russian network known for spreading anti-Ukrainian propaganda. Active since at least February 2022, it has been linked to two companies named Structura National Technologies and Social Design Agency. Activities associated with the influence operation are known to leverage manufactured websites as well as those impersonating authentic media – a technique called brandjacking – to disseminate adversarial narratives…
U.S., U.K., and Global Partners Release Secure AI System Development Guidelines

Nov 27, 2023 Artificial Intelligence / Privacy
The U.K. and U.S., along with international partners from 16 other countries, have released new guidelines for the development of secure artificial intelligence (AI) systems. "The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority," the U.S. Cybersecurity and Infrastructure Security Agency (CISA) said. The goal is to increase the cyber security level of AI and help ensure that the technology is designed, developed, and deployed in a secure manner, the National Cyber Security Centre (NCSC) added. The guidelines also build upon the U.S. government's ongoing efforts to manage the risks posed by AI by ensuring that new tools are tested adequately before public release, that guardrails are in place to address societal harms such as bias, discrimination, and privacy concerns, and that robust methods are established for consumers…
Guide: How vCISOs, MSPs and MSSPs Can Keep their Customers Safe from Gen AI Risks

Nov 08, 2023 Artificial Intelligence / Cybersecurity
Download the free guide, "It's a Generative AI World: How vCISOs, MSPs and MSSPs Can Keep their Customers Safe from Gen AI Risks." ChatGPT now boasts anywhere from 1.5 to 2 billion visits per month. Countless sales, marketing, HR, IT executive, technical support, operations, finance and other functions are feeding data prompts and queries into generative AI engines. They use these tools to write articles, create content, compose emails, answer customer questions and generate plans and strategies. However, gen AI usage is happening far in advance of efforts to implement safeguards and cybersecurity constraints. Three primary areas of security concern associated with generative AI are: sensitive data included in gen AI scripts, outcomes produced by these tools that may put an organization at risk, and potential hazards related to utilizing third-party generative AI tools. Unchecked AI usage in organizations can lead to major data breaches, compromised identities…
Offensive and Defensive AI: Let's Chat(GPT) About It

Nov 07, 2023 Artificial Intelligence / Data Security
ChatGPT: Productivity tool, great for writing poems, and… a security risk?! In this article, we show how threat actors can exploit ChatGPT, but also how defenders can use it for leveling up their game. ChatGPT is the most swiftly growing consumer application to date. The extremely popular generative AI chatbot has the ability to generate human-like, coherent and contextually relevant responses. This makes it very valuable for applications like content creation, coding, education, customer support, and even personal assistance. However, ChatGPT also comes with security risks. ChatGPT can be used for data exfiltration, spreading misinformation, developing cyber attacks and writing phishing emails. On the flip side, it can help defenders who can use it for identifying vulnerabilities and learning about various defenses. In this article, we show numerous ways attackers can exploit ChatGPT and the OpenAI Playground. Just as importantly, we show ways that defenders can leverage ChatGPT…
Predictive AI in Cybersecurity: Outcomes Demonstrate All AI is Not Created Equally

Nov 03, 2023 Artificial Intelligence / Cyber Threat
Here is what matters most when it comes to artificial intelligence (AI) in cybersecurity: outcomes. As the threat landscape evolves and generative AI is added to the toolsets available to defenders and attackers alike, evaluating the relative effectiveness of various AI-based security offerings is increasingly important — and difficult. Asking the right questions can help you spot solutions that deliver value and ROI, instead of just marketing hype. Questions like, "Can your predictive AI tools sufficiently block what's new?" and, "What actually signals success in a cybersecurity platform powered by artificial intelligence?" As BlackBerry's AI and ML (machine learning) patent portfolio attests, BlackBerry is a leader in this space and has developed an exceptionally well-informed point of view on what works and why. Let's explore this timely topic. Evolution of AI in Cybersecurity. Some of the earliest uses of ML and AI in cybersecurity date back to…
Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

Oct 27, 2023 Artificial Intelligence / Vulnerability
Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security. "Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations)," Google's Laurie Richardson and Royal Hansen said. Some of the categories that are in scope include prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks that trigger misclassification, and model theft. It's worth noting that Google earlier this July instituted an AI Red Team to help address threats to AI systems as part of its Secure AI Framework (SAIF). Also announced as part of its commitment to secure AI are efforts to strengthen the AI supply chain…
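Prompt injection, one of the in-scope categories, can be illustrated with a naive keyword screen. This is a sketch of the idea only — real attacks evade simple pattern lists through paraphrasing, encodings, and indirect injection via retrieved content, and the patterns below are hypothetical examples rather than any vendor's rule set:

```python
import re

# Illustrative patterns for *direct* prompt-injection attempts, where a
# user tries to override the system prompt with in-band instructions.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the|your) system prompt",
    r"you are now (in )?developer mode",
    r"reveal (your|the) (system prompt|hidden instructions)",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

The weakness of this approach is precisely why bug-bounty programs pay for novel attack scenarios: the interesting findings are the phrasings and delivery channels that screens like this one miss.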
Exploring the Realm of Malicious Generative AI: A New Digital Security Challenge

Oct 17, 2023 Cyber Threat / Artificial Intelligence
Recently, the cybersecurity landscape has been confronted with a daunting new reality – the rise of malicious Generative AI, like FraudGPT and WormGPT. These rogue creations, lurking in the dark corners of the internet, pose a distinctive threat to the world of digital security. In this article, we will look at the nature of Generative AI fraud, analyze the messaging surrounding these creations, and evaluate their potential impact on cybersecurity. While it's crucial to maintain a watchful eye, it's equally important to avoid widespread panic, as the situation, though disconcerting, is not yet a cause for alarm. Interested in how your organization can protect against generative AI attacks with an advanced email security solution? Get an IRONSCALES demo. Meet FraudGPT and WormGPT. FraudGPT represents a subscription-based malicious Generative AI that harnesses sophisticated machine learning algorithms to generate deceptive content. In stark contrast to ethical AI models, FraudGPT…
How to Guard Your Data from Exposure in ChatGPT

Oct 12, 2023 Data Security / Artificial Intelligence
ChatGPT has transformed the way businesses generate textual content, which can potentially result in a quantum leap in productivity. However, Generative AI innovation also introduces a new dimension of data exposure risk, when employees inadvertently type or paste sensitive business data into ChatGPT or similar applications. DLP solutions, the go-to answer for such challenges, are ill-equipped to handle this risk, since they focus on file-based data protection. A new report by LayerX, "Browser Security Platform: Guard your Data from Exposure in ChatGPT" (Download here), sheds light on the challenges and risks of ungoverned ChatGPT usage. It paints a comprehensive picture of the potential hazards for businesses and then offers a potential solution: browser security platforms. Such platforms provide real-time monitoring and governance over web sessions, effectively safeguarding sensitive data. ChatGPT Data Exposure: By the Numbers. Employee usage of GenAI apps has surged…
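A browser-side guard of the kind the report argues for can be sketched as a pre-submit scan of text about to enter a GenAI prompt. The patterns below are illustrative placeholders, not LayerX's detection logic; a production DLP engine would add validation (e.g., Luhn checks for card numbers), contextual rules, and ML classification:

```python
import re

# Hypothetical detectors for a few common sensitive-data shapes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_before_submit(text: str) -> list:
    """Return the sensitive-data categories found in text that is about
    to be pasted into a GenAI prompt; block or redact if non-empty."""
    return sorted(name for name, pattern in PATTERNS.items()
                  if pattern.search(text))
```

The design point the report makes is *where* this check runs: inside the browser session, at paste or submit time, rather than in a file-scanning pipeline that never sees the prompt at all.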
Webinar: How vCISOs Can Navigate the Complex World of AI and LLM Security

Oct 09, 2023 Artificial Intelligence / CISO
In today's rapidly evolving technological landscape, the integration of Artificial Intelligence (AI) and Large Language Models (LLMs) has become ubiquitous across various industries. This wave of innovation promises improved efficiency and performance, but lurking beneath the surface are complex vulnerabilities and unforeseen risks that demand immediate attention from cybersecurity professionals. As the average small and medium-sized business leader or end-user is often unaware of these growing threats, it falls upon cybersecurity service providers – MSPs, MSSPs, consultants and especially vCISOs – to take a proactive stance in protecting their clients. At Cynomi, we experience the risks associated with generative AI daily, as we use these technologies internally and work with MSP and MSSP partners to enhance the services they provide to small and medium businesses. Being committed to staying ahead of the curve and empowering vCISOs to swiftly implement cutting-edge security…
Microsoft's AI-Powered Bing Chat Ads May Lead Users to Malware-Distributing Sites

Sep 29, 2023 Artificial Intelligence / Malware
Malicious ads served inside Microsoft Bing's artificial intelligence (AI) chatbot are being used to distribute malware when searching for popular tools. The findings come from Malwarebytes, which revealed that unsuspecting users can be tricked into visiting booby-trapped sites and installing malware directly from Bing Chat conversations. Introduced by Microsoft in February 2023, Bing Chat is an interactive search experience that's powered by OpenAI's large language model called GPT-4. A month later, the tech giant began exploring placing ads in the conversations. But the move has also opened the doors for threat actors who resort to malvertising tactics and propagate malware. "Ads can be inserted into a Bing Chat conversation in various ways," Jérôme Segura, director of threat intelligence at Malwarebytes, said. "One of those is when a user hovers over a link and an ad is displayed first before the organic result." In an example…
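One practical countermeasure against this class of malvertising is to vet a download link's host against the vendor's known official domain before following it. A minimal sketch follows; the allowlist and the lookalike domain are hypothetical examples for illustration, not Malwarebytes' tooling:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official vendor download domains.
OFFICIAL_DOMAINS = {"advanced-ip-scanner.com"}

def is_suspicious_download_link(url: str) -> bool:
    """Flag links whose host is not on the vendor allowlist.
    Malvertisers often register near-identical lookalike domains."""
    host = urlparse(url).hostname or ""
    # Treat 'www.example.com' and 'example.com' as the same host.
    host = host[4:] if host.startswith("www.") else host
    return host not in OFFICIAL_DOMAINS
```

This is the manual habit Segura's hover-before-click example encourages, expressed as code: check where the link actually resolves before trusting the ad slot it appeared in.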
Webinar — AI vs. AI: Harnessing AI Defenses Against AI-Powered Risks

Sep 25, 2023 Artificial Intelligence / Cybersecurity
Generative AI is a double-edged sword, if there ever was one. There is broad agreement that tools like ChatGPT are unleashing waves of productivity across the business, from IT, to customer experience, to engineering. That's on the one hand. On the other end of this fencing match: risk. From IP leakage and data privacy risks to the empowering of cybercriminals with AI tools, generative AI presents enterprises with concrete concerns. For example, the mass availability of AI tools was the second most-reported Q2 risk among senior enterprise risk executives — appearing in the top 10 for the first time — according to a Gartner survey. In this escalating AI arms race, how can enterprises separate fact from hype and comprehensively manage generative AI risk while accelerating productivity? Register here and join Zscaler's Will Seaton, Product Marketing Manager, ThreatLabz, to uncover the tangible risks of generative AI — both from employee AI usage and from threat actors…
Live Webinar: Overcoming Generative AI Data Leakage Risks

Sep 19, 2023 Artificial Intelligence / Browser Security
As the adoption of generative AI tools, like ChatGPT, continues to surge, so does the risk of data exposure. According to Gartner's "Emerging Tech: Top 4 Security Risks of GenAI" report, privacy and data security is one of the four major emerging risks within generative AI. A new webinar featuring a multi-time Fortune 100 CISO and the CEO of LayerX, a browser extension solution, delves into this critical risk. Throughout the webinar, the speakers will explain why data security is a risk and explore whether DLP solutions can protect against it. Then, they will delineate the capabilities required by DLP solutions to ensure businesses benefit from the productivity GenAI applications have to offer without compromising security. The Business and Security Risks of Generative AI Applications: GenAI security risks occur when employees insert sensitive texts into these applications. These actions warrant careful consideration, because the inserted data…