
machine learning | Breaking Cybersecurity News | The Hacker News

Category — machine learning
This Free Discovery Tool Finds and Mitigates AI-SaaS Risks

Jan 17, 2024 SaaS Security / Machine Learning
Wing Security announced today that it now offers free discovery and a paid tier for automated control over thousands of AI and AI-powered SaaS applications. This will allow companies to better protect their intellectual property (IP) and data against the growing and evolving risks of AI usage. SaaS applications seem to be multiplying by the day, and so does their integration of AI capabilities. According to Wing Security, a SaaS security company that researched over 320 companies, a staggering 83.2% use GenAI applications. While this statistic might not come as a surprise, the research showed that 99.7% of organizations use SaaS applications that leverage AI capabilities to deliver their services. This usage of GenAI in SaaS applications that are not 'pure' AI often goes unnoticed by security teams and users alike. 70% of the most popular GenAI applications may use your data to train their models, and in many cases it's completely up to you to configure it differently.
NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

Jan 08, 2024 Artificial Intelligence / Cyber Security
The U.S. National Institute of Standards and Technology (NIST) is calling attention to the privacy and security challenges that arise as a result of the increased deployment of artificial intelligence (AI) systems in recent years. "These security and privacy challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to adversely affect the performance of the AI system, and even malicious manipulations, modifications or mere interaction with models to exfiltrate sensitive information about people represented in the data, about the model itself, or proprietary enterprise data," NIST said. As AI systems become integrated into online services at a rapid pace, in part driven by the emergence of generative AI systems like OpenAI ChatGPT and Google Bard, the models powering these technologies face a number of threats at various stages of machine learning operations. These include corrupted training data, security flaws…
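To make the training-data manipulation NIST describes concrete, here is a minimal sketch (not drawn from the NIST report) showing how flipping a small fraction of training labels degrades a toy classifier evaluated on clean data. The dataset, model, and poison rates are illustrative assumptions.

```python
# Minimal illustration of training-data poisoning via label flipping.
# Dataset, model, and poison rates are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def accuracy_with_poisoning(poison_rate: float) -> float:
    """Train on labels where a fraction has been adversarially flipped."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    n_poison = int(poison_rate * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]      # flip the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)             # evaluate on clean test data

for rate in (0.0, 0.1, 0.2, 0.4):
    print(f"poison rate {rate:.0%}: clean test accuracy {accuracy_with_poisoning(rate):.3f}")
```

Even this toy setup shows accuracy falling as the poisoned fraction grows, which is the kind of degradation an attacker with influence over a training pipeline can induce silently.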
Google Unveils RETVec - Gmail's New Defense Against Spam and Malicious Emails

Nov 30, 2023 Machine Learning / Email Security
Google has revealed a new multilingual text vectorizer called RETVec (short for Resilient and Efficient Text Vectorizer) to help detect potentially harmful content such as spam and malicious emails in Gmail. "RETVec is trained to be resilient against character-level manipulations including insertion, deletion, typos, homoglyphs, LEET substitution, and more," according to the project's description on GitHub. "The RETVec model is trained on top of a novel character encoder which can encode all UTF-8 characters and words efficiently." While huge platforms like Gmail and YouTube rely on text classification models to spot phishing attacks, inappropriate comments, and scams, threat actors are known to devise counter-strategies to bypass these defense measures. They have been observed resorting to adversarial text manipulations, which range from the use of homoglyphs to keyword stuffing to invisible characters. RETVec, which works on over 100 languages…
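The manipulations RETVec is trained to resist are easy to reproduce. The sketch below (illustrative only, not RETVec's own code, which lives in the google/retvec GitHub repository) shows how homoglyph, LEET, and invisible-character tricks leave a phrase readable to humans while its byte representation changes completely, which is what defeats naive word-level filters.

```python
# Illustrative only (not RETVec's code): character-level tricks a resilient
# vectorizer must map back to similar representations. Substitution tables
# here are small hand-picked examples.
HOMOGLYPHS = {"a": "а", "e": "е", "o": "о", "c": "с"}   # Cyrillic look-alikes
LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}
ZERO_WIDTH_SPACE = "\u200b"

def homoglyph(text: str) -> str:
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

def leet(text: str) -> str:
    return "".join(LEET.get(ch, ch) for ch in text)

def pad_invisible(text: str) -> str:
    return ZERO_WIDTH_SPACE.join(text)

msg = "free account verification"
for variant in (msg, homoglyph(msg), leet(msg), pad_invisible(msg)):
    # The UTF-8 bytes differ wildly even though a human reads the same phrase.
    print(variant, "->", variant.encode("utf-8")[:40])
```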
U.S., U.K., and Global Partners Release Secure AI System Development Guidelines

Nov 27, 2023 Artificial Intelligence / Privacy
The U.K. and U.S., along with international partners from 16 other countries, have released new guidelines for the development of secure artificial intelligence (AI) systems. "The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority," the U.S. Cybersecurity and Infrastructure Security Agency (CISA) said. The goal is to increase the cyber security levels of AI and help ensure that the technology is designed, developed, and deployed in a secure manner, the National Cyber Security Centre (NCSC) added. The guidelines also build upon the U.S. government's ongoing efforts to manage the risks posed by AI by ensuring that new tools are adequately tested before public release, that guardrails are in place to address societal harms such as bias, discrimination, and privacy concerns, and that robust methods exist for consumers…
Predictive AI in Cybersecurity: Outcomes Demonstrate All AI is Not Created Equally

Nov 03, 2023 Artificial Intelligence / Cyber Threat
Here is what matters most when it comes to artificial intelligence (AI) in cybersecurity: outcomes. As the threat landscape evolves and generative AI is added to the toolsets available to defenders and attackers alike, evaluating the relative effectiveness of various AI-based security offerings is increasingly important — and difficult. Asking the right questions can help you spot solutions that deliver value and ROI, instead of just marketing hype. Questions like, "Can your predictive AI tools sufficiently block what's new?" and, "What actually signals success in a cybersecurity platform powered by artificial intelligence?" As BlackBerry's AI and ML (machine learning) patent portfolio attests, BlackBerry is a leader in this space and has developed an exceptionally well-informed point of view on what works and why. Let's explore this timely topic. Evolution of AI in Cybersecurity: Some of the earliest uses of ML and AI in cybersecurity date back to…
Microsoft's AI-Powered Bing Chat Ads May Lead Users to Malware-Distributing Sites

Sep 29, 2023 Artificial Intelligence / Malware
Malicious ads served inside Microsoft Bing's artificial intelligence (AI) chatbot are being used to distribute malware when users search for popular tools. The findings come from Malwarebytes, which revealed that unsuspecting users can be tricked into visiting booby-trapped sites and installing malware directly from Bing Chat conversations. Introduced by Microsoft in February 2023, Bing Chat is an interactive search experience that's powered by OpenAI's large language model GPT-4. A month later, the tech giant began exploring placing ads in the conversations. But the move has also opened the doors for threat actors who resort to malvertising tactics to propagate malware. "Ads can be inserted into a Bing Chat conversation in various ways," Jérôme Segura, director of threat intelligence at Malwarebytes, said. "One of those is when a user hovers over a link and an ad is displayed first before the organic result." In an example highlighted…
New AMBERSQUID Cryptojacking Operation Targets Uncommon AWS Services

Sep 18, 2023 Cloud Security / Cryptocurrency
A novel cloud-native cryptojacking operation has set its eyes on uncommon Amazon Web Services (AWS) offerings such as AWS Amplify, AWS Fargate, and Amazon SageMaker to illicitly mine cryptocurrency. The malicious cyber activity has been codenamed AMBERSQUID by cloud and container security firm Sysdig. "The AMBERSQUID operation was able to exploit cloud services without triggering the AWS requirement for approval of more resources, as would be the case if they only spammed EC2 instances," Sysdig security researcher Alessandro Brucato said in a report shared with The Hacker News. "Targeting multiple services also poses additional challenges, like incident response, since it requires finding and killing all miners in each exploited service." Sysdig said it discovered the campaign following an analysis of 1.7 million images on Docker Hub, attributing it with moderate confidence to Indonesian attackers based on the use of Indonesian language in scripts and…
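Because AMBERSQUID spreads its mining across services that rarely get the scrutiny EC2 does, responders need to enumerate compute in each of them. The following is a rough sketch (not Sysdig's tooling) using boto3; it assumes credentials and region are already configured, and it only lists resources — deciding which are malicious and terminating them is left to the responder.

```python
# Rough incident-response sketch (not Sysdig's tooling): enumerate compute in
# the less-watched AWS services AMBERSQUID abused, so nothing keeps mining
# after EC2 is cleaned up. Assumes boto3 credentials/region are configured.
import boto3

amplify = boto3.client("amplify")
sagemaker = boto3.client("sagemaker")
ecs = boto3.client("ecs")

print("== Amplify apps ==")
for app in amplify.list_apps().get("apps", []):
    print(app["appId"], app["name"])

print("== SageMaker notebook instances ==")
for nb in sagemaker.list_notebook_instances().get("NotebookInstances", []):
    print(nb["NotebookInstanceName"], nb["NotebookInstanceStatus"])

print("== In-progress SageMaker training jobs ==")
for job in sagemaker.list_training_jobs(StatusEquals="InProgress").get("TrainingJobSummaries", []):
    print(job["TrainingJobName"])

print("== Running Fargate tasks ==")
for cluster in ecs.list_clusters().get("clusterArns", []):
    for task in ecs.list_tasks(cluster=cluster, launchType="FARGATE").get("taskArns", []):
        print(cluster, task)
```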
Everything You Wanted to Know About AI Security but Were Afraid to Ask

Sep 04, 2023 Artificial Intelligence / Cyber Security
There's been a great deal of AI hype recently, but that doesn't mean the robots are here to replace us. This article sets the record straight and explains how businesses should approach AI. From musing about self-driving cars to fearing AI bots that could destroy the world, there has been a great deal of AI hype in the past few years. AI has captured our imaginations, dreams, and occasionally, our nightmares. However, the reality is that AI is currently much less advanced than we anticipated it would be by now. Autonomous cars, for example, often considered the poster child of AI's limitless future, represent a narrow use case and are not yet a common application across all transportation sectors. In this article, we de-hype AI, provide tools for businesses approaching AI, and share information to help stakeholders educate themselves. AI Terminology De-Hyped: AI vs. ML. AI (Artificial Intelligence) and ML (Machine Learning) are terms that are often used interchangeably, but…
Learn How Your Business Data Can Amplify Your AI/ML Threat Detection Capabilities

Aug 25, 2023 Threat Detection / Artificial Intelligence
In today's digital landscape, your business data is more than just numbers—it's a powerhouse. Imagine leveraging this data not only for profit but also for enhanced AI and Machine Learning (ML) threat detection. For companies like Comcast, this isn't a dream. It's reality. Your business comprehends its risks, vulnerabilities, and the unique environment in which it operates. No generic, one-size-fits-all tool can capture this nuance. By utilizing your own data, you position yourself ahead of potential threats, enabling informed decisions and safeguarding your assets. Join our webinar, "Clean Data, Better Detections: Using Your Business Data for AI/ML Detections," to learn how your distinct business data can be the linchpin to amplifying your AI/ML threat detection prowess. This webinar will equip you with the insights and tools necessary to harness your business data, leading to sharper, more efficient, and more potent threat detections.
The Vulnerability of Zero Trust: Lessons from the Storm 0558 Hack

Aug 18, 2023 Network Detection and Response
While IT security managers in companies and public administrations rely on the concept of Zero Trust, APTs (Advanced Persistent Threats) are putting its practical effectiveness to the test. Analysts, on the other hand, understand that Zero Trust can only be achieved with comprehensive insight into one's own network. Just recently, an attack believed to be perpetrated by the Chinese hacker group Storm-0558 targeted several government agencies. They used fake digital authentication tokens to access webmail accounts running on Microsoft's Outlook service. In this incident, the attackers stole a signing key from Microsoft, enabling them to issue functional access tokens for Outlook Web Access (OWA) and Outlook.com and to download emails and attachments. Due to a plausibility check error, the digital signature, which was only intended for private customer accounts (MSA), also worked in the Azure Active Directory for business customers. Embracing the Zero Trust Revolution…
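The failed plausibility check came down to token-validation scoping: a consumer signing key was honored for enterprise tokens. The sketch below shows the general shape of that check using PyJWT; the issuer URL, audience, key ID, and key store are hypothetical placeholders, and this is not Microsoft's actual validation logic.

```python
# Minimal sketch of the scoping check that failed in the Storm-0558 case:
# accept only tokens whose signing key ID and issuer are allowlisted for
# enterprise use. Issuer, audience, key IDs, and the key store below are
# hypothetical placeholders, not Microsoft's real validation code.
import jwt  # pip install pyjwt

ENTERPRISE_KEYS = {
    "enterprise-kid-001": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
}
ENTERPRISE_ISSUER = "https://login.example-enterprise.test/"
AUDIENCE = "https://outlook.example.test/"

def validate_enterprise_token(token: str) -> dict:
    header = jwt.get_unverified_header(token)
    kid = header.get("kid")
    if kid not in ENTERPRISE_KEYS:
        # A consumer (MSA) key -- or any unknown key -- must never be
        # accepted here, even if its signature is otherwise valid.
        raise PermissionError(f"signing key {kid!r} not authorized for enterprise tokens")
    return jwt.decode(
        token,
        key=ENTERPRISE_KEYS[kid],
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ENTERPRISE_ISSUER,  # reject tokens minted by the consumer issuer
    )
```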
Unveiling the Unseen: Identifying Data Exfiltration with Machine Learning

Jun 22, 2023 Network Security / Machine Learning
Why Is Data Exfiltration Detection Paramount? The world is witnessing an exponential rise in ransomware and data theft employed to extort companies. At the same time, the industry faces numerous critical vulnerabilities in database software and company websites. This evolution paints a dire picture of data exposure and exfiltration that every security leader and team is grappling with. This article highlights this challenge and expounds on the benefits that Machine Learning algorithms and Network Detection & Response (NDR) approaches bring to the table. Data exfiltration often serves as the final act of a cyberattack, making it the last window of opportunity to detect the breach before the data is made public or used for other sinister activities, such as espionage. However, data leakage isn't only an aftermath of cyberattacks; it can also be a consequence of human error. While prevention of data exfiltration through security controls is ideal, the escalating complexity…
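One common ML approach in this space is unsupervised anomaly detection over per-host outbound traffic statistics. Here is a toy sketch with scikit-learn's IsolationForest on synthetic flow features; the feature choices, numbers, and threshold are illustrative assumptions, not any vendor's model.

```python
# Toy sketch (no vendor's actual model): flag hosts whose outbound traffic
# profile is anomalous -- the shape of signal an exfiltration event leaves.
# Features and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Per-host features: [outbound bytes/hour, unique external IPs, after-hours uploads]
normal = np.column_stack([
    rng.normal(5e6, 1e6, 500),   # ~5 MB/hour of routine uploads
    rng.poisson(8, 500),         # a handful of external destinations
    rng.poisson(1, 500),         # rare after-hours activity
])
exfil = np.array([[8e8, 3, 40]])  # one host: huge upload, few IPs, at night

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

for host, features in (("workstation-17", normal[0]), ("db-backup-03", exfil[0])):
    score = model.decision_function([features])[0]   # lower = more anomalous
    verdict = "ANOMALOUS" if model.predict([features])[0] == -1 else "normal"
    print(f"{host}: score={score:.3f} -> {verdict}")
```

In practice an NDR pipeline would derive such features from flow records or packet metadata and tune the contamination rate to the environment's baseline.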
Italian Watchdog Bans OpenAI's ChatGPT Over Data Protection Concerns

Apr 03, 2023 Artificial Intelligence / Data Safety
The Italian data protection watchdog, the Garante per la Protezione dei Dati Personali (aka Garante), has imposed a temporary ban on OpenAI's ChatGPT service in the country, citing data protection concerns. To that end, it has ordered the company to stop processing users' data with immediate effect, stating it intends to investigate whether the company is unlawfully processing such data in violation of the E.U. General Data Protection Regulation (GDPR). "No information is provided to users and data subjects whose data are collected by Open AI," the Garante noted. "More importantly, there appears to be no legal basis underpinning the massive collection and processing of personal data in order to 'train' the algorithms on which the platform relies." ChatGPT, which is estimated to have reached over 100 million monthly active users since its release late last year, has not disclosed what it used to train its latest large language model…
Microsoft Introduces GPT-4 AI-Powered Security Copilot Tool to Empower Defenders

Mar 28, 2023 Artificial Intelligence / Cyber Threat
Microsoft on Tuesday unveiled Security Copilot in limited preview, marking its continued quest to embed AI-oriented features in an attempt to offer "end-to-end defense at machine speed and scale." Powered by OpenAI's GPT-4 generative AI and its own security-specific model, it's billed as a security analysis tool that enables cybersecurity analysts to quickly respond to threats, process signals, and assess risk exposure. To that end, it collates insights and data from various products like Microsoft Sentinel, Defender, and Intune to help security teams better understand their environment; determine if they are susceptible to known vulnerabilities and exploits; identify ongoing attacks, their scale, and receive remediation instructions; and summarize incidents. Users, for instance, can ask Security Copilot about suspicious user logins over a specific time period, or even employ it to create a PowerPoint presentation outlining an incident and its attack chain.