
Large Language Models | Breaking Cybersecurity News | The Hacker News

Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks

Feb 14, 2024 Artificial Intelligence / Cyber Attack
Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations. The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted efforts by five state-affiliated actors that used OpenAI's services for malicious cyber activity by terminating their assets and accounts.

"Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets' jobs, professional networks, and other relationships," Microsoft said in a report shared with The Hacker News.

While no significant or novel attacks employing the LLMs have been detected to date, adversarial exploration of AI technologies has transcended various phases of the attack...
How to Prevent ChatGPT From Stealing Your Content & Traffic

Aug 30, 2023 Artificial Intelligence / Cyber Threat
ChatGPT and similar large language models (LLMs) have added further complexity to the ever-growing online threat landscape. Cybercriminals no longer need advanced coding skills to execute fraud and other damaging attacks against online businesses and their customers, thanks to bots-as-a-service, residential proxies, CAPTCHA farms, and other easily accessible tools. Now, the latest technology damaging businesses' bottom line is ChatGPT.

Not only have ChatGPT and other LLMs raised ethical issues by training their models on data scraped from across the internet; they are also negatively impacting enterprises' web traffic, which can be extremely damaging to business.

3 Risks Presented by LLMs, ChatGPT, & ChatGPT Plugins

Among the threats ChatGPT and ChatGPT plugins can pose to online businesses, there are three key risks we will focus on: Content theft (or republishing data without permission from the original source) can hurt the authority, SEO rankings, and perceived...
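One common mitigation for the content-theft risk described above is refusing requests from known LLM crawlers based on their User-Agent. The sketch below is a minimal example assuming a Python WSGI stack; "GPTBot" (OpenAI) and "CCBot" (Common Crawl) are publicly documented crawler names, while the middleware shape itself is illustrative rather than a prescribed solution.

```python
# Minimal sketch: reject requests from known LLM/content-scraping crawlers
# by User-Agent. "GPTBot" and "CCBot" are publicly documented crawler names;
# the WSGI middleware wrapper is an illustrative pattern, not a specific product.

BLOCKED_CRAWLERS = ("GPTBot", "CCBot")

def is_blocked_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent matches a blocked LLM crawler."""
    ua = (user_agent or "").lower()
    return any(name.lower() in ua for name in BLOCKED_CRAWLERS)

def block_llm_crawlers(app):
    """Wrap a WSGI app so matching crawlers receive 403 instead of content."""
    def middleware(environ, start_response):
        if is_blocked_crawler(environ.get("HTTP_USER_AGENT", "")):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Automated LLM crawlers are not permitted."]
        return app(environ, start_response)
    return middleware
```

Note that User-Agent filtering (like robots.txt) only deters crawlers that identify themselves honestly; determined scrapers can spoof browser agents, so it is one layer among several rather than a complete answer.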
SaaS Compliance through the NIST Cybersecurity Framework

Feb 20, 2024 Cybersecurity Framework / SaaS Security
The US National Institute of Standards and Technology (NIST) cybersecurity framework is one of the world's most important guidelines for securing networks. It can be applied to any number of applications, including SaaS.

One of the challenges facing those tasked with securing SaaS applications is the different settings found in each application. That makes it difficult to develop a configuration policy that will apply to an HR app that manages employees, a marketing app that manages content, and an R&D app that manages software versions, all while aligning with NIST compliance standards. However, there are several settings that can be applied to nearly every app in the SaaS stack. In this article, we'll explore some universal configurations, explain why they are important, and guide you in setting them in a way that improves your SaaS apps' security posture.

Start with Admins

Role-based access control (RBAC) is key to NIST adherence and should be applied to every SaaS app...
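Because each SaaS app exposes roles differently, one practical way to apply an RBAC baseline consistently is to pull user/role data from each app and flag admin assignments that were never approved. The sketch below is a hedged illustration: the data shapes (app names, user/role pairs, approved-admin list) and the audit_admin_roles helper are hypothetical, standing in for whatever each platform's admin API actually returns.

```python
# Minimal sketch of checking an RBAC baseline across SaaS apps.
# The approved-admin mapping and user records are hypothetical placeholders
# for data a real SaaS admin API would provide.

APPROVED_ADMINS = {
    "hr_app": {"alice"},
    "marketing_app": {"dana"},
}

def audit_admin_roles(app_name: str, users: list[dict]) -> list[str]:
    """Flag accounts holding the admin role that are not on the approved list."""
    approved = APPROVED_ADMINS.get(app_name, set())
    return [
        f"{app_name}: '{u['name']}' holds admin but is not an approved admin"
        for u in users
        if u.get("role") == "admin" and u["name"] not in approved
    ]

if __name__ == "__main__":
    sample_users = [
        {"name": "alice", "role": "admin"},
        {"name": "bob", "role": "admin"},
        {"name": "carol", "role": "member"},
    ]
    for finding in audit_admin_roles("hr_app", sample_users):
        print(finding)
```

Running the same check against every app in the stack gives a single, repeatable control that maps cleanly onto the "universal configurations" idea the article describes.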