
Search results for chatgpt risk example | Breaking Cybersecurity News | The Hacker News

Offensive and Defensive AI: Let’s Chat(GPT) About It

Nov 07, 2023 Artificial Intelligence / Data Security
ChatGPT: Productivity tool, great for writing poems, and… a security risk?! In this article, we show how threat actors can exploit ChatGPT, but also how defenders can use it to level up their game. ChatGPT is the most swiftly growing consumer application to date. The extremely popular generative AI chatbot can generate human-like, coherent, and contextually relevant responses, which makes it very valuable for applications like content creation, coding, education, customer support, and even personal assistance. However, ChatGPT also comes with security risks: it can be used for data exfiltration, spreading misinformation, developing cyber attacks, and writing phishing emails. On the flip side, it can help defenders identify vulnerabilities and learn about various defenses. In this article, we show numerous ways attackers can exploit ChatGPT and the OpenAI Playground. Just as importantly, we show ways that defenders can leverage ChatGPT t...
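As a quick, hedged illustration of the defensive use case described above (this sketch is not taken from the article), the following Python snippet asks an OpenAI model to review a code sample for likely vulnerabilities; the model name and prompt wording are assumptions.

# Hedged sketch: using the OpenAI Python SDK (v1+) to ask a model to flag likely
# vulnerabilities in a code snippet. The model name and prompts are assumptions,
# not details from the article. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SNIPPET = """
def login(db, user, password):
    query = "SELECT * FROM users WHERE name='" + user + "' AND pw='" + password + "'"
    return db.execute(query)
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List likely vulnerabilities and suggested fixes."},
        {"role": "user", "content": SNIPPET},
    ],
)

print(response.choices[0].message.content)

A reviewer would expect the model to flag the string-concatenated SQL query as an injection risk; the same pattern can be used to review configurations or explain unfamiliar attack techniques.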
How Generative AI Can Dupe SaaS Authentication Protocols — And Effective Ways To Prevent Other Key AI Risks in SaaS

Jun 26, 2023 SaaS Security / Artificial Intelligence
Security and IT teams are routinely forced to adopt software before fully understanding the security risks. And AI tools are no exception. Employees and business leaders alike are flocking to generative AI software and similar programs, often unaware of the major SaaS security vulnerabilities they're introducing into the enterprise. A February 2023 generative AI survey of 1,000 executives revealed that 49% of respondents use ChatGPT now, and 30% plan to tap into the ubiquitous generative AI tool soon. Ninety-nine percent of those using ChatGPT claimed some form of cost-savings, and 25% attested to reducing expenses by $75,000 or more. As the researchers conducted this survey a mere three months after ChatGPT's general availability, today's ChatGPT and AI tool usage is undoubtedly higher. Security and risk teams are already overwhelmed protecting their SaaS estate (which has now become the operating system of business) from common vulnerabilities such as misconfigura...
How to Prevent ChatGPT From Stealing Your Content & Traffic

Aug 30, 2023 Artificial Intelligence / Cyber Threat
ChatGPT and similar large language models (LLMs) have added further complexity to the ever-growing online threat landscape. Cybercriminals no longer need advanced coding skills to execute fraud and other damaging attacks against online businesses and customers, thanks to bots-as-a-service, residential proxies, CAPTCHA farms, and other easily accessible tools. Now, the latest technology damaging businesses' bottom line is ChatGPT. Not only have ChatGPT, OpenAI, and other LLMs raised ethical issues by training their models on scraped data from across the internet; they are also negatively impacting enterprises' web traffic, which can be extremely damaging to business. 3 Risks Presented by LLMs, ChatGPT, & ChatGPT Plugins Among the threats ChatGPT and ChatGPT plugins can pose to online businesses, there are three key risks we will focus on: Content theft (or republishing data without permission from the original source) can hurt the authority,...
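As a rough sketch of one mitigation in this space (not a method described in the article), the snippet below rejects requests from known LLM crawler user agents at the application layer; Flask is used only for illustration, and the bot list is a partial assumption that should be verified against each crawler's documentation.

# Hedged sketch: refusing to serve content to known LLM crawler user agents.
# Flask is used only for illustration; the bot list is a partial assumption.
from flask import Flask, request, abort

app = Flask(__name__)

# GPTBot is OpenAI's documented crawler; the other entries are common examples.
BLOCKED_BOTS = ("GPTBot", "CCBot", "ClaudeBot")

@app.before_request
def block_llm_crawlers():
    user_agent = request.headers.get("User-Agent", "")
    if any(bot in user_agent for bot in BLOCKED_BOTS):
        abort(403)  # deny these crawlers access to site content

@app.route("/")
def index():
    return "Regular visitors are served normally."

The same intent can be declared in robots.txt (for example, a User-agent: GPTBot entry with Disallow: /), though that approach relies on the crawler choosing to honor it.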
Hands on Review: LayerX's Enterprise Browser Security Extension

Nov 01, 2023 Browser Security / Cybersecurity
The browser has become the main work interface in modern enterprises. It's where employees create and interact with data, and how they access organizational and external SaaS and web apps. As a result, the browser is extensively targeted by adversaries. They seek to steal the data it stores and use it for malicious access to organizational SaaS apps or the hosting machine. Additionally, unintentional data leakage via the browser has become a critical concern for organizations as well. However, traditional endpoint, network, and data protection solutions fail to protect this critical resource against advanced web-borne attacks that continuously rise in sophistication and volume. This gap leaves organizations exposed to phishing attacks, malicious browser extensions, data exposure, and data loss. This is the challenge LayerX is attempting to solve. LayerX has developed a secure enterprise browser extension that can be mounted on any browser. The LayerX extension deli...
Discover Hidden Browsing Threats: Free Risk Assessment for GenAI, Identity, Web, and SaaS Risks

Jan 22, 2025 Risk Assessment / Browser Security
As GenAI tools and SaaS platforms become a staple component of the employee toolkit, the risks associated with data exposure, identity vulnerabilities, and unmonitored browsing behavior have skyrocketed. Forward-thinking security teams are looking for security controls and strategies to address these risks, but they do not always know which risks to prioritize. In some cases, they may have blind spots and not even be aware that certain risks exist. To help, a new complimentary risk assessment is now available. The assessment will be customized for each organization's browsing environment, evaluating its risk and providing actionable insights. Security and IT teams can leverage the assessment to strengthen their security posture, inform their decision-making, evangelize across the organization, and plan next steps. The assessment results in a report that includes a high-level overview of key risks, including insecure use of GenAI, sensitive data leakage risks through the browser, SaaS app usage, ...
5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage

Oct 01, 2024 Generative AI / Data Protection
Since its emergence, Generative AI has revolutionized enterprise productivity. GenAI tools enable faster and more effective software development, financial analysis, business planning, and customer engagement. However, this business agility comes with significant risks, particularly the potential for sensitive data leakage. As organizations attempt to balance productivity gains with security concerns, many have been forced to choose between unrestricted GenAI usage and banning it altogether. A new e-guide by LayerX titled 5 Actionable Measures to Prevent Data Leakage Through Generative AI Tools is designed to help organizations navigate the challenges of GenAI usage in the workplace. The guide offers practical steps for security managers to protect sensitive corporate data while still reaping the productivity benefits of GenAI tools like ChatGPT. This approach is intended to allow companies to strike the right balance between innovation and security. Why Worry About ChatGPT? The e...
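As a minimal, hedged sketch of the kind of guardrail the guide discusses (this is not LayerX's implementation), the Python snippet below flags obviously sensitive strings before text is submitted to a GenAI tool; the patterns are simplistic stand-ins for a real DLP policy.

# Hedged sketch: a regex-based pre-submission check for obviously sensitive strings.
# The patterns are illustrative assumptions, far simpler than a production DLP policy.
import re

PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this thread from jane.doe@example.com using key AKIAABCDEFGHIJKLMNOP"
hits = flag_sensitive(prompt)
if hits:
    print("Blocked submission; found:", ", ".join(hits))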
"I Had a Dream" and Generative AI Jailbreaks

"I Had a Dream" and Generative AI Jailbreaks

Oct 09, 2023 Artificial Intelligence
"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS," this is a message from ChatGPT followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by  Moonlock Lab , the screenshots of ChatGPT writing code for a keylogger malware is yet another example of trivial ways to hack large language models and exploit them against their policy of use. In the case of Moonlock Lab, their malware research engineer told ChatGPT about a dream where an attacker was writing code. In the dream, he could only see the three words: "MyHotKeyHandler," "Keylogger," and "macOS." The engineer asked ChatGPT to completely recreate the malicious code and help him stop the attack. After a brief conversation, the AI finally provided the answer. "At times, the code generated isn...
Product Walkthrough: How Reco Discovers Shadow AI in SaaS

Jan 09, 2025 AI Security / SaaS Security
As SaaS providers race to integrate AI into their product offerings to stay competitive and relevant, a new challenge has emerged in the world of AI: shadow AI. Shadow AI refers to the unauthorized use of AI tools and copilots at organizations. For example, a developer using ChatGPT to assist with writing code, a salesperson downloading an AI-powered meeting transcription tool, or a customer support person using Agentic AI to automate tasks – without going through the proper channels. When these tools are used without IT or the Security team's knowledge, they often lack sufficient security controls, putting company data at risk. Shadow AI Detection Challenges Because shadow AI tools often embed themselves in approved business applications via AI assistants, copilots, and agents, they are even trickier to discover than traditional shadow IT. While traditional shadow apps can be identified through network monitoring methodologies that scan for unauthorized connections based on...
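As an illustrative sketch of the traditional network-monitoring approach mentioned above (this is not how Reco's product works), the snippet below counts, per user, requests to domains associated with popular GenAI tools in an exported proxy or DNS log; the column names and domain list are assumptions.

# Hedged sketch: surfacing possible shadow AI usage from an exported proxy/DNS log.
# Assumes a CSV with 'user' and 'domain' columns; the domain list is a partial assumption.
import csv
from collections import Counter

AI_DOMAINS = ("openai.com", "chatgpt.com", "claude.ai", "gemini.google.com")

def count_ai_destinations(log_path: str) -> Counter:
    """Count requests per user that target known GenAI domains."""
    hits = Counter()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if any(row["domain"].endswith(domain) for domain in AI_DOMAINS):
                hits[row["user"]] += 1
    return hits

# Example usage: print(count_ai_destinations("proxy_log.csv").most_common(10))

As the article notes, this kind of scan catches direct connections but misses AI features embedded inside already-approved SaaS apps, which is where shadow AI detection gets harder.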