AI-powered browsers are changing how we use the web, but they're also creating some serious new security risks. Tools like Perplexity's Comet and Opera's Neon can summarize pages and automate tasks for you. The problem is that researchers have found these agentic copilots can be hijacked by malicious prompts hidden in ordinary webpages, essentially turning your browser against you.
In August 2025, Brave's security team disclosed an indirect prompt injection against Perplexity's Comet using hidden instructions in a Reddit spoiler tag, leading Comet to extract an email address and a one-time passcode. No memory corruption, no code execution exploit. The browser simply followed instructions it couldn't distinguish from legitimate user intent.
In this post, we'll look at how these attacks work, why they slip past traditional defenses, and what security teams can do to keep data safe from compromised AI agents.
AI Browsers: Powerful, But a New Target
AI-enabled browsers like Comet are classified as "agentic browsers" because they take actions on behalf of users: booking meetings, filling forms, summarizing pages, and navigating between sites. The AI operates with full access to the user's browsing context, including authenticated sessions on any site where the user is logged in. This is where the risk comes in. If an attacker can sneak commands into the content the AI processes, they can effectively take control of your browsing session.
What makes these attacks different from traditional hacking is that there's no malware or code exploits involved. Attackers exploit the fact that AI can't distinguish between your instructions and someone else's. They hide malicious commands in places you'd never notice: white text on a white background, buried in HTML comments, tucked inside collapsed sections, or encoded invisibly in images. When the AI processes the page, it treats these hidden instructions as legitimate requests.
Hijacked Agents: How Prompt Injection Works
To understand just how these hidden commands lead to full account compromise, consider the attack chain Brave's team demonstrated against Comet, which follows four stages:
1. Payload Placement
The attacker embeds hidden instructions in web content using invisible CSS styling (white text on white backgrounds, zero-height divs) or injects them into user-generated content on legitimate platforms like forum comments or social media posts. In Brave's demonstration, researchers embedded a malicious prompt inside a Reddit post's spoiler tag, invisible to users browsing normally.
2. User Trigger
The victim navigates to the compromised page and invokes the browser's AI feature, typically by clicking "summarize this page" or asking a question. The user has no indication that malicious instructions are present. In the Brave proof-of-concept, the victim simply asked Comet to summarize the Reddit thread.
3. Instruction Confusion
The AI processes the page and encounters the hidden prompt. The model lacks context separation and cannot differentiate between visible content and concealed attack instructions. All input tokens are processed equivalently, so Comet interpreted the hidden spoiler tag content as legitimate instructions.
4. Malicious Execution
The AI then executes the injected commands using the victim's authenticated sessions. In the Brave demonstration, Comet navigated to Perplexity account settings to extract the user's email, triggered an OTP email, opened Gmail to read the code, then posted both values to Reddit for attacker retrieval. The entire chain completed in seconds with zero user awareness.
Other Delivery Mechanisms
Beyond hidden text in web pages, Brave's research revealed multiple entry points: text concealed within images using colors invisible to humans but readable by OCR, and malicious commands embedded in URL parameters that sit dormant until the AI processes them. In each case, the user never sees the payload, and exfiltration happens silently.
Why Traditional Defenses Fail
Browsers already have protections against cross-site attacks, but those defenses were designed for a different threat model. Same-Origin Policy and sandboxing stop one site from accessing another's data, but when an AI agent is controlling the browser, those protections stop working.
The AI operates with the user's complete privilege set across all authenticated sessions. In the Reddit attack demonstration, instructions originating from reddit.com caused the AI to access perplexity.ai, gmail.com, and potentially any other site. Standard security controls fail for the following reasons:
| Security Control | Failure Mode |
| Same-Origin Policy (SOP) | AI operates as the user, not as a script from an untrusted origin. Cross-origin restrictions do not apply to user-initiated navigation. |
| Cross-Origin Resource Sharing (CORS) | CORS governs script-initiated requests (like XMLHttpRequest), not user-level navigation. AI agents perform the equivalent of clicking links and browsing as the user, inheriting their authenticated privileges across all domains. |
| CSRF Tokens | AI makes legitimate requests with valid session cookies. From the server perspective, these are authentic user actions. |
| Content Security Policy (CSP) | Designed to control resource loading (scripts, images, stylesheets), not to filter text content that AI agents read. Cannot block natural language commands embedded in page content. |
The underlying problem is architectural. Language models combine trusted instructions (system prompts, user queries) with untrusted content (web pages, documents) into a single input stream. The model processes everything the same way, with no easy method to distinguish between the two.
The Role of Dynamic SaaS Security Platforms
Given this limitation, organizations need to shift their focus. If you can't reliably prevent the attack at its source, the next best strategy is controlling what a compromised AI agent can access in the first place.
AI copilots in browsers and SaaS applications function as non-human identities with user-level privileges. Whether browser extensions, embedded features, or third-party integrations, they access data like any user, which means security teams should apply the same governance frameworks used for service accounts and API tokens.
Dynamic SaaS security platforms provide this governance through four capabilities:
Unified Visibility of All Identities
You can't secure what you can't see. Dynamic SaaS security platforms automatically discover AI browser extensions, embedded copilots (GitHub Copilot, Microsoft 365 Copilot, Notion AI), OAuth-connected AI tools, and shadow AI usage outside IT-sanctioned channels. The output maps each AI identity to its access permissions and the sensitive data it can reach.
Least Privilege Enforcement
Once you have visibility, the next step is reducing exposure. Dynamic SaaS security platforms analyze AI tool permissions against actual usage patterns to flag over-privileged access. Detection triggers include AI extensions requesting broad OAuth scopes, assistants with admin-level API rights, or tools accessing data outside their functional requirements. Enforcement options include automatic scope restriction, browser policy controls blocking AI access to high-risk sites, and step-up authentication for sensitive systems.
Continuous Anomaly Monitoring
Even with tight permissions, compromised AI agents can still cause damage. Because they use human credentials, they blend into legitimate traffic. Dynamic SaaS security platforms baseline normal behavior for each AI tool (connection patterns, access frequency, data volume) and alert on deviations: anomalous cross-domain navigation, bulk data exports, encoded transmissions to external endpoints, OAuth token anomalies, and permission escalation attempts.
Automated Remediation and Response
When an anomaly is detected, speed matters. Dynamic SaaS security platforms integrate with SaaS applications to automate containment: token revocation, integration disablement, and account quarantine. Ongoing hygiene includes identifying stale OAuth grants, flagging excess permissions, and enforcing credential rotation policies.
Conclusion
AI browsers provide real productivity benefits, but they bypass decades of web security engineering. When a browser's AI can authenticate to accounts, read sensitive data, and execute commands across domains, prompt injection becomes a direct path to organizational data.
![]() |
| Figure 1: Reco's Shadow AI Discovery dashboard |
Security teams should treat AI assistants as privileged agents requiring oversight. Reco provides the controls: discovering AI identities across your SaaS environment, enforcing least privilege, monitoring for anomalous behavior, and automating response.
Book a demo or start a free trial at reco.ai today.
About the Author: Gal is the Cofounder & CPO of Reco. Gal is a former Lieutenant Colonel in the Israeli Prime Minister's Office. He is a tech enthusiast, with a background of Security Researcher and Hacker. Gal has led teams in multiple cybersecurity areas with an expertise in the human element.
Gal Nakash — CPO and Cofounder at Reco https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuQWbnqXQszfF7Ro1ckEfpJAt4R_6RI4pi_EParenaMvBTPNTZ5vs91QXTU7w_7mZukKntRojMFYpgQRTBFYFTFRnP9zaj8KrlfFrkG8Rwo_GjkEFsNt4pbGhmI2aoJHB-ENuTVLOKGQUDy_hxD3Fiy4dSlhRlnZA5jyqfkyKbUpdUx6ZCD8op9n6uo90/s728-rw-e365/Gal.png



