Unknown threat actors have been observed weaponizing v0, a generative artificial intelligence (AI) tool from Vercel, to design fake sign-in pages that impersonate their legitimate counterparts.
"This observation signals a new evolution in the weaponization of Generative AI by threat actors who have demonstrated an ability to generate a functional phishing site from simple text prompts," Okta Threat Intelligence researchers Houssem Eddine Bordjiba and Paula De la Hoz said.
v0 is an AI-powered offering from Vercel that allows users to create basic landing pages and full-stack apps using natural language prompts.
The identity services provider said it has observed scammers using the technology to develop convincing replicas of login pages associated with multiple brands, including an unnamed customer of its own. Following responsible disclosure, Vercel has blocked access to these phishing sites.
The threat actors behind the campaign have also been found to host other resources such as the impersonated company logos on Vercel's infrastructure, likely in an effort to abuse the trust associated with the developer platform and evade detection.
Unlike traditional phishing kits, which require some amount of effort to set up, tools like v0 — and its open-source clones on GitHub — allow attackers to spin up fake pages simply by typing a prompt. It's faster, easier, and doesn't require coding skills, making it simple for even low-skilled threat actors to build convincing phishing sites at scale.
"The observed activity confirms that today's threat actors are actively experimenting with and weaponizing leading GenAI tools to streamline and enhance their phishing capabilities," the researchers said.
"The use of a platform like Vercel's v0.dev allows emerging threat actors to rapidly produce high-quality, deceptive phishing pages, increasing the speed and scale of their operations."
The development comes as bad actors continue to leverage large language models (LLMs) to aid in their criminal activities, building uncensored versions of these models that are explicitly designed for illicit purposes. One such LLM that has gained popularity in the cybercrime landscape is WhiteRabbitNeo, which advertises itself as an "Uncensored AI model for (Dev) SecOps teams."
"Cybercriminals are increasingly gravitating towards uncensored LLMs, cybercriminal-designed LLMs, and jailbreaking legitimate LLMs," Cisco Talos researcher Jaeson Schultz said.
"Uncensored LLMs are unaligned models that operate without the constraints of guardrails. These systems happily generate sensitive, controversial, or potentially harmful output in response to user prompts. As a result, uncensored LLMs are perfectly suited for cybercriminal usage."
This fits a broader shift: phishing is being powered by AI in more ways than before. Fake emails, cloned voices, and even deepfake videos are showing up in social engineering attacks. These tools help attackers scale up fast, turning small scams into large, automated campaigns. It's no longer just about tricking individual users; it's about building entire systems of deception.