Vercel on Wednesday revealed that it has identified an additional set of customer accounts that were compromised as part of a security incident that enabled unauthorized access to its internal systems.

The company said it made the discovery after expanding its investigation to include an extra set of compromise indicators, alongside a review of requests to the Vercel network and environment variable read events in its logs.

"Second, we have uncovered a small number of customer accounts with evidence of prior compromise that is independent of and predates this incident, potentially as a result of social engineering, malware, or other methods," the company said in an update.

In both cases, Vercel said it notified affected parties. It did not disclose the exact number of customers who were impacted.

The development comes after the company, creator of the Next.js framework, acknowledged that the breach originated with the compromise of Context.ai, a tool used by a Vercel employee. This foothold enabled the attacker to seize control of the employee's Google Workspace account and then use it to gain access to their Vercel account.

"From there, they were able to pivot into a Vercel environment, and subsequently maneuvered through systems to enumerate and decrypt non-sensitive environment variables," Vercel noted.

Further investigation by Hudson Rock has revealed that a Context.ai employee's computer was infected with Lumma Stealer in February 2026 after they searched for Roblox auto-farm scripts and game exploit executors, suggesting that this infection may have been the "patient zero" that triggered the whole chain of malicious actions.

"We now understand that the threat actor has been active beyond that startup's [referring to Context.ai] compromise," Vercel CEO Guillermo Rauch said in an X post. "Threat intel points to the distribution of malware to computers in search of valuable tokens like keys to Vercel accounts and other providers."

It's unclear whether Vercel employees' use of the Context AI Office Suite was sanctioned or an instance of shadow AI, the unauthorized use of artificial intelligence (AI) tools within SaaS apps without formal IT review or vetting, which exposes organizations to unintended risks. Context.ai has since deprecated the AI Office Suite.

"OAuth integrations are useful because they reduce friction," Tanium said. "They're also dangerous because they can inherit trust from the user and the organization. When attackers abuse an approved integration, they may avoid some of the controls teams rely on for direct account compromise."

"What stands out operationally is less the volume of data exposed and more the attackers' velocity and ability to enumerate internal environments before detection. That changes the job for defenders. The challenge shifts from prevention to rapid scoping and blast-radius reduction."
