Meta's Llama Framework Flaw Exposes AI Systems to Remote Code Execution Risks
Jan 26, 2025
AI Security / Vulnerability
A high-severity security flaw has been disclosed in Meta's Llama large language model (LLM) framework that, if successfully exploited, could allow an attacker to execute arbitrary code on the llama-stack inference server. The vulnerability, tracked as CVE-2024-50050, has been assigned a CVSS score of 6.3 out of 10.0, although supply chain security firm Snyk has rated it a critical 9.3.

"Affected versions of meta-llama are vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data that is deserialized," Oligo Security researcher Avi Lumelsky said in an analysis earlier this week.

The shortcoming, per the cloud security company, resides in a component called Llama Stack, which defines a set of API interfaces for artificial intelligence (AI) application development, including using Meta's own Llama models. Specifically, it has to do with a remote code execution ...
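To illustrate the vulnerability class at play here — deserialization of untrusted data — the following is a minimal, self-contained Python sketch, not Meta's actual code. Python's pickle format lets an object specify, via `__reduce__`, a callable to invoke at load time, so any server that calls `pickle.loads` (directly or through a wrapper such as pyzmq's `recv_pyobj`) on attacker-supplied bytes will execute attacker-chosen code. The `Payload` class and message below are hypothetical, chosen to be harmless:

```python
import pickle

class Payload:
    """A crafted object whose deserialization triggers a callable."""
    def __reduce__(self):
        # A real attacker would return something like (os.system, ("...",)).
        # We return the harmless builtin print to show code runs at load time.
        return (print, ("arbitrary code executed during pickle.loads",))

# Attacker side: serialize the malicious object into bytes.
malicious_bytes = pickle.dumps(Payload())

# Vulnerable server side: blindly deserializing untrusted input
# invokes the embedded callable before any application logic runs.
pickle.loads(malicious_bytes)
```

Running this prints the message during `pickle.loads` itself, which is why deserializing untrusted input with pickle is equivalent to handing the sender code execution; the standard mitigation is to use a data-only format such as JSON for anything that crosses a trust boundary.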