Cybersecurity researchers discovered six security flaws in Ollama’s artificial intelligence (AI) framework that could be exploited by an attacker to perform a variety of actions, including denial of service, model poisoning, and model theft.
“Combined, these vulnerabilities could allow an attacker to perform a wide variety of malicious activities with a single HTTP request, including Denial of Service (DoS) attacks, model poisoning, model theft, and more,” Avi Lumelsky, a researcher at Oligo Security, said in a report published last week.
Ollama is an open-source application that allows users to locally deploy and manage large language models (LLMs) on Windows, Linux, and macOS devices. Its project repository on GitHub has been forked 7,600 times to date.
A brief description of the six vulnerabilities is given below –
- CVE-2024-39719 (CVSS Score: 7.5) – Vulnerability that an attacker could exploit using the /api/create endpoint to determine the existence of a file on the server (Fixed in version 0.1.47)
- CVE-2024-39720 (CVSS Score: 8.2) – Out-of-bounds read vulnerability that could crash the application via the /api/create endpoint, leading to a DoS condition (Fixed in version 0.1.46)
- CVE-2024-39721 (CVSS Score: 7.5) – Vulnerability causing resource exhaustion and ultimately a DoS condition when the /api/create endpoint is called repeatedly with the file “/dev/random” as input (Fixed in version 0.1.34)
- CVE-2024-39722 (CVSS Score: 7.5) – Path traversal vulnerability in the /api/push endpoint that exposes the files existing on the server and the entire directory structure on which Ollama is deployed (Fixed in version 0.1.46)
- Vulnerability that could lead to model poisoning via the /api/pull endpoint from an untrusted source (no CVE ID, not fixed)
- Vulnerability that could lead to model theft via the /api/push endpoint to an untrusted target (no CVE ID, not fixed)
For both unresolved vulnerabilities, the Ollama developers recommended that users filter which endpoints are exposed to the Internet by means of a proxy server or a web application firewall.
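The recommended mitigation amounts to deny-by-default request filtering in front of the Ollama API. Below is a minimal sketch of that idea in Python; the specific allowlist is a hypothetical example (an operator would choose which routes to expose), while the blocked paths /api/create, /api/pull, and /api/push are the endpoints named in the report.

```python
# Deny-by-default allowlist for routes a proxy would forward to Ollama.
# Which read-only routes to expose is an operator decision; this set is
# illustrative only.
ALLOWED_ROUTES = {"/api/generate", "/api/chat", "/api/tags"}


def is_request_allowed(path: str) -> bool:
    """Return True only for explicitly allowlisted routes.

    State-changing endpoints such as /api/create, /api/pull, and
    /api/push stay blocked unless deliberately added to the set.
    """
    # Normalize trailing slashes so "/api/chat/" matches "/api/chat".
    normalized = path.rstrip("/") or "/"
    return normalized in ALLOWED_ROUTES
```

In a real deployment the same policy would typically live in the proxy or WAF configuration rather than in application code, but the logic is identical: anything not explicitly permitted is rejected before it reaches the Ollama port.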
“This means that not all endpoints should be open by default,” Lumelsky said. “This is a dangerous assumption. Not everyone is aware of this or filters HTTP routing to Ollama. These endpoints are currently available through the standard Ollama port as part of every deployment, without any separation or documentation to back it up.”
Oligo said it found 9,831 unique internet-facing instances running Ollama, most of them located in China, the U.S., Germany, South Korea, Taiwan, France, the U.K., India, Singapore, and Hong Kong. One in four of those servers was found to be vulnerable to the identified flaws.
This development comes more than four months after cloud security company Wiz disclosed a serious flaw affecting Ollama (CVE-2024-37032), which could be used for remote code execution.
“Exposing Ollama to the Internet without authorization is equivalent to exposing the Docker socket to the public Internet, because it can upload files and has model pull and push capabilities (which can be abused by attackers),” Lumelsky noted.