Critical Remote Code Execution Vulnerability Found in Ollama AI Infrastructure Tool

June 24, 2024

A critical security flaw has been discovered in Ollama, an open-source artificial intelligence (AI) infrastructure platform. The flaw, codenamed Probllama and tracked as CVE-2024-37032, was identified by cybersecurity researchers at Wiz and could be exploited to achieve remote code execution.

The issue was responsibly disclosed on May 5, 2024, and addressed in Ollama version 0.1.34, released on May 7, 2024. Ollama is a service for packaging, deploying, and running large language models (LLMs) locally on Windows, Linux, and macOS devices.

The vulnerability stems from insufficient input validation that leads to a path traversal flaw. An attacker could exploit this to overwrite arbitrary files on the server, leading to remote code execution. To successfully exploit this, the attacker would need to send specially crafted HTTP requests to the Ollama API server.
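The core of the fix is validating that a client-supplied identifier can never influence a filesystem path. A minimal sketch of that idea in Python (the function name, regex, and directory layout are illustrative assumptions, not Ollama's actual Go implementation) might look like:

```python
import re
from pathlib import Path

# Illustrative strict format for a blob digest: "sha256:" plus 64 hex chars.
DIGEST_RE = re.compile(r"^sha256:[a-f0-9]{64}$")

def safe_blob_path(models_dir: str, digest: str) -> Path:
    """Map a digest to a storage path, rejecting traversal payloads (sketch)."""
    if not DIGEST_RE.fullmatch(digest):
        raise ValueError(f"invalid digest: {digest!r}")
    # A digest matching the regex cannot contain "/" or "..", so the
    # join below cannot escape models_dir; the resolve() check is a
    # defense-in-depth backstop.
    path = (Path(models_dir) / digest.replace(":", "-")).resolve()
    if Path(models_dir).resolve() not in path.parents:
        raise ValueError("resolved path escapes the models directory")
    return path
```

With this check in place, a digest such as `sha256:../../etc/passwd` is rejected before any file is touched.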

The vulnerability lies in the API endpoint "/api/pull", which is used to download a model from the official registry or from a private repository. By supplying a malicious model manifest whose digest field contains a path traversal payload, an attacker can corrupt arbitrary files on the system. Worse, the attack can be escalated to remote code execution by overwriting the dynamic linker's configuration file (/etc/ld.so.preload) to register a rogue shared library that is loaded before any program executes.
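On Linux, one concrete escalation target is /etc/ld.so.preload, which lists shared libraries the dynamic linker injects into every new process; on most systems it is absent or empty. A simple defensive check for this indicator of compromise could look as follows (a sketch; the helper name is my own, not part of any official tooling):

```python
from pathlib import Path

def preloaded_libraries(preload_file: str = "/etc/ld.so.preload") -> list:
    """Return libraries forced into every process via the dynamic
    linker's preload file. Normally this file is missing or empty,
    so any entry is worth investigating (illustrative IoC check)."""
    p = Path(preload_file)
    if not p.is_file():
        return []
    return [line.strip() for line in p.read_text().splitlines() if line.strip()]
```

An unexpected entry here, such as a library under a model storage directory, would be a strong sign the traversal-to-RCE chain described above was used.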

The risk of remote code execution is significantly reduced in default Linux installations, where the API server binds to localhost. That is not the case with Docker deployments, where the API server is publicly exposed. "This issue is extremely severe in Docker installations, as the server runs with `root` privileges and listens on `0.0.0.0` by default – which enables remote exploitation of this vulnerability," said security researcher Sagi Tzadik.
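Whether a given deployment is exposed can be checked with a quick probe of the unauthenticated /api/tags endpoint, which lists the models a server hosts. The helper below and its name are illustrative assumptions, not official tooling:

```python
import json
import urllib.request

def ollama_api_reachable(host: str, port: int = 11434, timeout: float = 3.0) -> bool:
    """Return True if an Ollama-style API answers at host:port (sketch).

    A True result for a non-local address means the API is reachable
    remotely and, absent the 0.1.34 patch, potentially vulnerable.
    """
    url = f"http://{host}:{port}/api/tags"  # unauthenticated model listing
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            json.load(resp)  # a valid JSON body suggests a live API
        return True
    except Exception:
        return False
```

Running this against a server's public IP (11434 is Ollama's default port) gives a fast yes/no on exposure.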

The situation is further complicated by Ollama's inherent lack of authentication, which allows an attacker to exploit a publicly accessible server to steal or tamper with AI models and compromise self-hosted AI inference servers. Such services therefore need to be secured behind middleware such as a reverse proxy with authentication. Wiz discovered more than 1,000 unprotected Ollama instances hosting numerous AI models.
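One way to add the missing authentication layer is to keep Ollama bound to localhost and front it with a token-checking reverse proxy. Below is a minimal, GET-only sketch; the token, port, and names are assumptions for illustration, and a production deployment would more typically use nginx or Caddy with TLS:

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

UPSTREAM = "http://127.0.0.1:11434"  # Ollama kept on localhost
API_TOKEN = "change-me"              # hypothetical shared secret

def is_authorized(auth_header) -> bool:
    """Accept only requests carrying the expected bearer token."""
    return auth_header == f"Bearer {API_TOKEN}"

class AuthProxy(BaseHTTPRequestHandler):
    """Forward authenticated GET requests to the local Ollama API."""
    def do_GET(self):
        if not is_authorized(self.headers.get("Authorization")):
            self.send_response(401)
            self.end_headers()
            return
        with urllib.request.urlopen(UPSTREAM + self.path) as resp:
            body = resp.read()
            ctype = resp.headers.get("Content-Type", "application/json")
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.end_headers()
        self.wfile.write(body)

def serve(bind: str = "0.0.0.0", port: int = 8080) -> None:
    """Start the proxy; blocks forever."""
    ThreadingHTTPServer((bind, port), AuthProxy).serve_forever()
```

Only the proxy is exposed publicly; clients must send an `Authorization: Bearer <token>` header or receive a 401.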

"CVE-2024-37032 is an easy-to-exploit remote code execution that affects modern AI infrastructure," Tzadik stated. "Despite the codebase being relatively new and written in modern programming languages, classic vulnerabilities such as Path Traversal remain an issue."

This development follows a warning from AI security company Protect AI about over 60 security defects affecting various open-source AI/ML tools, including critical issues leading to information disclosure, access to restricted resources, privilege escalation, and complete system takeover. The most severe of these vulnerabilities is CVE-2024-22476 (CVSS score 10.0), an SQL injection flaw in Intel Neural Compressor software that could allow attackers to download arbitrary files from the host system. This was addressed in version 2.5.0.
