Prompt Injection Vulnerability in Vanna AI Library Poses Risk of Remote Code Execution Attacks
June 27, 2024
Cybersecurity researchers have uncovered a significant security vulnerability in the Vanna.AI library. This flaw could be manipulated to achieve remote code execution by means of prompt injection techniques. The vulnerability, known as CVE-2024-5565 and rated with a CVSS score of 8.1, involves a prompt injection scenario in the 'ask' function that could be used to deceive the library into running arbitrary commands, according to supply chain security firm JFrog.
Vanna is a machine learning library based on Python that enables users to interact with their SQL database and extract insights by simply asking questions or prompts that are then converted into a corresponding SQL query using a large language model (LLM). The accelerated deployment of generative artificial intelligence (AI) models in recent years has highlighted the potential risks of exploitation by malicious actors, who could turn these tools into weapons by providing adversarial inputs that circumvent the built-in safety measures.
A notable category of such attacks is prompt injection, a form of AI jailbreak that can be used to bypass guardrails set up by LLM providers to prevent the production of offensive, harmful, or illegal content, or to execute instructions that contravene the application's intended purpose. These attacks can be indirect, where a system processes data controlled by a third party (e.g., incoming emails or editable documents) to launch a harmful payload that results in an AI jailbreak. They can also take the form of a many-shot jailbreak or multi-turn jailbreak (also known as Crescendo) where the operator initiates a harmless dialogue and gradually directs the conversation towards the intended, prohibited objective.
This method can be further extended to carry out another novel jailbreak attack known as Skeleton Key. 'This AI jailbreak technique works by using a multi-turn (or multiple step) strategy to cause a model to ignore its guardrails,' said Mark Russinovich, chief technology officer of Microsoft Azure. 'Once guardrails are ignored, a model will not be able to determine malicious or unsanctioned requests from any other.' Skeleton Key differs from Crescendo in that once the jailbreak is successful and the system rules are changed, the model can generate responses to questions that would otherwise be prohibited regardless of the ethical and safety risks involved.
'When the Skeleton Key jailbreak is successful, a model acknowledges that it has updated its guidelines and will subsequently comply with instructions to produce any content, no matter how much it violates its original responsible AI guidelines,' Russinovich added. 'Unlike other jailbreaks like Crescendo, where models must be asked about tasks indirectly or with encodings, Skeleton Key puts the models in a mode where a user can directly request tasks. Further, the model's output appears to be completely unfiltered and reveals the extent of a model's knowledge or ability to produce the requested content.'
The recent findings from JFrog, also independently disclosed by Tong Liu, demonstrate how prompt injections could have serious consequences, especially when linked to command execution. CVE-2024-5565 exploits the fact that Vanna enables text-to-SQL Generation to create SQL queries, which are then executed and graphically presented to the users using the Plotly graphing library. This is achieved through an 'ask' function – for instance, vn.ask('What are the top 10 customers by sales?') – which is one of the primary API endpoints that facilitates the generation of SQL queries to be run on the database.
This behavior, combined with the dynamic generation of the Plotly code, creates a security gap that allows a threat actor to submit a specially crafted prompt embedding a command to be executed on the underlying system. 'The Vanna library uses a prompt function to present the user with visualized results, it is possible to alter the prompt using prompt injection and run arbitrary Python code instead of the intended visualization code,' JFrog stated. 'Specifically, allowing external input to the library's 'ask' method with 'visualize' set to True (default behavior) leads to remote code execution.'
Following responsible disclosure, Vanna has released a hardening guide that alerts users that the Plotly integration could be used to generate arbitrary Python code and that users exposing this function should do so in a sandboxed environment. 'This discovery demonstrates that the risks of widespread use of GenAI/LLMs without proper governance and security can have drastic implications for organizations,' said Shachar Menashe, senior director of security research at JFrog. 'The dangers of prompt injection are still not widely well known, but they are easy to execute. Companies should not rely on pre-prompting as an infallible defense mechanism and should employ more robust mechanisms when interfacing LLMs with critical resources such as databases or dynamic code generation.'
Latest News
- Critical SQL Injection Vulnerability in Fortra FileCatalyst Workflow Exposed
- Apple Fixes AirPods Bluetooth Security Flaw Allowing Unauthorized Access
- Major Supply Chain Attack Impacts Over 110,000 Websites Through Hijacked Polyfill Service
- Freshly Revealed MOVEit Vulnerability Exploited Within Hours
- CISA Confirms Data Breach in Chemical Security Assessment Tool: Potential Exposure of Sensitive Information
Like what you see?
Get a digest of headlines, vulnerabilities, risk context, and more delivered to your inbox.