PandasAI Vulnerability Allows Full System Compromise Through Prompt Injection
February 12, 2025
A recently identified security flaw in the open-source AI-based data analysis library, PandasAI, developed by SinaptikAI, has been found to expose users to potential remote code execution (RCE) through prompt injection attacks. Tracked as CVE-2024-12366, the vulnerability offers an opportunity for attackers to modify natural language prompts and execute arbitrary Python code, which could lead to a complete system compromise. CERT/CC has highlighted that “an attacker with access to the chat prompt can craft malicious input that is interpreted as code, potentially achieving arbitrary code execution.” To counter this, SinaptikAI has rolled out new security configurations to reduce the risk.
PandasAI's purpose is to allow users to inquire and examine data using natural language. It transforms user queries into Python or SQL code, employing a large language model (LLM) like OpenAI’s GPT to generate results. However, this method implicitly trusts the code generated by the AI, which opens a window for prompt injection attacks. Security experts from NVIDIA AI Red Team have shown that the security measures in PandasAI version 2.4.3 and prior were not adequate to prevent prompt injection. Their findings revealed that attackers could manipulate the system to execute untrusted code.
CERT/CC explained, “This vulnerability arises from the fundamental challenge of maintaining a clear separation between code and data in AI chatbots and agents.” Since PandasAI regards AI-produced code as trusted, attackers can insert malicious Python code within prompts, leading to arbitrary code execution. CERT/CC warned, “The security controls of PandasAI (2.4.3 and earlier) fail to distinguish between legitimate and malicious inputs, allowing attackers to manipulate the system into executing untrusted code.” This puts organizations using PandasAI in sensitive environments at a particular risk.
The NVIDIA AI Red Team, consisting of Joe Lucas, Becca Lynch, Rich Harang, John Irwin, and Kai Greshake, were credited by CERT/CC for discovering and reporting the vulnerability. To address CVE-2024-12366, SinaptikAI has introduced three security levels in its latest update and has also launched a sandbox environment to help mitigate prompt injection risks. CERT/CC advises users to update PandasAI immediately and configure the security settings suitable for their use case.
Latest News
- Urgent Call to Secure Systems Against Ongoing Attacks Exploiting Microsoft Outlook RCE Vulnerability
- SimpleHelp RMM Vulnerabilities Exploited to Deploy Sliver Malware
- Critical Vulnerabilities in Cisco's Identity Services Engine: A Detailed Analysis
- CISA Mandates Federal Agencies to Address Linux Kernel Vulnerability
- CISA Highlights Exploited Flaws in Microsoft .NET and Apache OFBiz
Like what you see?
Get a digest of headlines, vulnerabilities, risk context, and more delivered to your inbox.