The /api/v1/validate/code endpoint had missing authentication checks and passed user-supplied code to Python's exec function. However, it didn't run exec directly on functions, but on function definitions, which make functions available for execution but don't execute their code.

Because of this, the Horizon3.ai researchers had to devise an alternative exploitation method leveraging a Python feature called decorators, which "are functions that return functions that wrap other functions."

The proof-of-concept published by Horizon3.ai on April 9 leverages decorators to achieve remote code execution, but the researchers note that a third-party researcher achieved the same result by abusing another Python feature called default arguments.

Since then, an exploit for this vulnerability has also been added to Metasploit, a popular penetration testing framework, so it's not surprising that attackers have already started exploiting this flaw in attacks.
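The mechanics behind both exploitation routes can be illustrated in a few lines. This is a simplified, benign sketch, not the Horizon3.ai proof-of-concept: it only shows that when exec runs a function *definition*, decorator expressions and default-argument expressions are evaluated immediately, even though the function body itself is never called. The `ran` list is an illustrative stand-in for attacker-controlled side effects.

```python
# Benign demonstration: exec() of a function definition does not run the
# function body, but it DOES evaluate decorator and default-argument
# expressions at definition time.
ran = []

# Route 1: a decorator is called with the new function object as soon as
# the "def" statement executes.
decorator_src = """
@ran.append
def f():
    pass
"""
exec(decorator_src, {"ran": ran})
assert len(ran) == 1  # decorator fired; f() was never invoked

# Route 2: default-argument expressions are also evaluated at definition
# time, before the function is ever called.
default_src = """
def g(x=ran.append("default arg evaluated")):
    pass
"""
exec(default_src, {"ran": ran})
assert ran[1] == "default arg evaluated"
```

In the real attack, the expression in the decorator or default-argument position would be something far less benign than `ran.append`, which is why validating code by exec'ing its definition is not a safe sandboxing strategy.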
Remediation: Langflow users are advised to immediately upgrade their deployments to version 1.3.0, released April 1, which includes the patch, or to the latest version, 1.4.0, which has additional fixes.

The Horizon3.ai researchers point out that any Langflow user can already escalate their privileges to superuser, because users can execute code on the server by design. As such, any stolen or weak Langflow user credentials can pose a significant risk.

"As a general practice, we recommend caution when exposing any recently developed AI tools to the Internet," the researchers said. "If you must expose it externally, consider putting it in an isolated VPC and/or behind SSO. It only takes one errant/shadow IT deployment of these tools on some cloud instance to have a breach on your hands."
First seen on csoonline.com
Jump to article: www.csoonline.com/article/3978918/critical-flaw-in-ai-agent-dev-tool-langflow-under-active-exploitation.html

