CVE-2025-14922

Affected: Hugging Face Diffusers, and potentially other products utilizing the CogView4 model.

Executive summary

A high-severity remote code execution vulnerability has been discovered in multiple Hugging Face products, specifically within the Diffusers library when handling CogView4 models. An attacker could exploit this flaw by tricking an application into loading a malicious model file, allowing them to execute arbitrary code and potentially take full control of the affected system. This poses a significant risk of data theft, service disruption, and further network compromise.

Vulnerability

This vulnerability is an instance of Deserialization of Untrusted Data (CWE-502). The affected Hugging Face software does not safely deserialize certain model files, such as those for CogView4. An attacker can craft a malicious model file containing embedded code; when an application loads that file, the unsafe deserialization process executes the embedded code with the permissions of the running application, resulting in remote code execution (RCE) on the server.
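To illustrate the general mechanism (not the specific Diffusers code path), the sketch below shows why pickle-style deserialization of an untrusted model file is dangerous: a pickled object can declare, via `__reduce__`, an arbitrary callable that the loader invokes before any type checking. A real exploit would substitute something like `os.system`; a harmless callable is used here.

```python
import pickle

class MaliciousPayload:
    # __reduce__ lets a pickled object specify a callable plus arguments;
    # pickle invokes that callable during loading, before any validation.
    def __reduce__(self):
        # A real exploit would return e.g. (os.system, ("...",));
        # list() is used here as a harmless stand-in to show the mechanism.
        return (list, ("pwn",))

blob = pickle.dumps(MaliciousPayload())   # the "malicious model file"
result = pickle.loads(blob)               # attacker-chosen callable runs here
print(result)  # → ['p', 'w', 'n'] — not a MaliciousPayload at all
```

This is why formats that cannot embed code, such as safetensors, are generally preferred for distributing model weights.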

Business impact

This vulnerability is rated as High severity with a CVSS score of 7.8. Successful exploitation could lead to a complete compromise of the server running the AI model. The potential business impact includes the theft of sensitive data, exposure of proprietary models and intellectual property, service outages, and reputational damage. An attacker could use the compromised system as a pivot point to move laterally within the corporate network, deploy ransomware, or launch further attacks, posing a critical risk to the organization's security and operations.

Remediation

Immediate Action: Identify all internet-facing and internal systems running the affected Hugging Face software. Apply the security patches released by the vendor immediately, prioritizing critical and publicly accessible systems. After patching, monitor for any signs of exploitation attempts and review historical access logs for unusual model loading activities.
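As part of identifying affected systems, a minimal inventory check can report the locally installed Diffusers version for comparison against the patched release named in the vendor advisory (the fixed version number is deliberately not hard-coded here; consult the advisory):

```python
from importlib.metadata import version, PackageNotFoundError

def report_package(name: str) -> str:
    """Report the installed version of a package, or note its absence."""
    try:
        return (f"{name} {version(name)} installed; "
                "compare against the patched release in the vendor advisory")
    except PackageNotFoundError:
        return f"{name} is not installed in this environment"

print(report_package("diffusers"))
```

Running this across hosts (for example via a configuration-management tool) gives a quick view of which environments still carry a vulnerable version.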

Proactive Monitoring: Enhance monitoring on systems running AI/ML workloads. Look for suspicious activity in application logs related to model loading errors or unexpected file paths. Monitor for anomalous system behavior, such as unexpected processes being spawned by the application, and unusual outbound network connections which could indicate a reverse shell or data exfiltration.
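One concrete form of this monitoring is scanning application logs for model loads originating outside an approved location. The sketch below assumes a hypothetical log format ("loading model from <path>") and an assumed approved prefix of /models/approved/; both would need to be adapted to your actual logging conventions.

```python
import re

# Assumed conventions: adjust the log pattern and approved path to match
# your environment. Neither is prescribed by the vendor advisory.
LOAD_LINE = re.compile(r"loading model from (\S+)")
APPROVED = re.compile(r"loading model from /models/approved/")

def suspicious_loads(log_lines):
    """Return model paths loaded from outside the approved repository."""
    hits = []
    for line in log_lines:
        m = LOAD_LINE.search(line)
        if m and not APPROVED.search(line):
            hits.append(m.group(1))
    return hits
```

For example, a line recording a load from /tmp/ would be flagged while loads from the approved repository pass silently; flagged paths can then feed an alerting pipeline.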

Compensating Controls: If patching cannot be performed immediately, implement the following controls to mitigate risk:

  • Restrict model loading to a trusted, internal repository where all models have been scanned and verified.
  • Run the model inference application in a sandboxed or containerized environment with minimal privileges and strict network egress filtering to limit the impact of a potential compromise.
  • Implement file integrity checks to ensure model files have not been tampered with before they are loaded.
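The file-integrity control above can be sketched as a SHA-256 allowlist check performed before any model file is handed to the loader. The digest table here is a hypothetical placeholder; in practice it would be populated from your trusted internal repository.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of known-good digests, to be populated from a
# trusted internal repository (the entry below is the SHA-256 of an
# empty file, used purely as a placeholder).
TRUSTED_SHA256 = {
    "model.safetensors":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_model(path: Path) -> bool:
    """Return True only if the file's SHA-256 matches its pinned digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return TRUSTED_SHA256.get(path.name) == digest
```

The application would refuse to load any file for which `verify_model` returns False, blocking tampered or unapproved model files before deserialization.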

Exploitation status

Public Exploit Available: No

Analyst recommendation

This high-severity vulnerability enables remote code execution and requires immediate attention. The highest priority is to apply the vendor-provided security patches to all affected systems. While it is not currently on the CISA Known Exploited Vulnerabilities (KEV) list, its severity makes it a likely candidate for future inclusion. Organizations are strongly advised to treat it with urgency and, if patching is delayed, implement the recommended compensating controls to reduce the attack surface and contain potential damage.