Open-source machine learning systems face increasing security threats

Open-source machine learning tools face rising security threats, with recent findings highlighting critical vulnerabilities across key frameworks.

Recent research has uncovered significant security vulnerabilities in open-source machine learning (ML) frameworks, putting sensitive data and operations at risk. As ML adoption grows across industries, so does the urgency of addressing these threats. The vulnerabilities, identified in a report by JFrog, reveal gaps in ML security compared to more established systems like DevOps and web servers.

Critical vulnerabilities in ML frameworks

Open-source ML projects have seen a rise in security flaws, with JFrog reporting 22 vulnerabilities across 15 ML tools in recent months. The two main areas of concern are server-side components and privilege escalation risks within ML environments. These vulnerabilities could allow attackers to access sensitive files, gain unauthorised privileges, and compromise the entire ML workflow.

One significant flaw involves Weave, a Weights & Biases (W&B) tool that tracks and visualises ML model metrics. The WANDB Weave Directory Traversal vulnerability (CVE-2024-7340) allows attackers to exploit improper input validation in file paths. By doing so, they can access sensitive files, including admin API keys, enabling privilege escalation and potentially compromising ML pipelines.
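The sketch below illustrates the general class of flaw involved, not Weave's actual code: a file-serving routine that joins a user-supplied path without checking where it resolves, followed by one common mitigation. The directory and function names are hypothetical.

```python
# Minimal sketch of a directory traversal flaw and one mitigation.
# BASE_DIR and the function names are illustrative only.
from pathlib import Path

BASE_DIR = Path("/srv/app/files").resolve()

def read_file_unsafe(user_path: str) -> bytes:
    # Vulnerable: input like "../../etc/app/admin_api_key" escapes BASE_DIR,
    # because the joined path is read without validating where it resolves.
    return (BASE_DIR / user_path).read_bytes()

def read_file_safe(user_path: str) -> bytes:
    # Mitigation: resolve the final path and reject anything outside BASE_DIR.
    target = (BASE_DIR / user_path).resolve()
    if not target.is_relative_to(BASE_DIR):
        raise PermissionError("path escapes the allowed directory")
    return target.read_bytes()
```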

Another affected tool is ZenML, which manages MLOps pipelines. A critical flaw in ZenML Cloud's access control lets attackers with minimal access privileges escalate permissions. This could expose confidential data like secrets and model files, allowing attackers to manipulate pipelines, tamper with model data, or disrupt production environments dependent on these pipelines.
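The underlying issue in this kind of escalation is a missing or incomplete server-side authorisation check. The following sketch shows the shape of such a check in generic terms; the roles, decorator, and endpoint are hypothetical and not ZenML's actual API.

```python
# Minimal sketch of a server-side role check on a sensitive operation.
# Trusting a role claim supplied by the client, or skipping the check on
# some endpoints, is what lets a low-privilege user reach admin actions.
from functools import wraps

def require_role(role: str):
    def decorator(handler):
        @wraps(handler)
        def wrapper(request, *args, **kwargs):
            # Enforce authorisation on the server for every request.
            if role not in request.user_roles:
                raise PermissionError("insufficient privileges")
            return handler(request, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def rotate_pipeline_secret(request):
    # Hypothetical admin-only operation exposing secrets if left unprotected.
    ...
```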

Risks of privilege escalation and data breaches

Other vulnerabilities highlight the risks of privilege escalation in ML systems. The Deep Lake Command Injection flaw (CVE-2024-6507) is particularly severe. The database, designed for AI applications, suffers from improper command sanitisation, allowing attackers to execute arbitrary commands. Such breaches could compromise the database and connected applications, leading to remote code execution.
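A minimal sketch of how command injection typically arises and how it is avoided is shown below. The `dl-export` command and function names are placeholders, not Deep Lake's actual interface.

```python
# Illustrative command injection flaw and mitigation (hypothetical CLI).
import subprocess

def export_unsafe(dataset_name: str) -> None:
    # Vulnerable: a name like "data; rm -rf /" is interpreted by the shell.
    subprocess.run(f"dl-export {dataset_name}", shell=True, check=True)

def export_safe(dataset_name: str) -> None:
    # Mitigation: validate the input, then pass arguments as a list so the
    # command runs without any shell parsing of the user-supplied value.
    if not dataset_name.replace("_", "").replace("-", "").isalnum():
        raise ValueError("invalid dataset name")
    subprocess.run(["dl-export", dataset_name], check=True)
```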

Vanna AI, a natural language SQL query generation tool, also has a serious vulnerability. The Vanna.AI Prompt Injection (CVE-2024-5565) flaw lets attackers inject malicious code into SQL prompts, which can result in remote code execution. This poses risks like manipulated visualisations, SQL injections, or data theft.
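One general defence against this class of attack is to treat model-generated SQL as untrusted input and constrain it before execution. The sketch below assumes a placeholder `generate_sql()` standing in for any text-to-SQL call; it is not Vanna's actual API, and the policy shown is deliberately minimal.

```python
# Minimal sketch: guard LLM-generated SQL before running it.
import sqlite3

def generate_sql(question: str) -> str:
    # Placeholder for the LLM call that turns a question into SQL.
    raise NotImplementedError

def run_question(conn: sqlite3.Connection, question: str):
    sql = generate_sql(question).strip()
    # Never execute generated text blindly: allow only a single read-only
    # statement, and use a least-privilege database connection.
    if ";" in sql.rstrip(";") or not sql.lower().startswith("select"):
        raise ValueError("generated SQL rejected by policy")
    return conn.execute(sql).fetchall()
```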

Mage.AI, an MLOps platform for managing data pipelines, is vulnerable to unauthorised shell access, file leaks, and path traversal issues. These flaws enable attackers to control pipelines, expose configurations, and execute malicious commands, risking privilege escalation and data integrity breaches.

The path forward

JFrog’s findings highlight a critical gap in MLOps security. Many organisations fail to integrate AI/ML security with broader cybersecurity strategies, leaving blind spots. Attackers can exploit these vulnerabilities to embed malicious code in models, steal data, or manipulate outputs, creating widespread disruptions.
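One concrete precaution relevant to the "malicious code in models" risk: model artefacts serialised with pickle can execute arbitrary code when loaded. The sketch below shows the commonly recommended safer loading path in PyTorch; the file path is a placeholder.

```python
# Loading an untrusted model artefact more safely (PyTorch example).
import torch

# Risky for untrusted files: full pickle deserialisation can run attacker code.
# state_dict = torch.load("downloaded_model.pt")

# Safer: restrict deserialisation to tensors and plain containers
# (weights_only is available in recent PyTorch releases).
state_dict = torch.load("downloaded_model.pt", weights_only=True)
```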

As ML and AI continue transforming industries, securing their frameworks, datasets, and models is essential. Robust security practices must be prioritised to protect the innovations that drive this growing field.
