A new report by Tenable has revealed that artificial intelligence (AI) services used in cloud environments are highly vulnerable to cyber threats, with the majority of workloads exposed to unresolved security issues. The Cloud AI Risk Report 2025 outlines how the growing use of AI tools on platforms like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure is leading to a sharp increase in security risks for businesses.
The analysis, released on 20 March by the exposure management firm, shows that around 70% of cloud-based AI workloads contain at least one known but unpatched vulnerability. These weaknesses could allow attackers to manipulate data, tamper with AI models, or cause data leaks. The report aims to raise awareness about the potential consequences of combining AI and cloud technologies without proper safeguards.
Unpatched vulnerabilities and misconfigurations widespread
One of the most striking findings from the report is the widespread presence of critical vulnerabilities. In particular, Tenable identified CVE-2023-38545, a severe flaw in the popular curl data transfer tool, in 30% of the AI workloads it analysed. This type of vulnerability could be used by attackers to gain unauthorised access to data or systems.
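As a first triage step, teams can check whether a host or container image ships a curl build in the affected range (the flaw was patched in curl 8.4.0). The snippet below is a minimal sketch, not Tenable's scanning method; it assumes the standard `curl --version` output format.

```python
import re
import subprocess

# CVE-2023-38545 affects curl 7.69.0 through 8.3.0; fixed in 8.4.0.
FIXED_VERSION = (8, 4, 0)

def curl_version():
    """Parse the installed curl version from `curl --version` output."""
    out = subprocess.run(["curl", "--version"], capture_output=True, text=True, check=True)
    # First line looks like: "curl 8.1.2 (x86_64-pc-linux-gnu) libcurl/8.1.2 ..."
    match = re.match(r"curl (\d+)\.(\d+)\.(\d+)", out.stdout)
    if not match:
        raise ValueError("could not parse curl version")
    return tuple(int(part) for part in match.groups())

version = curl_version()
if version < FIXED_VERSION:
    print("curl %d.%d.%d may be vulnerable to CVE-2023-38545; upgrade to 8.4.0 or later" % version)
else:
    print("curl is at or above the patched version")
```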
In addition, misconfigurations in cloud services were found to be alarmingly common. For example, 77% of organisations using Google Vertex AI Notebooks had left the overprivileged default Compute Engine service account attached. Services running under these accounts are easier to exploit, because a compromise inherits far more permissions than the workload actually needs.
Tenable refers to this issue as a “Jenga-style” misconfiguration: services are built on top of one another and inherit risky default settings from the layers beneath. If one component is misconfigured, vulnerabilities can cascade throughout the system.
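One way to surface this particular risk is to scan for Compute Engine instances (which back Vertex AI Workbench notebooks) still attached to the default service account, recognisable by its `-compute@developer.gserviceaccount.com` suffix. The following is a minimal sketch using the Google API Python client; the project and zone values are placeholders, and it assumes Application Default Credentials are configured.

```python
from googleapiclient import discovery

PROJECT = "my-project"   # placeholder project ID
ZONE = "us-central1-a"   # placeholder zone

compute = discovery.build("compute", "v1")
result = compute.instances().list(project=PROJECT, zone=ZONE).execute()

for instance in result.get("items", []):
    for sa in instance.get("serviceAccounts", []):
        # The default Compute Engine service account follows this naming pattern.
        if sa["email"].endswith("-compute@developer.gserviceaccount.com"):
            print(f"{instance['name']} uses the default service account: {sa['email']}")
```

Flagged instances can then be recreated or reconfigured with a dedicated, least-privilege service account.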
AI data and models at risk of tampering
The report also found evidence of poor controls around AI training data, a key part of machine learning systems. Data poisoning, where training data is deliberately tampered with to manipulate the behaviour of AI models, remains a serious concern.
According to the report, 14% of organisations using Amazon Bedrock had not properly restricted public access to at least one AI training data storage bucket. In 5% of cases, the permissions were found to be overly broad, increasing the likelihood of unauthorised access or data tampering.
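For teams wanting to audit this directly, the S3 API exposes each bucket's public access block configuration. The sketch below, written with boto3 and assuming AWS credentials are already configured, flags buckets where that block is missing or incomplete; it illustrates the check rather than reproducing Tenable's methodology.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in [b["Name"] for b in s3.list_buckets()["Buckets"]]:
    try:
        config = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())  # all four block settings must be True
    except ClientError as err:
        # No public access block configured on this bucket at all.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False
        else:
            raise
    if not fully_blocked:
        print(f"Bucket {bucket} does not fully block public access; review its policy and ACLs")
```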
Tenable also discovered that 91% of users running Amazon SageMaker notebook instances had at least one instance that granted root access by default. This creates a significant risk if any notebook is compromised, as an attacker would be able to modify any file on the system.
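Root access on SageMaker notebook instances can be audited, and disabled, through the SageMaker API. A minimal boto3 sketch, again assuming configured AWS credentials:

```python
import boto3

sm = boto3.client("sagemaker")

# Paginate through all notebook instances in the account and region.
paginator = sm.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for nb in page["NotebookInstances"]:
        detail = sm.describe_notebook_instance(
            NotebookInstanceName=nb["NotebookInstanceName"]
        )
        if detail.get("RootAccess") == "Enabled":
            print(f"{nb['NotebookInstanceName']} grants root access")
            # Root access can only be changed while the instance is stopped:
            # sm.update_notebook_instance(
            #     NotebookInstanceName=nb["NotebookInstanceName"],
            #     RootAccess="Disabled",
            # )
```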
Call for improved AI cloud security
Liat Hayun, Vice President of Research and Product Management for Cloud Security at Tenable, stressed the importance of addressing these risks.
“When we talk about AI usage in the cloud, more than sensitive data is on the line. If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust,” said Hayun.
She added that cloud security strategies need to evolve in line with the increasing use of AI. “Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation.”
The findings serve as a timely reminder for businesses to review their cloud AI security practices and ensure they are not leaving critical data or infrastructure exposed to potential threats.