Tenable has issued a stark warning about the growing cybersecurity risks associated with rapid adoption of artificial intelligence (AI) technologies. According to the company’s new Cloud AI Risk Report 2025, organisations in the Asia-Pacific and beyond are integrating open-source AI tools and managed cloud services at a pace that far exceeds their security preparedness, potentially exposing sensitive data and AI models to significant threats.
The report highlights that while businesses are eager to leverage AI for innovation and competitive advantage, they are often doing so without fully understanding the security implications. Tenable found that the widespread use of open-source packages and the rapid deployment of cloud services are creating systemic vulnerabilities in enterprise environments.
Widespread AI adoption lacks adequate safeguards
Tenable’s findings align with a 2024 McKinsey Global Survey which showed that 72 percent of organisations had embedded AI into at least one business function by early 2024, up from 50 percent two years earlier. However, this increased adoption has not been matched by improvements in security posture. Instead, Tenable warns that vulnerabilities, cloud misconfigurations, and exposed data are quietly accumulating.
From December 2022 to November 2024, Tenable Cloud Research analysed real-world workloads across major cloud providers including Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP). The research identified a growing dependency on open-source libraries such as Scikit-learn and Ollama, found in 28 percent and 23 percent of AI workloads respectively. While these tools accelerate machine learning development, they can also introduce hidden vulnerabilities through unverified code and complex dependency chains.
The risk is especially high in Unix-based environments, which are common in AI development. These systems often rely on open-source components that may go unpatched, creating opportunities for attackers to exploit them and gain access to sensitive data or alter AI model behaviour.
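A first step against this class of risk is simply knowing which open-source packages are installed and at what versions, so they can be checked against vulnerability advisories. The sketch below is illustrative and not from the report; the minimum-version threshold is an assumed example, not a real advisory.

```python
# Minimal sketch: inventory installed Python packages so open-source
# dependencies can be compared against known-vulnerability advisories.
from importlib import metadata

def parse_version(v):
    """Convert '1.10.2' into a comparable tuple such as (1, 10, 2)."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def inventory():
    """Return a {name: version} map of every installed distribution."""
    return {d.metadata["Name"]: d.version for d in metadata.distributions()}

# Hypothetical floor: flag anything older than an assumed patched release.
MIN_SAFE = {"scikit-learn": "1.3.0"}

flagged = {name: ver for name, ver in inventory().items()
           if name in MIN_SAFE and parse_version(ver) < parse_version(MIN_SAFE[name])}
```

In practice this inventory would feed a scanner that matches versions against published CVE data rather than a hand-maintained threshold.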
Cloud misconfigurations and excessive permissions
The report also shows that enterprises are heavily relying on managed cloud services to run AI workloads, introducing another layer of risk. In Microsoft Azure environments, 60 percent of organisations had configured Azure Cognitive Services, 40 percent used Azure Machine Learning, and 28 percent relied on Azure AI Bot Service. Similarly, on AWS, 25 percent had configured Amazon SageMaker, and 20 percent deployed Amazon Bedrock. GCP’s Vertex AI Workbench appeared in 20 percent of workloads.
While these tools support innovation at scale, their default configurations can lead to poor security practices. Many organisations unknowingly grant excessive permissions or fail to adjust permissive default settings, making it easier for attackers to access or manipulate critical AI systems and training data.
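A review of this kind can start with a simple audit for wildcard grants. The policy below is a hypothetical AWS-style IAM document written for illustration; it is not taken from the report.

```python
import json

# Hypothetical IAM-style policy for illustration only.
POLICY = json.loads("""
{
  "Statement": [
    {"Effect": "Allow", "Action": "sagemaker:*", "Resource": "*"},
    {"Effect": "Allow", "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::example-models/*"}
  ]
}
""")

def overly_permissive(policy):
    """Return Allow statements whose Action uses a wildcard or whose
    Resource is a bare '*' - both signs of excessive permissions."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if any(a.endswith("*") for a in actions) or "*" in resources:
            findings.append(stmt)
    return findings
```

Here only the first statement is flagged: it grants every SageMaker action on every resource, while the second is scoped to reading objects from a single bucket prefix.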
Nigel Ng, Senior Vice President at Tenable APJ, cautioned, “Organisations are rapidly adopting open-source AI frameworks and cloud services to accelerate innovation, but few are pausing to assess the security impact. The very openness and flexibility that make these tools powerful also create pathways for attackers. Without proper oversight, these hidden exposures could erode trust in AI-driven outcomes and compromise the competitive advantage businesses are chasing.”
Managing AI risk with strategic oversight
To address these risks, Tenable recommends a multi-layered approach. This includes managing AI exposure holistically by continuously monitoring infrastructure, workloads and identities; treating AI assets such as models and datasets as sensitive; enforcing least-privilege access controls; and staying updated on AI regulations and security frameworks like the NIST AI Risk Management Framework.
The company also advises organisations to prioritise remediation of critical vulnerabilities using tools that streamline alerts and improve remediation efficiency. Aligning cloud configurations with provider security recommendations is equally important, especially since many default settings are overly permissive.
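A prioritisation step of this kind can be as simple as sorting findings by severity so the most critical issues surface first. The findings below are invented placeholders, not data from Tenable.

```python
# Hypothetical findings; identifiers and CVSS scores are made up.
findings = [
    {"id": "CVE-EXAMPLE-1", "cvss": 5.4, "asset": "notebook-vm"},
    {"id": "CVE-EXAMPLE-2", "cvss": 9.8, "asset": "training-cluster"},
    {"id": "CVE-EXAMPLE-3", "cvss": 7.5, "asset": "model-registry"},
]

# Highest-severity issues move to the front of the remediation queue.
ranked = sorted(findings, key=lambda f: f["cvss"], reverse=True)
```

Real triage tools weight severity with exploitability and asset criticality, but the ordering principle is the same.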
Ng added, “AI will shape the future of business, but only if it is built on a secure foundation. Open-source tools and cloud services are essential, but they must be managed with care. Without visibility into what is being deployed and how it is configured, organisations risk losing control of their AI environments and the outcomes those systems produce.”