ASUS has unveiled its future-ready AI infrastructure solutions at CloudFest 2025, showcasing advanced hardware and software platforms designed to support scalable AI deployments for enterprises. The company's cutting-edge solutions, powered by Intel® Xeon® 6 processors, Intel Gaudi® 3 AI accelerators, NVIDIA® GPUs, and AMD® EPYC™ processors, aim to drive the next wave of AI innovation, offering seamless integration from edge to cloud.
Intel Xeon 6-based servers for AI training and inference
ASUS is set to feature its Intel Xeon 6-based server solutions at CloudFest 2025, emphasising scalability, performance, and cost-efficiency. The servers, including the RS700-E12, RS720Q-E12, and ESC8000-E12P series, utilise the Data Center Modular Hardware System (DC-MHS) architecture. This architecture enhances the flexibility of data centres, allowing for efficient upgrades and expansion, while maintaining operational stability.
One of the key highlights is the RS700-E12, a 1U air-cooled server that supports dual Intel Xeon 6 processors, DDR5 memory, and NVMe storage, making it well suited for AI inference and training. Its modular design offers exceptional scalability, allowing it to grow with expanding AI workloads.
The RS720Q-E12, a liquid-cooled 2U4N server, is engineered for more demanding workloads, including high-performance computing (HPC) and large-scale data processing. This multi-node server uses advanced direct-to-chip liquid cooling to optimise thermal management during intensive AI tasks.
At CloudFest 2025, ASUS will also debut the Intel Gaudi 3 AI accelerator PCIe card, designed to enhance AI workload performance. Integrated into the ASUS ESC8000-E12P, a 4U server, the Gaudi 3 PCIe card offers 128GB of HBM2e memory, a PCIe 5.0 interface, and up to 3.7TB/s of memory bandwidth. With 64 dedicated tensor processor cores, the card accelerates AI inference and fine-tuning tasks while delivering impressive energy efficiency. The accelerator integrates seamlessly with ASUS' servers, offering an ideal solution for training large AI models and machine learning pipelines.
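The article does not specify the software stack shipped with these systems, but as an illustration of how a workload might target a Gaudi accelerator, the brief sketch below drives inference from PyTorch through the "hpu" device exposed by Intel's Gaudi PyTorch bridge (habana_frameworks). The model and batch size are placeholders, not recommended settings.

    # Illustrative sketch: inference on an Intel Gaudi accelerator via PyTorch's
    # "hpu" device. Assumes the Intel Gaudi software stack and its PyTorch
    # bridge (habana_frameworks) are installed on the host.
    import torch
    import habana_frameworks.torch.core as htcore  # registers the "hpu" device

    device = torch.device("hpu")

    # Placeholder model; any torch.nn.Module can be moved to the accelerator.
    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.ReLU(),
        torch.nn.Linear(4096, 1024),
    ).to(device).eval()

    batch = torch.randn(8, 1024, device=device)

    with torch.no_grad():
        output = model(batch)
        htcore.mark_step()  # flush the lazily accumulated graph to the device

    print(output.shape)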
AI supercomputing for high-performance workloads
ASUS will also showcase its AI supercomputing solutions, designed for high-performance AI model training and supercomputing tasks. Among the highlights is the ASUS ESC N8-E11V, a 7U NVIDIA HGX server with eight NVIDIA H200 Tensor Core GPUs, dual Intel Xeon 6 processors, and advanced liquid cooling. The server delivers exceptional computational power, enabling the training of large language models and complex deep learning algorithms.
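As a rough illustration of how an eight-GPU node of this class is commonly used (the article itself does not prescribe a framework), the sketch below runs data-parallel training with PyTorch DistributedDataParallel over NCCL, launched with one process per GPU via torchrun. The model and training loop are placeholders, not a real training recipe.

    # Illustrative sketch: data-parallel training across the eight GPUs of a
    # single HGX-class node using PyTorch DDP over NCCL. Launch with, e.g.:
    #   torchrun --nproc_per_node=8 train_ddp.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")      # one process per GPU
        local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(4096, 4096).cuda(local_rank)
        model = DDP(model, device_ids=[local_rank])
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for step in range(10):                       # toy training loop
            x = torch.randn(32, 4096, device=f"cuda:{local_rank}")
            loss = model(x).pow(2).mean()
            optimizer.zero_grad()
            loss.backward()                          # gradients all-reduced via NCCL
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()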
For more intensive workloads, the ASUS AI POD with the NVIDIA GB200 NVL72 platform stands out. This AI supercomputer is equipped with 72 NVIDIA Blackwell GPUs, NVLink interconnects, and a liquid-cooled architecture, enabling it to support trillion-parameter AI model training and inference and to tackle some of the most demanding AI tasks.
Another notable server on display is the ASUS RS520QA-E13, powered by AMD EPYC 9005 processors. This high-density, 2U4N server is particularly suited for edge computing or electronic design automation (EDA) applications. Its efficient cooling and flexible memory configurations make it an optimal choice for performance-driven and efficiency-critical tasks.
ASUS: A total AI solutions provider
With its diverse portfolio of AI solutions, ASUS is establishing itself as a total AI solutions provider, offering integrated hardware, software, and cloud-based platforms. The company's end-to-end solutions empower businesses to deploy, manage, and scale AI applications seamlessly. From AI management tools to comprehensive software frameworks, ASUS is positioning itself as a key player in the fast-evolving AI landscape.
At CloudFest 2025, ASUS is highlighting its commitment to helping enterprises achieve innovation and growth through advanced, scalable AI infrastructure. By providing solutions that integrate hardware and software, ASUS aims to simplify the deployment of AI technology, making it accessible to businesses of all sizes.