ASUS, in partnership with its subsidiary Taiwan Web Service Corporation (TWSC), has announced its new GenAI POD Solution at ISC 2024, aimed at the growing demand for AI supercomputing. ASUS is showcasing its NVIDIA MGX-powered AI servers, the ESC NM1-E1 and ESR1-511N-M1, alongside the ESC N8A-E12 NVIDIA HGX GPU server and the Grace CPU Superchip-based RS720QN-E11-RS24U. These servers are enhanced by TWSC’s exclusive resource management platform and software stacks, equipping them for a wide range of generative AI and large language model (LLM) training workloads. The integrated solutions feature advanced thermal designs and can be customised for enterprises, providing comprehensive data centre solutions with robust software platforms to support AI initiatives.
The ASUS ESC NM1-E1, powered by the NVIDIA GH200 Grace Hopper Superchip, combines 72 Arm Neoverse V2 CPU cores with NVIDIA NVLink-C2C technology, which gives the Grace CPU and Hopper GPU coherent, high-bandwidth access to each other’s memory. This combination delivers the performance, efficiency, and memory capacity needed for AI-driven data centres, high-performance computing (HPC), data analytics, and NVIDIA Omniverse applications.
The ASUS ESR1-511N-M1 server, also powered by the NVIDIA GH200 Grace Hopper Superchip, is designed for large-scale AI and HPC workloads, supporting deep-learning (DL) training and inference as well as data analytics. With an enhanced thermal solution, it achieves optimal performance and lower power usage effectiveness (PUE). Its flexible 1U configuration, with support for up to four E1.S local drives via NVIDIA BlueField-3 and three PCI Express (PCIe) 5.0 x16 slots, ensures seamless, rapid data transfers.
ASUS NVIDIA HGX servers boost AI with end-to-end H100 eight-GPU power
The ASUS ESC N8A-E12 is a robust 7U server powered by dual AMD EPYC 9004 processors and eight NVIDIA H100 Tensor Core GPUs. Designed for generative AI, it features an enhanced thermal solution for optimal performance and lower PUE. This HGX server adopts a one-GPU-to-one-NIC configuration, giving each GPU a dedicated network path for maximum throughput in compute-heavy workloads.
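To make the one-GPU-to-one-NIC claim concrete, the short sketch below is one way to inspect how GPUs and NICs are paired on such a node. It is illustrative only and not part of ASUS or NVIDIA tooling for this server; it assumes a Linux host with the NVIDIA driver installed so that the standard nvidia-smi utility is available.

```python
# Illustrative sketch: inspect GPU-to-NIC affinity on an HGX-class node.
# Assumes a Linux host with the NVIDIA driver installed (nvidia-smi on PATH).
import subprocess

# "nvidia-smi topo -m" prints the PCIe/NVLink topology matrix, including the
# affinity between each GPU and each RDMA NIC (legend: PIX, PXB, NODE, SYS).
result = subprocess.run(
    ["nvidia-smi", "topo", "-m"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)

# In a true one-GPU-to-one-NIC layout, each GPU shows a close affinity
# (PIX or PXB) to exactly one NIC, which communication libraries such as
# NCCL can then use for that GPU's traffic.
```

Communication libraries such as NCCL pick the nearest NIC for each GPU based on this topology, which is what allows the one-to-one layout to translate into full per-GPU network throughput.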
The ASUS RS720QN-E11-RS24U is a high-density 2U4N server that packs four nodes into a 2U chassis, each built around an NVIDIA Grace CPU Superchip with NVIDIA NVLink-C2C technology, delivering dual-socket-class CPU performance per node alongside PCIe 5.0 support. It is ideal for data centres, web servers, virtualisation clouds, and hyperscale environments.
ASUS introduces efficient D2C cooling solution
ASUS’s direct-to-chip (D2C) cooling solution offers a swift, straightforward path to liquid cooling, leveraging existing infrastructure for quick deployment and reduced PUE. The ASUS RS720QN-E11-RS24U supports manifolds and cold plates, allowing for diverse cooling configurations. These servers also support a rear-door heat exchanger that fits standard rack-server designs, so only the rear door needs to be replaced to enable liquid cooling. ASUS collaborates with leading cooling solution providers to offer comprehensive cooling options, aiming to minimise data centre PUE, carbon emissions, and energy consumption for greener data centres.
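Since PUE is the efficiency yardstick cited throughout, a brief illustration may help: power usage effectiveness is total facility power divided by the power drawn by the IT equipment alone, with 1.0 as the theoretical ideal. The sketch below uses invented numbers purely to show how shifting cooling overhead from air handling to D2C loops lowers the ratio; they are not measurements of any ASUS or TWSC deployment.

```python
# Hypothetical illustration of the PUE metric; all figures are invented examples,
# not measured values for any ASUS or TWSC system.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power (ideal = 1.0)."""
    return total_facility_kw / it_equipment_kw

it_load_kw = 100.0       # assumed power drawn by servers, storage and networking
air_overhead_kw = 60.0   # assumed fan/CRAC overhead for an air-cooled row
d2c_overhead_kw = 15.0   # assumed pump/heat-exchanger overhead with D2C cooling

print(f"Air-cooled row: PUE = {pue(it_load_kw + air_overhead_kw, it_load_kw):.2f}")  # 1.60
print(f"D2C-cooled row: PUE = {pue(it_load_kw + d2c_overhead_kw, it_load_kw):.2f}")  # 1.15
```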
TWSC’s generative AI POD solutions
TWSC has extensive experience in deploying and maintaining large-scale AI-HPC infrastructure for NVIDIA Partner Network cloud partners, including the National Center for High-performance Computing (NCHC)’s TAIWANIA-2 (Green500 #10, November 2018) and FORERUNNER 1 (Green500 #92, November 2023) supercomputers. TWSC’s AI Foundry Service allows for quick deployment of AI supercomputing and flexible model optimisation for AI 2.0 applications, enabling users to tailor AI resources to their needs.
TWSC’s generative AI POD solutions offer enterprise-grade AI infrastructure with swift rollouts and comprehensive end-to-end services, ensuring high availability and strong cybersecurity standards, with proven success across academic, research, and medical institutions. Comprehensive cost-management capabilities optimise power consumption and streamline operational expenses (OPEX), making TWSC technologies a compelling choice for organisations seeking a reliable and sustainable generative AI platform.