China must find an alternative approach to artificial intelligence (AI) development as US sanctions continue to block its access to advanced semiconductors and chip-making equipment. Industry experts suggest the country leverage the supercomputing technology it has developed over the past two decades to overcome these restrictions.
Leveraging supercomputing technology
According to Zhang Yunquan, a researcher at the Institute of Computing Technology under the Chinese Academy of Sciences (CAS), supercomputers could help break the US-led stranglehold on China's AI industry. In a report by the state-backed tabloid Global Times, Zhang emphasised the importance of supercomputing systems for training large language models (LLMs), the models that underpin generative AI services such as ChatGPT. Supercomputers, he argued, could replace the power-hungry data-centre computing clusters that typically harness 10,000 to 100,000 graphics processing units (GPUs) for such training.
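For readers unfamiliar with what "training across a GPU cluster" involves, the sketch below shows the common data-parallel pattern: one process per GPU, each computing gradients on its own slice of data, with results averaged across the whole cluster. It is a minimal illustration in PyTorch, not any company's actual training setup; the model and sizes are placeholders.

```python
# Minimal data-parallel training sketch (illustrative only).
# Launch with: torchrun --nnodes=<N> --nproc_per_node=<GPUs per node> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK/WORLD_SIZE/LOCAL_RANK for each process,
    # one process per GPU across every node in the cluster.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy stand-in for a transformer LLM.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        batch = torch.randn(32, 1024, device=local_rank)  # each GPU gets its own data
        loss = model(batch).pow(2).mean()
        loss.backward()          # DDP all-reduces gradients across every GPU
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The same pattern scales from a handful of GPUs to the tens of thousands cited above; what changes at scale is mainly the interconnect and orchestration, not the training loop itself.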
China's effort to build a viable and advanced computing platform for training LLMs and developing AI applications highlights the urgency of achieving technological self-sufficiency. The country's AI progress remains hampered by limited GPU choices due to US sanctions that have prevented Nvidia, the top GPU firm, from supplying its most cutting-edge chips to China.
Developing alternative solutions
According to a Reuters report, Nvidia is working on a version of its new flagship AI chips for the Chinese market that complies with current US export controls. CAS academician Chen Runsheng, speaking at the same conference as Zhang, said that building LLMs is not simply a matter of adding more chips. Instead, the models themselves must learn to lower energy consumption while improving efficiency, much as the human brain does.
China, the Asia-Pacific region's biggest data centre market with 3,956 megawatts of capacity, relies heavily on coal power: nearly two-thirds of its electricity came from coal last year. Chen urged China to focus on fundamental research into intelligent computing for LLMs and high-performance computing (HPC) technology to achieve breakthroughs in computing power. HPC refers to processing data and performing complex calculations at high speeds, typically on supercomputers that harness thousands of compute nodes working in parallel on a single task.
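To make the "thousands of compute nodes working in parallel" concrete, here is a minimal sketch using MPI, the message-passing standard that underpins most supercomputing workloads. It is an assumed, generic example (estimating pi by splitting one calculation across processes), not code from any system mentioned in this article; it uses mpi4py, a common Python binding for MPI.

```python
# Minimal HPC sketch: split one calculation across processes, combine results.
# Run with, e.g.:  mpiexec -n 4 python pi_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of processes in the job

# Estimate pi by integrating 4 / (1 + x^2) over [0, 1];
# each rank handles an interleaved slice of the intervals.
n = 10_000_000
h = 1.0 / n
partial = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(rank, n, size)) * h

# Reduce the partial sums onto rank 0: the classic HPC combine step.
pi = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ~= {pi:.10f}")
```

On a real supercomputer the same reduce-and-combine pattern runs across thousands of nodes over a high-speed interconnect rather than four local processes.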
Pushing for innovation
According to Chen, the current batch of LLMs developed in China is based on models and algorithms from the US, without enough attention to fundamental theory. He believes that progress in fundamental theory could lead to genuine, groundbreaking innovation.
Chinese companies have been building computing clusters of more than 10,000 GPUs to train LLMs, some using home-grown chips from Chinese GPU start-up Moore Threads Technology. Big Tech firms such as Tencent Holdings are also optimising their infrastructure to improve AI training efficiency: Tencent's Xingmai HPC network, for example, can support a single computing cluster of more than 100,000 GPUs.
This move towards supercomputing reflects China's commitment to becoming technologically self-sufficient, ensuring its AI development remains robust despite external restrictions.