Thursday, 31 October 2024

OpenAI and Broadcom team up to create AI chip for faster, smarter inference

OpenAI partners with Broadcom to create a custom AI inference chip to reduce reliance on Nvidia and expand its AI infrastructure.


OpenAI is collaborating with Broadcom to develop a custom chip to run artificial intelligence (AI) models efficiently after their training phase. According to sources close to the matter, the partnership aims to create a chip specialised for “inference”—the process by which a trained AI model responds to user inputs based on its pre-trained knowledge. The move comes as demand for inference-optimised chips is expected to grow alongside AI adoption, with companies turning to AI for increasingly sophisticated tasks.

OpenAI and Broadcom are reportedly consulting with Taiwan Semiconductor Manufacturing Co. (TSMC), the world's leading chip manufacturer, to aid in chip production. Although sources indicate that OpenAI has been exploring its own chip for the past year, the discussions remain preliminary, with the company favouring faster solutions through collaboration over producing chips on its own.

A custom path to AI innovation

OpenAI's focus on inference-specific chips marks a departure from the established graphics processing unit (GPU) market dominated by Nvidia, whose chips have traditionally powered the initial training and development of AI models. GPUs are instrumental in building large generative AI models but are less efficient at inference, where lighter, specialised chips are better suited.

Industry sources note that the chip production journey—spanning design, prototyping, and large-scale production—can be time-consuming and costly. Rather than pursue chip manufacturing independently, OpenAI has chosen to collaborate with established players to navigate these hurdles. While the company had previously contemplated building its own manufacturing network, the immediate need for high-performance chips prompted the decision to lean on Broadcom's design expertise and TSMC's fabrication facilities.

OpenAI did not respond to requests for comment, Broadcom's representatives remained silent on the partnership, and TSMC declined to address the rumours. OpenAI's strategy nevertheless mirrors moves by other tech giants seeking alternatives to Nvidia. Rising demand for diverse AI chips has led companies to explore collaborations and invest in different types of processors, including those from Advanced Micro Devices (AMD).

Broadcom's expertise and the AI future

Image credit: SDxCentral

Broadcom brings extensive experience as the most prominent designer of application-specific integrated circuits (ASICs). These chips are custom-built for specific tasks, and Broadcom's client list includes some of the tech industry's biggest players, among them ByteDance. Broadcom CEO Hock Tan has previously commented on the company's approach, saying it is cautious about adding new clients and only commits to full-scale production for projects that meet strict requirements. This business model could suit OpenAI's needs, as the start-up seeks a specialised chip that can handle AI inference workloads without requiring massive investments in manufacturing infrastructure.

Despite OpenAI's primary reliance on Nvidia GPUs to develop and train its models, the search for a more sustainable and efficient solution has become critical as AI adoption grows. OpenAI's service requirements—particularly in data centres, where vast amounts of computing power are needed to process AI workloads—are fuelling the pursuit of custom-built chips to meet demand at scale. To fund this expansion, OpenAI CEO Sam Altman has contacted US government agencies and global investors, including some in the Middle East, emphasising the need for enhanced data infrastructure to support future growth.

Preparing data centres and future partnerships

The shift towards custom chip solutions marks another important step in OpenAI's broader vision of enhancing its AI infrastructure. The company is also investing in data centre partnerships to provide a robust home for these new chips. With the rise of generative AI and large language models, demand for specialised chips capable of efficiently processing inference requests is surging.

As the industry looks for alternatives to Nvidia, OpenAI's strategic collaboration with Broadcom and consultations with TSMC could pave the way for an innovative solution to handle the vast demands of next-generation AI. By exploring custom chip solutions and building alliances, OpenAI is preparing itself to meet future AI demands with both speed and efficiency while keeping a close eye on developing data centres that can host these advanced processors.

This effort marks OpenAI's continued commitment to creating faster, smarter, and more accessible AI technology that meets users' ever-growing expectations worldwide.

Emma Job
Emma is a news editor at Tech Edition. With a decade's experience in content writing, she revels in both crafting and immersing herself in narratives. From tracking down viral trends to delving into the most recent news stories, her goal is to deliver insightful and timely content to her readers.
