
OpenAI and Broadcom team up to create AI chip for faster, smarter inference

OpenAI partners with Broadcom to create a custom AI inference chip to reduce reliance on Nvidia and expand its AI infrastructure.

OpenAI is collaborating with Broadcom to develop a custom chip that runs artificial intelligence (AI) models efficiently after their training phase. According to sources close to the matter, the partnership aims to create a chip specialised for “inference”, the process by which a trained AI model responds to user inputs based on its pre-trained knowledge. The move comes as demand for inference-oriented chips is expected to grow alongside AI adoption, with companies turning to AI to handle increasingly sophisticated tasks.
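
For readers unfamiliar with the term, the short Python sketch below illustrates what inference means in practice. It is a minimal, hypothetical example using PyTorch; the toy model and input stand in for a real pre-trained model and an encoded user prompt, and it is not OpenAI’s or Broadcom’s actual software.

import torch
import torch.nn as nn

# A toy "pre-trained" model standing in for a large language model.
# In reality the weights would be loaded from a completed training run.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
model.eval()  # switch to inference mode (disables training-only behaviour such as dropout)

user_input = torch.randn(1, 16)  # placeholder for an encoded user prompt

# Inference: a single forward pass with gradient tracking disabled.
with torch.no_grad():
    response = model(user_input)

print(response)

Because no weights are updated during this step, a lighter chip specialised for this forward-pass workload can serve it more efficiently than general-purpose training GPUs, which is the gap the reported chip targets.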

OpenAI and Broadcom are reportedly consulting with Taiwan Semiconductor Manufacturing Co. (TSMC), the world’s leading chip manufacturer, to aid in chip production. Although sources indicate that OpenAI has considered developing its chip for the past year, the discussions remain preliminary, focusing on faster solutions through collaborations rather than solo production.

A custom path to AI innovation

OpenAI intends to depart from the established graphics processing unit (GPU) market by shifting towards inference-focused chips. This focus sets the company apart from Nvidia, whose GPUs dominate the market and have traditionally powered the initial training and development of AI models. Those GPUs are instrumental in building large generative AI models but are less efficient at inference, where a lighter, specialised chip is better suited.

Industry sources reveal that the chip production journey, spanning design, prototyping, and large-scale production, can be time-consuming and costly. OpenAI has strategically collaborated with established players to navigate these hurdles rather than pursue chip manufacturing independently. While OpenAI had previously contemplated creating its own manufacturing network, the immediate need for high-performance chips prompted the decision to leverage Broadcom’s resources and TSMC’s facilities.

OpenAI did not respond to requests for comment, Broadcom’s representatives declined to discuss the partnership, and TSMC declined to address the rumours. Still, OpenAI’s strategy mirrors similar moves by other tech giants seeking alternatives to Nvidia. The rising demand for diverse AI chips has led companies to explore collaborations and invest in other types of processors, including those from Advanced Micro Devices (AMD).

Broadcom’s expertise and the AI future


Broadcom brings extensive experience as the most prominent designer of application-specific integrated circuits (ASICs). These chips are custom-built for specific tasks, and Broadcom’s client list includes some of the tech industry’s biggest players, such as Google, Meta, and ByteDance. CEO Hock Tan previously commented on Broadcom’s approach, saying the company is cautious about adding new clients and only commits to full-scale production for projects that meet strict requirements. This business model could work well with OpenAI’s needs, as the start-up seeks a specialised chip that can handle AI inference tasks without requiring massive investments in manufacturing infrastructure.

Despite OpenAI’s primary reliance on Nvidia GPUs to develop and train its models, the search for a more sustainable and efficient solution has become critical as AI adoption grows. OpenAI’s service requirements—particularly in data centres, where vast amounts of computing power are needed to process AI workloads—are fuelling the pursuit of custom-built chips to meet demand at scale. To fund this expansion, OpenAI CEO Sam Altman has contacted US government agencies and global investors, including some in the Middle East, emphasising the need for enhanced data infrastructure to support future growth.

Preparing data centres and future partnerships

The shift towards custom chip solutions marks another important step in OpenAI’s broader vision to enhance its AI infrastructure. The company is investing in data centre partnerships to provide a robust home for these new AI chips. With the rise of generative AI and large-scale language models, demand for specialised chips capable of efficiently processing inference requests is surging.

As the industry looks for alternatives to Nvidia, OpenAI’s strategic collaboration with Broadcom and consultations with TSMC could pave the way for an innovative solution to handle the vast demands of next-generation AI. By exploring custom chip solutions and building alliances, OpenAI is preparing itself to meet future AI demands with both speed and efficiency while keeping a close eye on developing data centres that can host these advanced processors.

This effort marks OpenAI’s continued commitment to creating faster, smarter, and more accessible AI technology that meets users’ ever-growing expectations worldwide.
