OpenAI Collaborates with Broadcom and TSMC on New In-House Chip

OpenAI is working with Broadcom and TSMC to build its first in-house chip. The company is also adding AMD chips alongside Nvidia's to power its AI infrastructure and lower costs.

Sources told Reuters that OpenAI is working with Broadcom and TSMC (2330.TW) to build its first in-house chip to support its AI systems. It is also adding AMD chips alongside Nvidia's to meet its growing infrastructure needs.

OpenAI, the fast-growing company behind ChatGPT, has explored several ways to lower costs and diversify its chip supply. At one point, the company considered building everything itself and raising capital for an expensive plan to establish a network of chip factories known as “foundries.”

The company has temporarily shelved its ambitious plans to build a foundry network because it would take too much time and money. Instead, it plans to focus on its chip design efforts.

For the first time, the company's strategy has been outlined in full. It shows how the Silicon Valley startup is leaning on industry partnerships and a mix of internal and external approaches to secure chip supply and manage costs, much like larger rivals Amazon, Meta, Google, and Microsoft.

As one of the biggest buyers of chips, OpenAI's decision to source chips from a range of chipmakers while developing its own custom chip could have broad effects on the tech sector.

OpenAI, AMD, and TSMC all declined to comment. Broadcom did not immediately respond to a request for comment. OpenAI helped bring generative AI (software that produces humanlike answers to questions) to the market, and training and running its systems requires enormous amounts of computing power.

OpenAI is one of the largest purchasers of Nvidia graphics processing units (GPUs). The company uses the AI chips both to train models (where the AI learns from data) and for inference (where the AI applies what it has learned to make predictions or decisions).

Reuters has previously reported on OpenAI's chip design efforts, and The Information has covered its talks with Broadcom and other companies. According to a source, OpenAI has been working with Broadcom for months on its first AI chip, which is focused on inference. Analysts expect that as AI applications spread, demand for inference chips could surpass demand for training chips.

Broadcom assists companies such as Alphabet's Google in fine-tuning chip designs for manufacturing. It also supplies design components that move data on and off chips efficiently, which becomes crucial when tens of thousands of chips are linked together to work in tandem.

Two of the sources said that OpenAI is still deciding whether to build or buy other parts for its chip design and may bring on more partners. The company has assembled a chip team of about 20 people, led by top engineers such as Thomas Norrie and Richard Ho, who previously built Tensor Processing Units (TPUs) at Google.

Sources say that, through Broadcom, OpenAI has secured manufacturing capacity with Taiwan Semiconductor Manufacturing Company (2330.TW), which is set to make OpenAI's first custom chip in 2026. They cautioned that the timeline could change.

Right now, Nvidia's GPUs hold more than 80% of the market. But shortages and rising prices have pushed major customers like Microsoft, Meta, and now OpenAI to explore in-house or external alternatives.

OpenAI Turns to AMD for Chips

This story is the first to report that OpenAI plans to use AMD chips through Microsoft's Azure, a sign of how AMD's new MI300X chips are trying to capture a slice of the market Nvidia currently dominates.

AMD projects $4.5 billion in AI chip sales for 2024, following the chip's launch in the fourth quarter of 2023.

Running services like ChatGPT and training AI models is expensive. Sources say OpenAI expects to lose $5 billion this year on revenue of $3.7 billion.

The company's biggest expense is the hardware, electricity, and cloud services needed to process large datasets and build models, which is why it is working to diversify its suppliers.

Sources say that OpenAI has been careful about hiring away Nvidia employees because it wants to maintain a good relationship with the chipmaker it plans to keep working with, particularly for access to Nvidia's new generation of Blackwell chips.

 
