On Tuesday, Intel introduced Gaudi 3, its newest artificial intelligence chip, as semiconductor manufacturers race to build chips that can train and run large AI models, like the one that powers OpenAI’s ChatGPT.
According to Intel, the new Gaudi 3 processor can execute AI models 1.5 times faster than Nvidia’s H100 GPU and is more than twice as power-efficient. It is also available in several formats, such as a card that can be inserted into an existing system or a set of eight Gaudi 3 chips on a single motherboard.
Intel tested the processor on models such as the Abu Dhabi-backed Falcon and Meta’s open-source Llama, and said Gaudi 3 can help train or deploy models including OpenAI’s Whisper speech recognition model and Stable Diffusion.
Intel also emphasized that Nvidia’s chips consume more energy than its own.
Over the past year, Nvidia’s graphics processing units, or GPUs, have become the chips of choice for AI developers, accounting for an estimated 80% of the market for AI chips.
According to Intel, systems built with the new Gaudi 3 chips will become available to customers in the third quarter. The chips will be used by Dell, HPE, and Supermicro, among other companies. Intel did not offer a price range for Gaudi 3.
On a conference call with reporters, Das Kamhout, vice president of Xeon software at Intel, said, “We do expect it to be highly competitive” with Nvidia’s latest GPUs. “From our competitive pricing, our distinctive open integrated network on chip, we’re using industry-standard Ethernet. We believe it’s a strong offering.”
Even as Nvidia continues to supply the vast majority of AI chips, the data center AI market is expected to expand as cloud providers and companies build out the infrastructure needed to run AI software, leaving room for competitors.
Running generative AI and buying Nvidia GPUs can be expensive, so companies are looking for additional suppliers to help bring costs down.
Over the past year, Nvidia’s stock has more than tripled due to the AI surge. The stock of Intel has only increased by 18% over this time.
AMD is also looking to expand and sell more AI server chips. Last year it debuted a new data center GPU, the MI300X, and Microsoft and Meta are already among its customers.
Nvidia unveiled its B100 and B200 GPUs earlier this year; they are the successors to the H100 and also promise improved performance. Those chips are expected to begin shipping later this year.
Much of Nvidia’s success rests on CUDA, a powerful suite of proprietary software that gives AI researchers access to all of a GPU’s hardware capabilities. To create open, non-proprietary software that could make it easier for software companies to switch chip suppliers, Intel is collaborating with other major players in the semiconductor and software industries, including Google, Qualcomm, and Arm.
In a call with reporters, Sachin Katti, senior vice president of Intel’s networking group, said, “We are working with the software ecosystem to build open reference software, as well as building blocks that allow you to stitch together a solution that you need, rather than be forced into buying a solution.”
Gaudi 3 is built on a five-nanometer process, a relatively new manufacturing technique, suggesting that the company is using an outside foundry to produce the chips. Beyond designing Gaudi 3, Intel CEO Patrick Gelsinger told reporters last month that the company also plans to manufacture AI chips, possibly for other companies, at a new Ohio factory expected to open in 2027 or 2028.