Intel on Tuesday debuted a new AI chip at its Intel Vision event, designed to take on one of Nvidia's most popular AI processors.
Intel says its Gaudi 3 chip matches or exceeds Nvidia's H100 AI processor when it comes to training and deploying generative AI models. The H100 is one of Nvidia's most popular chips, powering artificial intelligence applications for some of the world's biggest tech companies, from Microsoft to Google.
According to Intel, the Gaudi 3 is 40% more power efficient and delivers 50% faster inferencing than Nvidia's H100. Training refers to the process of building an AI foundation model like OpenAI's GPT-3, while inferencing is the process of actually running the trained models in apps.
Intel says the Gaudi 3 is up to 1.7 times faster than the H100 when training common large language models. Intel even says its chip is, on average, 1.3 times faster than Nvidia's beefier H200 when inferencing certain language models.
In addition to the Gaudi line of chips and Xeon CPUs, Intel is also marketing so-called AI PCs, computers equipped with neural processing units that can handle onboard AI tasks.
These AI maneuvers come at a critical time for Intel.
The company is working to reinvent itself as the leader in sophisticated chip technology while building out its chip manufacturing capabilities in the US and abroad. This includes a foundry business that will build Intel’s own chips as well as those for third-party customers. The company has already confirmed it will build chips for Microsoft.
But this reinvention is coming at a steep cost: earlier this month, under its new financial reporting structure, Intel disclosed a $7 billion operating loss for its foundry business in 2023. Intel stock is down 25% so far this year. Rivals Nvidia and AMD, conversely, are up 74% and 15% so far this year, respectively.
Intel has been teasing the Gaudi 3 chip for some time, with CEO Pat Gelsinger showing off a version of the platform during a December press event. It’s seen as Intel’s best chance yet to compete with Nvidia in the AI market, a space the graphics chip giant dominates thanks to its powerful hardware and software.
Intel CEO Pat Gelsinger discusses his company's plans to capitalize on the booming demand for the chips needed to power artificial intelligence during a conference in San Jose, Calif., Feb. 21, 2024. (Michael Liedtke/AP Photo)
The company's new chips also come just a month after Nvidia debuted its own next-generation AI platform called Blackwell. That system includes Nvidia’s B200 AI chip, which is made up of two Blackwell chips mated together to act as a single processor.
Intel is putting its AI systems forward as an alternative to Nvidia’s offerings, claiming an open-source approach that will allow customers to use the services and software that they want.
And while Nvidia is the leader in overall AI technology, Intel is betting enterprises will remain wary of sticking with a single source for high-value needs like AI hardware.
For instance, hyperscalers like Microsoft, Google, Amazon, Meta, and others already either offer their own AI chips or are developing them in addition to using Nvidia’s chips.
Intel’s Gaudi 3 will give those hyperscalers another option in the chip market.
Intel also announced on Tuesday new Ethernet-based networking for linking Gaudi 3 nodes, a rival to Nvidia's InfiniBand interconnect technology, aimed at keeping connections and processing uninterrupted. The company also debuted its Xeon 6 processors for use in AI systems.
Taking a page out of Nvidia’s book, Intel is offering the entire setup as a reference design that partners like Dell, HP, Lenovo, and Super Micro can use to build their own server cabinets.
Intel says customers will be able to order the Gaudi 3-powered systems in sizes ranging from individual nodes and small boxes to enormous megaclusters that can fill entire data centers.