
Intel announces VISION 2025 keynote for April 1

Falcon Shores is already cancelled, isn't it?

"Intel had previously teased Falcon Shores for a late 2025 launch as the successor to Gaudi 3, the AI accelerator chip it launched last fall with OEMs such as Dell Technologies and Hewlett-Packard Enterprise. But while Gaudi 3 and previous generations were application-specific integrated circuits (ASICs), Falcon Shores was slated by the company as a programmable GPU that incorporated Gaudi technology.

However, Holthaus gave a blunt assessment of Intel’s AI accelerator chip strategy at an investor conference in December, saying that Gaudi will not help Intel “get to the masses” and suggesting that Falcon Shores wouldn’t be a game-changer.

“Many of you heard me temper expectations on Falcon Shores last month,” she said on Thursday’s earnings call.

Now, Holthaus said, Intel plans to use Falcon Shores for internal testing only.

“This will support our efforts to develop a system-level solution at rack scale with Jaguar Shores to address the AI data center,” she said. “More broadly, as I think about our AI opportunity, my focus is on the problems our customers are trying to solve, most notably lower the cost and increase the efficiency of compute.”

Holthaus said she arrived at the decision—which she called one of her first major moves as the CEO of Intel Products and interim co-CEO after former CEO Pat Gelsinger abruptly departed in early December—after receiving feedback from customers on what Intel needs to “be competitive and to deliver the right product.”

She said the decision was also influenced by discussions with Intel’s AI accelerator chip team on where they were “from a competitive perspective and from an execution perspective.”

“One of the things that we've learned from Gaudi is it's not enough to just deliver the silicon. We need to be able to deliver a complete rack-scale solution, and that's what we're going to be able to do with Jaguar Shores,” Holthaus said."

 
"Intel had previously teased Falcon Shores for a late 2025 launch as the successor to Gaudi 3, the AI accelerator chip it launched last fall with OEMs such as Dell Technologies and Hewlett-Packard Enterprise. But while Gaudi 3 and previous generations were application-specific integrated circuits (ASICs), Falcon Shores was slated by the company as a programmable GPU that incorporated Gaudi technology.

However, Holthaus gave a blunt assessment of Intel’s AI accelerator chip strategy at an investor conference in December, saying that Gaudi will not help Intel “get to the masses” and suggesting that Falcon Shores wouldn’t be a game-changer.

“Many of you heard me temper expectations on Falcon Shores last month,” she said on Thursday’s earnings call.

Now, Holthaus said, Intel plans to use Falcon Shores for internal testing only.

“This will support our efforts to develop a system-level solution at rack scale with Jaguar Shores to address the AI data center,” she said. “More broadly, as I think about our AI opportunity, my focus is on the problems our customers are trying to solve, most notably lower the cost and increase the efficiency of compute.”

Holthaus said she arrived at the decision—which she called one of her first major moves as the CEO of Intel Products and interim co-CEO after former CEO Pat Gelsinger abruptly departed in early December—after receiving feedback from customers on what Intel needs to “be competitive and to deliver the right product.”

She said the decision was also influenced by discussions with Intel’s AI accelerator chip team on where they were “from a competitive perspective and from an execution perspective.”

“One of the things that we've learned from Gaudi is it's not enough to just deliver the silicon. We need to be able to deliver a complete rack-scale solution, and that's what we're going to be able to be able to do with Jaguar Shores,” Holthaus said."


For a long time, Intel's biggest problem has been its lack of products that meet market demands. It's not something that Intel Foundry, Intel 18A, or Intel 3/Intel 4 can necessarily resolve.

Why did it take Intel so long to recognize that Falcon Shores couldn't compete? Somehow, Intel's product and design divisions and its past CEOs seem disconnected from the real world.
 
Two CEOs ago, Intel took the shotgun approach to catching up in AI accelerators, with each product division offering its own approach (FPGA, extended HPC, plus multiple acquisitions), all glued together with oneAPI. But by the time they had assembled and gotten chip- and board-level solutions to customers, the puck (the market) had raced ahead. Every chip supplier is being forced to deliver rack-level and even data-center-level solutions, because the genAI data center has become essentially one machine.


It’s telling that even chip startups like Cerebras and Groq have built out and operate their own data centers.
 
Exactly. Intel is also very late to GPUs; had they started ten years ago, when they had the best fabs, they would have been in a much better position.
 
AI training requires very large numbers of chips linked by ultra-fast interconnects; that is why rack-scale GPU systems are currently so popular in the market.

However, that is not the case for AI inference, arguably the larger and more important market going forward. In inference, you only need a few chips connected together (e.g., to fit one large model into 4 or 8 chips). What matters most is very large and very fast memory access. That is why the NVIDIA H20 is currently selling so well in China ($10k per chip, significantly lower compute power than the H100, but overall a faster chip for inference).
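
A rough back-of-envelope sketch of that point (Python; the bandwidth and FLOPS figures are approximate public numbers, and the 70B-parameter model is just an assumed example, not anyone's official spec):

# Why LLM decode (inference) is memory-bandwidth-bound rather than compute-bound.
# Assumed rough figures: H100 SXM ~3.35 TB/s HBM and ~990 dense FP16 TFLOPS;
# H20 ~4.0 TB/s HBM but only ~150 dense FP16 TFLOPS.

def decode_tokens_per_sec(params_billion, bytes_per_param, mem_bw_tb_s):
    """Upper bound on single-stream decode speed: generating each token
    requires streaming essentially all model weights from HBM once."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return (mem_bw_tb_s * 1e12) / model_bytes

params_b = 70              # hypothetical 70B-parameter model
weights_gb = params_b * 2  # FP16 weights, 2 bytes per parameter
print(f"~{weights_gb} GB of weights -> a handful of 80-96 GB chips, plus KV cache")

for name, bw in [("H100 (~3.35 TB/s)", 3.35), ("H20 (~4.0 TB/s)", 4.0)]:
    print(f"{name}: <= {decode_tokens_per_sec(params_b, 2, bw):.0f} tokens/s per stream")

# The bound scales with memory bandwidth, not FLOPS, which is why a
# low-compute, high-bandwidth part can still be a strong inference chip.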

Intel should probably give up on the AI training market and focus on AI inference, producing chips that 1) are inexpensive, 2) are connected on a smaller scale (to save cost), and 3) have large and fast memory access. If Intel can execute these three points well, I believe it would sell so well that most hyperscalers would abandon their ASIC chips.
 
And time to market needs to be fast. Not sure if it is a good idea to salvage Falcon Shores for this purpose.

Trying to catch up with NVDA in AI training is futile IMO, like trying to catch up with Google in web search over the past two decades. You really need to create your own game to win.
 