Intel foundry business to make custom chip for Amazon, chipmaker's shares jump

The AI Fabric is probably the interconnect for Amazon's ASICs, Trainium and Inferentia. It's a good first step and opens the door for more, and more important, chiplets to be made on 18A.

From SemiAnalysis:

"The scale-up accelerator interconnect (NVLink on Nvidia, Infinity Fabric/UALink on AMD, ICI on Google TPU, NeuronLink on Amazon Trainium 2) is an ultra-high speed network that connects GPUs together within a system. On Hopper, this network connected 8 GPUs together at 450GB/s each, while on Blackwell NVL72, it will connect 72 GPUs together at 900GB/s each. There is a variant of Blackwell called NVL576 that will connect 576 GPUs together, but basically no customers will opt for it. In general, your accelerator interconnect is 8-10x faster than your backend networking."
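As a rough back-of-envelope sketch of the figures in that quote (illustrative Python; the bandwidth numbers and the 8-10x ratio come from the quote, the system names and variable names are just labels):

```python
# Back-of-envelope comparison of scale-up interconnect vs backend networking,
# using the per-GPU bandwidths quoted above (GB/s).
systems = {
    "Hopper (8-GPU domain)": {"gpus": 8, "scale_up_gbps": 450},
    "Blackwell NVL72":       {"gpus": 72, "scale_up_gbps": 900},
}

for name, s in systems.items():
    # Total scale-up bandwidth across the domain.
    aggregate = s["gpus"] * s["scale_up_gbps"]
    # The quote puts the accelerator interconnect at roughly 8-10x the speed
    # of backend networking, implying a backend per-GPU bandwidth of about:
    backend_low = s["scale_up_gbps"] / 10
    backend_high = s["scale_up_gbps"] / 8
    print(f"{name}: {aggregate} GB/s aggregate scale-up, "
          f"~{backend_low:.0f}-{backend_high:.0f} GB/s backend per GPU")
```

Hopper's 8-GPU domain works out to 3,600 GB/s aggregate versus 64,800 GB/s for NVL72, which is the scale-up jump the chiplet/fabric business is chasing.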
 
Yeah, Intel has one of these too, called Xe Link, and they're all proprietary. The AI Fabric chip discussed in the announcement could be one of these switch chips. But the article and my post above are just speculation for now.
 