What Lies Ahead for Auto Industry in 2024

Which is probably the reason why there exist interface standards for hyperscale datacenters: because companies like Facebook actually demanded them.

The Facebook rack failed to gain adoption, and now they have made themselves hostages to the few OEMs who will bother to offer them such custom service.
To my knowledge, all the hyperscalers have their own racks, usually several generations of them, and ODMs make custom boards for all of them. None of the hyperscalers is buying standard off-the-shelf hardware because their infrastructure is not off the shelf, and they have no regrets. I have studied Meta's choices and they make perfect sense for their use mix, showing a high level of co-design optimization across product needs, software, and hardware.

All ODMs want that business. Any one contract from a hyperscaler dwarfs the shipment volume of "standard" designs. When they win such a contract, the "A" team is put on execution. The insights they gain from seeing the future in the new things hyperscalers do are gold.
 
That's what I thought, but then I read stuff like this:


When I first read about Olympus some time ago, I thought it was just Azure Edge related; then I read this article and I'm not sure what to think. I do know this: reading datacenter deployment articles has been known to help me get to sleep at night or on a plane.
 
Olympus had a good run, with ideas contributed mostly by Azure and Meta. The ODMs saw an opportunity to leverage that knowledge for other enterprise users who could benefit from some of it. There have been international presentations on it, so there seem to be users in many places.

I suspect the leading edge of server racks, with much higher power densities, liquid cooling, and disaggregated storage and networking, will have evolved away from that. Things change. And that is just for "ordinary" servers with the latest CPUs, never mind the changes happening for AI, supercomputing, and EDA clusters.
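
To put rough numbers on "much higher power densities," here is a small illustrative sketch. The per-server wattages and server counts below are assumptions chosen for the example, not figures from this thread, but they show why dense racks push operators toward liquid cooling.

# Illustrative rack power-density math (assumed figures, not from the thread).
def rack_power_kw(servers_per_rack: int, watts_per_server: float) -> float:
    """Total rack power in kW for a given server count and per-server draw."""
    return servers_per_rack * watts_per_server / 1000.0

# Assumed: a rack of 1U air-cooled servers vs. a rack of dense accelerator nodes.
traditional = rack_power_kw(servers_per_rack=40, watts_per_server=400)    # ~16 kW
accelerator = rack_power_kw(servers_per_rack=18, watts_per_server=6000)   # ~108 kW

print(f"Traditional air-cooled rack: ~{traditional:.0f} kW")
print(f"Dense accelerator rack:      ~{accelerator:.0f} kW")
# Air cooling is commonly cited as practical up to roughly 20-40 kW per rack;
# well beyond that, direct-to-chip or immersion liquid cooling becomes the
# attractive option, which is the shift the post above is pointing at.
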
 