
MRC supercomputer networking, announced by OpenAI and others

NY_Sam2

Member
MRC (Multipath Reliable Connection), announced by OpenAI (and AMD, Broadcom, Intel, Microsoft, Nvidia, in alphabetical order)
. Google is missing! Nvidia's switches & CPO are critical?
. very large networks (like a hundred thousand GPUs) with only two tiers of switches
. source = blog 2026-May-05, link = https://openai.com/index/mrc-supercomputer-networking/
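For scale, a back-of-the-envelope sketch (illustrative numbers, not from the announcement): a two-tier nonblocking leaf-spine fabric built from radix-R switches can attach R²/2 hosts, so high-radix switches reach the ~100k-GPU scale mentioned above, e.g. radix 512 gives 131,072 hosts.

```python
# Illustrative arithmetic: hosts reachable by a two-tier nonblocking
# leaf-spine (folded Clos) fabric built from radix-R switches.
# Each leaf uses R/2 ports down (hosts) and R/2 ports up (spines);
# R leaves hang off R/2 spines, giving R * R/2 = R^2 / 2 hosts total.
def two_tier_hosts(radix: int) -> int:
    return radix * radix // 2

for r in (64, 256, 512):
    print(f"radix {r:>3}: {two_tier_hosts(r):>7} hosts")
```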

Very interesting announcement.

Google might be missing because they are all in on optical switching (OCS) already with their Apollo/Palomar hardware and beyond:

Exactly. By investing in a proprietary architecture, Google is far ahead of anything the industry groups can kludge together, in both functionality and efficiency. For meeting the latest AI requirements, datacenter networking has become a deep morass of inefficient legacy layers built on top of industry standards and specifications that won't evolve to meet the needs of commercial interests. MRC is a classic example. As for OpenAI's comment about InfiniBand being a "standard", that's just silly: unless you pay for a membership in the trade association, you can't even get a copy of the specification.

AI networking is different from classic InfiniBand or Ethernet/IP networking because AI workloads, especially on GPUs, are allergic to congestion of any kind (it causes expensive stalls in SIMD processing), and data transfer sizes have become huge. That's why "spraying" packets across links (as in Ultra Ethernet) and multipath connections (as in MRC) are all the rage.
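As a toy illustration of the spraying idea (this is not MRC's or Ultra Ethernet's actual wire protocol; the function names and numbers here are made up), a large transfer can be split round-robin across several paths and reassembled by sequence number on the receiver:

```python
from typing import List, Tuple

def spray(message: bytes, num_paths: int, mtu: int) -> List[List[Tuple[int, bytes]]]:
    """Split a message into MTU-sized packets and round-robin them across
    num_paths independent paths. Each packet carries a sequence number so
    the receiver can restore order, since paths deliver independently."""
    packets = [(seq, message[i:i + mtu])
               for seq, i in enumerate(range(0, len(message), mtu))]
    paths: List[List[Tuple[int, bytes]]] = [[] for _ in range(num_paths)]
    for seq, chunk in packets:
        paths[seq % num_paths].append((seq, chunk))
    return paths

def reassemble(paths: List[List[Tuple[int, bytes]]]) -> bytes:
    # Merge packets from all paths and sort by sequence number.
    return b"".join(chunk for _, chunk in
                    sorted(pkt for path in paths for pkt in path))

msg = bytes(range(256)) * 40                 # 10 KiB toy "transfer"
sprayed = spray(msg, num_paths=4, mtu=1024)
assert reassemble(sprayed) == msg            # order restored regardless of path
```

The point of the sketch: no single path carries the whole flow, so one congested link slows only a fraction of the packets instead of stalling the entire transfer.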
 