MRC (Multipath Reliable Connection), announced by OpenAI (together with AMD, Broadcom, Intel, Microsoft, and Nvidia, in alphabetical order).
. Google is missing! Are Nvidia's switches & CPO critical?
. very large networks (like a hundred thousand GPUs) with only two tiers of switches
. source = blog 2026-May-05, link = https://openai.com/index/mrc-supercomputer-networking/
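For a sense of scale on the "two tiers of switches" claim: a non-blocking two-tier leaf-spine Clos built from radix-R switches tops out at roughly R²/2 endpoints. The sketch below is my own back-of-the-envelope arithmetic, not from the OpenAI post, and the radices are illustrative rather than tied to any specific product:

```python
def max_hosts_two_tier(radix: int) -> int:
    """Max endpoints in a non-blocking two-tier leaf-spine Clos.

    Each leaf switch splits its radix: half the ports face hosts,
    half face spines. Up to `radix` leaves can then be fully meshed
    through radix/2 spines, giving radix * (radix / 2) host ports.
    """
    return radix * (radix // 2)

# Illustrative switch radices:
for r in (64, 128, 512):
    print(r, max_hosts_two_tier(r))
# radix 512 -> 131,072 endpoints, i.e. "a hundred thousand GPUs"
# in only two switch tiers
```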
Exactly. By investing in a proprietary architecture, Google is far ahead, in both functionality and efficiency, of anything the industry groups can kludge together. When it comes to meeting the latest AI requirements, datacenter networking has become a deep morass of inefficient legacy layers built on top of industry standards and specifications which won't evolve to meet the needs of commercial interests. MRC is a classic example. As for OpenAI's comment about InfiniBand being a "standard", that's just silly: unless you pay for a membership in the trade association, you can't even get a copy of the specification.
AI networking is different from classic InfiniBand or Ethernet/IP networking because AI workloads, especially on GPUs, are really allergic to congestion of any kind (it causes expensive stalls in SIMD processing), and data transfer sizes have become huge. That's why "spraying" packets across links (as in Ultra Ethernet) and multipath connections (as in MRC) are all the rage.
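A toy sketch of why spraying beats classic flow hashing for huge transfers. This is purely illustrative: the function names and the multiplicative hash are my own, not from any spec, but the load-balancing contrast is the real point:

```python
from collections import Counter

KNUTH = 2654435761  # multiplicative-hash constant, illustrative choice

def ecmp_path(flow_id: int, num_paths: int) -> int:
    # Classic ECMP: every packet of a flow hashes to the same path,
    # so one huge "elephant" flow piles onto a single link while the
    # other links sit idle.
    return (flow_id * KNUTH) % num_paths

def sprayed_paths(num_packets: int, num_paths: int) -> list[int]:
    # Per-packet spraying: packets of one flow are round-robined
    # across every available path, spreading the load evenly.
    return [i % num_paths for i in range(num_packets)]

# One 8000-packet flow over 8 equal-cost paths:
ecmp_load = Counter(ecmp_path(42, 8) for _ in range(8000))
spray_load = Counter(sprayed_paths(8000, 8))
print(ecmp_load)   # all 8000 packets land on a single path
print(spray_load)  # 1000 packets on each of the 8 paths
```

The catch is that spraying delivers packets out of order, which is presumably why transports like MRC and Ultra Ethernet pair it with a reliability/reassembly layer at the endpoints.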