
Nvidia invests $2B so that Marvell (and its XPU customers) adopt "NVLink Fusion"

NY_Sam2

Member

Marvell's XPU Customers = Amazon, Microsoft, Meta, Google?


Source: https://nvidianews.nvidia.com/news/...as-marvell-joins-forces-through-nvlink-fusion
 
Nvidia bets $2 billion on Marvell as rising AI adoption fuels competition

NVIDIA AI Ecosystem Expands as Marvell Joins Forces Through NVLink Fusion

Collaboration Delivers Greater Choice and Flexibility for Customers and Fully Compatible with NVIDIA AI Infrastructure


SANTA CLARA, Calif. – March 31, 2026 – NVIDIA and Marvell Technology, Inc. (NASDAQ: MRVL) today announced a strategic partnership to connect Marvell to the NVIDIA AI factory and AI-RAN ecosystem through NVIDIA NVLink Fusion™, offering customers building on NVIDIA architectures greater choice and flexibility in developing next-generation infrastructure. The companies will also collaborate on silicon photonics technology.

In addition, NVIDIA has invested $2 billion in Marvell.


The partnership builds on NVIDIA NVLink Fusion, a rack-scale platform that enables customers to develop semi-custom AI infrastructure using the NVIDIA NVLink™ ecosystem. Marvell will provide custom XPUs and NVLink Fusion compatible scale-up networking, while NVIDIA will provide the supporting technologies, including Vera CPU, ConnectX® NICs, Bluefield® DPUs, NVLink interconnect and Spectrum-X™ switches, and the rack-scale AI compute.

For customers developing custom XPUs, NVLink Fusion enables a heterogeneous AI infrastructure fully compatible with NVIDIA systems, allowing seamless integration with NVIDIA GPU, LPU, networking and storage platforms while leveraging NVIDIA’s rich technology stack and global supply chain ecosystem.

The companies will also partner to transform the world’s telecommunication network into AI infrastructure with NVIDIA Aerial AI-RAN for 5G/6G, and advance world-class networking for AI, including advanced optical interconnect solutions and silicon photonics technology.

“The inference inflection has arrived. Token generation demand is surging, and the world is racing to build AI factories,” said Jensen Huang, founder and CEO of NVIDIA. “Together with Marvell, we are enabling customers to leverage NVIDIA’s AI infrastructure ecosystem and scale to build specialized AI compute.”

“Our expanded partnership with NVIDIA reflects the growing importance of high-speed connectivity, optical interconnect and accelerated infrastructure in scaling AI,” said Matt Murphy, Chairman and CEO of Marvell. “By connecting Marvell’s leadership in high-performance analog, optical DSP, silicon photonics and custom silicon to NVIDIA’s expanding AI ecosystem through NVLink Fusion, we are enabling customers to build scalable, efficient AI infrastructure.”

About Marvell
To deliver the data infrastructure technology that connects the world, we’re building solutions on the most powerful foundation: our partnerships with our customers. Trusted by the world’s leading technology companies for over 30 years, we move, store, process and secure the world’s data with semiconductor solutions designed for our customers’ current needs and future ambitions. Through a process of deep collaboration and transparency, we’re ultimately changing the way tomorrow’s enterprise, cloud and carrier architectures transform—for the better.

About NVIDIA
NVIDIA (NASDAQ: NVDA) is the world leader in AI and accelerated computing.

 
So NV is paying Marvell to support NVLink (a very small amount at NV's scale) to potentially get into existing non-NV infrastructures, a small bet that might expand their market reach.
 
That is how I read it. Jensen sure is putting his money where his mouth is. Better than stock buybacks and other financial monkey business.

Nvidia is going to have to change their tagline to "Nvidia Investment Everywhere".
 
to potentially get into current non-NV infrastructures, a small bet that might expand their market reach.
From the video, it sounds more like NVIDIA wants to get other people’s processors and hardware into NVIDIA’s rack-level solutions. NVLink is key.

Given the Amazon / Cerebras linkup on disaggregated inference and the speed with which NVIDIA added the Groq LPUs to their rack-level solution, both at a hardware and software level via a new flavor of disaggregation, I can see a future where there is much more plug-and-play specialized acceleration inside the data center.
 
Based on this announcement, I'm not seeing an obvious huge gain for customers or for AI ASIC providers. NVLink is a so-called scale-up network: it provides shared-memory access semantics (in two variants, depending on whether or not the C2C version is supported), rather than the networking semantics that scale-out (and scale-across) networks provide.
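To make the scale-up vs. scale-out distinction concrete, here is a toy Python sketch (my own illustration, not real NVLink, UALink, or fabric API code): a scale-up fabric lets a device do plain loads and stores into a peer's memory, while a scale-out fabric only moves data through explicit send/receive operations.

```python
class ScaleUpDevice:
    """Scale-up semantics: a peer's memory is directly addressable,
    so data moves via ordinary loads and stores (the NVLink-style model)."""
    def __init__(self, size):
        self.memory = [0] * size   # locally owned memory
        self.peer = None           # peer whose memory is mapped in

    def attach(self, peer):
        self.peer = peer

    def store_remote(self, addr, value):
        # A direct store into the peer's address space; no message, no copy.
        self.peer.memory[addr] = value

    def load_remote(self, addr):
        # A direct load from the peer's address space.
        return self.peer.memory[addr]


class ScaleOutDevice:
    """Scale-out semantics: data moves only through explicit send/receive,
    as on an Ethernet/InfiniBand-style network."""
    def __init__(self):
        self.inbox = []            # stands in for a NIC receive queue

    def send(self, peer, payload):
        peer.inbox.append(payload) # stands in for a NIC transmit

    def receive(self):
        return self.inbox.pop(0)   # stands in for a receive + copy out


# Scale-up: device A writes straight into device B's memory.
a, b = ScaleUpDevice(4), ScaleUpDevice(4)
a.attach(b); b.attach(a)
a.store_remote(2, 99)
print(b.memory[2])        # 99, visible without B doing anything

# Scale-out: B must explicitly receive what A sent.
x, y = ScaleOutDevice(), ScaleOutDevice()
x.send(y, 99)
print(y.receive())        # 99, but only after an explicit receive
```

The point of the contrast: in the scale-up case the data simply appears in the peer's memory, which is why cache coherency and memory-ordering rules become the hard part of the spec, whereas the scale-out case pushes that complexity into software message handling.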

The industry is working on an industry specification for a scale-up network, called UALink. The UALink 1.0 specification was just published for evaluation and feedback purposes, and I think it's a mess. Most industry specifications designed in multi-company committees are a mess, though there are a precious few exceptions. The exceptions are mostly specifications which have been initially conceived by a single company, and the first completed version is then "contributed" to an industry group created specifically to "mature" the initial contributed version. UALink was defined in multi-company committees, and IMO it really shows, in a bad way. It's trying to be flexible enough to be used by any AI chip vendor, and in the process it is a spec with many sections left to be defined by implementing companies.

I think Nvidia sees an opportunity to get the AI industry to standardize, in a way, on Nvidia's scale-up and cache-coherent interconnects, specifically because of the holes that are going unfilled by UALink, and Nvidia is a major chip company opening up its proven cache coherency and memory sharing architectures. If NVLink Fusion takes off, Nvidia will be defining industry server architecture to an extent even Intel never dreamt of. I don't think the real value proposition for NVLink Fusion arrives anytime soon; it is a decade++ vision for server architecture domination.

[For multi-node memory sharing at the level of memory controller semantics, CXL is getting some traction. Nvidia has made some, well, comments, that NVLink will interoperate at some level with CXL for memory expansion, but I haven't seen a detailed explanation of how that will work. (If someone here has seen that, I would appreciate if you would post it.)]

I think the NVLink strategy is brilliant when viewed in this context, because it just might succeed.
 