
OFC 2026 Summary: How Silicon Photonics, CPO, OCI, and OCS Are Redefining the Physical Boundaries of Data Centers

Posted by user nl (Well-known member):
https://tspasemiconductor.substack.com/p/ofc-2026-summary-how-silicon-photonics

OFC 2026: The Moment AI Infrastructure Moved Beyond the Chip

If the dominant theme of the semiconductor industry over the past decade was process scaling, then what truly stood out at OFC 2026 was a deeper and far more structural shift:

In the AI era, the bottleneck is no longer just within the chip itself—
it lies between chips, between racks, and between switches and accelerators.



The real question is whether we can connect the entire system—
with acceptable power consumption, manageable latency, and manufacturable packaging.

This is the essence of OFC 2026.



Optical communications is no longer just a showcase for modules, lasers, DSPs, or switching components. It has officially become a system-level battlefield for AI infrastructure.

From NVIDIA’s definition of the AI Factory, to Meta’s 90-million-hour reliability validation for CPO; from Broadcom pushing CPO toward commercial maturity, to Intel driving OCI into the package; from ST and Samsung bringing silicon photonics onto 300 mm platforms, to FormFactor and GF/Cadence tackling testing and EDA (two of the least glamorous yet most critical barriers to scale), the entire industry is, in fact, answering the same question:

As AI clusters scale to 100K, 500K, or even 1 million GPUs, what should the future data center interconnect actually look like?

OFC 2026: Not a Telecom Show, but a Debate on “System Physics” in the AI Era

In this context, OFC 2026 was not merely an optical communications conference.
It was an open debate about the “system physics” of the AI era.

Because when:

  • Rack-level power scales from 120 kW to 600 kW,
  • A single AI factory demands hundreds of megawatts,
  • Both training and inference begin treating networking as a first-class resource,

interconnect is no longer a supporting role—it becomes core infrastructure that determines whether compute can be monetized.
NVIDIA articulated this clearly at the conference:

“The data center is a computer, and the network defines its boundaries.”
What this really implies is:

The future competition is not just about who has more GPUs—
but who can assemble those GPUs into a working AI factory with:


  • The lowest pJ/bit,
  • The smallest failure domain (blast radius),
  • The highest serviceability.
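The pJ/bit point can be made concrete with back-of-envelope arithmetic. The numbers below (GPU count, per-GPU I/O bandwidth, and the two energy-per-bit figures) are illustrative assumptions, not figures from the conference:

```python
# Illustrative sketch: why pJ/bit dominates at AI-factory scale.
# Interconnect power = energy per bit (pJ/bit) x aggregate bandwidth (bit/s).

def interconnect_power_watts(pj_per_bit: float, tbps: float) -> float:
    """Power in watts for `tbps` terabits/s of traffic at `pj_per_bit`."""
    return pj_per_bit * 1e-12 * tbps * 1e12  # (J/bit) * (bit/s) -> W

# Hypothetical cluster: 100,000 GPUs, each with 10 Tb/s of off-package I/O.
gpus = 100_000
tbps_per_gpu = 10.0
total_tbps = gpus * tbps_per_gpu

# Assumed energy costs: a pluggable-era electrical+optical chain (~15 pJ/bit)
# vs. a hypothetical CPO target (~5 pJ/bit).
for label, pj in [("pluggable ~15 pJ/bit", 15.0), ("CPO target ~5 pJ/bit", 5.0)]:
    mw = interconnect_power_watts(pj, total_tbps) / 1e6
    print(f"{label}: {mw:.0f} MW of interconnect power")
```

At this (assumed) scale the pJ/bit figure alone swings the interconnect budget by tens of megawatts, which is why it sits first on the list.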

The Real Bottleneck of AI Factories: Not Compute, but Whether Interconnect Can Keep Up with Agentic Scaling

Historically, discussions around AI infrastructure have focused on GPUs, HBM, advanced packaging, and process nodes, but at OFC 2026 one signal became unmistakably clear: the performance scaling of compute cores is now outpacing the scaling of I/O and networking, fundamentally shifting the bottleneck. The key question is no longer whether a single chip is fast enough, but whether the entire cluster can operate as if it were a single chip.

From Bandwidth to Architecture: Scale-Up vs. Scale-Out

This is why companies across the spectrum— OpenAI, Microsoft, AMD, NVIDIA, and Broadcom— are shifting the discussion:

From raw bandwidth → to architectural partitioning between scale-up and scale-out

  • Scale-up: ultra-low latency, high synchronization efficiency between accelerators
  • Scale-out: throughput, elasticity, and scalability across racks and clusters
In the past, both domains could partially rely on shared copper interconnects and electrical switching architectures.

But as model sizes and data flows explode, that assumption is breaking down.

OpenAI’s framing at the conference was particularly direct: inference token growth is no longer linear but expanding by orders of magnitude, which implies that each new generation of infrastructure must simultaneously deliver roughly 2× improvements in bandwidth density while continuously reducing I/O power; otherwise, the economics of the AI factory will deteriorate rapidly.
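That framing reduces to simple compound arithmetic: if bandwidth density roughly doubles each generation, energy per bit must fall at a comparable rate just to hold the I/O power envelope flat. A minimal sketch with normalized, hypothetical numbers:

```python
# Back-of-envelope sketch (normalized, hypothetical values): generational
# bandwidth doubling vs. the pJ/bit reduction needed to keep power flat.

gens = 4            # assumed: four infrastructure generations
bw_density = 1.0    # bandwidth density, normalized to generation 0
flat_power_pj = 1.0 # energy per bit (normalized) that keeps power constant

for g in range(1, gens + 1):
    bw_density *= 2.0                 # ~2x bandwidth density per generation
    flat_power_pj = 1.0 / bw_density  # pJ/bit must shrink by the same factor
    print(f"gen {g}: {bw_density:.0f}x bandwidth -> {flat_power_pj:.4f}x pJ/bit")
```

Four generations of doubling demand a 16x reduction in energy per bit for a flat power budget; anything less and total I/O power, and with it the AI factory's economics, deteriorates.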



Source: epoch.ai

Below we will share:

  • Silicon Photonics Has Finally Entered the True 300 mm Era—But the Bottleneck Is No Longer Just the Device
  • 1.6T Is No Longer the Future — The Real Battlefield Lies in 3.2T and Beyond 400G/lane
  • From Pluggables to CPO to OCI: The Interconnect Boundary Keeps Moving Closer to the Chip
  • Meta Has Answered the Hardest Question Around CPO: Reliability
  • NVIDIA, Lightmatter, and Broadcom Represent Three Different Futures: High-Density Microrings, In-Package DWDM, and System-Level CPO
  • The Real Implication for Taiwan and the Supply Chain: The Center of Value in Optical Interconnects Is Shifting from Modules to Platforms and Packaging
I think this roadmap for CPO was shared at both GTC and OFC.

Attachment: Screenshot 2026-03-30 at 8.55.18 AM.png
Probably the most significant potential change is the one driven by OCI: essentially the elimination of high-speed SerDes (power-hungry, short reach), replaced by optical CPO with UCIe on the chip side and parallel CWDM/DWDM fibers on the other, all carrying simple NRZ data at much lower overall power consumption. Given the companies who are already backing this, it's difficult to continue arguing that it won't happen... ;-)


"The six OCI MSA founding members are AMD, Broadcom, Nvidia, Meta, Microsoft, and OpenAI. Their aim is to make optics rather than copper the preferred interconnect approach for AI scale-up networks by building an industry consensus and moving away from proprietary designs.
By removing the SerDes electrical overhead, Broadcom believes optical links can cross a new power threshold. “Our switches and XPUs are going to be lower power with optics than with copper,” he says.
If that claim holds, it would represent a fundamental shift in interconnects. Optical links would be justified by power efficiency compared to electrical alternatives, not just by reach or bandwidth."

I would agree with this analysis, but there will undoubtedly be strong pushback from the many companies whose business model will be pretty much destroyed by such a fundamental shift away from pluggable optical modules and high-speed electrical SERDES interfaces... ;-)
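The wide-and-slow vs. narrow-and-fast trade described above can be sketched with assumed lane rates; the 3.2T aggregate, 200G PAM4, and 32G NRZ figures below are hypothetical round numbers chosen for illustration, not rates from the OCI MSA:

```python
# Illustrative lane math (assumed rates): a few fast serial PAM4 lanes vs.
# many slow parallel NRZ wavelengths delivering the same aggregate bandwidth.

link_gbps = 3200  # hypothetical aggregate per direction (3.2 Tb/s)

# Serial SerDes path: few electrical lanes at very high rates using PAM4.
serdes_lane_gbps = 200
serdes_lanes = link_gbps / serdes_lane_gbps

# OCI-style path: many parallel CWDM/DWDM wavelengths carrying simple NRZ,
# fed from the die over UCIe instead of long-reach SerDes.
nrz_lane_gbps = 32
nrz_lanes = link_gbps / nrz_lane_gbps

print(f"{serdes_lanes:.0f} x {serdes_lane_gbps}G PAM4 lanes")
print(f"{nrz_lanes:.0f} x {nrz_lane_gbps}G NRZ wavelengths")
```

The parallel path needs far more lanes, but each lane avoids the equalization, FEC overhead, and drive power that high-rate PAM4 SerDes require, which is where the claimed system-level power win would come from.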
 
when do you guys think volume production of CPO will show up? in 2028?
 
https://www.hankyung.com/article/2026032940411

Silicon Photonics Roadmap Unveiled:

Technology and Platform to be Secured by Next Year,
Combined with AI Semiconductors for Mass Production in Two Years
Samsung Electronics’ Foundry (semiconductor contract manufacturing) division has unveiled a roadmap to begin mass production of silicon photonics, often referred to as the "semiconductor of dreams," starting in 2028. The company plans to accelerate its pursuit of Taiwan's TSMC, the industry leader, through a "turnkey" strategy that bundles silicon photonics with High Bandwidth Memory (HBM), system semiconductor foundry services, and packaging.

According to industry sources on the 29th, Samsung Electronics announced this strategy at the "OFC 2026" conference held in Los Angeles on the 17th. Silicon photonics is a technology that transmits and receives data using light instead of electricity. Compared to the current method of transmitting data via copper circuits, it offers a dramatic increase in speed. It is attracting attention as an alternative solution to address the data bottlenecks identified as a major issue in artificial intelligence (AI) semiconductors.

Samsung Electronics will focus on securing fundamental silicon photonics technology and platforms by next year. This involves combining photonic integrated circuits (PICs), which convert between optical and electrical signals, with electronic integrated circuits (EICs), which finely control those electrical signals, so that they function as a single semiconductor.

The full-scale integration with AI semiconductors is set to begin in 2028. This involves mounting silicon photonics semiconductors next to the 'switch chip,' where external information is first gathered. This approach is similar to the 'Spectrum-X' product family that Nvidia developed with TSMC last year. The scope of application will be expanded in 2029. The structure involves incorporating silicon photonics into a packaged chip that combines a Graphics Processing Unit (GPU), responsible for AI computation, with HBM.

Samsung Electronics is focusing on this technology to secure global big tech companies as foundry clients. Although the commercialization of silicon photonics is still in its early stages, major semiconductor firms regard it as a core next-generation technology. Nvidia, in particular, is moving most actively. Nvidia CEO Jensen Huang has been mentioning silicon photonics at every official event since last year. Recently, the company invested $2 billion (approximately 3 trillion won) in Lumentum, a U.S. company specializing in silicon photonics, to secure related technology. Observers suggest that Samsung Electronics’ foundry division must also quickly secure mass production technology for silicon photonics to gain an advantage in winning orders for next-generation products from semiconductor clients.

According to the roadmap disclosed by Samsung Electronics, the technological gap with TSMC is approximately three years.
Samsung Electronics plans to differentiate itself from TSMC through a turnkey strategy. The goal is to establish an integrated semiconductor mass production system by adding silicon photonics to memory (HBM), system semiconductors, and advanced packaging—capabilities that TSMC lacks. This implies that Samsung will be able to attract clients by leveraging shortened production times and lower costs. An industry insider stated, "The real showdown with TSMC will begin the moment Samsung Electronics applies silicon photonics to actual mass production."

By Kang Hae-ryeong, hr.kang@hankyung.com
 