Die-to-Die Connections Crucial for SoCs Built with Chiplets
by Tom Simon on 06-21-2021 at 6:00 am

If you subscribe to the notion that things move in circles, or concentrically, the move to die-to-die connectivity makes complete sense. Just as multi-chip modules (MCMs) were the right technology decades ago to improve power, area, performance, and cost, the use of chiplets with die-to-die connections provides many advantages for today’s envelope-pushing designs. In an article titled “How to Achieve High Bandwidth and Low Latency Die-to-Die Connectivity,” Manuel Mota of Synopsys gives a good overview and analysis of the reasons for using chiplets. The article also discusses IP that can be used to implement die-to-die connections.

Die-to-die connections

The traditional approach of implementing monolithic designs begins to break down as die sizes increase. Wafer-scale chips are staggeringly large now, with trillions of transistors. Building a chip on the most advanced node usually requires moving I/Os and other analog or RF blocks to the new process node, which can be time consuming and costly. Additionally, a single fabrication defect on a large die can scrap the entire chip, hurting yield.

Once the use of chiplets is examined as a solution to these potential problems, other benefits become apparent. The Synopsys article enumerates four major use cases for employing die-to-die connections. While the article focuses mainly on hyperscale data center needs, the use cases apply to other applications as well.

First off, chiplets allow various configurations of accelerators to be built from a core set of CPU, AI, or GPU accelerator blocks with tightly coupled connections. As mentioned above, using smaller dies helps manage yield while extending Moore’s Law by enabling the assembly of even larger compute engines from smaller chiplets. Die-to-die connections between chiplets also let each individual functional element be fabricated on the optimal process node, which is a big help for RF, FPGAs, and other applications with unique functional elements. The final use case cited in the article is how large digital cores that are pushing toward the most advanced node can leverage I/Os designed on more conservative nodes for lower cost and improved reusability.

The motivations given in the article for using die-to-die connections are very compelling. The tougher part of the equation is finding the optimal die-to-die interface. Something as simple as on-chip buses or connections cannot be used, while the IO interfaces used for chip-to-chip connections would defeat the purpose by adding latency, area, and power consumption. There needs to be a Goldilocks solution that balances all of these factors.

Today there is no industry standard for die-to-die interfaces, though Synopsys is working with others to develop one. Die-to-die interfaces need to offer error correction to ensure reliable links. They must also support high-bandwidth connections so that overall speeds are comparable to block-to-block connections on the same die. The PHY layer should be optimized for short-reach, low-loss connections. And, of course, they should be power efficient.
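To make that checklist concrete, here is a minimal sketch in Python of how the requirements might be captured as a simple feasibility check. All field names and threshold values are hypothetical assumptions for illustration only; they do not come from the article or from any particular standard.

```python
from dataclasses import dataclass

@dataclass
class DieToDieLink:
    bandwidth_gbps: float       # aggregate bandwidth of the link
    reach_mm: float             # physical reach across the package
    energy_pj_per_bit: float    # power efficiency of the PHY
    has_error_correction: bool  # FEC/CRC for reliable links

def meets_requirements(link: DieToDieLink,
                       min_bandwidth_gbps: float = 500.0,
                       max_reach_mm: float = 25.0,
                       max_energy_pj_per_bit: float = 1.0) -> bool:
    """True if the link satisfies the illustrative targets above."""
    return (link.bandwidth_gbps >= min_bandwidth_gbps
            and link.reach_mm <= max_reach_mm
            and link.energy_pj_per_bit <= max_energy_pj_per_bit
            and link.has_error_correction)

candidate = DieToDieLink(bandwidth_gbps=896, reach_mm=10,
                         energy_pj_per_bit=0.5, has_error_correction=True)
print(meets_requirements(candidate))  # True for these illustrative numbers
```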

The Synopsys article concludes with a summary of their die-to-die IP offering, which includes a controller and a PHY. The DesignWare Die-to-Die Controller IP offers industry-leading low latency, with error recovery for high reliability. The controller supports AMBA CXS and AXI protocols and integrates with the Arm Neoverse Coherent Mesh Network. The DesignWare Die-to-Die PHY IP uses high-speed SerDes technology that runs up to 112 Gbps for ultra- and extra-short-reach links. For high-density 2.5D-packaged SoCs, they offer a High-Bandwidth Interconnect (HBI) PHY that delivers 8 Gbps.
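As a rough illustration of the bandwidth math, the sketch below multiplies the per-lane rates quoted above (112 Gbps SerDes, 8 Gbps HBI) by lane counts. The lane counts are hypothetical assumptions chosen only to show the trade-off; they are not figures from the article.

```python
# Back-of-the-envelope aggregate bandwidth for a die-to-die interface.
# Per-lane rates are the ones quoted above (112 Gbps SerDes, 8 Gbps HBI);
# the lane counts are hypothetical and chosen only for illustration.

def aggregate_bandwidth_gbps(lanes: int, gbps_per_lane: float) -> float:
    """Raw aggregate bandwidth across all lanes, before any coding overhead."""
    return lanes * gbps_per_lane

# A few fast SerDes lanes vs. many slower, denser HBI lanes can land in the
# same overall bandwidth range; the trade-off is die edge area, power, and reach.
serdes_total = aggregate_bandwidth_gbps(lanes=8, gbps_per_lane=112)   # 896 Gbps
hbi_total = aggregate_bandwidth_gbps(lanes=112, gbps_per_lane=8)      # 896 Gbps

print(f"8 x 112 Gbps SerDes lanes: {serdes_total:.0f} Gbps")
print(f"112 x 8 Gbps HBI lanes:   {hbi_total:.0f} Gbps")
```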

The article also touches on how their die-to-die IP can be easily integrated into designs with the Synopsys 3DIC Compiler. The move away from monolithic ICs for many applications will continue, and of course we will also still see large wafer-scale designs and ever-larger chips. Regardless, the advantages of die-to-die connections will lead to their increased use for the foreseeable future. The article, which is available on the Synopsys website, provides good background and tangible solutions for those looking to employ die-to-die connections in upcoming designs.

Also Read:

Mars Perseverance Rover Features First Zoom Lens in Deep Space

Verification Management the Synopsys Way

Synopsys Debuts Major New Analog Simulation Capabilities
