Verifying Inter-Chiplet Communication
by Daniel Nenni on 07-04-2022 at 6:00 am

Chiplets are hot now as a way to extend Moore’s Law, dividing functionality across multiple die within a single package. It’s no longer practical to jam all functionality onto a single die in the very latest processes; some designs exceed reticle limits, while others strain cost and yield. This is not an academic concern: server processors, FPGAs and large AI training platforms already span multiple chiplets in a package. The move to chiplets serves more than growing gate counts in large systems. It also allows many functions to be parceled out to individual die in less aggressive technologies, for lower cost and potentially broader availability, reserving the most aggressive processes only for the functions and die that need that advantage.

This seems like the best of all possible worlds, but the idea only works if you have a very fast (and low-power) interconnect between those chiplets. That’s the goal of the Universal Chiplet Interconnect Express (UCIe). How do you verify compliance with a standard that is new in town? You work with a company that has a track record of tight relationships with standards developers and of delivering VIP and compliance checking. Avery has that track record.

The foundations of UCIe

UCIe builds on well-proven standards, particularly PCIe as a host extension interface, already long established in the PC and server world. Add to this CXL for coherent memory connectivity (memory, IO and cache) between chiplets. PCIe and CXL are mapped natively in UCIe in acknowledgement of the reality that they are already widely used; the fact that they plug and play with existing software is another not inconsiderable detail. Add to this support for a raw streaming protocol as a way to extend to further protocols. Together, this combination seems like a no-brainer for chiplet-to-chiplet communication. I’ve heard some grumbling from the AI training world about PCIe overhead impeding coherent communication performance with the core; perhaps the streaming protocol might mitigate this issue. For everyone else, the benefits outweigh that bleeding-edge limitation.
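The layered mapping described above can be sketched as a toy model. To be clear, the class names, fields and byte counts here are illustrative assumptions for the sketch, not part of the UCIe specification:

```python
from dataclasses import dataclass
from enum import Enum

class ProtocolLayer(Enum):
    """Protocols UCIe maps natively, plus raw streaming for everything else."""
    PCIE = "PCIe"            # host extension, plug-and-play with existing software
    CXL = "CXL"              # coherent memory/IO/cache connectivity
    STREAMING = "streaming"  # raw mode, extensible to further protocols

@dataclass
class UCIeLink:
    """Hypothetical model of one chiplet-to-chiplet link."""
    protocol: ProtocolLayer

    def route(self, payload: bytes) -> str:
        # PCIe and CXL traffic passes through the die-to-die adapter with
        # protocol framing; streaming mode carries the payload raw.
        if self.protocol is ProtocolLayer.STREAMING:
            return f"raw stream: {len(payload)} bytes"
        return f"{self.protocol.value} flit: {len(payload)} bytes"

link = UCIeLink(ProtocolLayer.CXL)
print(link.route(b"\x00" * 64))  # CXL flit: 64 bytes
```

The point of the sketch is simply that one physical link carries three traffic classes, selected per link, which is what lets existing PCIe/CXL software stacks run unchanged.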

Thanks to short signal paths on substrate or interposer (for example), IO performance is expected to be 20x better than conventional PCIe SERDES, at significantly lower power. The standard is also designed to support off-package connectivity at board, rack or pod level, supported by retimers as needed. Scaling out is clearly a longer-term goal.

High performance at low power, building on established standards: it is easy to see why UCIe has garnered wide support – from Intel (of course), and also AMD, Google Cloud, Meta, Microsoft, Arm, Samsung, Qualcomm, TSMC and others.

Verification

A standard depends on tooling to verify compliance with that standard. I can’t speak to physical compliance checking, but I do know that Avery is a contributing member and has built VIP to validate functional compliance at the protocol and logical PHY layers. As an established provider of VIP across multiple domains – high-speed IO, storage, embedded storage, mobile, memory and others – Avery already has the chops to deliver for UCIe. Their PCIe and CXL VIPs are proven, and their QEMU co-simulation platform simplifies software co-design and validation with the RTL design.

Avery offers a complete functional verification platform based on its robustly tested verification IP (VIP) portfolio, enabling pre-silicon validation of design elements. Its UCIe offering supports standalone verification of the UCIe die-to-die adapter and logical PHY, along with integrated PCIe and CXL VIP running over the UCIe stack. In addition to UCIe models, it provides comprehensive protocol checkers, coverage, reference testbenches and compliance test suites, built on a flexible and open architecture.
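As a rough illustration of what a protocol checker does (the rules and names below are invented for the sketch – they are not Avery’s API, nor the actual UCIe rule set), consider a scoreboard that flags malformed or out-of-order flits as they pass over the link:

```python
class FlitChecker:
    """Toy protocol checker: verifies flit length and sequence numbering.
    The 64-byte flit size and the in-order rule are illustrative assumptions."""
    FLIT_BYTES = 64

    def __init__(self):
        self.expected_seq = 0
        self.errors = []

    def observe(self, seq: int, flit: bytes) -> None:
        # Rule 1: every flit must be exactly FLIT_BYTES long.
        if len(flit) != self.FLIT_BYTES:
            self.errors.append(f"flit {seq}: bad length {len(flit)}")
        # Rule 2: sequence numbers must arrive in order.
        if seq != self.expected_seq:
            self.errors.append(f"expected seq {self.expected_seq}, got {seq}")
        self.expected_seq = seq + 1

checker = FlitChecker()
checker.observe(0, b"\x00" * 64)   # conforming flit, no error
checker.observe(2, b"\x00" * 64)   # sequence gap, flagged
print(checker.errors)  # ['expected seq 1, got 2']
```

A real compliance suite layers hundreds of such rules, plus coverage collection, on top of a full protocol model; the sketch just shows the observe-and-flag pattern that checkers share.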

You can learn more HERE.

Also read:

Data Processing Unit (DPU) uses Verification IP (VIP) for PCI Express

PCIe 6.0, LPDDR5, HBM2E and HBM3 Speed Adapters to FPGA Prototyping Solutions

Controlling the Automotive Network – CAN and TSN Update
