Chiplet Q&A with Henry Sheng of Synopsys
by Daniel Nenni on 05-05-2023 at 6:00 am

SNUG Panel

At the recent Synopsys Users Group Meeting (SNUG) I had the honor of leading a panel of experts on the topic of chiplets. One of those panelists was the very personable Dr. Henry Sheng, Group Director of R&D in the EDA Group at Synopsys. Henry currently leads engineering for 3DIC, advanced technology and visualization.

Are we seeing other markets move in this direction?

We’re seeing a broad movement toward multi-die systems, for some very good reasons. Early on, some of the advantages were seen in high performance computing (HPC), but now automotive is starting to adopt multi-die systems as well.

There are other technical motivations, such as heterogeneous integration. If you migrate a design to the most advanced process node, do you really need the entire system to be at that three-nanometer node? Or do you implement the service functions of your system in a different technology node? Memory access has been another game changer: in the past you had to go through a board to get to memory, whereas with interposers you can get much closer, with much higher bandwidth.

Stacking unleashes a lot of possibilities. It’s not necessarily just memories, but also applications such as image sensors. Instead of taking data in through a straw, eventually you get to the point where data is raining down into your compute die. I think there’s a lot to like about multi-die systems, across a lot of different applications.

What other industry collaborations, IP, and methodologies are required to address the system-level complexity challenge?

There’s a lot of collaboration needed. John just mentioned the partnership that Synopsys has with ANSYS on system analysis. That kind of collaboration is really key. Back in the day, you had manufacturing, design, and tooling all under one roof. Over time, market forces and market efficiencies pulled that apart into different enterprises. But while that’s economics, the nature of the technical problem is still very much intertwined. And if you look across this panel, you see a very tightly connected graph amongst all of us here. A lot of collaboration is needed, and I think that’s pretty remarkable. I don’t know how many other industries have this deep a level of collaboration in order to compete and yet still make progress together.

You’ll see things like UCIe as a prime example. Standards are just the tip of the iceberg; underneath that, a whole lot of different collaborations are needed to move the needle. More formalization, more standardization. This morning’s keynote called out a need for more standardization around chiplets.

And with our friends at TSMC, with 3DFabric and 3DBlox, you’re starting to see what we’ve always seen in 2D: the emergence of formalization and alignment between different participants in the ecosystem. So I think it’s vital, and I think we’ve always done it. I’m pretty confident there’s a lot of rich material for collaboration, and we will continue to come up with collaborative solutions.

How are the EDA design flows and the associated IP evolving and where do customers want to see them go?

It’s evolved a lot. It was mentioned earlier that multi-die systems are not new; we started working on them probably 12 years ago. But it’s only recently that the commercial significance and the complexity have grown, evolving from more of a hobbyist environment into a professional one. What we’re trying to do is evolve beyond the design methods of a few years ago, which basically revolved around assembly: you have components, and you assemble those components together. Now we’re moving from an assembly problem to more of a design automation problem, elevating it to where you’re designing the system as a whole, because the chips are so co-dependent on each other. You can’t design the chiplets in isolation from each other, because there’s a host of inter-related dependencies.

Principally where we are as an industry, we’ve invested decades of work into highly complex products and flows, and we don’t want to throw that away, right? You don’t want to disrupt that. You want to ride on top of that and augment it.

Where I see the EDA space going: we will continue to see a lot of the fine-grained optimizations you would see in a traditional 2D problem space. Where I come from, in place and route, you have a lot of very nice, almost convex problems that are well suited to traditional optimization techniques.

However, when you get to the system level, these problems get lumpy, and the solution space can become highly non-convex and difficult to solve with traditional techniques. That’s where, looking into the future, AI and ML can really help drive things forward.

So design has evolved from manual implementation, to computer-aided design, to electronic design automation, to AI-driven design automation. And probably in the future, instead of computer-aided design, maybe it becomes human-aided design. The AI will tell me, “Hey Henry, I need that spec tightened up by next week. I need you to get that to me.” With this complexity, you really need the automation in order to reasonably build and optimize these systems.

Do you see multi-die system as a significant driver for this technology moving forward?

Yes. Take something like silicon lifecycle management, which is emerging for 2D: if it’s important for 2D, it’s even more so for 3D.

If you look at it from the standpoint of yield: with 2D dies there’s the concept of known good die, so you can test each die before you put it into the package. But in a multi-die system, the system yield is the product of the individual yields, right? So even if you start with all known good dies, you still have to put them together, and those multiplicative factors compound. You can roughly translate that same analysis to the overall health of the system, which depends on the multiplicative health of its components.
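The multiplicative effect Henry describes is easy to see numerically: even when every die is known good and assembly is highly reliable, the product of the yields falls quickly as die count grows. A minimal sketch (all yield figures are hypothetical, for illustration only):

```python
def system_yield(die_yields, assembly_yield):
    """Multi-die system yield: the product of each die's known-good-die
    yield and the assembly/bonding yield. All inputs are fractions in [0, 1]."""
    y = assembly_yield
    for dy in die_yields:
        y *= dy
    return y

# Four dies, each 98% known good, with a 95% assembly yield (hypothetical):
# 0.95 * 0.98**4 ≈ 0.8762 — roughly 12% of otherwise-good parts are lost.
print(round(system_yield([0.98] * 4, 0.95), 4))
```

The same structure carries over to Henry’s point about system health monitoring: replace yields with per-component health metrics and the product still governs the whole.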

You have heterogeneous dies with different properties, different workloads, and different behaviors. So it becomes all the more important to keep on top of that through monitoring.

Thank you very much Henry!

Also Read:

Synopsys Accelerates First-Pass Silicon Success for Banias Labs’ Networking SoC

Multi-Die Systems: The Biggest Disruption in Computing for Years

Taking the Risk out of Developing Your Own RISC-V Processor with Fast, Architecture-Driven, PPA Optimization
