Takeaways from SNUG 2023
by Bernard Murphy on 04-07-2023 at 6:00 am

Synopsys pulled out all the stops for this event. I attended the first full day, tightly scripted from Aart’s keynote kickoff, to 1-on-1 interviews with Synopsys executives, to a fireside chat between Sassine Ghazi (President and COO) and Rob Aitken (ex-Fellow at Arm, now Distinguished Architect at Synopsys). That’s a lot of material, so I will condense heavily to just the takeaways. My colleague Kalar will cover the main Synopsys.ai theme (especially Aart’s talk).

AI in EDA

The big reveal was broad application of AI across the Synopsys tool suite, under the Synopsys.ai banner. This is exemplified in DSO.ai (design space optimization), VSO.ai (verification space optimization) and TSO.ai (test space optimization). I talked about DSO.ai in an earlier blog: a reinforcement learning method starting from an untrained algorithm. The latest version now learns through multiple parallel training runs, advancing quickly to better PPA, which customers have been able to reach in much less time and with only one engineer. Synopsys claim similar results for VSO.ai, for which the objective is to reduce time to coverage targets and increase coverage, and for TSO.ai, for which the objective is to reduce the number of ATPG vectors required for the same or better coverage.
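
To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of loop described above: a learner proposes tool settings, parallel runs score each proposal on PPA, and the results bias the next round of proposals. Every name, parameter and metric below is invented for illustration; it reflects nothing of the actual DSO.ai implementation or APIs.

```python
# Purely illustrative sketch of reinforcement-learning-style design space
# optimization: propose settings, evaluate PPA on parallel runs, feed results
# back. All names, parameters and metrics are invented; this is not DSO.ai.
import random
from concurrent.futures import ProcessPoolExecutor

SEARCH_SPACE = {
    "target_clock_ns": [0.8, 0.9, 1.0],
    "placement_effort": ["medium", "high"],
    "max_utilization": [0.65, 0.70, 0.75],
}

def run_flow(settings):
    """Stand-in for a full synthesis / place-and-route run returning PPA metrics."""
    return {
        "worst_slack_ps": random.uniform(-50, 50),
        "power_mw": random.uniform(80, 120),
        "area_um2": random.uniform(9.0e5, 1.1e6),
    }

def score(ppa):
    """Toy scalar reward: prefer positive slack, lower power, smaller area."""
    return ppa["worst_slack_ps"] - 0.5 * ppa["power_mw"] - 1.0e-5 * ppa["area_um2"]

def propose(history, n):
    """Crude 'policy': random samples, biased toward the best settings seen so far."""
    best = max(history, key=lambda h: h[1])[0] if history else None
    proposals = []
    for _ in range(n):
        s = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        if best is not None and random.random() < 0.5:
            k = random.choice(list(best))   # inherit one setting from the best run
            s[k] = best[k]
        proposals.append(s)
    return proposals

if __name__ == "__main__":
    history = []
    for generation in range(5):
        candidates = propose(history, n=4)  # one "generation" of parallel runs
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(run_flow, candidates))
        history.extend((c, score(r)) for c, r in zip(candidates, results))

    best_settings, best_score = max(history, key=lambda h: h[1])
    print("best settings found:", best_settings)
```

The real system obviously learns far more cleverly than this random, biased sampling, but the shape of the loop, parallel full-flow runs feeding a reward back into the next proposals, is the point.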

My discussions with execs, including Thomas Anderson (VP of AI and Machine Learning at Synopsys), suggest that these are all block-level optimizations. DSO.ai wraps around Fusion Compiler (a block-level capability) and TSO.ai wraps around TestMax, advertised here for the ATPG feature rather than BIST. Similarly, the coverage metric for VSO.ai suggests functional coverage, not higher-level coverage metrics. Don’t get me wrong: these are all excellent capabilities, providing a strong base for implementation and verification at the system level.

Aart did talk about optimization for memories, I think in a system context, indicating they are exploring the system level. I think AI application at that level will be harder and will advance incrementally. Advances in performance and power now depend on architecture rather than process, driving a lot of architecture innovation, which will impede optimization approaches that do not sufficiently understand the architecture. Further, any system-level optimizer will need to collaborate with in-house and commercial generators (for mesh networks and NoCs, for example), inevitably slowing progress. Finally, optimization at the system level must conform to a broader set of constraints, including natural language specifications and software-level use-case tests. Speculating on Aart’s memory example, perhaps optimization could be applied to a late-stage design to replace existing IP instances with improved instances. That would certainly be valuable.

What about pushing AI through the field to customers? Alessandra Costa (Sr. VP of WW Customer Success at Synopsys) tells me that at a time when talent scarcity has become an issue for everyone, the hope of increasing productivity is front and center for design teams. In her words, “There are high expectations and some anxiety on AI to deliver on its promises.”

DSO.ai has delivered, with over 160 tapeouts at this point, encouraging hope that expectations may outweigh anxiety. DSO.ai is now in the curriculum for implementation AEs, driving wider adoption of the technology across market segments and regions. Verification is an area where the shortage of customer talent is even more acute. Alessandra expects the same kind of excitement and adoption in this space as has already been proven for DSO.ai. Verification AEs are now being trained on VSO.ai and are actively involved in deployment campaigns.

Multi-die systems

Aart talked about this; I also talked with Shekhar Kapoor (Sr. Director of Marketing) to understand multi-die as a fast-emerging parameter in design architecture. These ideas seem much more real now, driven by HPC needs, along with automotive and mobile. Shekhar had a good mobile example. The system form factor is already set, yet battery sizes are increasing, shrinking the space left for chips. At the same time, each new release must add more functionality, like support for multiplayer video games. That is too much to fit in a single reticle, but you still need high performance and low power at a reasonable unit cost. In HPC, huge bandwidth is always in demand and memory needs to sit close to logic. Multi-die isn’t easy, but customers are now saying they have no choice.

Where does AI fit in all of this? Demand for stacking is still limited but expected to grow. Connections between stacked die will support embedded UCIe and HBM interfaces. These high-frequency links require signal and power integrity analyses. Stacking also amplifies thermal problems, so thermal analysis is critical. Over-margining everything becomes increasingly impractical at these complexities, requiring a more intelligent solution. Enter reinforcement learning. Learning still must run the full suite of analyses (just as DSO.ai does with Fusion Compiler), running multiple jobs in parallel to find its way to goal parameters.
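
As a purely hypothetical illustration of why running the full suite of analyses beats blanket over-margining, the sketch below scores candidate stacking configurations against invented signal integrity, power integrity and thermal limits in parallel. None of the names, models or limits come from Synopsys tools; they are placeholders for the real analyses.

```python
# Hypothetical sketch: evaluate candidate multi-die stacking configurations
# against signal integrity, power integrity and thermal goals in parallel,
# instead of over-margining everything. All models and limits are invented.
import random
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass(frozen=True)
class StackConfig:
    ucie_lanes: int        # die-to-die link width
    bump_pitch_um: float   # micro-bump pitch
    tsv_density: float     # relative through-silicon-via density for power delivery

GOALS = {"eye_margin_mv": 25.0, "ir_drop_mv": 50.0, "tjunction_c": 105.0}

def analyze(cfg):
    """Stand-in for real SI / PI / thermal analysis runs on one configuration."""
    return {
        "eye_margin_mv": 40.0 - 0.2 * cfg.ucie_lanes + random.uniform(-3, 3),
        "ir_drop_mv": 30.0 + 40.0 * (1.0 - cfg.tsv_density),
        "tjunction_c": 80.0 + 0.3 * cfg.ucie_lanes + 0.2 * (45.0 - cfg.bump_pitch_um),
    }

def meets_goals(m):
    return (m["eye_margin_mv"] >= GOALS["eye_margin_mv"]
            and m["ir_drop_mv"] <= GOALS["ir_drop_mv"]
            and m["tjunction_c"] <= GOALS["tjunction_c"])

candidates = [StackConfig(lanes, pitch, tsv)
              for lanes in (16, 32, 64)
              for pitch in (36.0, 45.0)
              for tsv in (0.5, 0.8)]

with ThreadPoolExecutor() as pool:   # analyses run as parallel jobs
    results = list(pool.map(analyze, candidates))

passing = [(c, m) for c, m in zip(candidates, results) if meets_goals(m)]
print(f"{len(passing)} of {len(candidates)} configurations meet all goals")
```

A learning-based optimizer would sit on top of a loop like this, steering which configurations get analyzed next rather than sweeping them exhaustively.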

There are still open challenges in multi-die, as Dan Nenni has observed. How do you manage liability? Mainstream adopters like AMD build all their own die (apart from memories) for their products, so they can manage the whole process. The industry is still figuring out how to manageably democratize this process for more potential users.

Other notable insights

I had a fun chat with Sassine at dinner the night before. We talked particularly about design business dynamics between semiconductor providers and system companies, and whether design activity among the hyperscalers and others is a blip or a secular change. I’m sure he has more experience than I do, but he is a good listener and was interested in my views. He made the point that systems companies want vertical solutions, which can demand significant in-house expertise to specify, design and test, and which of course should be differentiated from competitors.

Rapid advances in systems technology and the scale of those technologies make it difficult for semiconductor component builders to stay in sync. So maybe the change is secular, at least until hyperscaler, Open RAN and automotive architectures settle on stable stacks? I suggested that a new breed of systems startups might step into the gap. Sassine wasn’t so certain, citing even more challenging scale problems and competition with in-house teams. True, though I think development under partnerships could be a way around that barrier.

I had another interesting talk with Alessandra, this time on DEI (diversity, equity and inclusion). Remember her earlier point about the lack of talent and the need to introduce more automation? A complementary approach is to start developing interest among kids in high school; university-centric programs may be too late. She has a focus on girls and minorities at that age, encouraging them to play with technologies through Raspberry Pi or Arduino. I think this is a brilliant idea. Some may simply be attracted to the technology for its own sake. Perhaps others could be drawn in by helping them see the tech as a means to an end – projects around agriculture or elder care, for example.

Good meeting and my hat is off to the organizers!

Also Read:

Full-Stack, AI-driven EDA Suite for Chipmakers

Power Delivery Network Analysis in DRAM Design

Intel Keynote on Formal a Mind-Stretcher

Multi-Die Systems Key to Next Wave of Systems Innovations
