

An Insider’s View of the 2023 Global Semiconductor Alliance’s (GSA) Annual Awards
by Daniel Nenni on 12-14-2023 at 10:00 am


My beautiful wife and I attended the annual Global Semiconductor Alliance (GSA) Awards event last week. Usually this is a solo event but since my wife is CFO of SemiWiki I was able to get her an invite. I go every year and she wanted to see what all of the excitement was about. She also knows quite a few industry people from attending the Design Automation Conference with me. Her first DAC was 1985 in Las Vegas.

Surprisingly, she had a good time. They must have broken an attendance record this year; the place was packed. It really was the Who’s Who of the semiconductor industry. I would guess that most of the GSA events were sold out this year; the ones I attended certainly were. We have used the term Rock Star in the past, but today semiconductor professionals really are Rock Stars.

The GSA was established in 1994. It was founded to address the challenges and opportunities within the semiconductor industry, fostering collaboration and innovation among its members. It really is an incredible organization and a credit to the semiconductor ecosystem. You can read more about them on their website or ask ChatGPT.

For me, the GSA is known for its events, conferences, and forums where professionals from the semiconductor industry can come together to discuss emerging trends, share insights, and collaborate on the challenges facing the industry. These events have been invaluable for me over the years, absolutely.

The theme of the networking reception, sponsored by Cadence, was digital twins. They did an excellent job; even my wife said so. She actually knows what a digital twin is now, so bravo to Cadence. She was less impressed by the Tesla truck, however, and I agree completely. As my kids would say, “a serious dumpster fire”. Its digital twin must be fraternal.

The ceremony generally starts with a comedian or celebrity of some sort. I remember one year it was Jay Leno and, like the rest, he was not funny. They always try to tell semiconductor jokes, but they have no idea what they are talking about, so it just does not work. The most memorable GSA opening act for me was Steve Forbes, son of Malcolm Forbes. That man had vision (flat tax and term limits) and was very passionate and articulate.

My wife was all about the food and it did not disappoint, especially the dessert!

Then came the awards:

Dr. Morris Chang Exemplary Leadership Award

The GSA’s most prestigious award recognizes individuals, such as its namesake, Dr. Morris Chang, for their exceptional contributions to drive the development, innovation, growth, and long-term opportunities for the semiconductor industry. This year’s recipient is Dr. Rick (Lih Shyng) Tsai, CEO and Vice Chairman of MediaTek.

Back in the day Morris Chang used to personally present this award and that was worth the price of admission, even though it is free for me since I am famous. Morris now appears via video recording and did not disappoint. Morris does not pull punches. He praised Rick’s work at TSMC (1989-2014) and his time as CEO (2005-2009), but he did mention TSMC’s diversion into LED and solar under Rick. What he did not mention was Rick’s downfall: TSMC’s first and last layoff, in 2009. I was in Taiwan at the time and let me tell you it was something to see. There were protests in front of TSMC; it was that shocking for TSMC and Taiwan in general. I believe it was something like 5% of the workforce, and that was after Rick said in 2008 that no layoffs were planned.

Shortly thereafter Morris Chang took over as CEO and asked all laid-off employees to return to work, making amends for what he described as the “regrettable actions” taken to dismiss them amid the economic downturn.

I spent most of my 40-year semiconductor career orbiting TSMC, so this is from memory; go ahead and correct me if I am wrong here.

Rick then reinvented himself and joined SoC powerhouse MediaTek in 2017. Under his leadership MediaTek went from a trailing-edge to a leading-edge semiconductor company, so this award is well deserved. I saw this transformation first hand; it was exceptional.

Rising Women of Influence Award

This award recognizes and profiles the next generation of women leaders in the semiconductor industry who are believed to be rising to top executive roles within their organizations. This year’s award was presented to Thy Tran, Vice President of Global Frontend Procurement at Micron Technology, Inc.

I do not know Thy but her story was certainly inspirational. I had to look her up and found this article which mirrored her speech. It is definitely worth a read:

From Refugee to Micron VP (IEEE Spectrum)

Company Awards

Most Respected Semiconductor Companies

GSA members identified the winners in this category by casting ballots for the industry’s most respected companies, judged for their vision, technology, and market leadership. This year’s recipients include:

Most Respected Public Semiconductor Company Achieving Greater than $5 Billion in Annual Sales
• NVIDIA

Most Respected Public Semiconductor Company Achieving $1 Billion to $5 Billion in Annual Sales
• Silicon Labs

Most Respected Public Semiconductor Company Achieving $500 Million to $1 Billion in Annual Sales
• Lattice Semiconductor

Most Respected Emerging Public Semiconductor Company Achieving $100 Million to $500 Million in Annual Sales
• Rambus

Most Respected Private Company
• Astera Labs

Best Financially Managed Semiconductor Companies

These awards are derived from a broad evaluation of the financial health and performance of public semiconductor companies. This year’s recipients are:

Best Financially Managed Semiconductor Company Achieving up to $1 Billion in Annual Sales
• Lattice Semiconductor

Best Financially Managed Semiconductor Company Achieving Greater than $1 Billion in Annual Sales
• NVIDIA

Start-Up to Watch

GSA’s Private Awards Committee, comprised of successful executives, entrepreneurs, and venture capitalists, chose the winner by identifying a promising startup that has demonstrated the potential to positively change its market or the industry through innovation and market application. This year’s winner is SiMa.ai.

As a global organization, the GSA recognizes outstanding companies headquartered in the Europe/Middle East/Africa (EMEA) and Asia-Pacific regions having a global impact and demonstrating a strong vision, portfolio and market leadership. Two awards were presented in this category:

Outstanding Asia-Pacific Semiconductor Company
• MediaTek

Outstanding EMEA Semiconductor Company
• Robert Bosch GmbH

Analyst Favorite Semiconductor Company
Two analyst pick awards were presented based on technology and financial performance, as well as future projections:
• Credo Technology Group was chosen by Needham & Company, LLC
• MACOM Technology Solutions, Inc. was chosen by Jefferies, LLC
This year’s in-person ceremony was attended by 1,500 global executives in the semiconductor and technology industries.

All in all a very good experience. Hopefully my wife and I will see you there next year!

Also Read:

IEDM Buzz – Intel Previews New Vertical Transistor Scaling Innovation

Webinar: “Navigating our AI Wonderland” … with humans-in-the-Loop?

RISC-V Summit Buzz – Axiomise Accelerates RISC-V Designs with Next Generation formalISA®



WEBINAR: FPGA-Accelerated AI Speech Recognition
by Don Dingee on 12-14-2023 at 6:00 am

Cloud ASR demo on Speedster 7t FPGA

The three-step conversational AI (CAI) process – automatic speech recognition (ASR), natural language processing, and text-to-synthesized speech response – is now deeply embedded in the user experience for smartphones, smart speakers, and other devices. More powerful large language models (LLMs) can answer more queries accurately after appropriate training. However, the computational price of keeping up with large numbers of real-time ASR streams running complex models stresses conventional GPU-based solutions. A webinar moderated by Sally Ward-Foxton of EE Times, featuring speakers Bill Jenkins, Director of AI Product Marketing at Achronix, and Julian Mack, Sr. Machine Learning Scientist at Myrtle.ai, discusses how Achronix FPGA technology applied to FPGA-accelerated AI speech recognition is taking real-time ASR to new levels.

Deploying an FPGA with a WebSocket API for AI inference

“Conversational AI is interacting with a computer like a human,” says Jenkins. “The idea is it needs to be as real-time as possible so the conversation is fluid – we’ve all called a customer service number and gotten long pauses or discovered it only knows certain words, and that’s what we’re trying to get away from.” Jenkins sees a growing CAI market going beyond customer service to challenges like medical and law enforcement bodycam transcription, where speed and accuracy are essential.

GPUs are almost always the choice for AI training, where there are fewer constraints on time and resources. But in real-time CAI applications, developers are running into the limits of what GPUs can deliver for AI inference, even with racks of equipment and massive amounts of power and cooling, whether in the cloud or on-premises.

FPGA-accelerated AI speech recognition combines the hardware benefits of FPGAs with the software benefits of easier programmability. “Our solution is a real-time appliance running the Myrtle.ai software stack jointly on a server-class CPU and our VectorPath Accelerator Card with a Speedster7t FPGA,” continues Jenkins. “A very simple WebSocket API interface abstracts away the fact that there is an FPGA in the system.”

“A WebSocket API is very similar to sending HTTP ‘get’ requests, except that it creates a stateful connection,” says Mack. “The server and client can continue talking to each other with low latency, even as the number of streams scales.”
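For readers who have not used one, here is a minimal sketch of what a streaming ASR client over a WebSocket connection might look like. The endpoint URL, message framing, and JSON fields below are illustrative assumptions, not the actual Achronix/Myrtle.ai API; the point is simply that one stateful connection carries audio out and transcripts back with no per-request setup.

```python
# Minimal sketch of a streaming ASR client over a WebSocket connection.
# The endpoint URL, message framing, and JSON fields are illustrative
# assumptions, not the actual Achronix/Myrtle.ai API.
import asyncio
import json

import websockets  # pip install websockets


async def stream_transcription(audio_chunks, uri="ws://asr-appliance.local:8080/stream"):
    """Send raw audio chunks and print partial transcripts as they arrive."""
    async with websockets.connect(uri) as ws:
        # One stateful connection carries both the outgoing audio and the
        # incoming transcripts, so there is no per-request setup cost.
        async def sender():
            for chunk in audio_chunks:          # chunk: bytes of 16-bit PCM audio
                await ws.send(chunk)
            await ws.send(json.dumps({"event": "end_of_stream"}))

        async def receiver():
            async for message in ws:
                result = json.loads(message)
                print(result.get("transcript", ""))
                if result.get("final"):
                    break

        await asyncio.gather(sender(), receiver())


# Example usage with silent dummy audio frames (20 ms at 16 kHz, 16-bit mono).
if __name__ == "__main__":
    dummy_chunks = [b"\x00\x00" * 320 for _ in range(50)]
    asyncio.run(stream_transcription(dummy_chunks))
```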

Evaluating ASR performance in Achronix’s virtual lab

Achronix and Myrtle.ai have taken FPGA-accelerated AI speech recognition into Achronix’s virtual lab, available to ASR developers by remote access on request, to demonstrate the potential. “On one Speedster7t FPGA, we can run ASR on 1050 streams with 90th percentile latency under 55 milliseconds,” observes Jenkins. “Users can click on any stream, listen to spoken words, and see the real-time transcription.”

This performance translates to one Speedster7t FPGA replacing up to 20 servers, each running conventional CPU-plus-GPU payloads, with lower latency and no loss in ASR accuracy. “GPUs are a warp-locked architecture, executing one instruction across a lot of data that has to go back and forth from memory,” says Jenkins. “In our FPGA, we can run functions simultaneously using data from GDDR6 memory with up to 4 Tbps of bandwidth without going to external memory or the host CPU.” A two-dimensional network-on-chip (NoC) speeds up data ingress with low latency when transfers occur.

Efficient number formats are critical to achieving conversational AI performance with the required accuracy. “You want as few bits as possible to represent weights because you win twice, once in data transfers and once in multiply-accumulate operations that make up the backbone of neural network models,” says Mack. “We’re using block float 16 (bfloat16), which is hard to use on GPUs that don’t support it, but it is natively supported on the Speedster7t FPGA.” Training at floating point 32 (fp32) quantizes to bfloat16 with accuracy intact, compared to int4 or int8 quantizations often used in ASIC-based hardware.
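As a rough illustration of the shared-exponent idea behind block floating-point formats, here is a small NumPy sketch. The block size and mantissa width are arbitrary assumptions chosen for demonstration; they do not reflect Myrtle.ai's actual format or the Speedster7t hardware implementation.

```python
# Minimal sketch of block floating-point quantization: each block of weights
# shares one exponent, while per-weight mantissas stay narrow. Block size and
# mantissa width here are illustrative assumptions, not Myrtle.ai's format.
import numpy as np


def quantize_block_fp(weights, block_size=16, mantissa_bits=8):
    """Quantize a 1-D fp32 weight array to a shared-exponent block format."""
    quantized = np.empty_like(weights)
    for start in range(0, len(weights), block_size):
        block = weights[start:start + block_size]
        # Shared exponent chosen so the largest magnitude in the block fits.
        max_mag = np.max(np.abs(block)) or 1.0
        shared_exp = np.floor(np.log2(max_mag))
        scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
        # Per-weight integer mantissas relative to the shared exponent.
        mantissas = np.clip(np.round(block / scale),
                            -(2 ** (mantissa_bits - 1)),
                            2 ** (mantissa_bits - 1) - 1)
        quantized[start:start + block_size] = mantissas * scale
    return quantized


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=64).astype(np.float32)
    wq = quantize_block_fp(w)
    print("max abs quantization error:", np.max(np.abs(w - wq)))
```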

Continuing scalability as LLMs grow

Most of this webinar is a conversation between Jenkins, Mack, and Ward-Foxton, with a few slides and a Q&A session at the end. For instance, Ward-Foxton asks what happens when LLMs inevitably get larger. Mack suggests they can fit the 7B-parameter Llama 2 model in one Speedster7t FPGA now and should be able to fit the 13B-parameter model soon. Jenkins adds their ASR server complex can grow to a 4U solution with eight VectorPath cards. This scalability means FPGA-accelerated AI speech recognition will take on more realistic real-time translation with more languages and broader vocabularies.

For the entire conversation, watch the archived Achronix webinar:
LinkedIn Live: FPGA-Accelerated AI Speech Recognition Revolution



RISC-V and Chiplets: A Panel Discussion
by Paul McLellan on 12-13-2023 at 10:00 am


At the recent RISC-V Summit, the very last session was a panel about chiplets called Chiplets in the RISC-V Ecosystem. It was moderated by Calista Redmond, the CEO of RISC-V International. The panelists were:

  • Laurent Moll, COO of Arteris
  • Aniket Saha, VP of Product Management of Tenstorrent
  • Dale Greenley, VP of Engineering of Ventana Microsystems
  • Rob Aitken, Distinguished Architect of Synopsys

This is a slightly odd combination of topics to me. Obviously, you can put a RISC-V processor on a chiplet but the challenges are not really different from any other processor. But RISC-V is hot and so are chiplets, and companies like Ventana are combining them.

Let me give you a bit of background about the companies to put them in context:

  • As you probably know, Arteris makes networks-on-chip (NoCs). It is a neutral company among chiplet vendors (and IP vendors).
  • Tenstorrent is designing a portfolio of very high-performance multicore RISC-V chips
  • Ventana has RISC-V IP but it also delivers it as chiplets
  • Synopsys is obviously an EDA company but they announced RISC-V cores earlier in the summit


The Actual Discussion

The first question from Calista was a softball asking what was the value of chiplets.

Dale said there was nothing specific about RISC-V for chiplets but the market decides when you do big monolithic things or chiplets. It depends on what a customer will pay you to do. “We provide both IP and chiplets, there is room for both.”

Aniket said that “doing chiplets is not cheap, but doing chiplets and RISC-V is flexible and you can come up with new products fast.”

Laurent went for production costs. NRE is very important to keep under control since not many people are building 100M parts. So there are more vendors involved and a complicated supply chain. An SoC is complex but chiplets are worse.

Rob pointed out heterogeneity like adding chiplets for RF and analog, having an optional accelerator, and so on. This potentially opens up new markets.

Calista went on to ask about where we are in automotive.

Aniket pointed out that automotive is very conservative and now they are aggressive about platforms that can scale from low end cars to high end cars. With chiplets, no one has really considered functional safety.

Rob went to aerospace (not quite automotive) and discussed how there is usually a fixed physical volume defined decades ago. It is hard to fit things in.

Laurent: Automotive companies are the ultimate catalog shoppers and chiplets let them take the best in AI, radar, infotainment, and so on.

How do you get the software to run?

Rob: if you make the system small, that is fine. But the automotive catalog shopping makes it harder.

Aniket: He related a statement, “if you add it we won’t use it.” Automotive software stacks will support RISC-V in 5 years, which is fast. It took Arm 15 years to get there.

Q: What do we need for connectivity?

Laurent: It is very complex, especially with people shopping around for chiplets. PHYs from different vendors may or may not be interoperable. Everyone is keen on UCIe. People want standards that make chiplets fit better.

Aniket complained that there are no standard design flows for chiplets. A big lack of standards.

Rob thinks we can come up with a standard flow, but with different chiplets we don’t want N different design flows.

Q: Where do you see things in 3-5 years?

Rob: We will be further along with different “catalog shopping,” maybe depending on automotive OEMs. It will take a lot of industry effort. Any heterogeneous stuff will take longer.

Aniket said chiplets will first be in the datacenter and then automotive. But first wave will be single vendor.

Summary

This is a combination of things that the participants said and my own opinions.

I think that for the time being, chiplet-based RISC-V designs will be single-company efforts (except, perhaps, for high-bandwidth memory, or HBM). It is too complex to build designs with multiple chiplets from different companies, interposers, and the network to connect them all, usually known as RDL.

Designs will be 2.5D, not true 3D (where die are stacked on top of each other and communicate with through-silicon vias, or TSVs), for the foreseeable future.

Automotive has its own set of challenges, in particular ensuring that chiplet-based designs are reliable in an environment with a lot of vibration. This will require extensive testing. Another issue is ensuring functional safety in a multi-die environment.

UCIe is promising and is somewhat based on PCIe. PCIe companies ensured reliability through plugfests. I don’t see how you can economically ensure UCIe interoperability in chiplets through a similar mechanism.

Finally, in addition to technical challenges, there are commercial challenges if we are to get to the nirvana of being able to purchase chiplets off-the-shelf and assemble them into systems at a reasonable cost. The biggest challenge is who will pay for and hold the inventory of chiplets. If all chiplets have to be manufactured on-demand then a lot of the advantages of a fast cycle time will be lost.

But RISC-V chiplets are certainly coming fast in the form of multi-die designs on 2.5D interposers built by a single company.

Also Read:

NoCs give architects flexibility in system-in RISC-V design

Pairing RISC-V cores with NoCs ties SoC protocols together

#60DAC Update from Arteris



When Will Structured Assembly Cross the Chasm?
by Bernard Murphy on 12-13-2023 at 6:00 am

Trends in assembly

First, a quick definition. By “structured assembly,” I mean the collection of tools to support IP packaging with standardized interfaces, SoC integration based on those IPs together with bus fabric and other connectivity hookups, register definition and management in support of hardware/software interface definition, together with collateral support functions such as document generation and traceability. In other words, automation technologies in and around the IEEE 1685 (IP-XACT) standard or other systems with similar objectives.
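To make the definition concrete, here is a minimal sketch of the kind of machine-readable component description such tools consume, emitted with Python's standard library. The VLNV identity and the single register shown are illustrative placeholders; a real IEEE 1685 component description carries far more detail (bus interfaces, views, parameters, file sets).

```python
# Minimal sketch of emitting an IP-XACT-style component description with the
# Python standard library. The VLNV values and the single register shown are
# illustrative; a real IEEE 1685 component carries far more detail.
import xml.etree.ElementTree as ET

NS = "http://www.accellera.org/XMLSchema/IPXACT/1685-2014"
ET.register_namespace("ipxact", NS)


def el(parent, tag, text=None):
    node = ET.SubElement(parent, f"{{{NS}}}{tag}")
    if text is not None:
        node.text = text
    return node


component = ET.Element(f"{{{NS}}}component")
# VLNV: the vendor/library/name/version identity used to catalog the IP.
for tag, value in [("vendor", "example.com"), ("library", "ip"),
                   ("name", "uart_lite"), ("version", "1.0")]:
    el(component, tag, value)

# One memory map with a single control register at offset 0x0.
memory_maps = el(component, "memoryMaps")
memory_map = el(memory_maps, "memoryMap")
el(memory_map, "name", "regs")
block = el(memory_map, "addressBlock")
el(block, "name", "ctrl_block")
el(block, "baseAddress", "0x0")
el(block, "range", "4096")
el(block, "width", "32")
reg = el(block, "register")
el(reg, "name", "CTRL")
el(reg, "addressOffset", "0x0")
el(reg, "size", "32")

print(ET.tostring(component, encoding="unicode"))
```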

I have been involved with structured assembly in one way or another for almost 20 years, first in developing our own product at Atrenta and later in writing about Magillem’s better-known IP-XACT line of products. I’m a believer in the concept – you can’t integrate billion-gate SoCs with a text editor. But the commercial product opportunity has not been as clear. I have seen significant resistance to commercial products for small designs or small teams, understandably, but also for big designs among teams with their own highly tuned and specialized integration methodologies and software. Magillem, from Arteris, has marquee logos to its credit (NXP, Samsung, STM, for example), but are commercial IP-XACT technologies stuck at the chasm or have they crossed?

(Graphic Source: Arteris, Inc)

New Marketing Team, New Vision

Michal Siwinski joined Arteris almost two years ago as CMO, followed shortly by a team of industry experts. They have been re-engineering big chunks of the Arteris company, product strategy, solution pitch, customer engagement, and brand, starting, of course, with the NoC technologies. I must admit I have been impressed. They have a new, fresh approach to positioning Arteris NoC technologies in the fast-evolving world of HPC / AI / multi-die systems / automotive / edge and, no doubt, other domains. So, I was interested to see what they would do with Magillem and my perceived crossing-the-chasm problem.

I was encouraged that Michal understood why I felt the way I did. He had similar experiences with a Cadence attempt at such a product (I’m sure shared by counterparts at Synopsys). He agreed that adoption across the overall industry is not high. But when he looks at the top 10, 50 or 100 semis and system companies building big chips, they are all using structured assembly in some form. In that sense, the technology has already crossed the chasm. That said, CAD and individual design teams are given a lot of freedom in which assembly solution they adopt for their own projects, even when there is a centralized standard. Some organizations standardize on Magillem, some on an in-house standard (though mergers have made this option less common), and many vary from product team to product team. Even so, Michal sees those home-grown systems coming under increasing pressure in what seems a perfect storm of disruptions triggered by macro, industry, and design trends.

The Gathering Storm

Disruptions force re-engineer/buy decisions for in-house software. You can rebuild the system to adapt to a change, or you can buy a system that already handles that change. Nothing new here – disruptions pretty much drove the evolution of the entire EDA and IP industry. In-house solutions work best until the cost of keeping up with evolving demands outweighs the benefits. At some point, often different for different design teams, a switch may become less costly than re-architecting the home-based tool.

One such trigger in this instance is the release of the IP-XACT 2022 standard. Michal tells me there are several semantic changes, such as more compatibility with SystemVerilog features, which will force rework of in-house systems. Equally important, there are changes to deal with really large systems, which may be even more challenging for re-engineering projects.

A second problem is that tiled subsystems are becoming more popular: in AI, in many-core compute, and in graphics cores. The disruption here is a scaling problem because the complexity of interconnect that must be hooked up between these tiles rises dramatically. Not impossible to handle in a legacy system, but the effort to create and rework, and the opportunity for errors, can grow quadratically given the nature of these structures.
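A quick back-of-the-envelope count shows why. Assuming a simple 2-D mesh of tiles (real tiled subsystems vary), the number of nearest-neighbor links that must be hooked up grows with the square of the mesh dimension:

```python
# Back-of-the-envelope count of nearest-neighbor links in an N x N tile array,
# assuming a simple 2-D mesh. Real tiled subsystems vary, but the hookup count
# grows with the square of the mesh dimension.
def mesh_links(n, wires_per_link=256):
    """Inter-tile links (and total wires) for an n x n mesh of tiles."""
    links = 2 * n * (n - 1)          # horizontal + vertical neighbor links
    return links, links * wires_per_link


for n in (2, 4, 8, 16):
    links, wires = mesh_links(n)
    print(f"{n}x{n} tiles: {links:5d} links, {wires:8d} wires to hook up")
```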

A third challenge is the growing diversity of sources for IP, external for the standard stuff but now also for AI cores, DSPs in support of AI pipelines, sensing and sensor fusion, eFPGA, and multi-die connectivity. Internal cores from other divisions and design partners add to this complexity, each commonly in different flavors of IP-XACT. This isn’t a technology challenge; it’s an administrative volume challenge, particularly as we move from dozens to hundreds of IP blocks that make up a chip, but even more so when that number goes even higher, perhaps across multiple die. Technology challenges are fun to take on internally; administrative/scaling problems not so much, and this is where EDA and IP companies excel.

Another disruption comes from a trend to continuous integration (CI) flows in SoC design, something I heard hints of back at the beginning of the Innovation in Verification series. As designs become more specialized to meet more application-specific needs and schedules tighten, it is natural to plan families of derivatives which must be pushed out in even tighter schedules. Michal is now hearing of design flows with an 80%-ready core as a base, from which different derivatives can evolve even while the base itself may continue to evolve.

Then, of course, there is multi-die design, in which some “IPs” will be pre-designed chiplets with pre-determined interface constraints for communication, power management, version compatibility issues, etc., etc. Meanwhile, boundaries between proposed chiplets can remain fluid until quite late in design but with enhanced need for careful control of implications for implementation.

The End Is Nigh

In short, Michal sees a perfect storm building of new and evolving demands on integration platforms. Perhaps not all will be important immediately to a given product plan, but then you start to worry about how long you can continue to kick the can down the road until a transition becomes unavoidable. Either way, it seems clear there will be building pressure to re-evaluate tradeoffs between in-house solutions with significant re-engineering costs on the horizon, versus a planned and controlled switch to commercial platforms.

You can learn more about Arteris Magillem solutions for SoC connectivity HERE, for register management HERE, for hardware/software interface development HERE, and learn more about some public customers HERE.



Using PCB Automation for an AI and Machine Vision Product
by Daniel Payne on 12-12-2023 at 10:00 am

machine vision testing

I knew that HDMI was a popular standard used to connect consumer products like a monitor to a laptop, but most professional video and broadcast systems use the SDI (Serial Digital Interface) connector standard. Pleora Technologies, founded in 2000, currently serves the machine vision and manufacturing markets, including military and security high-reliability applications built upon AI and machine vision automation products. One of these is the RuggedCONNECT Smart Video Switcher, used for video capture, processing, streaming, and display. This switcher supports RS-170/NTSC/PAL or two HD-SDI video inputs, plus two independent HD-SDI single-link displays. Customers require products from Pleora that meet their unique size, energy, and bandwidth requirements.

This blog follows how Pleora got their smart switch to market using EDA tools from Siemens EDA. For video connectivity products the requirement is to network multi-vendor cameras, displays, processors and sensors using standards. Gigabit Ethernet was used in their video switcher, along with GPU and software to enhance video with AI and computer vision processing, enabling situational awareness and ADAS features.

In the case of the RuggedCONNECT Smart Video Switcher, military requirements meant that its boards had to be MIL-STD-1275 certified: boards on the 28V vehicle battery supply have to tolerate voltage spikes of up to 250V in magnitude, while remaining quiet enough to meet rugged emission standards.

EDA Tools

Robert Turzo, Principal Hardware Designer at Pleora, used the Xpedition Enterprise and HyperLynx tools for the smart switch product design. Critical PCB signals were listed in the constraint manager, and the DRC feature in Xpedition Layout was used to mitigate issues. They had 13 boards in their system design, and for design reuse they would clone part of an original design to seed the other designs. The seven different via types were stored in a central Xpedition library for easy sharing.

Automated matching of the DDR3 and DDR4 interfaces replaced manual efforts, meeting specifications more quickly. The sketch router feature allowed human-guided auto-routing of multiple nets at a time, saving time and helping the team meet its schedule.

Lab testing of a multi-board system

The smart switcher boards used four stack-ups for the rigid PCBs and a fifth stack-up for a rigid-flex PCB. Each impedance-controlled trace on the boards was verified by running the HyperLynx tool. SI/PI (Signal Integrity, Power Integrity) analysis was also performed with HyperLynx in both the front end and back end of the design process. Only one board revision was required prior to production. Simulation results from the design phase matched the measurements after fabrication, so all that design analysis paid off. Thermal analysis with the Siemens Simcenter Flotherm tool was made easier by using PCB thermal data from HyperLynx.

RuggedConnect, Pleora

Summary

Pleora was successful in their PCB design flow by using the Xpedition and HyperLynx software tools on a project with buried vias, stacked microvias, and rigid-flex boards. SI/PI goals were met through analysis in HyperLynx. At the 29th annual Siemens Xcelerator Technology Innovation Awards, Pleora won first place in the multi-board systems category. Read the complete case study online.

Related Blogs



Automated Constraints Promotion Methodology for IP to Complex SoC Designs
by Kalar Rajendiran on 12-12-2023 at 6:00 am

Synopsys Timing Constraints Manager

In the world of semiconductor design, constraints are essentially specifications and requirements that guide the implementation of a specific hardware or software component within a larger system. They dictate timing, area, power, performance, and of course functionality of a design, playing a crucial role in ensuring that the design meets its intended objectives. There are also system-level constraints that address requirements and specifications at a broader level, encompassing the entire system or a collection of interconnected components.

As such, applying constraints accurately at various levels of the design hierarchy is essential, whether it is an IP block or a complete SoC that is being designed. At the same time, this process poses significant challenges, particularly across different levels of a complex design hierarchy. As design specifications evolve and intricate IP blocks are integrated into System-on-Chip (SoC) designs, managing timing, power, and area constraints becomes increasingly tricky. Manual management of this process is highly error-prone, often resulting in inconsistencies and conflicts that may go undetected until later stages of the design cycle.

Additionally, as a design evolves, constraints are refined to meet performance targets, introducing further complexities. The need for precision in propagating constraints from the system level to IP blocks, coupled with the dynamic nature of design iterations, underscores the importance of automated tools and methodologies. Automated constraint management becomes a critical enabler for achieving design predictability, reliability, and ultimately, successful tape-outs. An automated constraints promotion (ACP) methodology not only streamlines the constraint extraction, mapping, and propagation processes but also contributes to error reduction, ensuring that constraints remain accurate and coherent throughout the design evolution.
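To illustrate what promotion means mechanically, here is a toy sketch that re-targets block-level, SDC-style constraints to the top level by prefixing the block's hierarchical instance path. The constraint strings are invented for illustration; a production tool such as Synopsys TCM also handles clock mapping, exceptions, equivalence checking, and verification of the promoted set.

```python
# Tiny sketch of the mechanical core of constraint promotion: re-targeting
# block-level SDC-style constraints to top level by prefixing the block's
# hierarchical instance path. The constraints shown are illustrative only.
import re

BLOCK_CONSTRAINTS = [
    'create_clock -name pcie_clk -period 1.25 [get_ports core_clk]',
    'set_input_delay 0.3 -clock pcie_clk [get_ports rx_data*]',
]


def promote(constraints, instance_path, port_to_pin=None):
    """Rewrite get_ports references so they point at pins of the instance."""
    port_to_pin = port_to_pin or {}
    promoted = []
    for line in constraints:
        def repl(match):
            port = match.group(1)
            pin = port_to_pin.get(port, port)      # optional boundary renaming
            return f"[get_pins {instance_path}/{pin}]"
        promoted.append(re.sub(r"\[get_ports\s+([^\]]+)\]", repl, line))
    return promoted


for c in promote(BLOCK_CONSTRAINTS, "u_pcie_ss"):
    print(c)
```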

Synopsys recently hosted a webinar to address the topic of timing constraints management, covering the ACP methodology in the context of utilizing Synopsys Timing Constraints Manager. The webinar closed with the presentation of results from a case study that focused on early release PCIe® Gen6 subsystem timing constraints management.

Synopsys Timing Constraints Manager (TCM) Tool

Synopsys TCM empowers designers with a visual inspection and review capability for promoted constraints, offering a clear representation of their application across various levels of the design hierarchy. This feature facilitates a comprehensive understanding of how constraints are influencing different aspects of the design. Furthermore, TCM’s integration with verification tools plays a critical role in ensuring the accuracy and alignment of promoted constraints with the design intent and specifications. Thorough verification is facilitated through this integration, providing design teams with the confidence that the constraints are in line with the project’s requirements.

Key aspects of TCM’s value proposition include dedicated flows for constraints verification, promotion, and demotion, ensuring constraint integrity across different design hierarchy levels. Equally important is its scalability, catering to diverse IPs and configurations and making it adaptable to the varied complexities of semiconductor projects. TCM’s automation capabilities deliver a notable turnaround-time improvement, reducing constraint-related activities from weeks to days without compromising the integrity of IP constraint reuse.

QuickStart Implementation Kits (QIKs)

Complementing TCM’s capabilities, the Fusion QuickStart Implementation Kits (QIKs) enhance productivity for design teams using Synopsys Fusion Compiler™, providing a swift start with tailored automation features. Fusion QIKs play a pivotal role in expediting the design process by providing designers with a valuable starting point. Specifically, these kits come equipped with pre-configured constraints, offering a consistent and reliable foundation for design teams embarking on their projects. This jumpstart proves invaluable, especially when dealing with intricate designs such as a PCIe Gen 6 subsystem.

Furthermore, Fusion QIKs contribute to the efficiency of the design flow by simplifying the process of viewing and verifying results. Designers can leverage these kits to visualize and inspect results effectively, ensuring that the promoted constraints align with the design intent. This visualization step is crucial for designers to verify the accuracy and coherence of constraints, providing insights at the early stages of the design process and allowing for prompt identification and resolution of any potential issues. Ultimately, Fusion QIKs serve as a valuable tool in enhancing both the speed and reliability of the design process, ensuring a solid foundation for subsequent stages in the semiconductor design workflow.

The Benefits of the TCM/QIK Combo

The integration of TCM, coupled with the strategic use of Fusion QIKs, increases the efficiency and accuracy of the design process. This is particularly crucial when dealing with high-speed and complex designs such as PCIe Gen 6 subsystems. The emphasis on early constraint promotion becomes a cornerstone for achieving enhanced design predictability and meeting the demanding timing requirements inherent in intricate designs. The above benefits were corroborated during a case study centered around an early release PCIe Gen6 subsystem configuration. The study spotlighted the critical importance of precise and early constraint promotion at the outset of the design process. Leveraging the Synopsys TCM tool enabled designers not only to address timing constraints but also to identify optimization opportunities related to power and area constraints specific to the PCIe Gen 6 subsystem.

Summary

Synopsys TCM helps streamline the creation and management of timing constraints for subsystems and System-on-Chips (SoCs), mitigating the challenges associated with manual approaches. TCM offers a comprehensive suite of functionalities, covering the entire spectrum from creation and verification to promotion, demotion, and equivalence checking. By adopting Synopsys’ tools and methodologies, design teams can navigate the challenges of the design process more effectively, contributing to successful outcomes in the development of advanced semiconductor products and electronics systems.

For more details, visit the Synopsys Timing Constraints Manager product page.

To listen to the webinar, watch on-demand here.

Also Read:

Synopsys.ai Ups the AI Ante with Copilot

Synopsys 224G SerDes IP’s Extensive Ecosystem Interoperability

Synopsys Debuts RISC-V IP Product Families



WEBINAR: Joint Pre-Synthesis RTL & Power Intent Assembly Flow for Large Systems on Chips and Subsystems
by Daniel Nenni on 12-11-2023 at 10:00 am


Nowadays, low power design requirements are key for large SoCs (systems on chips) targeting different applications: AI, mobile, HPC, etc. Managing power intent early in the design flow is becoming crucial to help face PPA (Power, Performance, Area) design challenges.

WEBINAR REGISTRATION

With the increasing complexity of such designs, including challenging power optimization requirements, power intent should be managed right from the start of the design assembly process. RTL and power intent management needs to become less painful and more highly automated.

Indeed, a seamless, joint RTL and power intent integration process is beneficial at different levels, since power intent management is tightly correlated with the RTL integration process and vice versa. This is mainly explained by the numerous interactions needed between logic (RTL) and power intent (UPF) during the building process and the correlation between RTL and power intent files. Consequently, a joint flow is a good approach to avoid back-and-forth iterations between SoC/RTL and power engineers.

This joint flow must be able to cope with different scenarios and maturity levels of the design project, such as IP cores that do or do not have UPF files, missing power intent definitions at both IP and top levels, inconsistencies between RTL and UPF, etc., with almost push-button fixes.

The ultimate goal of this tight integration process is to generate the top level, for both RTL and UPF views, ready for synthesis and simulation. A joint flow should be easy to use, enabling even non-UPF experts to run the overall logic and power intent integration process.

Figure 1: Tight integration of RTL and power intent in a joint design flow
Key automated capabilities are expected from such a joint RTL & UPF flow as summarized below.
  • RTL vs UPF vs libraries consistency checks

It is key to detect early any inconsistency between the logic design and power intent descriptions, including design libraries and any other source of information that covers power intent attributes. Such checks need to be applied at both IP and subsystem levels. A typical example is a port naming mismatch between an RTL and a UPF file (for one or several power control signals). Another is a missing power attribute between a Liberty cell and the UPF. A minimal sketch of this kind of check appears after this list.

  • Enable fast design learning

Since the joint flow is also intended for non-power experts, design learning capabilities need to be provided through simple APIs to help explore existing power intent information (power state tables (PST), power switching strategies, etc.), including 3rd party IP cores and subsystems. During the learning process, designers should be able to easily catch missing power intent information such as power definitions and rules.

  • Cross reporting APIs between RTL and UPF

As a joint flow, this should provide cross-reporting queries between the RTL design and power intent information. As a typical example, reporting the list of instances with their related power domains and associated supplies will really help designers understand the correlation between RTL and UPF.

Of course, the related APIs should be intuitive for designers and straightforward.

  • Automated Check & Fix capabilities

The flow should leverage check-and-fix features between RTL and UPF files. An inconsistent name, for instance, would lead to a push-button fix with an automatic update and file generation. Any fix should be automatically reflected from RTL to UPF and vice versa. Also, power attributes from technology libraries need to be correctly reflected in the generated UPF files.

Power intent consistency must be checked at any time to ensure its completeness. Typically, the detection of missing level shifters and dangling supplies must happen as early as possible in the flow to prevent further issues.

Finally, any RTL editing event, such as adding a new port or renaming a signal, should automatically lead to a UPF update.

  • UPF generation at any hierarchical level

Once the UPF for all the IPs is validated, the top-level UPF needs to be generated. No designer wants to write the top-level UPF manually. Starting from a clearly specified and captured power strategy, the UPF should be generated push-button.

  • Consistent RTL/UPF hierarchical manipulation

As RTL hierarchical changes need to be automated, the same is expected for power intent. In a divide-and-conquer design strategy with physical awareness, hierarchical manipulation is expected to help in many situations such as parallel synthesis, RTL-with-UPF simulation, etc.

  • Enable efficient Design reuse & data extraction

Power intent also needs to be considered in a reuse process when building new SoC subsystems. Both RTL and UPF require smooth and automated extraction for a particular subsystem specification. UPF promotion and demotion capabilities are subsequently expected to help in this reuse process.
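As noted in the consistency-checks bullet above, here is a minimal sketch of an RTL-versus-UPF name check: it confirms that power-control signals referenced in a UPF file exist as ports in the RTL. The regex-based parsing is deliberately naive and the file contents are invented for illustration; a production flow would use full language parsers.

```python
# Minimal sketch of an RTL-vs-UPF consistency check: confirm that the
# power-control signals named in the UPF exist as ports in the RTL. The
# parsing is deliberately naive and the example file contents are invented.
import re

RTL = """
module ip_core (
  input  wire clk,
  input  wire rst_n,
  input  wire pwr_shutoff_n,
  output wire [31:0] dout
);
endmodule
"""

UPF = """
create_power_domain PD_CORE
create_supply_net VDD_CORE
set_isolation iso_core -domain PD_CORE -isolation_signal iso_en
create_power_switch sw_core -domain PD_CORE -control_port {ctl pwr_shutoff_n}
"""


def rtl_ports(rtl_text):
    return set(re.findall(
        r"\b(?:input|output|inout)\s+wire\s*(?:\[[^\]]+\]\s*)?(\w+)", rtl_text))


def upf_control_signals(upf_text):
    signals = set(re.findall(r"-isolation_signal\s+(\w+)", upf_text))
    signals |= set(re.findall(r"-control_port\s*\{\s*\w+\s+(\w+)\s*\}", upf_text))
    return signals


missing = upf_control_signals(UPF) - rtl_ports(RTL)
for name in sorted(missing):
    print(f"UPF control signal '{name}' has no matching RTL port")
```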

Figure 2: Promotion & Demotion towards an automated power intent management

With its 20 years of expertise in RTL management and more than 10 years in UPF support, Defacto Technologies provides a mature design solution to answer the above needs and requirements with a joint RTL and power intent assembly flow.

Defacto’s SoC Compiler, with its major release 10, covers all of the above requirements regardless of UPF version and RTL language.

This joint pre-synthesis RTL and power intent assembly flow is silicon proven and has already delivered excellent results, in particular for UPF promotion and demotion.

Defacto will be holding a webinar next week (December 14 at 10:00 AM PST) where Defacto experts will present a complete joint RTL and power intent assembly flow, including all the steps described above:

WEBINAR REGISTRATION

Following the webinar, a whitepaper will also be available with detailed explanations of the different steps of the joint flow through a typical design use case.

For any questions, or to give this joint flow a try, the Defacto team can also be contacted through their website: https://defactotech.com/contact

Also Read:

Lowering the DFT Cost for Large SoCs with a Novel Test Point Exploration & Implementation Methodology

Defacto Celebrates 20th Anniversary @ DAC 2023!

Defacto’s SoC Compiler 10.0 is Making the SoC Building Process So Easy



The First Automotive Design ASIC Platform
by Daniel Nenni on 12-11-2023 at 8:00 am

Alchip Automotive ASIC Design Platform

Alchip Technologies, Ltd. is a company that specializes in ASIC (Application-Specific Integrated Circuit) design and manufacturing. They are known for providing high-performance and customized ASIC solutions for a variety of applications. Alchip works with clients to design and develop integrated circuits that meet specific requirements and deliver optimal performance for their intended purposes.

The company offers services such as ASIC design services, production services, and IP services. Their expertise lies in creating tailored solutions for clients in industries such as artificial intelligence, data centers, networking, and now they have announced the first Automotive Design ASIC Platform.

Last week Alchip celebrated its 20th Anniversary at the Taipei Marriott with a gala event that thanked and recognized luminaries representing Alchip’s investors and partners for the company’s success. Alchip was founded in 2003 with 23 employees and went public in 2014 on the Taiwan Stock Exchange with an initial capitalization of US$195 million. This year, the company achieved a market cap of approximately US$7 billion and employs nearly 600 people in 13 locations around the globe.

Last month Alchip unveiled the industry’s first Automotive ASIC Design Platform. This is in lockstep with TSMC and its automotive push, and as you might know, Alchip and TSMC are close partners. Alchip is a TSMC-certified Value Chain Aggregator and a founding member of the TSMC 3DFabric Alliance®.

Alchip Automotive ASIC Design Platform

The platform consists of six modules: Design for Autonomous Driving (AD)/ Advanced Driver Assistance System (ADAS), Design for Safety, Design for Test, Design for Reliability, Automotive Chip Sign-off, and Automotive Chip Manufacturing (MFG) Service.

Design for AD/ADAS is the platform’s starting point. Its ultra-scale design capabilities integrate the Central Processing Unit (CPU) and Neural Processing Unit (NPU) into the smallest possible die size, while meeting the aggressive higher-performance and lower-power-consumption targets required by automotive applications.

The Design for Safety module follows the ISO 26262-prescribed flow, which includes the required isolated TMR/lock-step design methodology. The module also features an experienced safety manager and includes the mandated Development Interface Agreement (DIA) that defines the relationship between the manufacturer and the supplier throughout the entire automotive safety lifecycle and activities.
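As a purely conceptual illustration of the redundancy schemes mentioned above (and not Alchip's implementation), the toy model below shows bitwise TMR majority voting masking a fault while a lock-step comparison detects the mismatch.

```python
# Conceptual illustration of the redundancy schemes mentioned above: a TMR
# majority voter and a lock-step comparator. This is a behavioral toy model,
# not Alchip's safety implementation.
def tmr_vote(a, b, c):
    """Bitwise 2-out-of-3 majority vote across three redundant results."""
    return (a & b) | (a & c) | (b & c)


def lockstep_check(primary, shadow):
    """Lock-step: flag a fault whenever the duplicated cores disagree."""
    return primary != shadow


# A single-bit flip in one copy is outvoted by the other two.
golden = 0xDEADBEEF
corrupted = golden ^ 0x1
assert tmr_vote(golden, golden, corrupted) == golden
assert lockstep_check(golden, corrupted) is True
print("TMR masked the fault; lock-step detected the mismatch")
```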

Design for Reliability includes enhanced Electromigration (EM) as part of silicon lifecycle management. It also covers AEC-Q grade IP sourcing and implementation.

The Automotive Chip Manufacturing Service works with IATF16949 approved manufacturing suppliers. Services include tri-temp testing by target AEC-Q grade, automotive wafer, automotive substrate, assembly and burn-in.

Design for Test capabilities support In System Test (IST) and MBIST/LBIST design, critical and redundancy logic for yield harvest, automotive-level ATPG coverage, and physical-aware ATPG.

The final sign-off module covers an aging library based on a customer mission profile, OD/UD/AVS/DVFS library support, and the final Design for Manufacturing sign-off.

“This is a huge step forward for ADAS and autonomous driving ASIC components and the global automotive electronics industry,” said Johnny Shen, CEO of Alchip. “It will speed up the development and time-to-market of essential safety-critical ADAS applications, while significantly advancing innovation with increasingly complex autonomous driving implementations and features.”

Alchip Technologies Ltd., founded in 2003 and headquartered in Taipei, Taiwan, is a leading global provider of silicon design and production services for system companies developing complex and high-volume ASICs and SoCs. Alchip provides faster time-to-market and cost-effective solutions for SoC design at mainstream and advanced process technologies. Alchip has built its reputation as a high-performance ASIC leader through its advanced 2.5D/3DIC design, CoWoS/chiplet design, and manufacturing management. Customers include global leaders in AI, HPC/supercomputers, mobile phones, entertainment devices, networking equipment, and other electronic product categories. Alchip is listed on the Taiwan Stock Exchange (TWSE: 3661).

For more information, please visit the Alchip website: http://www.alchip.com

Also Read:

Alchip is Golden, Keeps Breaking Records on Multiple KPIs

Achieving 400W Thermal Envelope for AI Datacenter SoCs

Alchip Technologies Offers 3nm ASIC Design Services



UCIe InterOp Testchip Unleashes Growth of Open Chiplet Ecosystem
by Kalar Rajendiran on 12-11-2023 at 6:00 am

Pike Creek UCIe Test chip

Intel recently made headlines when CEO Pat Gelsinger unveiled the world’s first UCIe interoperability test chip demo at Innovation 2023. The test chip built using advanced packaging technology is codenamed Pike Creek and is used to demonstrate interoperability across chiplets designed by Intel and Synopsys. More details on this later in this writeup. This announcement marked a critical milestone in the journey toward an open and interoperable chiplet ecosystem and highlights the UCIe standard’s commitment to driving the chiplet revolution forward.

Proofpoint of UCIe InterOp

The significance of Intel’s announcement lies in its emphasis on interoperability—the ability of chiplets to communicate seamlessly and effectively, regardless of origin. The announcement marks the public debut of functioning UCIe-enabled silicon, featuring an Intel UCIe IP manufactured on Intel 3 process node and a Synopsys UCIe IP fabricated on the advanced TSMC N3E process node. These two chiplets in the Pike Creek test chip communicate via Intel’s EMIB interconnect bridge, ushering in a new era of heterogeneous chiplet technology.

The Pike Creek test chip serves as a tangible demonstration of UCIe’s capabilities, showcasing how chiplets from different vendors can work together efficiently within a single system. Intel has announced plans to transition from a proprietary interface to the UCIe interface on its next-generation Arrow Lake consumer processors. This demonstrates Intel’s commitment to fostering an open, standardized ecosystem for chiplets and aligning with the industry’s shift toward UCIe.

The Backdrop

In recent years, industry leaders such as Intel, AMD, NVIDIA, and others have embraced chiplet-based multi-die systems—an innovative approach that involves integrating small, specialized, heterogeneous or homogeneous dies (or chiplets) into a single package. However, the predominant focus has been on captive systems, where all chiplets within a package are developed by the same vendor. This approach limits the innovation that arises from incorporating specialized chiplets from different sources.

Heterogeneous integration offers the potential for more versatile and powerful systems by allowing chiplets from various vendors to seamlessly work together in a multi-die system. To fully unlock the potential of chiplet-based multi-die systems, the industry recognizes the imperative of heterogeneous integration. The success of a chiplet-based industry in turn depends heavily on encouraging a broad base of vendors to enter and grow an open chiplet ecosystem. But without a standardized interface for chiplet-to-chiplet communication, integrating chiplets from different vendors becomes complex. Interoperability (InterOp), the seamless communication between chiplets regardless of their origin, stands as a central goal for realizing the full potential of heterogeneous chiplet integration.

Heterogeneous Interoperability is Key

Addressing the heterogeneous interoperability need, the Universal Chiplet Interconnect Express (UCIe) standard was introduced in 2022 through a consortium. With promoter members such as Intel, AMD, and TSMC, and contributor members such as Synopsys, Amkor, Keysight, and many others, the consortium is now 120+ members strong. Developed collaboratively by major players in the semiconductor industry, the UCIe standard aims to provide an open interface standard for chiplet interconnect interoperability. By standardizing communication between chiplets, UCIe not only simplifies the integration process but also fosters a broader ecosystem where chiplets from different vendors can seamlessly be incorporated into a single design.

Benefits of UCIe

UCIe consortium members have set ambitious performance and area targets for the technology. By categorizing target markets into two broad ranges, with standard 2D packaging techniques and advanced 2.5D techniques, UCIe offers versatility in meeting the diverse needs of chip designers. Advanced 2.5D techniques include technologies such as Intel’s Embedded Multi-Die Interconnect Bridge (EMIB) and TSMC’s Chip-on-Wafer-on-Substrate (CoWoS). Chipmakers can select chiplets from various designers and seamlessly incorporate them into new projects, significantly reducing design and validation work. UCIe allows designers and manufacturers to select chiplets based on their specific requirements, enabling a more flexible and diverse approach to semiconductor design.

In essence, UCIe helps accelerate time-to-market, reduce development costs, promote innovation, broaden the supplier base, and enhance overall product development efficiency. Support for 3D packaging is on the roadmap.

Summary

As the semiconductor industry moves forward, the implications of UCIe are profound. The standard not only propels chiplet technology into the era of heterogeneous integration but also opens doors to a new wave of innovation. With a standardized interface in place, chip designers can mix and match chiplets with confidence, creating tailored solutions for a wide range of applications. For example, the potential for heterogeneous chiplets integration opportunities in the automotive market is tremendous. The UCIe consortium recently announced the UCIe 1.1 specification to deliver valuable improvements in the chiplet ecosystem, extending reliability mechanisms to more protocols and supporting broader usage models. Enhancements for automotive usages include predictive failure analysis and health monitoring and enabling lower-cost packaging implementations.

Synopsys

As the leader in EDA and semiconductor IP, Synopsys offers comprehensive solutions to address the ecosystem needs for chiplet integration.

For more details on Synopsys UCIe IP, visit Synopsys UCIe IP Solutions.

Also Read:

Synopsys.ai Ups the AI Ante with Copilot

Synopsys 224G SerDes IP’s Extensive Ecosystem Interoperability

Synopsys Debuts RISC-V IP Product Families



IEDM Buzz – Intel Previews New Vertical Transistor Scaling Innovation
by Mike Gianfagna on 12-10-2023 at 2:00 pm


For more than 65 years, the IEEE International Electron Devices Meeting (IEDM) has been the world’s pre-eminent forum for reporting technological breakthroughs in the areas of semiconductor and electronic device technology, design, manufacturing, physics, and modeling. As I post this, the conference is underway in San Francisco and Intel is presenting a series of first-of-a-kind advances to extend Moore’s Law. The palette of innovations being presented at the conference creates a new path to vertical device scaling, opening the opportunity for a trillion transistors on a package by 2030. This is a story with several parts. Here are the details of how Intel previews new vertical transistor scaling innovation at IEDM.

The Impact

Everyone knows about the incredible exponential scaling delivered by Moore’s Law over the past 50 years or so. We’ve also seen monolithic Moore’s Law scaling slow of late. Multi-die design is now adding to the exponential density increases the industry has come to rely on. But that’s not the whole story. It turns out on-chip transistor density scaling is alive and well and is a key contributor to semiconductor industry health.

And Intel, the birthplace of Moore’s Law, is leading the way with innovation that fuels both monolithic and multi-die trends.  In the area of advanced packaging to fuel multi-die design, you can read about Intel’s innovation with glass substrates here. The subject of this post is what Intel is doing to fuel the other trend – monolithic transistor scaling. This is a story of innovation in the Z-axis; how to stack devices on top of each other to deliver more in the same area.

It turns out there are two fundamental barriers to overcome here. First, how to stack CMOS devices to deliver reliable, high-performance characteristics. And second, how to get power to those devices without reducing reliability and performance.  There are a series of presentations at IEDM this week that present several innovations that address these problems. Here are some details…

A Preview of Intel’s Announcements

I was fortunate to attend a pre-IEDM briefing where some of Intel’s advanced researchers previewed what was being presented at IEDM. What follows is a summary of their comments.

Paul Fisher

First to speak was Paul Fisher, Director of Chip Mesoscale Processing Components Research at Intel. Paul began with an introduction to the Components Research Group. He explained this organization is responsible for delivering revolutionary process and packaging technology options that advance Moore’s Law and enable Intel products and services. Some of the research that came from this group and found its way into commercial Intel products includes strained silicon, high-K metal gate, the FinFET transistor, Power Via technology and the RibbonFET. The list is much longer – quite impressive.

Another remarkable characteristic of this organization is the breadth of its worldwide collaboration. Beyond US government agencies, Paul explained the group also collaborates with consortia around the world such as Imec, Leti, Fraunhofer, and others in Asia. The group also directly sponsors university work and mentors other programs through organizations such as the Semiconductor Research Corporation (SRC). The group also works with the semiconductor ecosystem to ensure the equipment and processes needed for new developments are available.

Paul then set the stage for the three briefings that followed. The first discussed innovations in backside power delivery. The second discussed three-dimensional transistor scaling and interconnect. And the third presented advances for on-chip power delivery using Gallium-Nitride (GaN). These three areas are summarized in the top graphic for this post.

Mauro J. Kobrinsky

Next to speak was Mauro J. Kobrinsky, Intel Fellow, Technology Development Director of Novel Interconnect Structures and Architectures. Mauro began by explaining that large, low resistance power routing competes with fine, low capacitance signal routing. The result is a compromise in density and performance. A significant advance that reduces this problem is back-side power delivery. Using this approach, power delivery routing can be done on the backside of the device, freeing critical front-side real estate for more optimal signal routing.

Mauro explained that Intel’s Power Via technology will move to production in 2024, and this will begin to open new options for back-side power delivery. Additional research will also be presented that takes back-side power delivery to a new level. This includes the development of back-side contacts to allow power to be delivered through the backside while signals are delivered through the front side of the device.

Mauro also discussed critical enhancements for stacked device routing that are underway. Stacked devices present a unique set of challenges for both power and signal routing. In the signal area, new approaches for epi-epi and gate-gate connection must be developed and this is part of the research Mauro discussed.

Marko Radosavljevic

After Mauro, Marko Radosavljevic, Principal Engineer at Intel, discussed three-dimensional transistor scaling and interconnect – essentially what comes after RibbonFET. Marko explained that initial device stacking results were presented by Intel at IEDM in 2021.

What will be presented at IEDM this year is the implementation of a vertically stacked NMOS and PMOS RibbonFET device configuration with Power Via and direct back-side device contacts with a poly pitch of 60nm. The resultant compact inverter exhibits excellent performance characteristics, paving the way for more widespread use of vertical device stacking.

Han Wui

The final speaker was Han Wui, Principal Engineer, Components Research at Intel. Han discussed new approaches to on-chip power delivery. He explained that Intel proposed the first MOS power driver in 2004. This device, often called DrMOS, is now used in a wide variety of products.

Han went on to explain that Gallium Nitride (GaN) devices are popular today for high-voltage applications like the 200-volt devices in many laptop charging “bricks”. It turns out GaN exhibits far superior performance at lower voltages (48 volts and below) when compared to CMOS power devices.

At this year’s IEDM, Han explained that Intel will show the first implementation of a process that integrates CMOS devices with GaN power devices on a 300mm wafer.  Dubbed DrGaN, Han explained that this technology will open new levels of performance and density for future designs by integrating CMOS drivers with highly efficient GaN power devices on the same wafer.

To Learn More

You can get a broader view of Intel’s device and process innovation here. And that’s how Intel previews new vertical transistor scaling innovation at IEDM.

Also Read:

Intel Ushers a New Era of Advanced Packaging with Glass Substrates

How Intel, Samsung and TSMC are Changing the World

Intel Enables the Multi-Die Revolution with Packaging Innovation