Podcast EP221: The Importance of Design Robustness with Mayukh Bhattacharya
by Daniel Nenni on 05-03-2024 at 10:00 am

Dan is joined by Mayukh Bhattacharya, Engineering Executive Director at Synopsys. Mayukh has been with Synopsys since 2003. For the first 14 years, he made many technical contributions to PrimeSim XA. Currently, he leads R&D teams for the PrimeSim Design Robustness and PrimeSim Custom Fault products. He was one of the early adopters of AI/ML in EDA. He led the development of a FastSPICE option tuner – Customizer – as a weekend hobby, which later became the inspiration behind the popular DSO.ai product. He has 11 granted (and 4 pending) patents, 7 journal papers, and 20 conference publications.

Dan explores the concept of design robustness with Mayukh. Design robustness is a measure of how sensitive a design is to variation – less sensitivity means a more robust design. For advanced nodes where there is significant potential for variation, design robustness becomes very important.
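
As a hedged illustration of the idea (not from the podcast itself), the short Python sketch below treats robustness as the spread of a circuit metric under random process variation; a smaller spread relative to the mean means a more robust design. The metric function and the variation sigmas are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(0)

def read_delay_ns(vth_shift_mv, beta_shift_pct):
    # Hypothetical performance metric: bitcell read delay versus two process shifts.
    return 1.0 + 0.004 * vth_shift_mv + 0.02 * beta_shift_pct

# Monte Carlo over assumed process variation (sigma values are illustrative only)
vth = rng.normal(0.0, 15.0, 100_000)   # threshold-voltage shift in mV
beta = rng.normal(0.0, 3.0, 100_000)   # current-factor shift in %

delay = read_delay_ns(vth, beta)
sensitivity = delay.std() / delay.mean()   # lower relative spread = more robust design
print(f"mean delay = {delay.mean():.3f} ns, relative spread = {sensitivity:.2%}")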

Mayukh explains the many dimensions of robustness, with a particular focus on memory design. He describes the methods required and how the Synopsys PrimeSim portfolio supports those methods. How AI fits into the process is also discussed, along with the benefits of finding problems early, the importance of adaptive flows, and the overall impact on reliability.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Harish Mandadi of AiFA Labs
by Daniel Nenni on 05-03-2024 at 6:00 am


Harish Mandadi is the CEO and Founder of AiFA Labs, a service-based IT company that provides best-in-class solutions for clients across various industries. He has over 20 years of experience in IT sales and delivery, with a unique blend of entrepreneurial vision and hands-on expertise in the dynamic landscape of technology.

Tell us about your company?
Sure. So, AiFA Labs is an IT solutions company that guides clients through digital transformations as they update their business processes to incorporate generative AI, machine learning, optical character recognition, and other technologies.

In the past, we were a service-based company. However, we just launched our first product, Cerebro AI, on March 30th, 2024. It’s an all-in-one generative AI platform with more tools, features, and integrations than any other AI platform on the market. It will change the way companies do business and we are very proud of it.

What problems are you solving?
We are solving problems related to scalability, high overhead, and time to market. Cerebro has the ability to expand reach globally, reduce labor costs by 30-40%, and speed up content creation by 10x. One of Cerebro's main features is SAP AI Code Assist, which automates the SAP ABAP development SDLC process and brings down the effort by 30 to 50% with the click of a button.

We also have an AI Prompt Marketplace, where users can buy and sell AI prompts to make their interactions with generative AI more efficient and effective. Knowledge AI collects all company data and trains AI based on it. It allows business users to interact with their business data in natural language. In total, Cerebro is 17 tech products in one and we expect that number to grow. It is truly a one-stop shop for everything AI.

What application areas are your strongest?
Our strongest application areas are IT, life sciences, consumer, marketing, customer service, HR, and education. We are hoping to break into law, entertainment, and a few other use cases.

What keeps your customers up at night?
I don’t know for sure, but I think missed opportunities keep our customers up at night. Experiencing increased demand without the means to meet client expectations is every business owner’s nightmare. With Cerebro, no customer inquiry goes unanswered, every hot topic is covered in print within a few hours, and software solutions are delivered in half the amount of time it usually takes.

What does the competitive landscape look like and how do you differentiate?

Right now, the competitive landscape is flooded with new generative AI products, and we expect to see many more of them come onto the market in the next few years. Most of them have a singular mode of operation, similar to ChatGPT or Gemini.

Our product is special because it incorporates all of the most popular large language models and allows users to choose which ones they use. It also integrates with Amazon AWS, Microsoft Azure, Google, SAP, and more. Cerebro possesses almost any AI functionality you can think of and then some. 

What new features/technology are you working on?
Our latest features are AI Test Automation and an AI Data Synthesizer. The first feature runs tests on SAP ABAP code to gauge performance and identify potential issues. The second feature processes data with missing information and fills in the gaps based on context.

How do customers normally engage with your company?
Customers engage with us on LinkedIn, Twitter/X, or our company website.

Also Read:

CEO Interview with Clay Johnson of CacheQ Systems

CEO Interview: Khaled Maalej, VSORA Founder and CEO

CEO Interview with Ninad Huilgol of Innergy Systems


Self-heating and trapping enhancements in GaN HEMT models
by Don Dingee on 05-02-2024 at 10:00 am

RTH0 extraction

High-fidelity models incorporating real-world, cross-domain effects are essential for accurate RF system simulation. The surging popularity of gallium nitride (GaN) technology in 5G base stations, satellite communication, defense systems, and other applications raises the bar for transistor modeling. Keysight dives deeply into two GaN effects – self-heating and trapping – in enhanced ASM-HEMT 101.4 and MVSG_CMC 3.2.0 GaN HEMT models shipping in the latest release of Advanced Design System (ADS), developed using its advanced parameter extraction package in IC-CAP.

A quick intro to ASM-HEMT and MVSG_CMC

The Compact Model Coalition (CMC), a Silicon Integration Initiative (Si2) working group,  continues refining two industry-leading GaN transistor model specifications, ASM-HEMT and MVSG_CMC.

  • ASM-HEMT (Advanced SPICE Model for High Electron Mobility Transistors) is a computationally efficient, surface-potential-based model for terminal current and charge, accounting for various secondary device effects, including self-heating and trapping.
  • MVSG_CMC (MIT Virtual Source GaNFET Compact Model Coalition) is a self-consistent charge-based model with versatile field plate current and charge configurations. It also includes effects like leakage, noise, bias dependencies, and self-heating and trapping.

Both models provide analytical solutions for GaN device behavior that are suitable for accurate simulation in frequency and time domains. They each use an R-C network with thermal resistance and capacitance to model self-heating effects. Both also provide parameter selections for various trapping scenarios, including the latest versions modeled with R-C networks incorporating variable drain-lag and gate-lag.
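
For context, the single-pole thermal R-C network both models rely on can be summarized as follows (a standard textbook formulation given here for illustration, not a quote from either model's documentation): C_TH · d(ΔT)/dt = P_diss(t) − ΔT / R_TH0, which gives a steady-state temperature rise ΔT = R_TH0 · P_diss and a thermal time constant τ = R_TH0 · C_TH. In other words, RTH0 sets how hot the device runs at a given dissipated power, while the thermal capacitance sets how quickly it gets there.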

Self-heating parameter extraction

The increased power density of GaN devices concentrates self-heating in a smaller area, reducing mobility, increasing signal delays, and potentially shortening a device’s lifespan. The extraction of self-heating parameters using IC-CAP is similar for either ASM-HEMT or MVSG_CMC GaN HEMT models.

Modeling thermal resistance RTH0 is effective when using drain current Id with varying drain and gate voltage in static and pulsed stimulation. First, a static Id-Vd curve taken at room temperature provides a baseline. Then, short Id pulses applied with Vd0 and Vg0 held at 0V to minimize trapping and self-heating provide response curves at various temperatures. Overlaying the static curve with the pulsed curves results in intersections where Id is the same. Power is calculated and plotted versus temperature, and the slope of the line is RTH0.

Using the pulsed Id approach provides a more straightforward extraction method than extracting RTH0 from DC static characteristics alone.
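
As a rough illustration of the final step only (a sketch with invented numbers, assuming the static/pulsed intersection points have already been identified), RTH0 can be read off as the slope of a linear fit of temperature against dissipated power at those intersections, which puts it in units of °C per watt:

import numpy as np

# Hypothetical intersection points: ambient temperature of each pulsed sweep (degC)
# and the DC power dissipated at the matching point on the static Id-Vd curve (W).
temp_c = np.array([25.0, 50.0, 75.0, 100.0, 125.0])
power_w = np.array([0.00, 0.62, 1.25, 1.85, 2.48])

# Thermal resistance: delta_T = RTH0 * P, so RTH0 is the slope of temperature vs. power.
rth0, t_baseline = np.polyfit(power_w, temp_c, 1)
print(f"Extracted RTH0 ~ {rth0:.1f} degC/W (baseline {t_baseline:.1f} degC)")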

Trapping parameter extraction

Trapping effects in GaN devices also factor heavily into performance and reliability. Charge trapping in buffer and interface layers reduces 2DEG channel charge density and dynamic ION, increases dynamic RON and cut-off voltage, and modulates Id.

Again, the methodology for parameter extraction is similar between ASM-HEMT and MVSG_CMC, even with the differences in the implementation of the R-C network between the models. Trapping parameter extraction is done after the DC, IV, thermal, and S-parameter extraction. Gate-lag trapping extraction happens first since it affects the initial transistor response and overall behavior, activating only surface traps. With gate-lag behavior analyzed, drain-lag trapping extraction is more accurate, activating both surface and buffer traps.

ASM-HEMT Trapping Model 4 uses two R-C circuits to model drain-lag and gate-lag.

MVSG_CMC Trapping Model 2 uses a similar network with a slightly different physical model, accounting for variable trapping (capture) and de-trapping (emission) time.

Parameter extraction pulses Vg while holding Vd constant for gate-lag and pulses Vd while holding Vg constant for drain-lag. A representative drain-lag plot for MVSG-CMC illustrates the difference in capture and emission effects.
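
To illustrate why capture and emission deserve separate time constants (a generic single-trap sketch with assumed values, not the exact R-C network used in ASM-HEMT or MVSG_CMC), trap occupancy rises quickly during the stress pulse and decays much more slowly after it ends:

import numpy as np

TAU_CAPTURE = 1e-6    # assumed capture (trapping) time constant, seconds
TAU_EMISSION = 1e-3   # assumed emission (de-trapping) time constant, seconds

def trap_occupancy(t, pulse_end=5e-6):
    # Normalized trap state for a drain-lag style stress pulse that ends at pulse_end.
    charging = 1.0 - np.exp(-t / TAU_CAPTURE)                       # fast capture during the pulse
    level_at_end = 1.0 - np.exp(-pulse_end / TAU_CAPTURE)
    discharging = level_at_end * np.exp(-(t - pulse_end) / TAU_EMISSION)  # slow emission afterwards
    return np.where(t < pulse_end, charging, discharging)

t = np.linspace(0.0, 5e-3, 2001)
occ = trap_occupancy(t)
print(f"occupancy just before pulse end: {occ[t < 5e-6][-1]:.2f}, 1 ms later: {occ[np.searchsorted(t, 1.0e-3)]:.2f}")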

IC-CAP keeps pace with the latest GaN HEMT models

The automated parameter extraction flow in IC-CAP simplifies the process for any developer of GaN device models, whether they are a CMC member or not. Keysight’s experience with these industry-leading models also helps IC-CAP customers apply extraction strategies for their process, improving GaN device model fidelity.

IC-CAP also supports ADS users with the latest GaN HEMT models shipped in each successive release. Self-heating and trapping are good examples of adding more complex effects to improve RF circuit simulation results. As the CMC continues improving its models, Keysight keeps pace with tools for foundry and RF design customers.

Further information on the CMC and its upcoming meetings is available at:

https://si2.org/cmc/

Two application notes explain Keysight’s automated parameter extraction strategy for robust GaN HEMT models in more detail:

How to Extract the ASM-HEMT Model for GaN RF Devices Including Thermal Effects

Trapping Extraction of GaN HEMTs


KLAC- Past bottom of cycle- up from here- early positive signs-packaging upside
by Robert Maire on 05-02-2024 at 8:00 am

Semiconductor Manufacturing

– KLA reported a good QTR but more importantly passing the bottom
– Lead times mean KLA gets orders early in up cycle-just behind ASML
– Potential upside in upcycle as packaging needs more process control
– 2024 2nd half weighted with stronger recovery likely in 2025

A solid quarter as expected with good guide

KLAC reported revenues of $2.36B and EPS of $5.26 versus street of $2.31B and $5.01, a modest beat. Guidance is for $2.5B ±$125M and EPS of $6.07 ±$0.60 versus street of $2.42B and $5.68.

So all around a decent report….

Past the bottom in March

The most important comment is that the company has clearly put a stake in the ground, calling March the bottom of the down cycle, with subsequent quarters being up from here. This is certainly a much more definitive answer than what we heard last night from Lam and sets a much more positive tone going forward. It does not sound like 2024 will be a barn burner, but at least we will see steady recovery from here and a second-half-weighted year going into a stronger 2025.

The stronger 2025 agrees with Lam's comments and other commentary we have heard, as we still have a number of headwinds in the industry, like NAND oversupply, trailing-edge weakness, and so on.

But it does sound like a lot of the other issues will resolve or reduce by the end of the year. This has been one of the more extended down cycles we have been through and KLA has done a good job through it all.

KLA tools tend to be early in the order cycle

KLA tools tend to be ordered early in the cycle for two main reasons: 1) you need KLA tools to get other tools and the overall fab process up to speed at the next node, and 2) KLA tools have lead times of multiple quarters, longer than process tools, which are typically more of a turns business. The only tools that have longer lead times and precede KLA tools are litho tools from ASML.

We would imagine that the order book will likely start to fill throughout 2024 for delivery starting in 2025 and beyond.

Investors need to remember that there are a lot of new greenfield fabs that need building construction to be finished before equipment can be received.

Exiting flat panel business is a good move

The flat panel business sometimes made the semiconductor business look stable by comparison. We never saw flat panel as being strongly inside KLA's wheelhouse. It makes a lot more sense for KLA to focus on things closer to home or adjacent to home. Back end is obviously adjacent to front end…

Packaging finally gets some respect

We have been talking about the back end of the business needing more process control for a number of years now and it seems as if it has been very late in coming but may finally be getting somewhere. The days of rows of “sewing machine” wire bonders under bare light bulbs in an ugly, dirty factory in Taiwan are behind us.

Packaging is now a front-end-style process business, with micron-level dimensions that the front end passed through a while ago.

We think this could be a significant opportunity for growth outside of KLA's core wafer and reticle inspection markets, closer to KLA's wheelhouse than the Orbotech acquisition, and obviously lower-cost, organic growth to boot.

While the back end is notoriously cheap and expense-averse, we think the complexity has gotten to the point where there is an overwhelming need for front-end-like process control.

The Stocks

We would expect a more positive investor response to KLA than what we saw and heard from Lam. The only tempering factor may be the Intel earnings report released at the same time, which seems somewhat underwhelming and may put a wet blanket on the overall industry momentum.

It is still clear to us that we are far from being out of the woods of the downcycle, but at least KLA is past the bottom and sees upside from here. We would repeat that this is going to be a long, slow recovery: 2024 isn't going to be great and will probably look a lot like a mirror image of 2023, but it's a start.

We remain cautious that the stocks don't get too far out over the tips of their surfboards, as they had in past months; perhaps the retraction we have seen will keep expectations and stock prices in check a bit more.

On the positive side, we think downside disappointment is likely limited going forward, so we primarily have to pay attention to valuation.

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor) specializing in technology companies, with particular emphasis on semiconductor and semiconductor equipment companies.

We have been covering the space longer and been involved with more transactions than any other financial professional in the space.

We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors.

We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

LRCX- Mediocre, flattish, long, U shaped bottom- No recovery in sight yet-2025?

ASML- Soft revenues & Orders – But…China 49% – Memory Improving

ASML moving to U.S.- Nvidia to change name to AISi & acquire PSI Quantum


One Step Ahead of the Quantum Threat
by admin on 05-02-2024 at 6:00 am

PQShield Quantum Threat

When it comes to the security of tomorrow, the time to prepare is today, and at PQShield, we’re focused on shaping the way the digital world is protected from the inevitable quantum threat. We deliver real-world, quantum-safe hardware and software upgrades, and it’s our mission to help modernize the legacy security systems and components of the world’s technology supply chain.

Based in the UK, PQShield began as a spin-out from the University of Oxford, and is now the largest collaboration of post-quantum cryptographers under one roof, anywhere in the world. We’re also world-leaders in advanced hardware side-channel protection, and we’re a source of truth, providing clarity to our stakeholders at every level. With teams across 10 countries, covering EU, UK, US and Japan, we’ve been involved in the PQC conversation globally, working with industry, academia, and government.

Within a decade, the mathematical defenses that currently keep online information safe will be at risk from a cryptographically relevant quantum computer, sufficiently powerful to break those defenses. In fact, even before such a machine exists, there's a significant risk of 'harvest-now-decrypt-later' attacks, poised to extract stolen information when the technology to do so becomes available. We believe it's critical that industries, organizations, governments, and manufacturers are aware of the threat, and follow the best roadmap to quantum resistance.

This is a critical moment. With the recent push for legislation in the US, such as NSM-10 and HR.7535, as well as CNSA 2.0 and the National Cybersecurity Strategy, federal agencies and government departments are now mandated to prepare and budget for migration to full PQC by 2033. Meanwhile in Europe, organizations such as ANSSI (French Cybersecurity Agency) and BSI (German Federal Office for Information Security) have published key recommendations on deployment scenarios, and in the UK, the National Cyber Security Centre (NCSC) are recommending next steps in preparing for post-quantum cryptography. International influence is also growing quickly. We recently presented at the European Parliament, attended a roundtable discussion at the White House, and we’ve been key contributors to the World Economic Forum on regulation for the financial sector. There’s no doubt that the world is waking up to the quantum threat.

PQC is also finding its way into major applications. Recently, Apple unveiled a major update, introducing their PQ3 protocol for post-quantum secure iMessaging. This follows Signal’s large-scale update to post-quantum messaging (referencing PQShield’s research in this domain), as well as Cloudflare’s deployment of post-quantum cryptography on outbound connections. Google Chrome version 116 also includes hybrid PQC support for browsing, and AWS Key Management service now includes support for post-quantum TLS. Other providers are certain to follow.

In addition, with the publication of the finalized NIST PQC standards, 2024 is set to kickstart even more widespread awareness and adoption. It's certainly a point that the team at PQShield have been working towards; our 'think openly, build securely' ethos has helped us contribute directly to the NIST project, and we've created a portfolio of forward-thinking solutions using the expected algorithms. Our products are already in the hands of key customers such as Microchip, AMD, Raytheon, Tata Consultancy Services, and many more.

PQShield’s goal is to stay one step ahead of the attackers, and we believe our security portfolio can help. With our FIPS 140-3-ready software libraries, our side-channel protected hardware solutions, and our embedded IP for microcontrollers, we’re aiming to provide configurable products that maximise high performance and high security for the technology supply chain. We’ve understood the reality of the quantum threat, and at PQShield we’re focused on helping the world to defend against it.

Also Read:

Crypto modernization timeline starting to take shape

WEBINAR: Secure messaging in a post-quantum world

NIST Standardizes PQShield Algorithms for International Post-Quantum Cryptography


WEBINAR: Navigating the Power Challenges of Datacenter Infrastructure
by Mike Gianfagna on 05-01-2024 at 10:00 am


We all know power and energy management is a top-of-mind item for many, if not all new system designs. Optimizing system power is a vexing problem. Success requires coordination of many hardware and software activities. The strategies to harmonize operation for high performance and low power are often not obvious. Much work is going on here. Data center design is at the heart of the problem as the cloud has created massive processing capability with a massive power bill. You need to look at the problem from multiple points of view to make progress. A recent webinar from proteanTecs brought together a panel of experts who look at this problem from multiple points of view. The insights they offer are significant, some quite surprising and counterintuitive. Read on to get a better understanding of navigating the power challenges of datacenter infrastructure.

The Webinar

proteanTecs is a unique company that focuses on deep data analytics through on-chip monitoring. You can learn about the company's technology here. Recently, proteanTecs announced a power reduction solution that leverages its existing sensing and analytics infrastructure. I covered that announcement here. So, this combination of solutions around power optimization gives the company a broad and unique perspective.

In a recent webinar, proteanTecs brought together a group of distinguished experts from A-list companies who are involved in various aspects of data center design. The insights offered by this group are quite valuable. A link is coming so you can get all the information from the panel directly. Let’s first review some of the key take-aways.

The Panel

The panelists are shown in the graphic at the top of this post. I will review each panelist’s opening remarks to set the stage for what follows. Going from right to left:

Mark Potter moderated the panel. He has been involved in the data center industry for over 30 years, much of that time at Hewlett Packard Enterprise and Hewlett Packard Labs, where he was Global CTO and Director. Today, Mark is involved in venture capital and is on the advisory board of proteanTecs. The moderator is a key role in an event like this. Mark is clearly up to the challenge of guiding the discussion in the right direction, uncovering significant insights along the way.

Evelyn Landman, Co-Founder and CTO at proteanTecs. Evelyn has broad responsibility for all proteanTecs solutions across the markets the company serves. She pointed out that new workloads are creating new demands at the chip and system level. While advanced technology remains important, density requirements are forcing a move to chiplet-based design. There is also a focus on reducing operating voltage to save power, but this brings a new set of challenges.

Eddie Ramirez, VP Go-To-Market, Infrastructure Line of Business at Arm. Eddie focuses on working with Arm’s substantial ecosystem to build efficient semiconductor solutions for the cloud, networking, and edge markets. Eddie discussed the exploding size of the user base as driving compute power challenges. Everyone wants to do more with larger data sets, and AI is driving a lot of that.

Artur Levin, VP AI Silicon Engineering at Microsoft. Artur and his team focus on developing the most efficient and effective AI solutions. Artur also sees unprecedented growth in compute demands. Thanks to new AI algorithms, he is seeing new forms of compute infrastructure that previously did not exist. Cooling becomes a system and silicon challenge. The mandate for sustainability will also impact the approaches taken.

Shesha Krishnapuria, Fellow and IT CTO at Intel. Shesha has the broad charter of advancing data center design at Intel and throughout the industry for energy and space efficiency. He has focused on this area for over 20 years. Shesha pointed out that over the past 20 years Intel chip computing for data centers has grown over 140,000 percent. An incredible statistic. Looking ahead, this growth is likely to accelerate due to the widespread use of GPUs for AI applications.

With this backdrop of power and cooling problems that are difficult and getting worse, Mark began exploring strategies and potential solutions with the panel. What followed was a series of insightful and valuable comments. You should invest an hour of your time to hear it live.

To whet your appetite, I will leave you with one insight offered by Shesha. He pointed out that data center infrastructure is still built with the same design parameters that have existed since the days of the mainframe: use extensive refrigeration systems to maintain a 68-degree Fahrenheit ambient. The operating characteristics of contemporary technology suggest an ambient of 91 degrees Fahrenheit should work fine. That means you can remove the expensive and power-hungry refrigeration infrastructure and instead use the gray water provided by water utilities at a reduced price to drive simple heat exchangers, significantly simplifying the systems involved and lowering the cost.

To Learn More

There are many more useful insights discussed in the webinar. You can access the replay here. You can also access an informative white paper from proteanTecs on Application-Specific Power Performance Optimizer Based on Chip Telemetry. If you’d like to reach out and explore more details about this unique company you can do that here. All this will help you understand navigating the power challenges of datacenter infrastructure.


Nvidia Sells while Intel Tells
by Claus Aasholm on 05-01-2024 at 8:00 am

AMD Transformation 2024

AMD’s Q1-2024 financial results are out, prompting us to delve into the Data Center Processing market. This analysis, usually reserved for us Semiconductor aficionados, has taken on a new dimension. The rise of AI products, now the gold standard for semiconductor companies, has sparked a revolution in the industry, making this analysis relevant to all.

Jensen Huang of Nvidia is called the “Taylor Swift of Semiconductors” and just appeared on CBS 60 Minutes. He found time for this between autographing Nvidia AI Systems and suppliers’ memory products.

Lisa Su of AMD, who has turned around the company’s fate, is now one of only 26 self-made female billionaires in the US. She was later named CEO of the Year by Chief Executive magazine and has been on the cover of Forbes. Lisa Su has yet to become famous in Formula 1.

Hock Tan of Broadcom, desperately trying to avoid critical questions about the change in VMware licensing, would rather discuss the company’s strides in AI accelerator products for the Data Center, which have been significant.

An honorable mention goes to Pat Gelsinger of Intel, the former owner of the Data Center processing market. He has relentlessly been in the media and on stage, explaining the new Intel strategy and his faith in the new course. He has been brutally honest about Intel’s problems and the monumental challenges ahead. We deeply respect this refreshing approach, but we also have to deal with the facts. The facts do not look good for Intel.

AMD’s reporting

While the AMD result was challenging from a corporate perspective, the Data Center business, the topic of this article, did better than the other divisions.

The gaming division took a significant decline, leaving the Data Center business as the sole division likely to deliver robust growth in the future. As can be seen, the Data Center business delivered a solid operating profit. Still, it was insufficient to take a larger share of the overall profit in the Data Center Processing market. The 500-pound gorilla in the AI jungle is not challenged yet.

The Data Center Processing Market

Nvidia’s Q1 numbers have been known for a while (our method is to allocate all of the quarterly revenue in the quarter of the last fiscal month), together with Broadcom’s, the newest entry into the AI processing market. With Intel and AMD’s results, the Q1 overview of the market can be made:

Despite a lower growth rate in Q1-24, Nvidia kept gaining market share, keeping the other players away from the table. Nvidia’s Data Center Processing market share increased from 66.5% to 73.0% of revenue. In comparison, its share of operating profit declined from 88.4% to 87.8%, as Intel managed to get better operating profits from its declining revenue in Q1-24.

Intel has decided to stop hunting low-margin businesses while AMD and Broadcom maintain reasonable margins.

As good consultants, we are never surprised by any development in our area once presented with numbers. That will not stop us from diving deeper into the Data Center Processing supply chain. This is where all energy in the Semiconductor market is concentrated right now.

The Supply Chain view of the Data Center Processing

A CEO I used to work for used to remind me: “When we discuss facts, we are all equal, but when we start talking about opinions, mine is a hell of a lot bigger than yours.”

Our consultancy is built on a foundation of not just knowing what is happening but also being able to demonstrate it. We believe in fostering discussions around facts rather than imposing our views on customers. Once the facts are established, the strategic starting point becomes apparent, leading to more informed decisions.

“There is nothing more deceptive than an obvious fact.” – Sherlock Holmes

Our preferred tool of analysis is our Semiconductor Market model, seen below:

The model has several different categories that have proven helpful for our analysis and are described in more detail here:

We use a submodel to investigate the Data Center supply chain. This is also an effective way of presenting our data and insights (the “Rainbow” supply and demand indicators) and adding our interpretations as text. Our interpretations can undoubtedly be challenged, but we are okay with that.

Our current finding that the supply chain is struggling to get sufficient CoWoS packaging capacity and High Bandwidth Memory is not a controversial view and is shared by most who follow the semiconductor industry.

This will not stop us from taking a deeper dive to be able to demonstrate what is going on.

The Rainbow bars between the different elements in the supply chain represent the current status.

The interface between Materials & Foundry shows that the supply is high, while the demand from TSMC and other foundries is relatively low.

Materials situation

This supply/demand situation should create a higher inventory position until the two bars align again in a new equilibrium. The materials inventory index does show elevated inventory, and the materials markets are likely some distance away from recovery.

Semiconductor Tools

The recent results of the semiconductor tools companies show that revenues are going down, and the appetite of IDMs and foundries alike indicates that investment is saturated. The combined result can be seen below, along with essential semiconductor events:

The tools market has flatlined since the Chips Act was signed, and there can certainly be a causal effect (something we will investigate in a future post). Even though many new factories are under construction, these activities have not yet affected the tools market.

A similar view of the subcategory of logic tools, which TSMC uses, shows an even more depressed revenue situation. Tools revenue is back to late-2021 levels, at a time of unprecedented expansion of the semiconductor manufacturing footprint:

This situation is confirmed on the demand side as seen in the TSMC Capital Investments chart below.

Right after the Chips Act was signed, TSMC lowered its capex spend to close to half, making life difficult for the tools manufacturers.

The tools/foundry interface has high supply and low demand, as can be seen in the supply chain model. The tools vendors are not the limiting factor for GPU AI systems.

The Foundry/Fabless interface

To investigate the supply and demand situation between TSMC and its main customers, we selected AMD and Nvidia, as they have the simplest relationship with TSMC: the bulk of their business is processors made by TSMC.

The inventory situation of the 3 companies can be seen below.

TSMC’s inventory is building up slightly, which does not indicate a supply problem; however, this is TSMC’s total inventory, so there could be other moving parts. The Nvidia peak aligns with the introduction of the H100.

TSMC’s HPC revenue aligns with the Cost of Goods sold of AMD and Nvidia.

As should be expected, there are no surprises in this view. As TSMC’s HPC revenue is growing faster than the COGS of Nvidia and AMD, we can infer that a larger part of that revenue is with customers other than Nvidia and AMD. This is a good indication that TSMC is not supply limited from an HPC silicon perspective. Still, demand is outstripping supply at the gates of the data centers.

The Memory/IDM interface

That the sky-high demand for AI systems is supply limited can be seen in the wild operating profit Nvidia is enjoying right now. The supply chain of AI processors looks smooth, as we saw before. This is confirmed by TSMC’s passivity in buying new tools; if there were a production bottleneck, TSMC would have taken action from a tools perspective.

An analysis of memory production tools hints at the current supply problem.

The memory companies put the brakes on investments right after the last downcycle began. Over the last two quarters, demand has increased in anticipation of the High Bandwidth Memory needed for AI.

Hynix, in their recent investor call, confirmed that they had been underinvesting and will have to limit standard DRAM manufacturing in order to supply HBM. This is very visible in our Hynix analysis below.

Apart from the limited supply of HBM, there is also a limitation of advanced packaging capacity for AI systems. As this market is still embryonic and developing, we have not yet developed a good data method to be able to analyze it but are working on it.

While our methods do not prove everything, we can bring a lot of color to your strategy discussions should you decide to engage with our data, insights, and models.


Also Read:

Real men have fabs!

Intel High NA Adoption

Intel is Bringing AI Everywhere

TSMC and Synopsys Bring Breakthrough NVIDIA Computational Lithography Platform to Production


Anirudh Keynote at CadenceLIVE 2024. Big Advances, Big Goals
by Bernard Murphy on 05-01-2024 at 6:00 am


The great thing about CEO keynotes, at least from larger companies, is that you not only learn about recent advances but also get a sense of the underlying algorithm for growth. This is particularly reinforced when the keynote is followed by discussions with high-profile partner CEOs on their directions and areas of common interest. I saw this recently in Pat Gelsinger’s Intel Vision pitch, and again in Cadence’s Anirudh Devgan’s keynote and his follow-on discussions with Jensen Huang of NVIDIA and Cristiano Amon of Qualcomm, which I will cover in a separate blog.

The big takeaways

Lots of good information on anticipated growth in key markets, especially an expectation that the semiconductor market will grow to a trillion dollars in the next few years. Anirudh thinks this may be an underestimate given fast growth in AI, datacenter, and automotive. The latter answers my earlier question about why everyone was asking for blogs on automotive. AI of course is driving a lot of growth, but so are digital twins. We’re used to digital twin design in semiconductors (in the form of EDA/System Design & Analysis (SDA)), but the concept has also been catching on in other markets like aerospace and automotive. It applies not just to electronics subsystems but now also to physical modeling, which has traditionally relied much more (as much as 80% of the design cycle) on slower and more expensive prototype-based experimentation. Drug design is even further behind, offering (so far) limited advantages through virtualization. Yet aircraft, car, and therapeutics designers want the same digital twin accelerators already enjoyed in electronics design.

Why the huge difference? There are obvious differences in disciplines: in fluid dynamics around wings or through jet engines, in mechanical CAD and multiphysics, in molecular dynamics. Yet these are well understood domains with decades of principled science and engineering/experimental practices to back them up. Virtual models for these systems have been possible but were commonly too slow and inaccurate to be useful.

In contrast, electronic design has aimed for a long time to be completely virtual. Anirudh attributes this success to a 3-layer cake: in the middle principled simulation and optimization to guarantee obligatory accuracy, sitting on top of accelerated compute to deliver many orders of magnitude improvement in throughput over even massive CPU parallelism, with AI on top to intelligently guide these analytics through massive design state spaces. In line with the Cadence computational software mantra, he sees potential to extend the same benefits to those other domains, as they are already seeing in modeling turbulent flows in CFD for aero and auto design.

Application to EDA/SDA

This philosophy also drives advances in EDA/SDA. Cadence Cerebrus is now demonstrating customer implementation results not only faster than manual trials but with better PPA. MediaTek is one cited example. Customers are seeing almost half the benefit of a node migration with better AI tools and optimization, a huge benefit. Analog design automation, long a holy grail, is also starting to make real progress. In verification, Cadence announced their Palladium Z3 and Protium X3 acceleration platforms, offering higher performance and lower power per gate for up to 48 billion gates. NVIDIA has been a big user of Palladium acceleration and a Palladium development partner for 20 years. Meanwhile hardware verification is complemented by the AI-driven Verisium platform to optimize across verification runs. In PCB and 3D-IC design Allegro remains the leading platform, especially in aerospace and defense and is now able to offer 10X productivity improvement through AI enhancement.

There is more in multiphysics optimization: Celsius for thermal simulation, Clarity for electromagnetics, Sigrity for signal integrity are now integrated in one platform for improved accuracy through tighter coupling. Together with Optimality for workflow optimization, the multiphysics platform claims significant productivity improvements from 3D-IC all the way up to rack level analytics and optimization. Finally, and also notable is that Cadence is increasing its investment in IP with a new leader and acquisitions of interface IP and cores particularly around 3D-IC/chiplet requirements.

Beyond EDA/SDA

Cadence is already aiming at playing a bigger role in total system design. Not just big electronics but expanding its role in automotive, aircraft, datacenter, power generation and transmission, football stadiums, and of course drug design. Not in one big step naturally, but headed in those directions. Cadence is already partnered with McLaren and Honda for aerodynamic design, with the 49ers for sustainability optimization, with datacenters for power and cooling optimization, with Pfizer for advanced molecular design, and even in the International Maritime Organization’s efforts to make the shipping industry greener. Big goals but also huge opportunities 😀

I’ll have more to say on the molecular sciences topic in a separate blog. For now, I’ll just offer a brief prelude. This domain may seem far removed from the world of semiconductor design, with some significant differences, but in the initial stages of exploration – exploring options in a huge state space – research technologies have a lot in common. Which is good, because drug design desperately needs help. In contrast to semiconductor design, the cost of developing a new drug is doubling every 9 years, which some have labeled Eroom’s Law (Moore’s law backwards😀). A 3-layer cake approach could help manage this cost, with huge implications for health care. Which makes Cadence’s acquisition of OpenEye look like a pretty smart move.

As I said, big advances and big goals. It’s encouraging to see a nominally EDA/SDA company expanding beyond those bounds.


Will my High-Speed Serial Link Work?
by Daniel Payne on 04-30-2024 at 10:00 am


PCB designers can perform pre-route simulations, follow layout and routing rules, and hope for the best from their prototype fab, and yet design errors still cause respins, which delay the project schedule. Just because post-route analysis is time consuming doesn’t mean that it should be avoided. Serial links are found in many PCB designs, and doing post-route verification will ensure proper operation with no surprises, yet there is reluctance to commit signal integrity experts to verify all the links. I read a recent white paper from Siemens that offers some relief.

Here are four typical methods for PCB design teams to analyze designs after layout.

  1. Send PCB to fab while following guidelines and expect it to work.
  2. Perform visual inspection of the layout to find errors.
  3. Ask a signal integrity expert to analyze the PCB.
  4. Have a signal integrity consultant analyze the PCB.

These methods are error prone, time consuming and therefore risky. There must be a better way to validate every serial link to ensure protocol compliance prior to fabrication in a timely manner by using some clever automation.

Post-route Verification of serial links

Validating serial links is a process of electromagnetic modeling, analysis, and results processing. The high signal frequencies used with serial channels require a full-wave electromagnetic solver to model the intricacies where the signals change layers, going from device pin to device pin. Analysis looks at the channel model, including the transmitter (Tx) and receiver (Rx) devices, and the channel protocol to understand what the signal looks like at the end of the link. Results processing measures whether the design passes and by what margin.

Channel Modeling

With the cut-and-stitch approach, the channel is cut into regions of transverse electromagnetic mode (TEM) and non-TEM propagation; each region is solved independently, and the regions are stitched together to create the full channel. Cut-and-stitch is less accurate than modeling the full channel at once, yet it’s a faster approach worth taking. Knowing where to make each cut is critical for accuracy, and each cut region needs to include the discontinuity, like a via, any nearby traces, and the signal’s return path. An experienced SI engineer knows where to make these cuts.

The clever automation comes in the form of HyperLynx from Siemens, which knows where to cut and then automatically creates signal ports and sets up the solver for you. HyperLynx users can set up hundreds of areas per hour for full-wave simulations, and to speed up run times the simulations can be run in parallel across many computers. For stitching, HyperLynx automates the process by adding lossy transmission lines with solved models. The lengths of the transmission lines are adjusted because parts of the signal trace are inside the 3D areas, and HyperLynx automates these adjustments as well. With this automation you can have interconnect models for hundreds of signal channels and get the simulation results overnight.
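
For readers who want to experiment with the stitching idea outside of HyperLynx, the step is conceptually a cascade of the per-region models. Below is a minimal sketch using the open-source scikit-rf package; the S-parameter file names are hypothetical exports of the cut regions, and a real flow also has to manage port ordering, renormalization, and reference planes, which this sketch glosses over.

import skrf as rf

# Hypothetical 2-port S-parameter exports for one channel:
# a TEM trace segment, a full-wave-solved via region, and another trace segment.
trace_a = rf.Network("trace_segment_a.s2p")
via_region = rf.Network("via_transition.s2p")
trace_b = rf.Network("trace_segment_b.s2p")

# "Stitch" the regions back together by cascading their 2-port networks.
channel = trace_a ** via_region ** trace_b

# Quick sanity check: insertion loss of the stitched channel at the highest frequency point.
il_db = channel.s21.s_db[-1, 0, 0]
print(f"stitched |S21| at {channel.f[-1] / 1e9:.1f} GHz: {il_db:.1f} dB")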

Analysis

IBIS-AMI simulation is the most accurate way to analyze serial links after layout, as the Tx/Rx models come from the vendors; however, you may have to wait to get a model, and the runtimes can be long. Another way to analyze a serial channel is with standards-based compliance, which is based on the channel requirements in the protocol specification, and compliance analysis runs quickly – in under a minute. The downside of compliance analysis is that there are dozens of protocols with hundreds of documentation pages and at least five different analysis methods.

With HyperLynx there’s a SerDes Compliance Wizard to help support all the different methods for determining compliance. Users just specify the channels to analyze, select the protocol, and then run. There are 210 protocols supported in HyperLynx, and parameters can be adjusted for each protocol.

Results Processing

An IBIS-AMI simulator uses clock tick information, centering the equalized signal in the bit period and producing an eye diagram, while assuming the clock samples in the middle. An eye mask is compared to the eye diagram; if the inner portion of the eye doesn’t cross into the mask, the test passes. A statistical simulation is run to determine if the target bit error rate is reached, like 1e-12 or lower. If only a few million time-domain simulations are run, then extrapolation must be used. Modeling jitter is another challenge, and users may have to find and add jitter budgets. Meaningful AMI analysis results require a full-time SI engineer who knows the IBIS-AMI spec and the simulator well.
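
As a small illustration of the mask test itself (a sketch with an invented diamond mask and synthetic eye samples, not tied to any particular protocol or to HyperLynx), the check is simply whether any sampled eye point falls inside the mask polygon:

import numpy as np
from matplotlib.path import Path

# Invented inner eye mask: a diamond centered at mid-bit (in UI) and zero differential voltage.
mask = Path([(0.35, 0.0), (0.5, 0.08), (0.65, 0.0), (0.5, -0.08)])

# Synthetic eye samples: (time in UI, voltage in V) pairs standing in for an eye diagram overlay.
rng = np.random.default_rng(1)
ui = rng.uniform(0.0, 1.0, 50_000)
volts = np.sign(rng.standard_normal(50_000)) * 0.25 + rng.normal(0.0, 0.03, 50_000)

violations = mask.contains_points(np.column_stack([ui, volts]))
print("mask violations:", violations.sum(), "-> PASS" if not violations.any() else "-> FAIL")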

Compliance analysis is more dependable in practice than IBIS-AMI simulation, as you can run it even without vendor models, and it’s quicker and easier to do. HyperLynx reports show which signals passed or failed, plus the margins.

Automated Compliance Analysis

The traditional flow for post-route verification of serial channels is shown below, where red arrows indicate where data is examined, and the process is repeated for any adjustments.

Traditional compliance analysis flow

The HyperLynx flow is much simpler than the traditional compliance analysis flow, as automation helps speed up the process, so that all channels in a system design can be modeled and analyzed.

Using HyperLynx for post-route serial channel protocol verification

Summary

High-speed serial links require careful channel modeling, analysis, and results processing to ensure reliable operation and meet specifications. A traditional approach was compared to the HyperLynx approach, and the benefits of HyperLynx were noted:

  • Analyze all channels in a design for compliance
  • Overnight results
  • Reports which channels pass or fail, and by how much margin

Read the entire 13 page white paper from Siemens online.

Related Blogs


Enabling Imagination: Siemens’ Integrated Approach to System Design
by Kalar Rajendiran on 04-30-2024 at 6:00 am

Siemens EDA Important to Siemens

In today’s rapidly advancing technological landscape, semiconductors are at the heart of innovation across diverse industries such as automotive, healthcare, telecommunications, and consumer electronics. As a leader in technology and engineering, Siemens plays a pivotal role in empowering the next generation of designs with its integrated approach to system design. This fact may sometimes get drowned in a torrent of other news, particularly since people no longer hear the decades-old familiar name “Mentor Graphics” in the news. Siemens retired that name in 2021 and replaced it with Siemens EDA, a segment of Siemens Digital Industries Software. Siemens EDA’s financials are not separately disclosed publicly, as they were when Mentor Graphics was a standalone company. Naturally, there are lots of questions in people’s minds about Siemens EDA’s role within the broader ecosystem, how it is performing, and where it is headed.

At the recent User2User conference, Mike Ellow, Siemens EDA’s Executive Vice President, gave an excellent keynote talk that addressed all these questions and more. His talk provided insights into how Siemens EDA is doing, its vision, its key investment areas, and why Siemens EDA is an investment priority at Siemens. The following is a synthesis, with some excerpts, from Mike’s keynote presentation.

How is Siemens EDA Doing?

Siemens EDA demonstrated its EDA leadership through 14% year-on-year growth in its recently closed fiscal year. This is noteworthy given that Siemens EDA’s revenue does not include any significant IP revenue stream. The division also saw a double-digit percentage increase in R&D headcount, the highest investment in Siemens EDA’s history (excluding acquisitions).

The following charts provide more financial details and speak for themselves.

Why is Siemens Investing in Siemens EDA?

We in the semiconductor and electronics industries have always known that semiconductors are at the center of a changing world. The only difference now is that everyone else has recognized it too.

And the semiconductor industry is projected to grow at an incredibly accelerated pace, crossing $1 trillion by 2030 [Sources: International Business Strategies/Nov 2022 and VLSI Research/Dec 2022].

Siemens EDA Enabling A New Era for System Design

Siemens EDA’s comprehensive digital twin technology plays a critical role in the design, verification, and manufacturing of complex electronic systems. A digital twin is a virtual representation of a physical system or product, and in the context of electronic design automation (EDA), it encompasses various aspects of electronic system development. Siemens EDA focuses on three key investment areas that enhance the capabilities of its digital twin technology, providing an integrated, holistic approach to design, verification, and manufacturing.

Accelerated System Design:

Leveraging advanced tools and methodologies to speed up the design process, accelerated system design includes high-level synthesis, system-level design and verification, and virtual prototyping. These tools enable engineers to quickly model and simulate complex electronic systems, leading to faster time-to-market and improved quality.

Advanced Heterogeneous Integration:

Combining different types of components and technologies on a single package or substrate, advanced heterogeneous integration facilitates the development of highly integrated and compact systems. Siemens EDA’s solutions include 3D ICs, multi-die packaging, and advanced packaging and assembly.

Manufacturing-Aware Advanced Node Design:

This area involves creating electronic designs that take into account the intricacies of advanced manufacturing processes. Design for manufacturability (DFM), process technology co-design, and support for advanced node technologies enable engineers to create optimized designs that can be reliably manufactured.

Revolutionizing Electronic System Design

Some key solutions that Mike touched upon during his keynote talk include:

Veloce CS Accelerates All Areas of System Design

Siemens EDA’s recently announced Veloce CS platform offers high-speed emulation and prototyping capabilities, accelerating the verification of complex electronic systems. Veloce CS streamlines design, verification, and testing processes, enhancing overall product development efficiency. At 40B gates, it boasts the highest capacity in the industry. Key features include:

Early Software Development: Veloce CS provides a hardware platform for early software development, allowing software teams to test and debug their code on virtual hardware.

Full-System Simulation: Engineers can simulate entire systems, including hardware, software, and peripherals, to ensure all aspects of the design work together seamlessly.

Comprehensive Debugging: Advanced debugging features such as waveform viewing, performance profiling, and hardware-assisted tracing help engineers identify and resolve issues quickly.

3DIC Tooling

Siemens EDA’s 3D integrated circuit (3DIC) tooling spans its entire portfolio, providing comprehensive support for the design, verification, and manufacturing of 3DICs. This includes:

Design Tools: Siemens EDA offers tools for floorplanning, partitioning, and routing 3DIC designs to optimize performance and space usage.

Verification and Simulation: Advanced tools for simulating power, thermal, and signal integrity aspects of 3DICs ensure reliable performance.

Physical Implementation: 3DIC layout and design for manufacturability (DFM) tools help create detailed designs that can be manufactured efficiently.

3DIC Modeling and Visualization: Engineers can use advanced modeling and visualization tools to better understand spatial relationships and optimize designs.

Solido Statistical Analysis and Optimization

Solido is a technology suite focusing on the design, verification, and optimization of integrated circuits (ICs) using advanced statistical analysis and machine learning techniques, especially in the context of process variability. Solido’s tools allow engineers to handle the complexities of modern IC design, creating reliable, high-quality designs.

Tessent Embedded Analytics

Siemens EDA Tessent offers a suite of tools for design-for-test (DFT), design-for-diagnosis (DFD), and design-for-reliability (DFR) in semiconductor devices. These solutions improve testability, diagnosis, and reliability in electronic designs, contributing to the creation of high-quality, functional semiconductor devices.

Artificial Intelligence (AI) not new to Siemens EDA

Siemens EDA has been leveraging AI for many years, well before AI became a buzzword in the industry, through products such as Solido and Tessent. Now, of course, AI techniques are being leveraged by products across its entire EDA portfolio.

Summary

Siemens EDA’s integrated approach to system design, combined with its comprehensive EDA solutions, positions the company as a leader in enabling imagination and driving innovation in the semiconductor industry. Through early software validation, manufacturing-aware design, AI-enhanced design automation tooling, open ecosystem enablement, and advanced EDA tools, Siemens EDA is empowering engineers and designers to create the next generation of high-quality, leading-edge systems. As technology continues to evolve, Siemens EDA’s solutions will play a crucial role in shaping the future of electronics and ensuring continued success for its customers and the wider industry.

Also Read:

Design Stage Verification Gives a Boost for IP Designers

Checking and Fixing Antenna Effects in IC Layouts

Siemens Promotes Digital Threads for Electronic Systems Design