WEBINAR The Rise of the SmartNIC
by Don Dingee on 09-08-2022 at 10:00 am

A recent live discussion between experts Scott Schweitzer, Director of SmartNIC Product Planning with Achronix, and Jon Sreekanth, CTO of Accolade Technology, looked at the idea behind the rise of the SmartNIC and ran an “ask us anything” session fielding audience questions about the technology and its use cases.

Three phases of network interface cards

The standards collectively known as Ethernet have made fantastic progress since the early days of “thick net” and vampire tap media attachment units. In those days, simple network interface cards translated packets between the network cable and a parallel bus interface inside a computer, such as ISA.

Speeds were not that fast in this first phase of network interface cards, but the simple act of adding Ethernet connectivity opened all kinds of possibilities. The now-famous catchphrase “the network is the computer” defined this era with the ability to move files and send messages easily. Incremental speed improvements continued with successive releases of the standard, a shift to Cat5 cable, more powerful networking chips, and faster bus interfaces up to PCIe.

At higher wire speeds, computers can begin to fall behind even with faster interfaces and chips. Packets can arrive more quickly than some hosts can process them. The second phase, with TCP/IP offload engines, added DMA capability and front-end packet processing like checksums, freeing host processor cycles for other needs. Most flows were raw, with stateless packets, and offload engines mostly offered fixed functions with limited programmability.
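
To make “front-end packet processing like checksums” concrete, here is a minimal sketch of the Internet checksum (RFC 1071), one of the fixed functions these offload engines took over from the host. The Python framing is ours, purely for illustration; a real NIC computes this in fixed-function logic.

    def internet_checksum(data: bytes) -> int:
        """Ones'-complement sum of big-endian 16-bit words, per RFC 1071."""
        if len(data) % 2:
            data += b"\x00"                            # pad odd-length input
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]      # next 16-bit word
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
        return ~total & 0xFFFF

    # Cheap to state, but at wire speed it touches every byte of every packet --
    # exactly the per-packet work a host is happy to hand off.
    print(hex(internet_checksum(b"\x45\x00\x00\x3c\x1c\x46\x40\x00")))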

For advanced networks, stateful flows are critical to application performance and security. Each flow is set up with its attributes: IP addresses and ports, protocols and applications, user identities, and even content-specific information. Flow tables can be gigantic, with 16M entries or more. In this third phase, a SmartNIC rises to the challenge.
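
A stateful flow table is easy to sketch in software, and the sketch shows why it is hard in hardware: every arriving packet triggers a lookup-and-update against a table that may hold 16M or more entries. A minimal Python model follows; the field names are our own illustrative choices, not anything from the webinar.

    from dataclasses import dataclass

    @dataclass
    class FlowState:
        packets: int = 0
        bytes: int = 0
        app_id: str = ""        # application/user attributes ride along with the flow

    flow_table: dict[tuple, FlowState] = {}   # real tables: 16M+ entries in fast memory

    def process(src_ip, dst_ip, src_port, dst_port, proto, length):
        key = (src_ip, dst_ip, src_port, dst_port, proto)   # the classic 5-tuple
        state = flow_table.setdefault(key, FlowState())     # first packet sets up the flow
        state.packets += 1
        state.bytes += length
        return state

    process("10.0.0.1", "10.0.0.2", 40000, 443, "TCP", 1500)

Every packet performs this read-modify-write, which is why the discussion below keeps returning to memory bandwidth rather than raw compute.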

What makes a SmartNIC different?

A few observations about SmartNICs:

  • Packet processing is soft, fully programmable for any role in today’s network, and able to anticipate future requirements.
  • CPU cores don’t scale well for the high-speed data plane. (The webinar presenters pick on Arm a bit in their discussion, but RISC-V or other CPU cores are at a similar disadvantage; they still play a role in control plane management.) A high-end FPGA can be configured for specific data plane roles and reconfigured on the fly when conditions such as a denial-of-service attack are detected.
  • Everything needed for stateful flows must run from memory, so FPGA memory performance and interconnect are critical. Technologies like HBM or GDDR6 keep data moving in the FPGA fabric.

Here’s a block diagram of a SmartNIC programmable accelerator based on the Achronix Speedster 7t1500 FPGA, a part combining four 400Gb (or sixteen 100Gb) Ethernet ports with a multi-fracturable MAC array and a PCIe Gen 5 interface. Another key element of the Speedster 7t architecture is the innovative 2D network on chip, or 2D NoC. The 2D NoC is a hardened data path that connects all of the FPGA’s external interfaces and memory to each other and reaches deep into the FPGA fabric. Using the 2D NoC reduces latency compared to routing data across the chip through FPGA logic.

Like any workflow-optimized architecture, the theme is to run the Ethernet pipes at speed, keep as many banks of processing and memory as busy as possible, and work on multiple packets in the pipeline. At several points, the presenters mention this is not the high-frequency trading use case, a stateless flow where every nanosecond counts. A few nanoseconds of latency in a stateful flow make little difference at these wire speeds.

Some good questions … and answers

One welcome difference in this Rise of the SmartNIC webinar is that there isn’t much presentation material. After a short preamble with the agenda and some industry factoids, the image above is the only slide in the live stream. More time is spent on audience questions, including these:

  • Would a P4 engine run in a SmartNIC?
  • Is “wormhole routing” still a thing, and would a SmartNIC help?
  • Why should both the packet and flow engines be FPGA cores?
  • How does timing closure in the FPGA affect packet processing determinism?
  • What is the role of timestamping in multiple packets from different links?

The answers might surprise you, but you’ll have to watch to find out. This webinar is archived for viewing anytime – follow the link below to register and view the entire discussion.

Achronix Webinar: The Rise of the SmartNIC

Also Read:

A clear VectorPath when AI inference models are uncertain

Time is of the Essence for High-Frequency Traders

How to Cut Costs of Conversational AI by up to 90%


Application-Specific Lithography: 5nm Node Gate Patterning
by Fred Chen on 09-08-2022 at 6:00 am

It has recently been revealed that the N5 node from TSMC has a minimum gate pitch of 51 nm [1,2] with a channel length as small as 6 nm [2]. Such a tight channel length entails tight CD control in the patterning process, well under 0.5 nm. What are the possible lithography scenarios?

Blur Limitations for EUV Exposure

A state-of-the-art EUV system has limited options for 51 nm pitch. Assuming the use of sub-resolution assist features (SRAFs) [3], an ideal binary image can be projected with good NILS (normalized image log-slope) and depth of focus; however, blur spoils this outcome, diminishing the intensity modulation (Figure 1).

Figure 1. Impact of blur on a 51 nm pitch image on a 0.33 NA EUV system. A Gaussian or exponential blur function is convolved with the blur-free image. Only relative blur magnitudes are given here.

Blur itself cannot be expected to have a fixed magnitude, as secondary electron yield is itself a variable quantity [4]. This alone generates a massive range of possible CDs. Moreover, blur from electrons is more exponential in nature than Gaussian [5]. This further worsens the impact, as exponential blur accumulates more contributions from electrons further away from the point under consideration (Figure 2).

Figure 2. Exponential vs. Gaussian blur. Exponential blur decays faster at short distances, while Gaussian blur decays faster at large distances.
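
The qualitative effect in Figures 1 and 2 is easy to reproduce numerically. Below is a hedged numpy sketch, assuming an idealized two-beam image at 51 nm pitch and an arbitrary blur width; like the article, it treats blur magnitudes as relative only.

    import numpy as np

    pitch, sigma = 51.0, 5.0                  # nm; blur width is illustrative only
    x = np.arange(-102, 102, 0.1)             # position grid, nm
    ideal = 0.5 * (1 + np.cos(2 * np.pi * x / pitch))   # idealized aerial image

    r = np.arange(-30, 30, 0.1)
    gauss = np.exp(-r**2 / (2 * sigma**2))    # Gaussian kernel
    expo = np.exp(-np.abs(r) / sigma)         # exponential kernel, heavier far tails

    def modulation(kernel):
        img = np.convolve(ideal, kernel / kernel.sum(), mode="same")
        c = img[300:-300]                     # trim convolution edge artifacts
        return (c.max() - c.min()) / (c.max() + c.min())

    print("no blur    :", round(modulation(np.array([1.0])), 3))
    print("gaussian   :", round(modulation(gauss), 3))
    print("exponential:", round(modulation(expo), 3))   # lowest contrast of the three

For equal kernel widths, the exponential case transmits the least modulation at this pitch, consistent with the point above about accumulating contributions from distant electrons.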

Consequently, with CD changes easily approaching or even exceeding 50%, EUV exposure is unsafe for gate patterning, which requires tolerances <10%. High-NA suffers from the same issue. Even if the NA went as high as the vacuum limit of 1.0 (Figure 3), blur, not wavelength/NA, dominates the image.

Figure 3. Blur degrades the ideal image even for the maximum EUV NA of 1.0.

Solution: SADP

The situation is changed entirely if the gate CD is not determined by lithography directly, but by a sidewall spacer width. The lithography pitch for spacer patterning is doubled to 102 nm, which is easily accommodated by ArF immersion lithography. This self-aligned double patterning (SADP) approach has been around for a long time [6,7]. Thus, this gate patterning approach will likely never go away.
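
The pitch arithmetic behind SADP is worth making explicit. Here is a toy model with illustrative dimensions (a 6 nm spacer standing in for the channel length discussed above):

    P = 102.0       # nm: mandrel (lithography) pitch, comfortable for ArF immersion
    s = 6.0         # nm: deposited spacer width = final gate CD, set by film, not litho
    w = P / 2 - s   # nm: mandrel CD chosen so the spacers land on a uniform pitch

    gate_centers = []
    for i in range(3):                                   # three mandrels
        left, right = i * P, i * P + w
        gate_centers += [left - s / 2, right + s / 2]    # one spacer per sidewall

    diffs = [round(b - a, 1) for a, b in zip(gate_centers, gate_centers[1:])]
    print(diffs)    # all 51.0: final pitch is P/2, and the 6 nm CD was never printed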

References

[1] https://www.angstronomics.com/p/the-truth-of-tsmc-5nm

[2] https://www.dolphin-ic.com/products/standard-cell/tsmc_5ff_cell.html; https://www.dolphin-ic.com/products/standard-cell/tsmc_4ff_cell.html

[3] http://www.lithoguru.com/scientist/litho_tutor/TUTOR43%20(Nov%2003).pdf

[4] H. Fukuda, “Stochasticity in extreme-ultraviolet lithography predicted by principal component analysis of Monte Carlo simulated event distributions in resist films.” J. Appl. Phys. 132, 064905 (2022).

[5] M. Kotera et al., “Extreme Ultraviolet Lithography Simulation by Tracing Photoelectron Trajectories in Resist,” Jpn. J. Appl. Phys. 47, 4944 (2008).

[6] E. Jeong et al., “Double patterning in lithography for 65nm node with oxidation process,” Proc. SPIE 6924, 692424 (2008).

[7] https://seekingalpha.com/article/4513009-applied-materials-smic-move-another-headwind

This article first appeared in LinkedIn Pulse: Application-Specific Lithography: 5nm Node Gate Patterning.

Also Read:


Does SMIC have 7nm and if so, what does it mean
by Scotten Jones on 09-07-2022 at 10:00 am

Recently, TechInsights analyzed a bitcoin miner chip fabbed at SMIC and declared that SMIC has a 7nm process. There has been some debate as to whether the SMIC process is really 7nm, and what it means if it is. I wanted to discuss the case for and against the process being 7nm, and what I think it means.

First off, I want to say I am not going to reveal all the specific pitches; if you want that data, you need to purchase a report from TechInsights.

Is it 7nm?

The key pitches for a process technology are Fin Pitch (FP), Contacted Poly Pitch (CPP), and Metal 2 Pitch (M2P). The SMIC FP is larger than the TSMC 10nm FP, and the CPP and M2P are the same as TSMC 10nm. So is this really just a relaxed 10nm process? It is not that simple.

The SMIC process also has some Design Technology Co-Optimization (DTCO) features not seen at 10nm. Specifically, TSMC and Samsung used 8.25-track and 8.75-track cell heights respectively at 10nm, while SMIC uses a 6-track cell, something Samsung didn’t do until 5nm and TSMC until 7nm. SMIC also has a Single Diffusion Break (SDB), something Samsung had at 10nm but moved away from at 7nm and didn’t return to until 5nm, and TSMC didn’t implement until their second-generation 7nm process (7+).

The bottom line for me is that the high-density logic cell density for SMIC is 89 million transistors per square millimeter, very similar to the Samsung and TSMC first-generation 7nm processes. In my opinion this is a 7nm-“class” process and appears to be an acceptable 7nm alternative.
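
As a sanity check on that figure, here is a back-of-envelope density estimate from publicly reported 10nm-class pitches; the pitch values and the NAND2-cell heuristic below are our assumptions, not the TechInsights data.

    def mtr_per_mm2(cpp_nm, m2p_nm, tracks):
        """Crude logic density from a 2-input NAND (4 transistors), assuming a
        cell roughly 3 CPP wide by (tracks x M2P) tall -- a common heuristic."""
        cell_area_nm2 = (3 * cpp_nm) * (tracks * m2p_nm)
        return 4 / cell_area_nm2 * 1e12 / 1e6    # 1 mm^2 = 1e12 nm^2; report millions

    # Publicly reported TSMC 10nm-class pitches with a 6-track cell:
    print(round(mtr_per_mm2(cpp_nm=66, m2p_nm=44, tracks=6)), "MTr/mm^2")

The result lands in the high 70s, the same ballpark as the 89 MTr/mm² above; published density metrics also mix in flip-flop cells, which shifts the exact number.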

How did SMIC get here?

I have seen several comments that SMIC copied TSMC’s first-generation 7nm process, and while they may have adopted elements of it, there are a lot of differences too. For example, as noted above, all the pitches are relaxed to 10nm-or-greater dimensions, and some of the DTCO features are more advanced than TSMC’s first-generation 7nm.

TSMC’s first-generation 7nm process was an all-optical process with no EUV layers. Because the US is blocking EUV systems from shipping to China, SMIC is likewise limited to an optical approach, and this process has no EUV layers.

I find the large CPP dimension particularly interesting. CPP is the combination of gate length, contact width, and gate-to-contact spacer thickness. Gate length is limited by leakage and device type; contact width is limited by a company’s ability to drive down specific contact resistance and therefore achieve an acceptable contact resistance; and gate-to-contact spacer thickness is limited by the capacitance of the spacer material and the resulting gate-to-contact parasitic capacitance. The fact that the CPP is “10nm-like” suggests SMIC is still struggling with these processes. It is common to increase CPP to improve performance, and this suggests to me that SMIC had to do that to reach acceptable performance.
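
That decomposition is simple enough to write down. A sketch with hypothetical component values (the real dimensions are in the TechInsights report):

    def cpp(gate_len, contact_w, spacer_t):
        # one gate, a spacer on each side, then the contact: Lg + 2*t_sp + Wc
        return gate_len + 2 * spacer_t + contact_w

    # Hypothetical values: leakage floors Lg, contact resistance floors Wc,
    # and parasitic capacitance floors the spacer thickness.
    print(cpp(gate_len=20, contact_w=22, spacer_t=12), "nm")   # -> 66 nm, "10nm-like"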

Where can SMIC go from here?

Assuming EUV systems continue to be unavailable in China, SMIC’s options for further improvement are limited. It seems likely the US will continue to block EUV shipments to China, and I don’t see China developing its own EUV system any time soon.

The most straightforward approach in my view is to reduce the pitches to match TSMC’s first-generation 7nm optical process; combined with the SDB and the 6-track cell, this would yield a second-generation 7nm or even 6nm process. I believe SMIC should be able to achieve this given some time to further optimize the process steps, and it could be a reasonable goal for SMIC for 2023. This contrasts with Samsung and TSMC, which have both had 5nm in production since 2020 and are currently ramping 3nm, with 2nm in development.

The next obvious question is whether SMIC could get to 5nm. Without EUV, going below 7nm requires increasingly complex multi-patterning with increasingly restrictive design rules and spiraling costs. It is theoretically possible to do 5nm, or even 3nm, all-optically: Self-Aligned Quadruple Patterning with immersion lithography can produce a 20nm pitch, small enough for any 3nm requirement, but it would take a lot of cut masks for fin and metal patterning to get there.

Conclusion

SMIC appears to have a serviceable first-generation 7nm process now, with a reasonable prospect of getting to second-generation 7nm/6nm in the near future. 5nm and 3nm, while theoretically possible, would be highly constrained and expensive process versions if pursued, due to the lack of EUV.

Also Read:

SEMICON West 2022 and the Imec Roadmap

ASML EUV Update at SPIE

The Lost Opportunity for 450mm

Intel and the EUV Shortage


Samtec is Fueling the AI Revolution
by Mike Gianfagna on 09-07-2022 at 6:00 am

It’s all around us. The pervasive use of AI is changing our world. From planetary analysis of weather patterns to monitoring your vital statistics to assess health, it seems as though smart everything is everywhere. Much has been written about the profound impact AI is having on our lives and society. Everyone seems to agree that AI software algorithms deliver the transformative technology that powers these changes. Those who are more thoughtful about the process (and perhaps work in the semiconductor industry) realize it is the incredible processing power of semiconductors that brings the software to life. There is a major conference dedicated to hardware and AI coming soon. If you want to learn more, the AI Hardware Summit is the place to be. More on that in a moment. There is another aspect of the AI transformation that is the subject of this post. It is the critical nature of information flow in AI systems. It is here that Samtec is fueling the AI revolution.

Data Everywhere

For a very long time, data was generated by humans interacting with applications. This created something of a self-limiting process. Humans can do so much work per day, so aggregate data grew at a predictable and steady rate. Around 2018 that changed. It was then that machines began generating data. Think autonomous vehicles, aircraft, personal monitoring devices and the ubiquitous use of sensors in almost everything. The fuel for AI is data, so this change had a lot to do with the AI revolution. A useful measure is a zettabyte, or 1,000,000,000,000,000,000,000 (10²¹) bytes of information.

According to a Forbes article, there were about 0.004 zettabytes of data in the world in 1997. According to Statista, the world housed 47 zettabytes of data in 2020, a number projected to grow to 612 zettabytes by 2030 and 2,142 zettabytes by 2035. You get the picture.
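
Those figures imply a strikingly steady compound growth rate; here is a quick check using the article’s own numbers, nothing more:

    figures_zb = {1997: 0.004, 2020: 47, 2030: 612, 2035: 2142}   # zettabytes

    def cagr(v0, v1, years):
        return (v1 / v0) ** (1 / years) - 1    # compound annual growth rate

    print(f"1997-2020: {cagr(figures_zb[1997], figures_zb[2020], 23):.0%} per year")
    print(f"2020-2035: {cagr(figures_zb[2020], figures_zb[2035], 15):.0%} per year")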

Applying deep analytics to this data to find the world-changing facts hidden there is a significant benefit of AI. Data is the fuel that powers AI. As processing speed and latency demands grow, more of this processing is being done at the edge or on the device itself; there is simply not enough time to get to the cloud and back. All of this creates substantial complexity in the form of heterogeneous architectures: many collections of CPUs, GPUs, DSPs, FPGAs, and custom-built processors working in unison.

All this creates substantial demand on data communication across the architecture. This is where Samtec brings a lot to the table and how Samtec is fueling the AI revolution.

Connectivity Solutions for AI Architectures

Samtec brings value to AI systems design across three primary areas:

  • Next Gen System Expertise: The connectivity solutions provided by Samtec are engineered with the complete system in mind. By taking this big picture approach, all design parameters such as throughput, density, scaling and power/thermal management can be addressed.
  • High-Performance Interconnects: This is the foundational expertise delivered by Samtec. Its large catalog of advanced interconnect solutions offers something for every design challenge. Its ultra-high density, signal-integrity optimized, and high-power interconnects fit well with the challenges of AI system design.
  • Full System Support: Samtec collaborates with its customers. This simple strategy is the margin of victory in many applications. The company’s industry-leading expertise extends the capabilities of any design team, so the entire high-performance signal channel can be optimized.

You can learn more about Samtec on SemiWiki and on Samtec’s website. Here, you can see the full impact of Samtec and its products on high-profile applications, including chipsets, embedded platforms, accelerators, and application-specific architectures.  You will learn a lot.

The Next Big Event

I mentioned the AI Hardware Summit. I attended the first one a few years ago. The conference has grown dramatically since then. It turns out there are many, many AI-focused events. But not that many that focus on the hardware side of AI. This is what brings AI to life and the AI Hardware Summit has a singular focus here.

The event will be September 13-15, 2022 at the Santa Clara Marriott. Samtec will be exhibiting there. You can even get a break on the registration fee if you mention them. See the details below. Stop by and see how Samtec is fueling the AI revolution, live and in person.

Also read:

A Look at the PCIe Standard – the Silent Partner of Innovation

A MasterClass in Signal Path Design with Samtec’s Scott McMorrow

Passion for Innovation – an Interview with Samtec’s Keith Guetig


Webinar: Semifore Offers Three Perspectives on System Design Challenges
by Mike Gianfagna on 09-06-2022 at 10:00 am

The exponential increase in design complexity is a popular topic these days. In fact, it’s been a topic of discussion for a very long time. The explosion of chip and system design complexity over the past ten years has become legendary and haunts many of us daily. A lot of the complexity we face has to do with coordinating across an ever-increasing ecosystem. Chip and software design are now intimately linked, and verification must encompass both, including the subtle interactions between the two. Against this backdrop, an upcoming webinar from Semifore caught my eye. The event focuses on a critical part of the system design problem – the interface between hardware and the software that controls it. By cleverly “channeling” three points of view, the webinar brings a lot of key pieces of the puzzle to light. The webinar is coming soon, and so is a registration link. Read on to find out how Semifore offers three perspectives on system design challenges.

See the replay here

Webinar Background

The hardware/software interface, or HSI, is the critical piece of technology that allows software to communicate with the hardware it’s controlling. With all the dedicated processors in most designs today, this is a very important part of the architecture. If it doesn’t work, the product doesn’t ship. If it has a subtle bug, new features may be impossible to add later.

All parts of the design team have their own view of the HSI – what they need it to do, how they want it done, and what they need to know about it to get their job done. This is just the start; there are many more cross-dependencies. Software teams struggle to get involved early in the hardware portion of the design, and verification teams struggle to find ways to test the HSI across both software and hardware interactions. And architects often have a vision of how the system should work that may not be shared by the software and verification teams.

In this entertaining webinar, you will hear the perspectives of an RTL architect, verification engineer and firmware developer. Each will bare their soul regarding their challenges and frustrations. Who has the best perspective, and how can these teams all work better toward a superior system design?

These are some of the questions that will be answered during this unique and informative webinar. To whet your appetite, here are some key perspectives from each team member. The fact that all three speakers resemble each other is by design.

  • The RTL Architect is the first to accuse the software team of being the long pole for design completion. The benefits of byte enables and the challenges of endianness are touched on (a minimal model of a byte-enabled write appears after this list). This person admits losing sleep over building complexity that can’t be verified reliably.
  • The Verification Engineer provides some background on why the verification job has gotten so difficult. Byte enables are one reason; there are more. Generally, clever design tricks that save space in hardware result in real challenges in verification. The software team is once again singled out as the long pole for design completion.
  • The Firmware Driver Developer admits to being the long pole up front. He points out that, with regard to design completion, “it doesn’t ship until the device drivers work” – the RTL Architect said that, too. He observes that, for a long time, his team’s work began when everyone else was done, guaranteeing their long-pole status. Shift-left approaches are starting to change that. This person has more ideas to offer.
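
As promised above, here is a minimal Python model of the byte-enable behavior all three speakers wrestle with: a 32-bit register write guarded by a 4-bit byte-enable mask. The register name and values are invented for illustration; Semifore’s actual generated code is not shown here.

    def csr_write(reg_value: int, wdata: int, byte_enable: int) -> int:
        """Write a 32-bit CSR, updating only byte lanes whose enable bit is set
        (lane 0 = least significant byte, little-endian lane numbering)."""
        for lane in range(4):
            if byte_enable & (1 << lane):
                mask = 0xFF << (8 * lane)
                reg_value = (reg_value & ~mask) | (wdata & mask)
        return reg_value & 0xFFFFFFFF

    CTRL = 0x12345678                                 # invented register contents
    # Update only the low half-word; the upper bytes must survive untouched --
    # exactly the corner case verification has to prove for every register.
    print(hex(csr_write(CTRL, 0x0000BEEF, 0b0011)))   # -> 0x1234beef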

To Learn More

If you face system design complexity challenges, you will learn some key points of view from across the design ecosystem and hear about some high-impact strategies to tame complexity as well. I highly recommend this webinar. You can see the replay here and quickly learn how Semifore offers three perspectives on system design challenges.

About Semifore

Software engineers outnumber hardware engineers by 5X or more for a typical advanced semiconductor design. Complex software algorithms must control a growing array of specialized processors and hardware accelerators to deliver a robust product.

The HSI provides the technology for software to control this hardware and it forms the foundation of the entire design project. Semifore’s CSRCompiler™ system automates the creation of this foundation.

You can learn more about Semifore from this CEO interview, and don’t forget to check out the webinar to learn how Semifore offers three perspectives on system design challenges.

Also read: 

The Rising Tide of Semiconductor Cost

The Roots Of Silicon Valley

The Semiconductor Ecosystem Explained


Today’s SoC Design Verification and Validation Require Three Types of Hardware-Assisted Engines
by Daniel Nenni on 09-06-2022 at 6:00 am

Lauro Rizzatti’s two-part series for SemiWiki readers on why three kinds of hardware-assisted verification engines are now a must-have for semiconductor designs continues today. In the interview below, Juergen Jaeger, Prototyping Product Strategy Director in the Scalable Verification Solution division at Siemens EDA, addresses why different hardware platforms should be used in verification, and for which tasks.

In part one of the series, Lauro interviewed Vijay Chobisa, Product Marketing Director in the Scalable Verification Solution division at Siemens EDA, about why verification of 10+ billion-gate designs requires a distinct architecture. That blog post can be found here.

LR: Siemens EDA acquired proFPGA, a popular FPGA prototyping system, and integrated it into the Veloce hardware-assisted verification platforms. What drove this acquisition and what has been the customer response?

JJ: Let me first address the question of what drove the acquisition.

For many years, FPGA designers created FPGA prototypes in-house. Lately, though, the task has become challenging and expensive because of the complexity of the latest generation of FPGAs. In addition, because large ASIC designs require multiple FPGAs for their mapping, designing an FPGA prototype has evolved into a significant project, rather expensive and time-consuming, making off-the-shelf systems cost-effective.

Common customers encouraged Siemens to partner with PRO DESIGN because of the synergies between the Veloce emulation platform and the proFPGA family. An OEM agreement was signed in 2017, and engineering work was done on both sides to integrate the Veloce Prototyping System software with proFPGA. With this implementation, we accelerated Veloce proFPGA’s deployment in Veloce customer installations.

It turned out that customer response to the acquisition has been very favorable. The Veloce proFPGA boards are of high quality; the system is scalable and flexible and supports various AMD and Intel FPGAs. It is capable of fulfilling many needs in the prototyping space. Today, under the umbrella of Siemens, benefiting from a global sales channel reaching a wide customer base, its adoption is expanding rapidly.

LR: With the addition of proFPGA to your Veloce Strato+ emulator and Veloce Primo FPGA enterprise prototype, you now offer three different, but complementary, platforms. Can you describe the role of each platform?

JJ: Let me start with what drives customer behavior to choose various hardware platforms. If you look at emulation, emulators can do many different things. You can perform hardware verification, software development, power analysis, DFT coverage, and more tasks. Customers primarily purchase an emulator to reduce the risk of re-spinning the chip itself (confirm that the hardware and baseline software perform as expected). Predominantly, emulators are used for RTL verification, namely, to get the hardware design clean. That means that emulators like our Veloce Strato/Strato+ systems need certain characteristics like very fast and reliable compile times, and superior debug capabilities, all mandatory for hardware verification. And then of course you can carry out many other tasks because you already own it. Those additional use modes increase the value of what you can do with it.

Over the last four to five years, software contents in chips and SoCs have grown dramatically. So did the complexity of SoC hardware with multicores, accelerators, and DSPs, as well as lots of interfaces that require drivers and firmware. As a result of that, embedded software teams have expanded rapidly, which consequently led to an escalating demand to run software workloads much earlier in the project cycle.

Emulators can certainly accomplish that, but an emulator is also a relatively expensive platform, and again, the primary reason for buying it is to verify the RTL code. This opened the door to the FPGA prototyping platform. Compared to an emulator, an FPGA prototyping platform delivers higher performance – let’s say five times faster run-time – at lower cost, which helps proliferate its deployment across several software engineering teams. That’s the second platform you need here, covered by our Veloce Primo.

Today, SoCs include lots of different interfaces, depending on what the chip does. Popular ones include PCIe, USB, MIPI, and many others. All these interfaces must be verified in the context of the interface’s basic functionality. Teams must also verify the software that runs on those interfaces, exercising and utilizing them in the correct way to ensure that hardware and software work together. That is where a platform like Veloce proFPGA comes into play, because with Veloce proFPGA you can include the interface and run it at speed – for example, a real PCIe interface connected to a graphics card.

That is why we offer three platforms. Hardware emulation is the perfect platform for full chip and SoC verification. Enterprise prototyping targets embedded software validation as well as system-level validation; for these tasks, the prototyping system needs certain characteristics such as fast transition from emulation, reliable compile, sufficient debug, and higher performance than emulation. And then there is at-speed testing of interfaces with proFPGA.

Trying to merge all of that into one tool may be possible, but then you end up with one tool that can somewhat do everything but does not do anything right or excel in any task that customers really need.

LR: Your two competitors in this field offer two complementary platforms, that is, emulators and FPGA prototypes. Why do you believe that three platforms are necessary?

JJ: In a nutshell, you want to have the optimal platform, the best solution for each phase in your project to reduce the risk of re-spins, get your software validated, keep the verification/validation cycle on schedule, and to deliver the end product on time and on budget to your customers.

In order to do that, I’m convinced that you need three platforms that are best-in-class at what they are intended to do: emulation for hardware verification, power analysis, and all of the tasks you run on it; enterprise prototyping to bring up your software on the pre-silicon chip, comprising the full operating system, firmware, and application software; and fast proFPGA prototyping for at-speed interface validation.

LR: To conclude, you have been working in the hardware-assisted verification domain for quite a while. What are some of the aspects of the job that continue to motivate and fascinate you most?

JJ: From childhood on, I was always fascinated by learning new things and building things. Now, if you think about what verification and especially hardware verification is, it puts you on a platform with the most advanced designs and systems in the industry. You are working with customers on leading-edge projects that will be launched, in some cases, years from now.

You are also dealing with some of the most technically challenging and costly problems in the industry: verification of billion-gate designs executing very complex software workloads.

In my case, I enjoy being at the forefront of technology. It gives me the opportunity to learn new things, and that keeps me young.

LR: Thank you, Juergen.

JJ: You’re very welcome.

Also read:

Resilient Supply Chains a Must for Electronic Systems

Five Key Workflows For 3D IC Packaging Success

IC Layout Symmetry Challenges


Why China hates CHIPS
by Craig Addison on 09-05-2022 at 6:00 am

The CHIPS and Science Act has its fair share of critics, with detractors calling it corporate welfare for “losers” like Intel, or saying it lacks guardrails to prevent companies from making legacy chips in China.

One of the most vocal opponents of the act has been China’s communist-ruled government.

CHIPS – an acronym for Creating Helpful Incentives to Produce Semiconductors – offers $52.7 billion in subsidies for chip investments on American soil.

China’s foreign ministry spokesman said the act was “economic coercion” by the US. State-owned newspaper Global Times slammed CHIPS as an attempt to “artificially isolate China from the industrial chain”.

More recently, state-backed industry groups have joined the chorus. The head of China’s Semiconductor Industry Association (CSIA) said parts of the act violated “fair market principles”, and called on the US to “correct its mistakes”.

Language like that is often used in Communist Party propaganda, so the CSIA statement was more likely aimed at pleasing Beijing than swaying foreign sentiment on the issue.

An official at a different trade group said CHIPS would disrupt “normal” cooperation and investment between China and the US, while another labeled it a form of “semiconductor hegemony”.

What’s going on here?

Besides the hypocrisy – China’s own National IC Industry Investment Fund, aka the Big Fund, raised $50 billion to invest locally – Beijing is worried that the days of foreign chip makers investing billions in China may be over.

That would weaken the country’s role in the global supply chain, and limit the knowledge transfer that occurs when Chinese engineers trained in a foreign-owned venture leave to start their own company.

Another reason for the angst on the Chinese side is that their so-called “self sufficiency” efforts in semiconductors are not paying off, at least not fast enough.

Sorry, SMIC’s 7-nm chip produced without an EUV scanner doesn’t count. Regardless of the headlines and armchair experts proclaiming that it leveled the playing field between China and the West, SemiWiki readers know that producing an experimental chip is not the same as making one in high volume at high yields.

More worrying for Beijing, though, is the fact that several senior Chinese officials in charge of disbursing the Big Fund money are now under investigation for graft.

While there are bound to be differences of opinion over how best to spend the CHIPS money, Beijing won’t feel any better knowing that none of it will line the pockets of American chip executives, several of whom were invited to the White House to witness President Joe Biden sign the bill into law.

Also read:

The Semiconductor Cycle Snowballs Down the Food Chain – Gravitational Cognizance

2022 Semiconductor Supercycle and 2023 Crash Scenario

An EDA AI Master Class by Synopsys CEO Aart de Geus


MAB: The Future of Radio is Here
by Roger C. Lanctot on 09-04-2022 at 6:00 am

Good storytelling is what helps drive change, engage consumers, and define progress. Steve Newberry, CEO of Quu, is a master of the craft.

He told two stories, in particular, at the Michigan Association of Broadcasters event last week in Traverse City, Mich. The first story simply noted for the broadcasters in attendance that, whether they knew it or not, their world was changed forever when the National Highway Traffic Safety Administration mandated backup cameras in cars.

With the stroke of a regulatory pen, NHTSA decreed that within a few years – in other words, now – all cars would be outfitted with eight-inch (or larger) LCD displays to enable drivers to see where they were going when driving backwards (in the interest of saving approximately 150 lives each year). Simultaneously, auto industry engineers suddenly gained a huge canvas upon which to render the future of content consumption in cars – bye-bye narrow displays with five preset buttons and a couple of knobs. Hello, future radio!

The second story Steve told was of a meeting between the National Association of Broadcasters and General Motors, where a senior GM executive said:

“I love meeting with you guys – always do – but I must be honest. How can we expect radio to deliver on this technology in the future when you guys can’t get your act together on the basic RDS and HD information? Radio is a mess.”

The participants in that meeting needn’t have looked any further than the dashboards of the cars parked outside the building. Every one of them would be guaranteed to have a different presentation of the relevant metadata for the same radio station with the same content, including information displayed in all caps or not, words cut off or misspelled, or, most likely, information missing entirely.

SOURCE: Slide supplied by Quu showing different presentations of the same content from the same broadcaster in different cars.

The presentation of the information in the cars – wanting though it may be – is no fault of the auto makers. Auto makers understand that the radio is the one piece of customer interaction that they can actually control. Designers and engineers are doing their best to capture the information delivered via the over-the-air broadcast signal and render it for the convenience of the consumer.

Sadly, broadcasters do not universally have their act together. This creates confusion for the consumer and a disappointing experience in the dashboard – in spite of the best efforts of the car makers.

In an age when Google, Apple, Amazon, and other tech companies are seeking to commandeer automotive user experiences, there is no room for failure of this sort. Broadcasters need to get their collective act together simply to get in the game and participate in the snazzy new interfaces being delivered by car makers such as Audi and Mercedes Benz.

On stage at the MAB event was Juan Galdamez, senior director of broadcast strategy and business development at Xperi, which is laboring diligently to deliver the back-end systems and consumer-facing content capable of supporting those snazzy interfaces.

Galdamez emphasized, though, that Xperi is merely a toolkit for the automotive industry. It is worthless without proper inputs from broadcasters in the form of carefully curated metadata.

Fred Jacobs, moderator of the panel and owner of Jacobs Media, pointed out – ominously – that for the first time consumers surveyed as part of Jacobs Media’s annual TechSurvey identified Bluetooth, not radio, as “the most important media feature” among new car buyers. In other words, consumers want to plug their phones into their infotainment systems and project their mobile apps and content.

This is unquestionably bad news for broadcasters and auto makers. Screen projection solutions such as Apple’s CarPlay and Google’s Android Auto favor Internet sources of content over access to local media. Once these systems take over the screen, it can be nearly impossible for users to find their way back to the radio.

Broadcasters need to clean up their acts. The tools and the screen real estate are in place to deliver the future of radio. For some broadcasters, that future has already arrived and they are thriving. Broadcasters need to embrace digital technology to make their stations easier to discover and enjoy. The auto makers have already done their part.

Also read:


Podcast EP105: Cadence STA Strategy and Capabilities, Today and Tomorrow with Brandon Bautz
by Daniel Nenni on 09-02-2022 at 10:00 am

Dan is joined by Brandon Bautz, senior group director of product management responsible for silicon signoff and verification product lines in the Cadence Digital & Signoff Group. Brandon has more than 20 years of experience in chip design and the EDA industry and has been at Cadence for over 10 years.

Dan explores the current and future design challenges being addressed by STA at Cadence. Strategies to deliver cost-effective performance in the face of exploding design complexity are discussed. The role of STA to address variability, aging, IR drop/max frequency issues and 3D implementation are also discussed among other topics.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The Semiconductor Cycle Snowballs Down the Food Chain – Gravitational Cognizance
by Robert Maire on 09-02-2022 at 6:00 am

-Where are we in the chip cycle? Why is it different this time?
-No one rings a bell to indicate the top or bottom of a cycle
-Could the lack of self-awareness lead to a worse downturn?
-Who will weather the cycle better & come out on top

Gravitational Cognizance
“A cartoon character will not fall until they realize they should be falling”

We wasted too much of our ill-spent youth watching cartoons. One of our favorites was Wile E. Coyote. The physics was unique but very repeatable: the character in question would find himself with no visible means of support, but would not succumb to gravity until he recognized his position or another character pointed it out to him.

Typically, Wile E. Coyote would run full speed off a cliff but not fall until he noticed his predicament.

This reminds us very much of where the semiconductor industry is today. The industry has been running so fast, and is so focused on speed, that it hasn’t yet realized that the basis that supports it has gone away: demand has dropped and will see further declines.

We have been talking about the industry being in a down cycle for months now. Memory prices have dropped (usually one of the first signs), inventories have grown, and lead times are down. More importantly, demand for semiconductor-rich electronic devices is dropping.

However, some semiconductor and semiconductor equipment companies are still reporting great earnings, record breaking earnings in some cases. This makes it very difficult to talk about a down cycle when you are still making big bucks.

The speed at which the industry has been running has driven so much momentum into the industry that gravitational cognizance has been delayed.

Still living on backlog and fear

In many cases the industry is living on backlog or non-cancelable orders placed near the height of the cycle, despite the fact that product is in inventory or readily available. In other cases, customers are so fearful (like the auto industry) that they continue to order even though they already have enough, simply because they don’t want a repeat of the shortages.

Semiconductor equipment is worst in this regard, as no one dares to give up their place in the queue for litho tools lest shortages start up again.

Realization may hit home when the channel is fully stuffed

In the past we have seen instances where crates of semiconductor equipment piled up on the receiving dock because they couldn’t be installed quickly enough or there was no room. In one past case there was a parking lot full of crates.

Wafers sit in the channel at OSATs waiting to be packaged and tested. Manufacturers, like Micron, start to hold product off the market to support pricing.

Momentum could cause a huge overshoot of capacity

Given the absolutely huge momentum the industry has had for several years, it is not unreasonable to think that we could wind up with one of the largest cases of excess capacity the industry has seen in many cycles.

A lot has been said about the industry being more conservative in its spending and more cautious than in the bad old days of cycles past, but the rate of equipment orders over the last year or more has been anything but cautious.

Where is the snowball in the downhill food chain?

We are still in the early stages of a down cycle, as not everyone agrees with, admits, or recognizes reality. We are concerned about processor demand from hyperscalers and data centers, and memory demand in consumer devices, but there is not yet full-fledged capitulation. We have seen virtually no impact on the equipment makers’ financials other than supply chain issues, primarily related to COVID, which have been relatively minor. So we are still at the snowball stage, where the issue has not yet grown to snowman size and encompassed the entire industry.

Many so-called analysts are still very bullish, have a lot of buys, or have become even more positive as valuations have slipped. From a stock perspective we have not yet hit, and are still far away from, capitulation.

Maybe the bell ringer indicating the bottom of the cycle is the last bullish analyst capitulating (ignoring those who never change their ratings….).

Who will weather the down cycle best?

We think TSMC remains one of the more defensive names among foundries, or chip makers in general. They are far and away at the top of the heap and can control and dominate pricing for every other foundry in the market; other foundries live under TSMC’s price umbrella.

When TSMC is out of capacity or raises prices enough, chip customers are forced to go elsewhere to get their chips made even though TSMC is always their first choice. The bottom line is that TSMC’s overflow business goes to competitors. When TSMC has excess capacity, a lot of that business will come back to them, leaving those down the foundry food chain with much lower utilization and profits.

In semiconductor equipment, ASML is always the last piece of equipment you would ever cancel, given the crazy long lead times. Most process tools, such as deposition and etch tools, are more of a “turns” business where you can simply reorder what you have canceled without much delay.

China business is an added “unknown”

It’s unclear what the status of tool and chip shipments to China will be given the worsening relations. The fact that China has been the biggest customer of most equipment companies means that this is a significant variable, and one that looks to be getting worse in the near term. Though not hugely impactful today, it could make a big difference when equipment companies are scrambling for orders or need to find new homes for canceled or delayed product.

Are there “time bombs” in wooden crates in the field?

Lam had noted that they had several billion dollars’ worth of unfinished tools that were shipped to customers on an incomplete basis while waiting on parts. This situation is quite different from ASML, which shipped completed yet untested tools to get them to customers faster.

What happens when all those tools are completed and installed?

We recall a situation where the Chinese LED industry had a lot of MOCVD tools sitting in crates that were going unused.

The CHIPS Act: throwing gasoline on a glut bonfire?

As we have mentioned in previous notes, the timing of the CHIPS Act is nothing short of poetic. Micron will likely cut capex in half, and Intel has already announced a likely slowing of Ohio and other projects.

Could we get into a “use it or lose it” situation where chip makers feel forced to spend CHIPS money where they otherwise wouldn’t under prudent financial analysis? Basically, throwing free or cheap money at the industry even though it’s not needed, because we already have excess capacity (although maybe not in the right country).

We may need a redirect of the CHIPS Act given that circumstances have changed substantially since the project was conceived.

The stocks

Overall we see a lot more downside beta than upside beta in the semi industry right now. It’s hard to come up with variables that could break significantly to the upside, and most of the variables seem to be degrees of downside potential.

We see no good reason to get involved with value traps. The last thing we want to hear is some analyst saying that a stock is trading at a 52-week low with a huge dividend. At this point I am certainly not concerned about a dividend play when my principal is at significant risk; there is no offsetting benefit.

Certainly macro uncertainty is a big portion of the problem and it doesn’t look like macro issues are getting better soon. Semiconductors remain the tip of the economic spear and will see outsize impact from any macro gyrations.

The other issue is that we don’t have any good sense as to how long or how deep the downturn will be. Could overall demand for chips keep the slowdown short-lived and minor? Which way will all the variables fall?

Long-term demand seems absolutely great, but things could get even uglier short term, as we have yet to see a bottom in our view.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also read:

KLAC same triple threat headwinds Supply, Economy & China

LRCX – Great QTR and guide but gathering China storm

Intel & Chips Act Passage Juxtaposition