
Reality Checks for High-NA EUV for 1.x nm Nodes
by Fred Chen on 04-26-2023 at 6:00 am

The “1.x nm” node on most roadmaps indicates a 16-18 nm metal line pitch [1]. The center-to-center spacing may be expected to be as low as 22-26 nm (sqrt(2) times the line pitch). The EXE series of EUV (13.5 nm wavelength) lithography systems from ASML features a 0.55 “High” NA (numerical aperture), targeted toward enabling these dimensions. The only justification for this projected resolution is that the corresponding half-pitch exceeds about one-third wavelength/NA (~8 nm). Some reality checks are in order to test how realistic this expectation is.
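
These numbers are easy to check with back-of-envelope arithmetic (a minimal Python sketch; the k1 ~ 1/3 factor is the commonly quoted aggressive single-exposure limit, not an ASML specification):

```python
import math

WAVELENGTH = 13.5  # nm, EUV
NA = 0.55          # High-NA (ASML EXE series)

# Rayleigh-style half-pitch limit: k1 * wavelength / NA, with k1 ~ 1/3
half_pitch_limit = WAVELENGTH / (3 * NA)
print(f"~1/3 lambda/NA half-pitch: {half_pitch_limit:.1f} nm")  # ~8.2 nm

# Center-to-center spacing at sqrt(2) times the metal line pitch:
for line_pitch in (16, 18):
    print(f"{line_pitch} nm pitch -> {math.sqrt(2) * line_pitch:.1f} nm c2c")
# 16 nm -> 22.6 nm, 18 nm -> 25.5 nm, i.e., the 22-26 nm range above
```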

1. Plane of Incidence Rotation Across Slit

The plane of incidence is known to rotate across the EUV exposure arc-shaped slit [2]. Consequently, the optimized illumination distribution for a given pattern actually rotates across slit, potentially giving unoptimized results at slit positions toward the edge compared to the center.

Figure 1. The optimized source (red) for a 25.5 nm center-to-center array needs to be trimmed down (blue) to be safe against rotation across slit.

As shown in Figure 1, trimming the optimized source keeps it safe against slit rotation effects, but this also reduces pupil fill, i.e., the range of illumination angles divided by the full range of possible angles. Below 20% pupil fill, the EUV illumination system itself begins absorbing the EUV energy, which is undesirable not only due to system wear but also due to reduced throughput [3].
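
For a feel of the numbers, here is a minimal sketch of the pupil fill calculation; the dipole pole sizes and the 60% trim are hypothetical illustrations, not the actual source of Figure 1:

```python
import math

def pupil_fill(pole_areas):
    # Pupil fill = illuminated area / full pupil area (unit circle in sigma space)
    return sum(pole_areas) / math.pi

# Hypothetical dipole: two circular poles of sigma-radius 0.2 each
pole = math.pi * 0.2**2
print(f"dipole fill:  {pupil_fill([pole, pole]):.1%}")  # 8.0%

# Trimming each pole to 60% of its area to guard against rotation
# across the slit cuts the fill further:
print(f"trimmed fill: {pupil_fill([0.6 * pole] * 2):.1%}")  # 4.8%
# Both are below the ~20% threshold at which the illuminator itself
# starts absorbing EUV energy [3].
```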

2. Low NILS

The 1.x nm node may be expected to feature 22-25 nm center-to-center vias with sizes <10 nm. Such small vias (already below the Rayleigh resolution) at relatively wide spacings will have a low normalized image log-slope (NILS) without a slower exposure [4]. Phase-shift masks could restore NILS, but their design for EUV use is still under development.
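
To see why small vias at wide spacings are penalized, evaluate NILS = CD x |d(ln I)/dx| at the feature edge. A minimal sketch for an idealized two-beam sinusoidal image (the 0.6 image contrast is an assumed value, for illustration only):

```python
import math

def nils(cd, pitch, contrast):
    # NILS = CD * |d(ln I)/dx| at the feature edge, for an idealized
    # aerial image I(x) = 1 + V*cos(2*pi*x/pitch) with the bright
    # feature centered at x = 0 (edges at +/- CD/2).
    k = 2 * math.pi / pitch
    edge = cd / 2
    log_slope = contrast * k * math.sin(k * edge) / (1 + contrast * math.cos(k * edge))
    return cd * abs(log_slope)

V = 0.6  # assumed image contrast
print(nils(cd=12.5, pitch=25, contrast=V))  # equal lines/spaces: pi*V ~ 1.88
print(nils(cd=8.0,  pitch=25, contrast=V))  # <10 nm via at 25 nm spacing: ~0.77
```

The same image thresholded for an 8 nm opening puts the feature edge where the log-slope is weak, so NILS drops by more than half even though the image itself is unchanged.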

3. Polarization

As line pitches shrink to 18 nm and below, polarization begins to become important, since the angle between interfering waves is larger [5,6]. Moreover, the multilayer in an EUV system preferentially reflects the TE polarization, i.e., perpendicular to the plane of reflection, which is (mostly) perpendicular to the scan direction [6]. This will degrade the NILS of lines aligned with the scan direction, i.e., horizontal lines.
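
A minimal sketch of the underlying two-beam geometry: TE-polarized beams interfere fully, while the TM fringe visibility falls as cos(2*theta), where theta is the interference half-angle given by sin(theta) = lambda/(2*pitch):

```python
import math

WAVELENGTH = 13.5  # nm

def tm_visibility(pitch):
    # TE beams interfere fully (visibility 1); the TM field vectors are
    # inclined by the half-angle theta, so visibility falls as cos(2*theta).
    theta = math.asin(WAVELENGTH / (2 * pitch))
    return math.cos(2 * theta)

for p in (24, 20, 18, 16):
    print(f"pitch {p} nm: TM fringe visibility {tm_visibility(p):.2f}")
# 24 nm: 0.84, 20 nm: 0.77, 18 nm: 0.72, 16 nm: 0.64
```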

4. Thinner Resist

Resist thickness is expected to be reduced to below 30 nm for use with High-NA EUV systems, due to the reduced depth of focus [7]. Even with improved depth of focus, however, the aspect ratio of smaller features is another likely limiting factor: a 10 nm wide feature in a 20 nm thick resist already has a 2:1 aspect ratio. A reduced resist thickness must be compensated by an inversely proportional increase in the absorption coefficient in order to preserve the absorbed photon density for a given dose.
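
A Beer-Lambert sketch of the compensation argument (the absorption coefficient is an illustrative value in the range quoted for organic chemically amplified EUV resists; in the thin-film limit, absorbed photons per unit area scale as alpha x thickness):

```python
import math

def absorbed_fraction(alpha, thickness):
    # Beer-Lambert: fraction of incident photons absorbed in the film
    return 1 - math.exp(-alpha * thickness)

alpha = 0.005  # 1/nm, illustrative
print(absorbed_fraction(alpha, 40))      # ~18% of the dose absorbed in 40 nm
print(absorbed_fraction(alpha, 20))      # ~10%: the thinner film wastes photons
print(absorbed_fraction(2 * alpha, 20))  # ~18% again: halve t, double alpha
```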

5. Electron Blur and Stochastics

Finally, focusing EUV to a smaller spot would be of no use in the presence of over 3 nm of electron blur [8]. Moreover, the randomness of secondary electrons [9,10] prevents blur from being a fixed number, ultimately leading to the possibility of stochastic defects [11].
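
Even a fixed blur illustrates the problem: convolving a sinusoidal image of pitch p with a Gaussian of standard deviation sigma attenuates its contrast by exp(-2*pi^2*sigma^2/p^2), the Gaussian's transfer function at spatial frequency 1/p. A minimal sketch with the 3 nm figure cited above:

```python
import math

def blur_attenuation(sigma, pitch):
    # Contrast attenuation factor: Gaussian transfer function at f = 1/pitch
    return math.exp(-2 * math.pi**2 * sigma**2 / pitch**2)

for p in (32, 25, 18, 16):
    print(f"pitch {p} nm: contrast x{blur_attenuation(3.0, p):.2f}")
# 32 nm: x0.84, 25 nm: x0.75, 18 nm: x0.58, 16 nm: x0.50
```

At 16 nm pitch, a fixed 3 nm blur alone already halves the image contrast; a randomly varying blur is worse.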

All the above factors should serve as reminders that advancing lithography is no longer a simple matter of reducing wavelength and/or increasing numerical aperture.

References

[1] International Roadmap for Devices and Systems (Lithography), 2022 edition.

[2] M. Antoni et al., “Illumination optics design for EUV lithography,” Proc. SPIE 4146, 25 (2000).

[3] F. Chen, High NA EUV Design Limitations for sub-2nm Nodes, https://www.youtube.com/watch?v=IgYJfLyDYos

[4] F. Chen, Phase-Shifting Masks for NILS Improvement – A Handicap for EUV?, https://www.linkedin.com/pulse/phase-shifting-masks-nils-improvement-handicap-euv-frederick-chen

[5] H. Levinson, “High-NA EUV lithography: current status and outlook for the future,” Jpn. J. Appl. Phys. 61 SD0803 (2022).

[6] F. Chen, The Growing Significance of Polarization in EUV Lithography, https://www.linkedin.com/pulse/growing-significance-polarization-euv-lithography-frederick-chen; F. Chen, Polarization by Reflection in EUV Lithography Systems, https://www.youtube.com/watch?v=agMx-nuL_Qg

[7] B. J. Lin, “The k3 coefficient in nonparaxial λ/NA scaling equations for resolution, depth of focus, and immersion lithography,” J. Micro/Nanolith. MEMS MOEMS 1(1), 7–12 (April 2002).

[8] T. Allenet et al., “Image blur investigation using EUV-Interference Lithography,” Proc. SPIE 11517, 115170J (2020), https://www.dora.lib4ri.ch/psi/islandora/object/psi%3A38930/datastream/PDF/Allenet-2020-Image_blur_investigation_using_EUV-interference_lithography-%28published_version%29.pdf

[9] Q. Gibaru et al., Appl. Surf. Sci. 570, 151154 (2021), https://hal.science/hal-03346074/file/DPHY21090.1630489396_postprint.pdf

[10] H. Fukuda, J. Micro/Nanolith. MEMS MOEMS 18, 013503 (2019).

[11] F. Chen, Secondary Electron Blur Randomness as the Origin of EUV Stochastic Defects, https://www.linkedin.com/pulse/secondary-electron-blur-randomness-origin-euv-stochastic-chen

This article first appeared in LinkedIn Pulse: Reality Checks for High-NA EUV for 1.x nm Nodes

Also Read:

Can Attenuated Phase-Shift Masks Work For EUV?

Lithography Resolution Limits: The Point Spread Function

Resolution vs. Die Size Tradeoff Due to EUV Pupil Rotation


How to Enable High-Performance VLSI Engineering Environments
by Kalar Rajendiran on 04-25-2023 at 10:00 am

Very Large Scale Integration (VLSI) engineering organizations are known for their intricate workflows that require high-performance simulation software and an abundance of simulation licenses to create cutting-edge chips. These workflows involve complex dependency trees, where one task depends on the completion of another task and collaboration among team members is vital for successful project completion. The design processes involve different stages such as architectural design, exploration, implementation, and verification. Each stage requires specialized tools and licenses, making license management and resource planning a critical factor for successful VLSI projects.

To optimize design processes for maximum performance and efficiency, engineering teams must adopt a carefully curated tool chain that addresses critical success factors. These factors include collaboration, efficient sharing of compute and license resources, clear visibility of progress and project status, and reproducibility of results.

Altair recently hosted a webinar in which Stuart Taylor, Senior Director of Product Management, presented an approach to how engineering teams can achieve the above. Stuart highlighted various tools from Altair, such as Altair® Monitor™ and Altair® FlowTracer™, that can be used to implement the methodology he presents. The webinar, titled “How to Enable High-Performance VLSI Engineering Environments,” includes a demo of some of these tools and is available for viewing on-demand. The following are excerpts from the webinar.

License Resource Planning

License resource planning involves both qualitative and quantitative analysis of license usage. Qualitative analysis establishes which licenses are being used for what purpose, while quantitative analysis measures license utilization. Engineering teams must understand which licenses are being used, how much, and why; this understanding helps them plan for future license requirements and optimize license usage.

A license management details report can provide a graphical dashboard displaying license capacity, utilization, and denials. The report can also include a forecast of future license requirements based on current usage trends. This report will help engineering teams plan for future license requirements and avoid license denials.
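
As a simple illustration of the quantitative side, a utilization-and-denials summary can be computed from checkout logs (a minimal sketch with hypothetical records and seat counts, not Altair Monitor's actual data format):

```python
from collections import Counter

# Hypothetical checkout records: (feature, user, hours_held, denied)
log = [
    ("vcs_sim",  "alice", 6.0, False),
    ("vcs_sim",  "bob",   8.0, False),
    ("vcs_sim",  "carol", 0.0, True),   # denial: no seat was available
    ("dc_shell", "alice", 2.0, False),
]
CAPACITY = {"vcs_sim": 2, "dc_shell": 4}  # seats owned per feature

hours, denials = Counter(), Counter()
for feature, user, h, denied in log:
    hours[feature] += h
    denials[feature] += denied

for feature, seats in CAPACITY.items():
    util = hours[feature] / (seats * 24)  # fraction of one 24-hour day
    print(f"{feature}: {util:.0%} utilized, {denials[feature]} denial(s)")
# High utilization plus denials signals buying more seats; low
# utilization flags candidates for reduction at renewal time.
```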

License Operations

Each stage of the VLSI design process requires different licenses and tools, and as such, license management is critical for engineering organizations. It includes tracking current license usage, expiration dates, and license availability. For example, a license may expire during a critical phase of the design process, causing delays and impacting project timelines. Therefore, keeping track of license expiration dates and renewing licenses in advance is critical.

Operational Efficiency

One of the critical factors for operational efficiency is collaboration across different disciplines, geographies, and time zones. Collaboration tools that enable easy sharing of designs, code, and simulations are critical for efficient collaboration. Simple visual communication is also essential for complex workflows. Altair FlowTracer is an example of a tool that quickly communicates workflow status in a visual manner. This tool can provide a simple graphical representation of the VLSI design process, enabling team members to understand the current status of the workflow at a glance.

Collaboration

Collaboration is a key factor in VLSI engineering workflows since multiple engineers work on the same project simultaneously. Communication between team members is vital to ensure that everyone is working towards the same goal. It’s important to choose a tool chain that allows for easy communication and collaboration.

Efficient Sharing of Compute and License Resources

One of the major constraints for VLSI engineering teams is the availability of simulation licenses and compute resources. At the same time, an unlimited supply of all needed licenses is not economically sensible or even feasible. A well-curated tool chain can help optimize resource usage and reduce license costs. One solution is to use a cloud-based simulation platform that allows for the efficient sharing of compute resources and licenses. Cloud-based simulation platforms can be used to run simulations on a large scale without the need for expensive hardware, and can also provide access to the latest software versions.

Clear Visibility of Progress and Project Status

VLSI engineering projects involve many tasks that are interdependent and must be completed in a specific order. A well-curated tool chain can help provide clear visibility of project status and progress, allowing team members to see where they stand and what tasks need to be completed. A clear real-time view of the project progress and status is essential for project management and planning.

Reproducibility of Results and Concepts

Reproducibility is a crucial factor in VLSI engineering workflows since design concepts and simulation results need to be reproducible. Using a common framework for design methodologies, libraries and standards along with a carefully curated tool chain can help with reproducibility of results, ensuring that designs are manufacturable and meet requirements.

Summary

Altair provides a suite of tools that increase engineering productivity, optimize costs, and accelerate chip development for quick time to market. The tools optimize EDA environments and improve the design-to-manufacturing process by reducing iteration cycles. The entire webinar is available for viewing on-demand.

Altair semiconductor design solutions are built to optimize EDA environments and to improve the design-to-manufacturing process, eliminate design iterations, and reduce time-to-market.

To learn more about Altair's tool offerings, visit here.

Also Read:

Optimizing Return on Investment (ROI) of Emulator Resources

Measuring Success in Semiconductor Design Optimization: What Metrics Matter?

Load-Managing Verification Hardware Acceleration in the Cloud


Configurable RISC-V core sidesteps cache misses with 128 fetches
by Don Dingee on 04-25-2023 at 6:00 am

Modern CPU performance hinges on keeping a processor’s pipeline fed so it executes operations on every tick of the clock, typically using abundant multi-level caching. However, a crop of cache-busting applications is looming, like AI and high-performance computing (HPC) applications running on big data sets. Semidynamics has stepped in with a new highly configurable RISC-V core, Atrevido, including a novel approach to cache misses – the ability to kick off up to 128 independent memory fetches for out-of-order execution.

Experience suggests a different take on moving data

Vector processing has always been a memory-hungry proposition. Various attempts at DMA and gather/scatter controllers had limited success where data could be lined up just right. More often than not, vector execution still ends up being bursty, with a fast vector processor having to pause while its pipeline reloads. Big data applications often introduce a different problem: the data is sparse and can’t be assembled into bigger chunks without expensive moves and lots of waiting. Conventional caching can’t hold all the data being worked on, and what it does hold encounters frequent misses – increasing the wait further.

Roger Espasa, CEO and Founder of Semidynamics, has seen the evolution of vector processing firsthand, going back to his days on the DEC Alpha team, followed by a stint at Intel working on what became AVX-512. Semidynamics’ new memory retrieval technology is Gazzillion™, which can dispatch up to 128 simultaneous requests for data anywhere in memory. “It’s tough to glean exactly how many memory accesses some other cores can do from documentation, but we’re sure it’s nowhere near 128,” says Espasa. Despite the difficulty in discovery, his team assembled a comparison against some competitive cores.

Three important points here. First, Gazzillion doesn’t eliminate the latency of any single fetch, but it does hide it once transactions get rolling and subsequent fetches overlap earlier ones in progress. Second, the vector unit in the Atrevido core is an out-of-order unit, which Espasa thinks is a first in the industry. Put those two points together, and the result is that whichever fetches arrive soonest are processed next. Finally, the 128 figure is per core. It’s not hard to project this into a sea-of-cores strategy providing large numbers of execution units with the improved fetching needed for machine learning, recommendation systems, or sparse-dataset HPC processing.
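
Little's Law makes the 128 figure concrete: sustained bandwidth equals outstanding requests times bytes per request, divided by memory latency. A minimal sketch with assumed cache-line size and DRAM latency (illustrative values, not Semidynamics specifications):

```python
# Little's Law: bandwidth = in-flight requests * bytes/request / latency
LINE_BYTES = 64      # one cache line per fetch (assumed)
LATENCY_S = 100e-9   # assumed DRAM round-trip latency

for outstanding in (8, 32, 128):
    bw_gbs = outstanding * LINE_BYTES / LATENCY_S / 1e9
    print(f"{outstanding:3d} in flight -> {bw_gbs:5.1f} GB/s sustained")
#   8 in flight ->   5.1 GB/s
#  32 in flight ->  20.5 GB/s
# 128 in flight ->  81.9 GB/s
```

A core limited to a handful of outstanding misses simply cannot keep a wide vector unit fed from sparse data, no matter how fast the DRAM is.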

Way beyond tailorable, fully configurable RISC-V cores match the requirements

Most RISC-V vendors offer a good list of tailorable features for their cores. Atrevido has been envisioned from the ground up as a fully customizable core where everything is on the table. A customer interview phase determines the application’s needs, and then the core is optimized for power-performance-area (PPA). Don’t need a vector unit? No problem. Change the address space? Sure. Need custom instructions? Straightforward. Coherency, scheduling, or tougher needs? Also on the table. Semidynamics has carved out a unique space apart from the competition, providing customers with better differentiation since the core can be opened up for changes – Open Core Surgery, as Espasa enthusiastically terms it. “We can include unique features in a few weeks, and have a customized core validated in a few months,” says Espasa.

An interesting design choice enables more capability. Instead of just an AXI interface, Semidynamics included CHI, allowing Atrevido to plug directly into a coherent network-on-chip (NoC). It’s also process agnostic. Espasa says they have shipped on 22nm and are working on 12nm and 5nm.

Upfront NRE in the interview and optimization phase has another payoff. Semidynamics can deliver a core as an FPGA bitstream, allowing customers to thoroughly evaluate their customizations before committing a design to a foundry, saving time and reducing risk. Using Semidynamics’ expertise this way also speeds up exploration, without customers having to climb the learning curve of becoming RISC-V architecture gurus.

This level of customization means Atrevido fits any RISC-V role, small or large, single or multicore. The transparency of the process helps customers improve their first-pass results and get the most processing in the least area and power. There’s more on the Atrevido announcement, how configurable RISC-V core customization works, and other Semidynamics news at:

https://semidynamics.com/newsroom

Also Read:

Semidynamics: A Single-Software-Stack, Configurable and Customizable RISC-V Solution

Gazzillion Misses – Making the Memory Wall Irrelevant

CEO Interview: Roger Espasa of Semidynamics


LAM Not Yet at Bottom Memory Worsening Down 50%
by Robert Maire on 04-24-2023 at 10:00 am

-Lam reported in line results on reduced expectations
-Guidance disappoints as memory decline continues
-Memory capex down 50% but still sees “further declines”
-Lam ties future to EUV maybe not good idea after ASML report

Lam comes in above already grossly reduced expectations and misses on guidance

We always find it amusing when companies greatly reduce expectations and guidance, then try to act like it was a “beat” of the numbers. Lam came in at revenues of $3.87B and EPS of $6.99, which was down 27% sequentially. Guidance was for $3.1B +/- $300M versus street expectations of $3.47B, and EPS guidance is for $5.00 +/- $0.75 versus $5.63. Lam continues to guide down more than the street is looking for as conditions worsen.

Backorders back to “normal”

With supply chain problems more or less cleared up and demand falling off, Lam has used up most of its backlog and is now in a more “normal” backlog position.

The company still has $2B of deferred revenue to keep it warm at night, from pre-payments on shipments to Chinese or Japanese customers awaiting acceptance, so a buffer still remains, somewhat.

Is Lam still a memory shop if foundry is top revenue segment??

Memory business at Lam dropped to a low that we haven’t seen in a very long time, coming in at 32% of overall business versus foundry at 46%. Probably the bigger piece of the memory business is service, which also declined sequentially, as new tool shipments to memory customers are likely falling to near-zero levels.

China tied Korea at 22% of business. Service was a huge 40% of business even with the decline, as new tool sales obviously kept dropping.

Expecting “further declines” in memory

Lam made it clear that memory capex spend has not yet hit bottom as they commented that “further declines” in memory spending are expected. Memory capex spending was estimated by the company to be down 50% already as memory makers continue to reduce bit growth and bit output to try to shore up declining pricing.

It seems fairly clear to us, given the trajectory and momentum, that memory will not recover before the end of the year, and that the recovery, when it does eventually happen, will be weak and slow.

Memory makers will have a ton of existing capacity to bring back online before they buy a single new piece of equipment or even think about expanding or adding new fabs.

In addition to the idled memory making equipment sitting turned off in fabs there are also technology shrinks that will add to capacity without the need to buy new equipment.

The bottom line is that with all the idled equipment, reduced output, and potential technology changes, memory makers could easily survive a year or two based on current demand trajectory without any incremental spending.

The bigger problem is that perhaps only Samsung will be financially able to spend after another year in the current loss environment.

Whenever the current down cycle is over, it will certainly not be memory that leads the way out.

China seems to be buying any chip equipment not nailed down

One of our other continuing concerns is that China has been on a huge spending spree for non-leading-edge equipment. It’s hard to figure out where all the equipment is going, as it seems there aren’t that many fabs in China (that we know of).

It has all the makings of the famous toilet paper shortage as people bought in expectation of a shortage.

China seems to be buying any and all equipment they can as they likely fear that they will be cut off from even non leading edge tools. We saw this in ASML’s report this morning where 45-50% of DUV sales were into China.

This demand from China feels artificial and runs the additional risk of slowing because of increased sanctions, or of the stampede/herd mentality simply running out. This obviously adds to the risk of a longer/deeper downturn.

Lam hitches wagon to EUV future (maybe not such a great idea right now…)

During the call, Lam went to lengths to show how it will benefit from the EUV transition and success. They claimed a 75% share of “EUV patterning” related technology (we assume etch) and also spoke about continued wins in dry resist technology.

While this certainly is good, the news from ASML this morning seemed to call into question how much EUV sales will slow, be delayed, or change over the next year or two. As we previously pointed out, this has a negative effect on tools associated with EUV, including those made by Lam. It was probably too late for Lam to change their prepared script after the ASML call….

Is this a “second leg down” in the semiconductor down cycle??

As we have been saying and repeated this morning after ASML, we remain concerned that there is a second leg down in the current down cycle or at the very least we are certainly not at a bottom at which we feel comfortable buying chip equipment stocks at already inflated valuations for a down cycle.

Lam clearly called for further declines in memory and guided to further lower revenues in their business. There was no talk on the call about being at or near a bottom in business. The amount of China business is a double edged sword in that it helps soften the blow of the downturn but creates an additional risk in exposure to a politically unstable market.

The stocks

We have zero motivation to buy Lam or any other equipment stock after the run that they have had. We would expect some sort of rationalization of valuation after earnings season and investors and analysts figure out that we haven’t yet hit bottom and there is further unknown downside.

Macro economic conditions certainly don’t give us any comfort either.
From a Lam specific perspective, memory will likely be the last to recover and very slowly at that.

With today’s set up from both ASML and LRCX we would expect that KLAC will have similar comments and AMAT as well though a month later.
We can’t imagine anyone making positive commentary about the industry trajectory any time soon.

Bullish analysts who called the bottom a bit too soon will likely be out defending their position tomorrow, or defending that Lam “beat” their numbers (if you can describe being down 27% sequentially as a “beat”).

Bottom line is that “it ain’t over till it’s over” (Yogi Berra) and it’s not over yet…..

The light at the end of the tunnel could be an oncoming train….

Also Read:

ASML Wavering- Supports our Concern of Second Leg Down for Semis- False Bottom

Gordon Moore’s legacy will live on through new paths & incarnations

Sino Semicap Sanction Screws Snugged- SVB- Aftermath more important than event


Synopsys Accelerates First-Pass Silicon Success for Banias Labs’ Networking SoC
by Kalar Rajendiran on 04-24-2023 at 8:00 am

Banias Labs is a semiconductor company that develops infrastructure solutions for next-generation communications. Its target market is high-performance computing infrastructure, including hyperscale data center, networking, AI, optical module, and Ethernet switch SoCs for emerging high-performance computing designs. These SoCs require high-speed Ethernet designs and low-latency solutions to increase system performance and accelerate time-to-market. The company has developed an optical DSP SoC on 5nm process technology to address the requirements of this market.

An optical DSP SoC is a specialized type of system-on-chip (SoC) designed for use in high-speed optical communication systems. In addition to the DSP, the optical DSP SoC typically includes high-speed interface IP blocks, such as Ethernet PHY IP, PCIe IP, and DDR memory controllers. These types of SoCs enable high-speed data transfers at low latencies for real-time signal processing. They are also designed to minimize power consumption, making them ideal for applications that require efficient operation with reduced thermal issues. With the advantages come challenges too. The specialized requirements of optical communication systems make designing an optical DSP SoC more challenging than designing a regular SoC.

Implementation Challenges

The challenges revolve around the complexity of the design, the tight power and performance requirements, and the need to meet various industry standards. The integration of multiple IP blocks including the DSP processor, Ethernet PHY IP, and other custom blocks requires careful design and validation. Additional high-speed interfaces such as PCIe and DDR add further to the complexity of the design. The high-speed interfaces and multiple IP blocks in the system can create signal distortion, crosstalk, and electromagnetic interference, which can impact system performance and reliability. Signal and power integrity analysis and optimization must be performed early in the design cycle to ensure that the system can meet its performance and reliability requirements. Finally, meeting time-to-market requirements can be challenging. The high-performance computing infrastructure market is rapidly evolving, and SoC development teams need to deliver their designs quickly to stay ahead of the competition.

Getting to First Pass Silicon Success

Overcoming the above-mentioned challenges requires a comprehensive approach. One of the critical components of high-performance, low-latency solutions is the Ethernet PHY IP, which is responsible for the physical layer interface between the SoC and the Ethernet network. The IP must support high-speed Ethernet interfaces, including 10G, 25G, 40G, 50G, 100G, 200G, 400G, and 800G, and provide low latency and low power consumption. Additionally, the IP must support various standards, including IEEE 802.3 and Ethernet Alliance specifications. Another important component is the EDA design suite, which must provide a comprehensive solution for designing and verifying the SoC, including power optimization, performance analysis, area optimization, and yield analysis. To the extent the EDA design suite includes advanced features such as artificial intelligence (AI) and machine learning (ML), the better for enhanced productivity and reduced time-to-market.

Synopsys Accelerates First Pass Silicon Success

Synopsys offers solutions that address the unique challenges of developing SoCs for the high-performance computing infrastructure market. The company provides a comprehensive IP solution that includes a routing feasibility study, packaging substrate guidelines, signal and power integrity models, and thorough crosstalk analysis. This is imperative to address the signal and power integrity challenges faced when developing an optical DSP SoC. Synopsys’ 112G Ethernet PHY IP offers low latency, flexible reach lengths, and maturity on 5nm process technology, making it an ideal solution for hyperscale data center, networking, AI, optical module, and Ethernet switch SoCs. In addition, Synopsys offers an EDA Design Suite that delivers high-quality results with optimized power, performance, area, and yield. Synopsys’ AI-driven EDA Design Suite provides solutions to boost system performance and accelerate time-to-market, making it an essential component of a successful solution for the high-performance computing infrastructure market.

Summary

Synopsys provides high-performance, low-latency solutions that accelerate the development of advanced Ethernet switch and networking SoCs. To learn more about Synopsys’ comprehensive IP solutions, their comprehensive EDA Design Suite and their AI-Enhanced EDA Suite, visit the following pages.

Synopsys’ comprehensive IP solutions

Synopsys’ comprehensive EDA Suite

Synopsys’ AI-driven EDA Design Suite

Also Read:

Multi-Die Systems: The Biggest Disruption in Computing for Years

Taking the Risk out of Developing Your Own RISC-V Processor with Fast, Architecture-Driven, PPA Optimization

Feeding the Growing Hunger for Bandwidth with High-Speed Ethernet

Takeaways from SNUG 2023


More Software-Based Testing, Less Errata
by Bernard Murphy on 04-24-2023 at 6:00 am

In verification there is an ever-popular question, “When can we stop verifying?” The intent behind the question is “when will we have found all the important bugs?” but the reality is that you stop verifying when you run out of time. Any unresolved bugs appear in errata lists delivered with the product (some running to 100 or more pages), each bug accompanied by a suggested application software workaround and commonly a note that there is no plan for a fix. Workarounds may be OK for some applications but crippling in others, narrowing the market appeal and ROI for the product.

A better question, then, is how to catch more bug escapes (those bugs that make it to silicon and are documented as errata) in the time allocated for verification.

Errata root-causes

I’m not going to attempt an exhaustive list, but among deterministic digital root-causes three stand out: bugs that are almost but not quite unreachable, bugs that are simply missed, and bugs resulting from incompleteness in the spec. The best defense against the first class is formal verification. Ross Dickson and Lance Tamura (both Product Management Directors at Cadence) suggest that defenses against the second and third classes are greatly strengthened by running more software testing against the pre-silicon design model.

Industry surveys indicate that 3 out of 4 designs today require at least one silicon respin. Running software on real silicon (even if not good silicon) seems like an easy way to run extensive software suites to catch most errata, right? That’s a slippery slope; respins are very expensive, reserved for what absolutely cannot be caught pre-silicon. And early silicon is not guaranteed to be functional enough to run all the software required to expose errata bugs. It is more practical and cost-effective to trap potential errata pre-silicon.

You might think that errata would commonly result from complex sequence problems. In fact many are surprisingly mundane, arising from simple two factor or exception conditions. Here are a few cases I picked at random from openly available errata:

  • When a receive FIFO overruns it becomes unrecoverable
  • A bus interface hangs after master reset
  • After disabling an FPU another related function remains in high power mode

I’m sure the verification teams for these products were diligent and tested everything they could imagine in the time they had available, but still these bugs escaped.

Spec problems

The spec class of issues is especially problematic. Specs are written in natural language, augmented by software test cases and more detailed requirements. All of which attempt to be as exact as possible but still have holes and ambiguities.

Software and hardware development teams work in parallel from the same specs but in different domains. The software team uses an ideal virtual model of the hardware to develop software which will eventually run on the target hardware. At the same time, the hardware team builds a hardware model to implement the spec. Both faithfully design and test against their reference. When they hit a place where the spec is incomplete, they must make a choice. Assume A or assume B, or both or neither, or maybe it doesn’t matter?

Unsurprisingly this doesn’t always work out well. Best case, the hardware and software teams have a meeting, ideally with architects, to make a joint decision. Sometimes, when the choice seems inconsequential and schedule pressure is high, a choice is made locally without wider consultation. A good example here is bit settings in configuration registers. Maybe a couple of these are presumed to be mutually exclusive, but the spec doesn’t define what should happen if they are both set or both cleared. The hardware team chooses a behavior which doesn’t match the software team’s expectation. Neither team catches the inconsistency when running against their own reference models. A problem emerges only when the real software runs on the real hardware.
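
A toy sketch of how such a hole slips through (the register, bit names, and behaviors are all hypothetical): each team's locally chosen behavior passes its own tests, and only sweeping the unspecified corner exposes the divergence.

```python
# Hypothetical control register: the spec presumes EN_FAST and
# EN_LOWPOWER are mutually exclusive but never says what both-set means.
EN_FAST, EN_LOWPOWER = 0x1, 0x2

def hw_model(reg):
    # Hardware team's local choice: fast mode wins a tie
    if reg & EN_FAST:
        return "fast"
    return "lowpower" if reg & EN_LOWPOWER else "off"

def sw_expectation(reg):
    # Software team's local choice: setting both is treated as a no-op
    if (reg & EN_FAST) and (reg & EN_LOWPOWER):
        return "off"
    return hw_model(reg)

for reg in range(4):  # sweep all corners, including the unspecified one
    assert hw_model(reg) == sw_expectation(reg), f"spec hole at reg={reg:#x}"
# AssertionError at reg=0x3: the both-bits-set case neither team tested
```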

For ChatGPT fans, ChatGPT and its equivalents are not a solution. Natural language is inherently ambiguous and incomplete. To the extent that ChatGPT can identify a hole, it must still make a choice to plug that hole. Absent documented evidence, that choice will be random (best-case) and may be inconsistent with customer requirements and use-cases.

Software-based verification/validation

Granting that no solution can catch all bug-escapes, software-based testing is a pretty good augment to hardware-based testing. As an example, driver and firmware teams develop and debug their code against the virtual model to the point they can find no errors. When a hardware prototype becomes available, they run against that model to confirm that a task can be completed in X cycles, as expected. Routinely, they also catch bugs that escaped hardware verification.

OK, so the verification team adds more tests to catch those cases but there are two larger lessons here. Software testing is inherently complementary to hardware testing. The software team has no interest in debugging the hardware; they just want to make sure their own code is working. They test with a different mindset. This style of testing can and does often expose spec inconsistencies.

Escapes due to unexpected two or more factor conditions or exceptions are more likely to be caught through extensive stress testing. Software-based testing is top-down, the easiest place to experiment with stress tweaks (such as throwing in a reset or overloading a buffer). This might require some hardware/software team collaboration but nothing onerous.
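
A toy sketch of that style of stress loop, here driving a deliberately buggy Python stand-in for the design model (the bug echoes the FIFO-overrun erratum listed earlier); in practice the loop would drive a prototype or emulator through a driver API:

```python
import random

class ToyFifo:
    # Stand-in for a design model; the seeded bug: an overrun wedges
    # the FIFO, and reset fails to clear the wedged state.
    DEPTH = 8
    def __init__(self):
        self.items, self.wedged = [], False
    def reset(self):
        self.items = []            # bug: forgets to clear self.wedged
    def push(self, x):
        if len(self.items) >= self.DEPTH:
            self.wedged = True     # overrun is unrecoverable
        else:
            self.items.append(x)
    def pop(self):
        if self.items:
            self.items.pop(0)
    def responsive(self):
        return not self.wedged

dev, rng = ToyFifo(), random.Random(1)
for step in range(10_000):
    r = rng.random()
    if r < 0.01:
        dev.reset()                            # throw in a mid-traffic reset
    elif r < 0.02:
        for _ in range(ToyFifo.DEPTH + 4):     # deliberately overrun the buffer
            dev.push(rng.randrange(256))
    else:
        dev.push(rng.randrange(256))
        dev.pop()
    assert dev.responsive(), f"hung at step {step}: candidate erratum"
```

Directed tests that never overflow the FIFO pass forever; the random stress loop trips the hang within a few hundred operations.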

More software-based testing needs hardware assist

You’re not going to run software-based testing on a hardware model without acceleration, the more of it the better if you really want to minimize errata. Prototyping to run as fast as possible, closely coupled with emulation to debug problems, both extended with virtual models for context. You should check out Cadence Protium, Palladium, and Helium. Cadence also offers demand-based cloud access to Palladium.

Also Read:

What’s New with Cadence Virtuoso?

AI Assists PCB Designers

AI in Verification – A Cadence Perspective

Speculation for Simulation. Innovation in Verification


Podcast EP156: A Chat With Shankar Krishnamoorthy About Strategy and Outlook for EDA Development at SNUG
by Daniel Nenni on 04-21-2023 at 10:00 am

This is another special edition of our podcast series. SemiWiki staff writer Kalar Rajendiran spoke with Shankar Krishnamoorthy, General Manager of the Electronic Design Automation Group at Synopsys, at the recent SNUG meeting.

Shankar discusses how Synopsys is focusing on hyperconvergence and implementation of AI across the entire design flow. The market drivers that are making these changes critical are also discussed, along with a view of future EDA development and the outlook for EDA startups.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


ASML Wavering- Supports our Concern of Second Leg Down for Semis- False Bottom
by Robert Maire on 04-21-2023 at 8:00 am

-ASML weakness is evidence of deeper chip down cycle
-When ASML sneezes other chip equip makers catch a cold
-Will backlog last long enough? Will EUV demand hold up?
-“Unthinkable” event, litho cancelations, could shock industry

ASML has an in-line quarter but alarm bells ring on wavering outlook

ASML reported €6.7B in revenues and €4.96 in EPS, which was more or less in line with expectations. There had been reports coming out of Asia of order slowdowns and cancelations from TSMC and potentially others, which had sent the stock down ahead of earnings today.

Those rumors seem at least partially true if not fully true as ASML talked about order book re-arrangements and softening outlook.

ASML’s backlog which has been viewed as solid as a rock extends out to mid 2024 but now appears to be seeing some weakening in the second half of 2024. Right now the order book is not full for the second half of 2024 but management expects (hopes) it will fill.

A defensive conference call

The tone of the conference call was not the normal bullish bravado of a market-dominating monopoly; it sounded much more defensive about weakening prospects, and that defensiveness about the outlook was perhaps more concerning than the comments themselves.

Thinking the “unthinkable” litho cancelations & pushouts

It has been long thought in the industry that no chip maker in their right mind would cancel or delay a litho tool for fear of getting back on the end of a very long line later on and being in a much worse position.

That fear seems to have gone away, as there is clearly movement in ASML’s order book, with tools pushed out and other customers pulled in to fill the otherwise empty slots.

We would have also expected ASML to have already been sold out for 2024 by now but they are only booked halfway through the year with some uncertainty about the second half.

As with the overall chip industry itself, trailing-edge technology appears to be holding up better, as DUV demand seems good (maybe better in some ways than EUV). One of the issues is that much of the DUV demand is coming out of China, which puts even that demand at risk due to embargo issues.

In short the order book has softened from a virtual rock to quivering Jello. ASML will be fine but other equipment makers will fare far worse. When ASML sneezes other equipment companies catch a cold.

ASML will likely skate through the down cycle with enough of a backlog to make it to the other side without seeing their earnings and financials take a hit. Their backlog and monopolistic position will help protect the company as we doubt there will be much actual downside impact before the down cycle turns. So they remain the strongest, safest ship in the current semiconductor storm.

Other equipment makers not so much

If you don’t buy litho tools, you need fewer other tools: less deposition, less etch, less yield management, etc. Litho tools are the locomotive that pulls the semiconductor equipment train, with assembly and test as the caboose.

After ASML, the company that typically has almost as strong a backlog is KLAC, whose yield management tools are needed to support all those new litho tools driving to smaller feature sizes. KLAC’s backlog in some tools is multi-year in nature, and its business model and “steady Eddie” performance are based on working from a backlog position. We would expect similar softness in KLAC’s backlog, likely worse than ASML’s, as you don’t need KLAC yield management tools if you don’t have, or have delayed, the ASML litho tools.

The same obviously goes for both AMAT and LRCX who typically run with even less backlog than KLAC, usually more of a “turns” business during “normal” times.

Semiconductor makers are voting with their feet (capex budgets)

It’s clear that chip makers such as TSMC, Intel, Samsung and others must feel that demand is not getting any better any time soon if they are willing to delay critical litho tools. This suggests a deeper, longer semiconductor downturn than currently or previously thought. If you think the industry will “bounce back,” you are not going to delay a new litho tool. This has very ominous repercussions across the industry, as it belies the view of a quick recovery.

Is this a “second leg down” in the semiconductor down cycle?

Our long held view of the semiconductor industry is to look at things through two distinct components of supply on one side and demand on the other.

In our view, it is very clear that the “first” leg down in the current down cycle was primarily supply side driven as the industry built capacity like crazy after it was caught short through the Covid and supply chain crisis. The industry built and built, with obviously reckless abandon until we “overshot” the needed supply and now found ourselves in an oversupply condition that started the current down cycle.

Meanwhile, the demand side has softened and perhaps the downward pressure on demand has somewhat accelerated along with global macro economic concerns.

It feels to us that there is a high likelihood of a “second leg” to the current chip cycle that is driven more by weakening demand than the first half which was driven by oversupply.

This could potentially be worse, as supply issues tend to be easier to fix than demand issues: a chip maker can control its own supply, as we have seen in the memory market with Micron and Samsung taking capacity and product offline to support pricing.

The problem is that there isn’t that much that the industry can do to stimulate demand for chips. Lowering pricing on chips used in cars doesn’t stimulate demand.

The stocks

Chip stocks have been on a roll since the beginning of the year and in our view have become prematurely “overheated”. Many investors have falsely thought that we were at a bottom in the industry and it was time to buy back in as it could only get better from here.

That thought process was not unreasonable, as in prior down cycles we saw a bottom after a few quarters, it was a signal to buy, and it turned out well. That may not be the case this time if it was more of a “false bottom,” created by supply cutbacks, that then fell apart when demand weakened, creating a further drop.

If the current down cycle were just a supply side issue, chip makers would not delay/cancel litho plans so this is clear evidence that chip makers are bracing for a longer, deeper downturn that has more demand side concerns on top of the prior supply side concerns.

2023 is clearly not a recovery year. We think that typical, unsubstantiated hopes for a second half recovery in 2023 will likely fade as chip companies report and analysts figure it out. The bigger question now becomes when/if in 2024 we will start to see a recovery.

Fab projects and equipment will be pushed out much further, especially in the very oversupplied memory space. The main point of light in the industry remains non-leading-edge semis, which have been saving many in the industry, including DUV for ASML.

ASML is certainly not a good beginning of earnings season for chips and likely will send a much needed chill through the chip stock market.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Gordon Moore’s legacy will live on through new paths & incarnations

Sino Semicap Sanction Screws Snugged- SVB- Aftermath more important than event

Report from SPIE- EUV’s next 15 years- AMAT “Sculpta” braggadocio rollout


Design IP Sales Grew 20.2% in 2022 after 19.4% in 2021 and 16.7% in 2020!
by Eric Esteve on 04-21-2023 at 6:00 am

Design IP revenues reached $6.67B in 2022, up from $5.56B in 2021: 20.2% growth after 19.4% in 2021 and 16.7% in 2020. IPnest released the “Design IP Report” in April 2023, ranking IP vendors by category (CPU, DSP, GPU & ISP, Wired Interface, SRAM Memory Compiler, Flash Memory Compiler, Library and I/O, AMS, Wireless Interface, Infrastructure and Misc. Digital) and by nature (License and Royalty).

The main trends shaping Design IP in 2022 are very positive for most of the IP vendors, especially for four of the Top 5: ARM, Synopsys, Imagination and Alphawave all grew by more than the market, at 24.5%, 22.1%, 23.1% and 94.7% respectively. Rambus and Alphawave benefit from their recent IP vendor acquisitions (PLDA, AnalogX and Hardent for the former, OpenFive for the latter), but their organic growth was already strong. In summary, the Top 10 IP vendors grew by 24.6% while all the others grew by 5.3%; this can be seen as an effect of consolidation, as a top vendor will win proportionally more designs than the “others.”

The growth of Synopsys, Alphawave and Rambus confirms again in 2022 the importance of the wired interface IP category (26.8% growth), aligned with data-centric applications: hyperscalers, datacenter, networking and AI. The good performance of ARM and IMG shows the comeback of the smartphone industry and the emergence of automotive as a new growth vector for Design IP.

Looking at the 2016-2022 IP market evolution brings interesting information about the main trends. The global IP market grew by 94.8%, while the Top 3 vendors saw unequal growth: #1 ARM grew by 66.5%, #2 Synopsys by 194%, and #3 Cadence by 203%. Market share information is even more significant: ARM moved from 48.1% in 2016 to 41% in 2022, while Synopsys rose from 13.1% to 22% and Cadence from 3.4% to 5.4%.

This can be synthesized by comparing the 2016-2022 CAGR:

  • ARM CAGR 8.9%
  • Synopsys CAGR 19.7%
  • Cadence CAGR 20.3%

Over the same period, the global IP market saw a CAGR of 11.8%.
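
Those figures are easy to verify from the stated 2016-2022 growth totals (a minimal sketch; CAGR = (end/start)^(1/years) - 1):

```python
def cagr(start, end, years):
    # Compound annual growth rate between two endpoint values
    return (end / start) ** (1 / years) - 1

# Reproducing the figures above from the stated 2016->2022 growth:
print(f"ARM:      {cagr(1.0, 1.665, 6):.1%}")  # +66.5% total ->  8.9%/yr
print(f"Synopsys: {cagr(1.0, 2.94,  6):.1%}")  # +194%  total -> 19.7%/yr
print(f"Cadence:  {cagr(1.0, 3.03,  6):.1%}")  # +203%  total -> 20.3%/yr
print(f"Market:   {cagr(1.0, 1.948, 6):.1%}")  # +94.8% total -> 11.8%/yr
```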

The strong takeaway is that the Design IP market has enjoyed an 11.8% CAGR for 2016-2022! Zooming in on the categories (Processor, Wired Interface, Physical, Digital), the market share evolution from 2017 to 2022 clearly shows the interface category growing (18% to 24.9%) at the expense of processor (CPU, DSP, GPU), which declined from 57.6% to 49.5%, while Physical and Digital are almost stable, as can be seen in the picture above.

Being very synthetic, the Design IP market splits into quarters:

  • Processor (CPU, DSP, GPU) weighs one half (two quarters)
  • Wired Interface weighs one quarter
  • Digital and Physical together weigh one quarter

IPnest has also calculated IP vendors ranking by License and royalty IP revenues:

Synopsys is the clear #1 by IP license revenues, with 29.7% market share in 2022, while ARM is #2 with 25.2%. Alphawave, created in 2017, is now ranked #4 just behind Cadence, showing how essential high-performance SerDes IP is for modern data-centric applications (Alphawave is the leader for PAM4 112G SerDes, available in 7nm, 5nm and 3nm from various foundries: TSMC, Samsung and Intel-IFS). The analysis written last year stays valid!

The 2022 ranking by royalty shows ARM’s dominance with 63.8% market share, not a surprise considering their installed customer base and their strong position in the smartphone industry. Imagination Technologies’ (IMG) position at #3 is consistent. Interestingly, both companies are expected to IPO in 2023…

With 20% YoY growth in 2021 and 2022, the Design IP industry is simply confirming how incredibly healthy this niche within the semiconductor market is, and the 2016-2022 CAGR of 11.8% is a good metric!

IPnest has also run a 5-year forecast (not yet published) for Design IP, passing $10B in 2025, and predicts a future CAGR (2021 to 2026) of 16.7%. Optimistic? The 2022 year-over-year growth is in line with this prediction…

Eric Esteve from IPnest

To buy this report, or just to discuss IP, contact Eric Esteve (eric.esteve@ip-nest.com)

Also Read:

Interface IP in 2021: $1.3B, 22% growth and $3B in 2026

Stop-For-Top IP Model to Replace One-Stop-Shop by 2025

Design IP Sales Grew 19.4% in 2021, confirm 2016-2021 CAGR of 9.8%

Chiplet: Are You Ready For Next Semiconductor Revolution?


S2C Helps Client to Achieve High-Performance Secure GPU Chip Verification
by Daniel Nenni on 04-20-2023 at 6:00 am

S2C, a leading provider of FPGA-based prototyping solutions, has helped a client achieve high-performance secure GPU chip verification. With the help of S2C’s Prodigy prototyping solution, the client was able to start software development and hardware-software co-design early, leading to accelerated time-to-market for their entire chip product and enabling them to seize early opportunities in the market.

A Graphics Processing Unit, or GPU chip for short, is a specialized processor designed to handle image and video data. Its primary purpose is to perform many parallel computations to process massive data more efficiently. Due to this characteristic, GPUs are highly suitable for graphics rendering, video encoding, and decoding tasks. In recent years, GPUs have also found widespread use in areas like deep learning, scientific computing, and cryptography because of their significant speed improvements in computation.

One of the major challenges for GPU vendors is the intense competition in this field. S2C’s customer was a new entrant to the GPU market, and they tackled this challenge by applying a shift-left strategy to accelerate the GPU chip’s time-to-market and seize early market opportunities. The client’s design cycle spanned 18 months from concept design, development, verification, and tape-out delivery to chip sample bring-up, and included their own GPU core IP, processor architecture optimization, compilers, verification models, software drivers, and system compatibility. The entire process was completed in one pass.

S2C’s prototyping solution included a diverse selection of daughter cards, such as PCIe, HDMI, DDR, and GPIO, that made it incredibly convenient to build a GPU verification system. The client opted for S2C’s Prodigy Virtex UltraScale VU440 Logic Systems, consisting of one Quad platform and one Dual platform, and achieved an impressive FPGA utilization rate of 71%.

As the client stated, “S2C’s Prodigy prototyping solution enabled us to achieve high-performance secure GPU chip verification, which helped us to gain an advantage in the competitive GPU market. With S2C’s support, we were able to accelerate our time-to-market, which was critical to our success.”

For its new IC design verification, the IC verification team plans to use the Prodigy S7-19P, which is based on the Xilinx Virtex UltraScale+ VU19P FPGA. The S7-19P prototyping platform provides 1.6 times more logic and delivers a 30% performance boost compared to its predecessor, making it an excellent choice for hyper-scale design verification.

S2C is a leading global supplier of FPGA prototyping solutions for today’s innovative SoC and ASIC designs, now with the second-largest share of the global prototyping market. S2C has been successfully delivering rapid SoC prototyping solutions since 2003. With over 600 customers, including 6 of the world’s top 15 semiconductor companies, its world-class engineering team and customer-centric sales team are experts at addressing customers’ SoC and ASIC verification needs. S2C has offices and sales representatives in the US, Europe, mainland China, Hong Kong, Korea and Japan.

Also Read:

Ask Not How FPGA Prototyping Differs From Emulation – Ask How FPGA Prototyping and Emulation Can Benefit You

A faster prototyping device-under-test connection

Stand-Out Veteran Provider of FPGA Prototyping Solutions at #59DAC

Multi-FPGA Prototyping Software – Never Enough of a Good Thing