
Lithography Resolution Limits: The Point Spread Function
by Fred Chen on 03-21-2023 at 6:00 am


The point spread function is the basic metric defining the resolution of an optical system [1]. A focused spot has a diameter defined by the Airy disk [2], which is itself part of the diffraction pattern, based on the Bessel function of the 1st kind and 1st order, J1(x), with x being a normalized radial coordinate defined by pi*radius/(0.5 wavelength/NA), where NA is the numerical aperture of the system. The intensity is proportional to the square of 2J1(x)/x. This intensity profile is the point spread function, since it is the smallest defined pattern that can be focused by a lens (or mirror). Its full-width at half-maximum (FWHM) is closely estimated by 0.5 wavelength/NA. DUV patterns are often much smaller than this size (down to ~0.3 wavelength/NA) and therefore must be printed as dense arrays using phase-shifting masks [3].

In the context of EUV lithography, there are 0.33 NA systems and 0.55 NA systems, the latter with a 20% central obscuration. The obscuration requires a modification of the point spread function: the amplitude corresponding to the obscured portion of the pupil must be subtracted. For a 20% central obscuration, this means subtracting 0.4 J1(0.2x)/x, i.e., the intensity is proportional to the square of [2J1(x)/x – 0.4 J1(0.2x)/x]. The point spread functions for 0.33 NA and 0.55 NA EUV systems are plotted below.

Point spread functions for 0.33 NA and 0.55 NA EUV systems

The 0.55 NA system has a narrower FWHM, ~12.5 nm vs ~21 nm for 0.33 NA. However, the larger NA goes out of focus faster for a given defocus distance, due to larger center-to-edge optical path differences [4]. Moreover, experimentally measured EUV point spread functions [5] showed much lower contrast than expected from a ~22 nm FWHM point spread function for a 13.5 nm wavelength, 0.3 NA system. This can be partly attributed to aberrations, but a significant contribution comes from relatively long-range effects specific to the resist, caused by the photoelectrons and secondary electrons resulting from EUV absorption [6].
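
For readers who want to check these numbers, here is a minimal Python sketch (my own illustration, not part of the original article) that evaluates the point spread functions exactly as defined above, with and without the 20% central obscuration, and numerically extracts the FWHM; the radial range and sampling are arbitrary choices.

```python
# Minimal sketch: Airy-type point spread functions as defined in the text.
# Assumes scipy is available; sampling range/step are arbitrary choices.
import numpy as np
from scipy.special import j1

WAVELENGTH = 13.5  # nm (EUV)

def psf(r_nm, na, obscuration=0.0):
    """Normalized PSF intensity at radius r_nm, with x = pi*r/(0.5*wavelength/NA)."""
    x = np.pi * r_nm / (0.5 * WAVELENGTH / na)
    x = np.where(x == 0, 1e-12, x)                 # avoid 0/0 at r = 0
    amp = 2 * j1(x) / x
    if obscuration:
        eps = obscuration
        amp = amp - eps**2 * 2 * j1(eps * x) / (eps * x)  # subtract obscured-pupil amplitude
    return (amp / (1.0 - obscuration**2)) ** 2     # normalize so the peak intensity is 1

def fwhm(na, obscuration=0.0):
    """Locate the full width at half maximum numerically."""
    r = np.linspace(0.0, 40.0, 40001)              # nm
    half_radius = r[np.argmax(psf(r, na, obscuration) < 0.5)]
    return 2 * half_radius

print(f"0.33 NA:               FWHM ~ {fwhm(0.33):.1f} nm")        # roughly 21 nm
print(f"0.55 NA, 20% obscured: FWHM ~ {fwhm(0.55, 0.2):.1f} nm")   # roughly 12.5 nm
```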

As indicated earlier, spot sizes smaller than the point spread function are possible only for dense pitches, with a lower pitch limit of 0.7 wavelength/NA. For random logic arrangements on interconnects, however, pitches have to be much larger, so line cuts, for example, are still limited by the point spread function. On current 0.33 NA EUV systems, the point spread function already covers the popularly targeted line pitches in the 28-36 nm range. Consequently, the edge placement budget from overlay and CD targeting, compounded by the spread of the secondary electrons [6,7], looks prohibitive. No wonder, then, that SALELE (Self-Aligned Litho-Etch-Litho-Etch) has become the default technique, even for EUV [8-11].

References

[1] https://en.wikipedia.org/wiki/Point_spread_function

[2] https://en.wikipedia.org/wiki/Airy_disk

[3] Y-T. Chen et al., Proc. SPIE 5853 (2005).

[4] A Simple Model for Sharpness in Digital Cameras – Defocus, https://www.strollswithmydog.com/a-simple-model-for-sharpness-in-digital-cameras-defocus/

[5] J. P. Cain, P. Naulleau, and C. Spanos, Proc. SPIE 5751 (2005).

[6] Y. Kandel et al., Proc. SPIE 10143, 101430B (2017).

[7] F. Chen, Secondary Electron Blur Randomness as the Origin of EUV Stochastic Defects, https://www.linkedin.com/pulse/secondary-electron-blur-randomness-origin-euv-stochastic-chen

[8] F. Chen, SALELE Double Patterning for 7nm and 5nm Nodes, https://www.linkedin.com/pulse/salele-double-patterning-7nm-5nm-nodes-frederick-chen

[9] R. Venkatesan et al., Proc. SPIE 12292, 1229202 (2022).

[10] Q. Lin et al., Proc. SPIE 11327, 113270X (2020).

[11] Y. Drissi et al., “SALELE process from theory to fabrication,” Proc. SPIE 10962, 109620V (2019).

This article first appeared in LinkedIn Pulse: Lithography Resolution Limits: The Point Spread Function

Also Read:

Resolution vs. Die Size Tradeoff Due to EUV Pupil Rotation

Multiple Monopole Exposures: The Correct Way to Tame Aberrations in EUV Lithography?

Application-Specific Lithography: Sub-0.0013 um2 DRAM Storage Node Patterning


Checklist to Ensure Silicon Interposers Don’t Kill Your Design
by Dr. Lang Lin on 03-20-2023 at 10:00 am


Traditional methods of chip design and packaging are running out of steam to fulfill growing demands for lower power, faster data rates, and higher integration density. Designers across many industries – like 5G, AI/ML, autonomous vehicles, and high-performance computing – are striving to adopt 3D semiconductor technologies that promise to be the solution. The tremendous growth in 2.5D and 3D IC packaging technology has been driven by high-profile early adopters delivering high-bandwidth, low-latency products.


Benefits of 2.5D and 3D Technology

This trending technology meets the demand for enclosing all functionality in one sophisticated IC package, enabling engineers to meet aggressive high-speed and miniaturization goals. In 3D-IC packaging, dies are stacked vertically on top of each other (e.g. HBM), while 2.5D packaging places bare die (chiplets) next to each other. The chiplets are connected through a silicon interposer and through-silicon vias (TSVs). This makes for a much smaller footprint and eliminates bulky interconnects and packaging, which can significantly impede data rate and latency performance. Heterogeneous integration is another benefit of silicon interposers, enabling engineers to place memory and logic with different silicon technologies in the same package, reducing unnecessary delays and power consumption. Integrating different chips, each designed in its most appropriate technology node, provides better performance, lower cost, and improved time to market when compared to monolithic SoC designs on advanced technology nodes. Monolithic SoCs take longer to design and validate, contributing to increased cost and time to market.

The implementation of silicon interposers allows for more configurable system architectures, but it also poses additional multiphysics challenges like thermal expansion and electromagnetic interference, along with new design and production issues.

Challenges of 2.5D and 3D Design

Silicon interposers are a successful and booming advancement in IC packaging technology, and this technology will soon replace traditional methods of chip design. Combining different functional blocks and memory within the same package provides high speed and improved performance for advanced design technologies. But the new considerations that come with interposers impose unfamiliar challenges, and designers must understand the power integrity, thermal integrity, and signal integrity interactions between the chiplet dies, the interposer, and the package. System simulation becomes an integral factor in achieving the expected performance of the IC package.

Interposers act as a passive layer with a coefficient of thermal expansion that matches that of the chiplets, which explains the popularity of silicon for interposers. Nevertheless, this doesn’t eliminate the possibility of thermal hot spots and Joule heating problems within the design. Interposers are supported by placing them on an ordinary substrate with a different thermal expansion coefficient, which contributes to increased mechanical stress and interposer warpage. Designers should therefore be concerned about the reliability of the system, as this stress can easily crack some of the thousands of microbump connections.

Silicon interposers provide significantly denser I/O connectivity, allowing higher bandwidth and better use of die space. But as we know, nothing comes for free. Multiple IPs in the same package require multiple power sources, constituting a complex power distribution network (PDN) within the package itself. The PDN runs throughout the entire package and is always vulnerable to power noise, leading to power integrity problems. Analyzing the voltage distribution and current signature of every chip in an IC system with an interposer is important for ensuring power integrity. Routing considerable amounts of power through the vertical connections between elements creates more problems for power integrity. These connections include TSVs and C4 bumps, as well as tiny micro-bumps and hybrid bonding connections. Last but not least, many high-speed signals are routed among the chips and interposer, and these can easily fall victim to electromagnetic coupling and crosstalk. Electromagnetic signal integrity, including for high-speed digital signals, must be on your verification list when designing an IC package with an interposer. This technology is a cost-effective, high-density, and power-efficient technique, but it is still susceptible to EM interference and thermal, signal, and power integrity issues.

Figure 2: Block diagram of multiphysics analysis of a multi-die system

Power Integrity:  

Power is the most critical aspect of any IC package design. Everything around the package design is driven by the power consumed by the chips within the IC package. Every chip has a different power requirement, which in turn drives the requirements for the power delivery network. The PDN also has a critical role in maintaining the power integrity of the IC package by minimizing voltage drop (IR drop) and avoiding electromigration failures. The best way to achieve power integrity is to optimize the power delivery network by simulating the fluctuating current at each IC and the parasitics of the passive elements that make up the PDN. It becomes more complicated with an interposer, since the chips are connected through the interposer. Power and ground rails routed through the interposer impose new challenges when analyzing power integrity. But that is not the only issue: electromigration problems come hand in hand with power integrity problems. The current density in each piece of geometry must be modeled and should stay below the maximum limit supplied by the foundry. Joule heating of the microbumps and wires has a significant impact on the maximum allowable current density, which implies a degree of thermal simulation for maximum accuracy.
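
As a purely illustrative aside (not an Ansys flow, and all numbers below are invented placeholders), a back-of-the-envelope static IR drop estimate for one chiplet supply path shows what this analysis is trying to bound: the drop across the series parasitics of package, bumps, and interposer routing must stay within a small fraction of the supply voltage.

```python
# Toy static IR-drop estimate for one chiplet supply path.
# All resistance/current values are hypothetical placeholders, not foundry or Ansys data.
SUPPLY_V = 0.75            # nominal supply voltage (V), assumed
DROP_BUDGET = 0.05         # common rule of thumb: keep static IR drop under ~5% of supply

# Effective series resistances along the power delivery path (ohms), assumed values
path_resistances = {
    "package planes":            0.0008,
    "C4 bumps (in parallel)":    0.0005,
    "interposer TSVs + routing": 0.0012,
    "microbumps (in parallel)":  0.0010,
}

chiplet_current = 15.0     # average current drawn by the chiplet (A), assumed

drop = chiplet_current * sum(path_resistances.values())
print(f"Estimated static IR drop: {1000 * drop:.1f} mV "
      f"({100 * drop / SUPPLY_V:.1f}% of the {SUPPLY_V} V supply)")
if drop > DROP_BUDGET * SUPPLY_V:
    print("Over budget: widen rails, add TSVs/bumps, or rebalance the floorplan")
```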

Ansys RedHawk-SC and Totem can extract the most accurate chip power model to understand the power behavior of chips in a full-system context. If you don’t yet have the chip layout at the prototyping stage, create an estimated chip power model (CPM) using the Ansys RedHawk tools to anticipate the physics early in the design. Thermal and power analysis shouldn’t be just a signoff step but an ongoing process, because making last-minute changes to the design might not be feasible.

Figure 3: Power Integrity Analysis using Ansys RedHawk-SC Electrothermal

Thermal Integrity:  It is extremely important to understand the thermal distribution in the interposer design to regulate thermal integrity. Power and signal integrity alone might not save your design from thermal runaway or local thermal failure. With multiple chips close together in a 2.5D package, the hotter chiplets might heat up nearby chiplets and change their power profile, possibly leading to yet more heating. Heat is dissipated from the chips to the interposer and further through TSVs to the substrate, which heats up the entire package. To avoid stress and warpage due to differential thermal expansion, designers should understand the thermal profile of every chip and interposer in the design. These maps give insight into the thermal distribution across the IC package, allowing the designer to determine thermal coupling among chips through the interposer.

Power dissipation is, of course, driven by activity. Ansys PowerArtist is an RTL power analysis tool that is integrated with RedHawk-SC Electrothermal to generate the most accurate chip thermal models (CTMs) based on ultra-long, realistic activity vectors produced by hardware emulators. By assembling the entire 3D-IC system, including chip CTMs, interposer, package, and heat sink, Ansys RedHawk-SC Electrothermal gives the designer an accurate thermal distribution and an understanding of the thermal coupling between chiplets and the interposer. Monitoring temperature gradients needs to start early in the IC package design; the sooner the better. The complete front-to-back flow gives clear insight into the thermal distribution over time for the entire package, making your design more reliable.

Figure 4: Different parameter extractions for Silicon Interposer Design

Signal Integrity:  In the IC package, high-speed signals are transmitted from one die to another through the interposer at very high bit rates. The signals are closely spaced and also relatively long (compared to on-chip routing), which makes them vulnerable to electromagnetic interference (EMI) and coupling. Even digital designers need to follow high-speed design guidelines to maintain signal integrity. The only way to control EMC/EMI is with fast, high-capacity electromagnetic solvers that extract a coupled electromagnetic model including the chiplets, the signal routing through the interposer, and system coupling effects. With Ansys RaptorH and HFSS it is easy to analyze all these elements in a single, large model and meet the desired goal of a clean eye diagram. HFSS and Ansys Q3D can also be used to extract RLC parasitics, provide visualization of the electromagnetic fields, and scale up to system-level extraction beyond the interposer.

Learn more about challenges and solutions for 3D-IC and interposers.

Semiconductor Design and Simulation Software | Ansys

Ansys RedHawk-SC Electrothermal Datasheet

Thermal Integrity Challenges and Solutions of Silicon Interposer Design | Ansys

Also Read:

HFSS Leads the Way with Exponential Innovation

DesignCon 2023 Panel Photonics future: the vision, the challenge, and the path to infinity & beyond!

Exponential Innovation: HFSS


Samtec Lights Up MemCon
by Mike Gianfagna on 03-20-2023 at 6:00 am


Every conference and trade show that Samtec attends is better for the experience. Samtec has a way of bringing exciting and innovative demos and technical presentations to any event they attend. I personally have fond memories of exhibiting next to Samtec at an early AI Hardware Summit at the Computer History Museum in Mountain View, CA. At the time I was at eSilicon, and we had developed an eye-popping long-reach communication demo with our SerDes and Samtec’s cables. We ran that demo with a cable that connected our two booths – very long reach in action. I don’t think I’ve ever seen a demo span more than one trade show booth since then. The subject of this post is Samtec’s attendance at MemCon, which is also being held at the Computer History Museum. Samtec overall, and Matt Burns, technical marketing manager, in particular, will be working their magic on March 28 and 29 this year. Let’s see how Samtec lights up MemCon.

MemCon, Then and Now

Thanks to Paul McLellan and his Breakfast Bytes blog, I was able to get some early history of MemCon. Those who have been at the semiconductor and EDA game for a while will remember Denali, an early IP company that focused on memory models. Denali decided to get some more visibility for the company and its offerings, so around 2001 they held the first MemCon at the Hyatt Hotel in the Bay Area. So, this was the birth of the show. The historians among us will also fondly remember the Denali Party, probably the best social event ever held at the Design Automation Conference.

Today, MemCon is managed by Kisaco Research. I have some personal experience with this organization. While at eSilicon, we were one of the early participants at the previously mentioned AI Hardware Summit. Under their leadership, Kisaco Research grew this event from a humble and small beginning to one of the premier events in AI for the industry. All this from a location in London. Their reach is substantial, and they are working their magic for MemCon as well.

Expected audience at MemCon

Memories have become a critical enabling technology for many forward-looking applications. Some of the areas of focus for MemCon include AI/ML, HPC, datacenter and genomics. The list is actually much longer. The expected audience at MemCon covers a lot of ground. This is clearly an important conference – registration information is coming.

Samtec at MemCon

At its core, Samtec provides high-performance interconnect solutions for customers and partners. Samtec’s high-speed board-to-board, high-speed cables, mid-board and panel optics, precision RF, flexible stacking, and micro/rugged components route data from a bare die to an interface 100 meters away, and all interconnect points in between. For the memory and storage sector, niche applications require niche interconnect solutions, and that is Samtec’s specialty.

You can learn more about what Samtec does on their SemiWiki page here.

If you’re headed to MemCon, definitely stop by the Samtec booth. You will find talented, engaging staff and impressive demonstrations. Samtec’s own Matt Burns will also be presenting an informative talk on Wednesday March 29 at MemCon:

2:10 PM – 2:35 PM

How Flexible, Scalable High-Performance Interconnect Extends the Reach of Next Generation Memory Architectures

So, this is how Samtec lights up MemCon. If you haven’t registered yet for the show, you can register here. Use SAMTECGOLD15 at check-out to save 15%.


Podcast EP148: The Synopsys View of High-Performance Communication and the Role of Chiplets
by Daniel Nenni on 03-17-2023 at 10:00 am

Dan is joined by John Swanson, who is the HPC Controller & Datapath Product Line Manager in the Synopsys Solutions Group. John has worked in the development and deployment of verification, integration, and implementation tools, IP, standards, and methodologies used in IP-based design for over 25 years at Synopsys.

Dan explores the future of high-performance computing with John: what is required for success, and what challenges designers and applications face in getting to 1.6T Ethernet leveraging 224 GbE, including FEC, cabling, and standardization.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CTO Interview: Dr. Zakir Hussain Syed of Infinisim
by Daniel Nenni on 03-17-2023 at 6:00 am


Zakir Hussain is a co-founder of Infinisim and brings over 25 years of experience in the Electronic Design Automation industry. He was at Simplex Solutions, Inc. (acquired by Cadence) from its inception in 1995 through the end of 2000. He has published numerous papers on verification and simulation and has presented at many industry conferences. Zakir obtained his Master’s degree in Mechanical Engineering and a PhD in Electrical Engineering from Duke University.

What is the Infinisim backstory?
Infinisim, Inc is a privately funded EDA company founded by industry luminaries with over 50 years of combined expertise in the area of design and verification. Infinisim customers are leading edge semiconductor companies and foundries that are designing high-performance SoC, AI, CPU and GPU chips.

Infinisim has helped customers achieve unprecedented levels of confidence in design robustness prior to tape-out. Customers have been able to eliminate silicon re-spins, reduce chip design schedules, and dramatically improve product quality and production yield.

What market segments are you targeting?
The simple answer is leading edge complex SoCs, multi-gigahertz CPU, GPU, and domain-specific chips (bespoke silicon) with high core counts, but we look at three distinct capabilities that our tools provide:

SoC Clock Analysis
Our leading edge clock analysis solution helps customers accurately verify timing, detect failures, and optimize performance of the clock.

Clock Jitter Analysis
Our specialized jitter analytics solution helps customers accurately compute power supply induced jitter of clock domains.

Clock Aging Analysis
Our Clock Aging Analysis helps customers accurately determine the operational lifetime of power-sensitive clocks.

For SoC clock analysis, the leading edge mobile SoC market is a great example where power consumption is critical. For aging, automotive and other mission-critical markets (especially at 6nm and below) where the product lifespan is 5 or more years. For jitter, high-frequency and high-performance chips like CPUs, GPUs, and large bespoke silicon with lots of cores that require strict accuracy.

What keeps your customers up at night? 
Tape-out confidence keeps everyone up at night! The cost of tape-out is very high below 7nm, so errors cannot slip by. Examples include timing-related rail-to-rail failures and duty cycle distortion (DCD), as well as jitter and aging issues.

Customers are always looking for a competitive advantage over what everyone else is doing. Tightening margins on performance, power, and area is a risky proposition without a sign-off proven clock analysis tool. Clocks are critical; some designs are all about the clocks, and guard-banding your way out of complexity hurts the competitive positioning of your product, especially if your competitor is already working with Infinisim.

What makes your product unique?
The founding Infinisim team had many years of experience with IR drop analysis on very large power grids. FastSPICE did not work, since you had to choose between accuracy and speed, so a custom SPICE engine was developed. Rather than approach the general SPICE market, the Infinisim tool set and methodology were developed specifically for clock tree analysis. This is a critical difference between Infinisim and general-purpose EDA tools.

Clock is a unique problem and requires a unique tool. Infinisim has a special-purpose simulator designed specifically for SoC clock analysis, clock jitter analysis, and clock aging analysis. Speed AND capacity AND full SPICE accuracy are the focus, so there is no trade-off as with traditional simulators.

What’s next for the company?
Three things:

1) We are working closely with customers on increasing accuracy, speed, and capacity for the new FinFET nodes. Infinisim is a sign-off tool from 14nm down to 5nm, and 3nm is in process. The complexity of chips is increasing, so this will be a never-ending challenge for clocks.

2) We are working closely with customers and foundries on GAA processes, which will require a new set of capabilities. FinFET models are public domain and very accessible. GAA models are proprietary and will be tied closely to foundries rather than EDA tool companies. GAA models are much more complicated, with more equations due to changing conductance, capacitance, and more nonlinear effects at the device level.

3) We are collaborating with customers and cloud providers on a cloud-based Infinisim solution.

How do customers engage with Infinisim?
Customers generally approach us with a clock problem. Since Infinisim is a single solution, evaluations are fairly easy, using a targeted approach on customer circuits. For more information or a customer engagement you can reach us at http://infinisim.com/.

Also Read:

Clock Aging Issues at Sub-10nm Nodes

Analyzing Clocks at 7nm and Smaller Nodes

Methodology to Minimize the Impact of Duty Cycle Distortion in Clock Distribution Networks


Must-attend webinar event: How better collaboration can improve your yield
by Daniel Nenni on 03-16-2023 at 10:00 am


In today’s rapidly evolving semiconductor industry, the demand for high-quality and reliable semiconductors at a reasonable cost is increasing. This is why world-class yield management has become more and more important for fabless semiconductor companies and IDMs.

In a must-attend event, yieldHUB will be hosting a webinar in partnership with SemiWiki that will cover why good collaboration is essential in yield management. This will be relevant for both startups and large-scale fabless semiconductor companies and IDMs.

yieldHUB’s experts will provide valuable insights into how your team can work together seamlessly by sharing data and insights to optimize yield and improve production processes.

Teams should be able to share data, collaborate on projects, and make data-driven decisions quickly. Collaboration is key in technology development because it enables individuals to work together effectively, combining their skills, knowledge, and resources to create innovative, high-quality, and high-yielding products.

One of the challenges that yieldHUB’s experts will talk about is that expertise is often distributed across the world within large-scale companies. When communication is disrupted, yield can suffer.

Register Here

The webinar is scheduled for March 28, 10am (PST), and it will cover the following topics:

  1. What is collaboration in the context of yield management?
  2. Best collaboration practices in yield management.
  3. How about security?
  4. Why Yield Management Systems (YMS) should evolve.
  5. Examples of companies that have used collaboration to their advantage.

The webinar is suitable for anyone involved in the yield management process within the semiconductor industry.

Attendees will have the chance to submit questions prior to the webinar and get to know two of yieldHUB’s leaders during this presentation. Register now and take the first step towards improving your collaboration efforts.

Register Here

About yieldHUB
yieldHUB was founded by Limerick resident John O’Donnell, who studied electrical engineering at UCC and spent more than 17 years at a leading semiconductor company before starting yieldHUB. Fast forward to today and he’s running a company with a platform that’s used by thousands of product and test engineers around the world.

yieldHUB is an all-in-one software platform that was designed by engineers for engineers. It helps semiconductor companies by cleansing data at scale and producing insights that can detect flaws in wafers (which are then cut into tiny microchips) during the manufacturing process.

 Also Read:

It’s Always About the Yield

The Six Signs That You Need a Yield Management System

yieldHUB – Helping Semiconductor Companies be More Competitive


Accellera Update at DVCon 2023
by Bernard Murphy on 03-16-2023 at 6:00 am


I have a new-found respect for Lu Dai. He is a senior director of engineering at Qualcomm, with valuable insight into the ground realities of verification in a big semiconductor company. He is on the board of directors at RISC-V International and is chairman of the board of directors at Accellera, both giving him a top-down view of industry priorities around standards. Good setup for a talk with Lu at DVCon’23, to get an update on Accellera progress over the last year. The executive summary: work on a CDC standard is moving fast, there are some updates to IP-XACT (IEEE returning the standard to Accellera for update), IPSA (the security standard) is now moving towards IEEE standardization, and safety and UVM/AMS are still underway.

Lu also talked a little about Accellera/IEEE collaboration. Collaboration is valuable because IEEE standards are long cycle (5-10 years) and ultimately definitive in the industry, whereas Accellera can iterate faster to a 90% convergence in a smaller group, leaving the last 10% for IEEE cleanup. Obviously valuable when a standard is first released but also in updates. On major updates IEEE often returns control to Accellera for spec definition/agreement. When ready, Accellera passes the baton back to IEEE and the Accellera working group folks join the IEEE working group for a smooth transition.

PSS

PSS is gaining significant traction for system level testing, witness applications from most tool vendors. Active standard development is now on the proposed 2.1 release. The big news here is that they are dropping support for C++. Apparently, the demand for C++, originally thought to be a good idea (maybe for SystemC?), just isn’t there. Demand for C support continues strong, however. Since this is a big change, the working group isn’t yet sure if they should rename the release 3.0. Still in debate.

There are other plans for 2.1/3.0, including more on constrained random control coverage. Lu didn’t want to share more than that. I bet as a verification guy he knows more, so probably still under wraps.

Functional safety and security standards

The objective of these standards is similar: to ensure interoperability between different vendor solutions, from IP level design up to SoC level design. And in the case of safety, to enhance propagation of constraints/requirements from OEM/Tier1 needs down to the design, and of constraints added in the design back up to the ultimate consumers of the functionality. (Perhaps that principle will also apply at some point to security standards, but I guess we need to walk before we can run.)

IPSA is underway to IEEE standardization as mentioned earlier. The functional safety standard is still in development. Lu told me that he expects a white paper update around the middle of the year, followed soon after by a draft standard.

CDC and IP-XACT

The goal for CDC is to standardize constraints and other meta-data between vendor platforms. Lu made the interesting point that all the vendor CDC products do a good job, but interoperability is a nightmare. That is important because no tool can do full chip CDC on the big designs. The obvious answer to the full chip need is hierarchical analysis, but IPs and subsystems come from multiple internal and external suppliers who don’t necessarily use the same CDC tools.

CDC products are mature and users have been complaining long enough that the working group apparently knows exactly what they have to do and have set an aggressive schedule for their first release. Lu expects this one to cycle fast. There might be some deficiencies in the first release, such as a lack of constructs for re-convergent paths, but the bulk of the constraints should be covered.

For IP-XACT, Lu expects most updates to be in reference models and documentation. In a quick scan through slides from the DVCon tutorial, I saw several improvements to register map and memory map definitions. I wouldn’t be surprised if this was also in part a response to divergences between vendor solutions. Or perhaps too many vendor extensions? I also saw support for structured ports for cleaner mapping from SystemVerilog for example.

UVM AMS

This standardization effort is a little more challenged. The standard depends on progress both in UVM and in SystemVerilog AMS extensions. For UVM, the working group has made pretty good progress. More challenging has been syncing with IEEE on AMS SystemVerilog language requirements. This appears to be an administrative rather than a technical problem. SystemVerilog, IEEE 1800, is an established standard, and IEEE updates such standards every 5 or 10 years. The working group AMS proposals for SystemVerilog were maybe a little too ambitious for IEEE deadlines, and a scale-down effort took long enough that it missed the window.

I’m sure no-one wants to wait another 5 years, yet vendors are unlikely to update their support until they know the standard is official. Lu tells me there are a number of ideas in discussion, including using the 1880.1 standard, originally intended for AMS but never used. We will just have to wait and see.

Membership and Recognition

Lu had an interesting update here on growing participation from Chinese companies. China has participated actively in standards like 4G and 5G, but EDA/semiconductor company participation in standards has not been a thing. Until this year.

Lu’s read is that Chinese companies take the long view. Embargos come and go but design must continue. Those companies will have to work within a standards-compliant ecosystem, so they feel the need to be active in understanding and helping define standards.

Huawei has been an associate member for a while. New associate additions in EDA include Univista and X Epic. A semiconductor associate addition is ZEKU, Oppo’s semiconductor subsidiary. If you’re not familiar with Oppo, their smartphones are very popular in India and Europe and are now starting to appear in the US.

Also of note, at this DVCon Accellera honored Stan Krolikoski by establishing an annual scholarship for EE/CS undergrads. Lu acknowledged that this has an additional benefit in promoting coursework on standards at the undergraduate level. Accellera also presented the Technical Excellence award posthumously to the late Phil Moorby of Verilog fame. Well deserved.

Lots of good work, more good stuff to anticipate!


Sino Semicap Sanction Screws Snugged- SVB- Aftermath more important than event
by Robert Maire on 03-15-2023 at 10:00 am


-Reports of further tightening of China SemiCap Restrictions
-Likely closing loopholes & pushing back technology line
-Dutch have joined, Japan will too- So far no Chinese reaction
-SVB is toast but repercussions may be far worse

Reports of tightening semiconductor sanctions on Friday

It was reported by Bloomberg on Friday that the Biden administration was further tightening restrictions on semiconductor equipment that can be sold to China.

It was also reported that the number of tools needing a license, which would likely not be granted, could double. And all of this could happen in a couple of short weeks, by April 1st (no joke!).

If we take this at face value, we could take the impact previously reported by companies and basically double it. It could potentially be even worse: if the rules push the technology barrier further back into past technology nodes, it is likely that more customers as well as more equipment types will be covered by the sanctions.

Closing loopholes and work arounds

One of the issues with the current sanctions is that there are some loopholes and work arounds that need to be stopped. It is not 100% clear that stopping EUV stops the Chinese at 14NM. With pitch splitting and multiple patterning and a number of unnatural acts and tricks, you can coax some limited, low-yielding wafers below prior limits.

This suggests that you have to push the limits back deeper into past technology nodes in order to reduce the ability to do a work around or other trick even if low yielding.

Our best guess is that this likely will push back a substantial portion of technology into the age of DUV and 193 immersion technology. On the lithography side, this is relatively clear but on metrology and dep and etch it will be a bit harder to define.

Concerns of re-labeling and altered specifications

One of our chief concerns about the sanctions is that much of the definition of what could and could not be sold rested with the equipment manufacturers.
Could an equipment manufacturer just re-label an existing tool or de-rate its true specifications in order to get an export license? Or better yet, just change the software to neuter the tool, with a software fix to re-enable it being easy to implement or sell later. Tesla sells cars with the same battery pack that are software-limited to reduced mileage but can be upgraded later through a software switch.

Metrology tools, which have a higher software component than dep, etch, or litho tools, have had many different software options and “upgrades” available that enhanced performance with little to no change in the hardware.

We were concerned a bit by Applied saying that it had $2.5B of impact but that it was working to reduce the impact to only $1.5B by “working” with the customer. How exactly does that work? Could tools have been de-rated, re-labeled, or just had their specs reduced?

Just drop the hammer and restrict China to 8 inch tools

In our view, the easiest, simplest, most foolproof way of limiting technology is to limit tools sold to 8 inch (200MM) tool sets. That immediately pushes China back to 193 dry litho tools, as ASML never made 8 inch immersion tools.
That would cause a relatively hard stop at 90NM, which would be hard to get around even with multiple patterning and pitch splitting.

It’s pretty easy to tell a 200MM tool from a 300MM tool, so it is harder to re-label or de-rate more advanced tools. Most of the fabs in China are 200MM anyway, and most consumer applications can use that generation of device, with smartphones and PCs being the exception. Older tools have been selling like hotcakes anyway, as noted by Applied in their just-reported quarter, where 50% of business was non-leading edge (maybe not 8 inch).

China hasn’t responded to the October sanctions so tighten the screws further

We think one of the reasons the administration is acting now is that there has been essentially no response from China to the October sanctions, so why not go ahead and tighten them further given that there seemed to be little chance or ability to respond.

It’s not like China has helped us out with the Ukraine/Russia issue, and it seems to be helping out Russia more, so why be nice. Of course it ratchets up Taiwan issues even further, but it’s not like that has been improving in any event.

Will third parties exit China?

We can only imagine that some of the tightening is aimed at companies like Samsung, or SK or even Intel that have operations in China. The newer sanctions may restrict even more the ability to sell into non-Chinese semiconductor operations located in China.

We would also imagine that the administration has to make sure we don’t see “straw” buyers of tools or cross-shipping from third countries. This could help push more fabs back to the US or maybe to India or Europe. These are all good things given the huge percentage of fabs and spend that have been concentrated in China over the last few years.

The Dutch have joined the blockade, Japan will follow suit

It was also announced last week that the Dutch have officially joined the blockade of China. Even though everyone instantly thinks of ASML, we would remind everyone that ASMI, the long-lost father of ASML, makes critical ALD tools that are used in many advanced and multiple patterning applications. Adding them to the blockade makes things even more difficult.

Japan has already been doing a “soft” blockade by not trying to replace American tools that are not shipping to China. In our view it’s only a short matter of time before they officially join the bandwagon. At that point it’s all over. There will not be a lot that could be done to get around the three top makers of semiconductor tools all banding together. It would take decades, even with blatant rip-offs, copying, and theft, before China could get even a fraction of the tools needed.

SVB- An old fashioned run on the bank in an internet app generation
The overnight implosion of Silicon Valley Bank overwhelmed the news about the new China sanctions, and for good reason. After reading through most of the information that has come out, it appears that it is not a heck of a lot more than an old-fashioned “run on the bank” or liquidity crisis. This is certainly not reassuring, nor is it meant to be; in fact, the probability of it happening to other banks is quite high, and we have already seen at least one other bank, Signature Bank, follow SVB into implosion.

Run on the bank

Banks obviously invest and lend out deposits such that only a small fraction of deposited cash is available at any one time for withdrawal and transfer. SVB, with $200B plus in assets, saw $42B in cash walk out the door in one day, Thursday (over 20% of deposits), such that they were negative $1B at end of day. There aren’t many banks today that could lose 20%+ of their assets in a single day, and it’s amazing that SVB actually did.

This was essentially all depositor panic as the bank couldn’t liquidate assets fast enough to keep up and was forced to try to fire sale and sell stock as well to raise funds.

This was not, as some politicians said, because it was a tech bank or some sort of tech conspiracy. It was not like the S&L crisis of years ago, where investments went bad and the banks simply did not have enough underlying money due to mismanagement. In the S&L crisis, depositors only got back 50 cents on the dollar of non-FDIC money. It is expected that SVB depositors will get 100% because the assets are actually there and worth something when properly, slowly liquidated.

The Fed said on Sunday that everyone will get their money.

We would lay more of the blame on the velocity and “twitchiness” of money and banking apps, further amplified by lightning-fast VC money and tech investors/depositors under pressure.

To unpack that statement we would first point to the fact that with internet apps and access we can move enormous amounts of money easily and without any friction for little more than a whim. With my phone I can move substantial money between multiple banks and brokerage firms without thinking. I don’t have to get in my car and drive down to the bank and wait in line. So the ability for a bank to lose 20% or more of deposits in one day has been enabled by technology ease.

Banks have been paying half a percent when I can and did move my money to get 4% or better elsewhere and it just took me a few clicks.

Secondarily, people have the attention span of a ferret on speed, as well as the associated overly rapid reaction, whether to a real or perceived threat. When rumors of SVB started, it was transfer my money first, ask questions later.

There apparently were social media posts in the VC community about SVB issues. In addition Peter Thiel withdrew all his money just before the collapse. We are sure word of all this ran through the VC and tech community in the valley way faster than a wildfire and at internet speeds. This community runs at light speed anyway and obviously has the ability to move money at light speed. Tech has been both under pressure and concerned a lot about money of late which was further tinder for the wildfire and tsunami of money flow.

Basically some sparks started a hyper-speed chain reaction in an already stressed, twitchy, tech community that reacted too quickly for the bank to respond.

The result is SVB – “Silicon” Valley Bank is dead. SVB certainly did take risks and was not as diversified as other banks, but that was not the root cause of the issue. Nor was it simple failed investments. They are not 100% clean either, with rumors of last-minute bonuses and insider stock sales.

SVB will leave a gaping hole in the valley. They did deals others wouldn’t touch. They earned a lot of fierce loyalty from tech companies that remembered who helped them when they were struggling. It is the loss of an icon, and the name “Silicon” likely has special resonance for semiconductor-related companies who started and still live in the valley. It’s a knife in the heart of the tech industry.

But it could easily happen to other banks. Could Chase or Citibank tolerate an exodus of 20% of their depositors in one day? Chase has $3.6T in assets. Could they even come close to having $720B in assets fly out of their servers in the blink of an eye in a day? I very much doubt it. The fiberoptic cables linking them to other recipient banks would melt.

There are no laws or restrictions in existence today that would prevent a repeat of SVB and would prevent a lightspeed fueled, social media catalyzed run on a bank in an exposed sector.

SVB’s bigger issue is collateral damage

SVB is dead and soon to be buried. The FDIC will clean up the mess and bury the body and everybody will get on with business. But in the meantime there will be a parade of companies who are exposed to SVB who will put out press releases of how they are or aren’t impacted. Starting with half a billion at Roku. There will without doubt be a lot of chip and chip equipment companies exposed. We will probably live through a week of press releases and public assurances and telling people to remain calm.

It certainly just exposes how vulnerable things are.

The stocks

The new China sanctions could double the impact of the loss of China sales and should weigh on the stocks, if anyone can get past the SVB news.

It’s obviously highly negative for the entire semiconductor equipment group.

We have been saying for some time now that we were not at the bottom and there was still downside and this is yet another example of it.
And don’t forget….memory still sucks….

Add to the new China sanctions the SVB issue, which luckily happened in front of a weekend, and you are setting up the upcoming week to look ugly and volatile despite all the well-meaning assurances.

A couple more Signature Banks or Rokus and it could get a lot uglier.
We won’t know the full extent of either the new China sanctions or the SVB fallout for some time. SVB will likely resolve more quickly as the Fed has to quell panic. The new China sanctions will take some time to be announced, then disseminated and analyzed. It’s likely we won’t get an idea about the new impact on semiconductor equipment companies until they start to announce Q1 results in April.

As we have said for quite a while now, we remain highly cautious/negative on the group as a whole and feel that much of this news may cause the recent rally to reverse or at the very least slow. We had suggested that the rally was a dead cat bounce or false bottom and this is likely the evidence that supports that.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Report from SPIE- EUV’s next 15 years- AMAT “Sculpta” braggadocio rollout

AMAT- Flat is better than down-Trailing tool strength offsets memory- backlog up

KLAC- Weak Guide-2023 will “drift down”-Not just memory weak, China & logic too

Hynix historic loss confirms memory meltdown-getting worse – AMD a bright spot


Scaling the RISC-V Verification Stack
by Bernard Murphy on 03-15-2023 at 6:00 am


The RISC-V open ISA premise was clearly a good bet. It’s taking off everywhere; however, verification is still a challenge. As an alternative to Arm, the architecture and functionality from multiple IP providers look very competitive, but how do RISC-V providers and users ensure the same level of confidence we have in Arm? Arm runs 10^15 cycles of verification per core, with years of experience baked into core and system level regression suites. Equally problematic is verifying the custom instructions you can add in RISC-V. How can a core or system builder measure up to the Arm level of confidence? A part of the answer, according to Breker, is much higher levels of automation. Makes sense, but what can be automated? Breker starts with a verification stack, with layers from early liveness testing in IPs all the way up to system level performance and power profiling.

Core Verification – Part I

Maybe you don’t think you need help testing a RISC-V core, but if you’re just starting out with this architecture, or you’d like to accelerate test suite development (test suite generation is the biggest time hog in the 2022 Wilson survey on verification), or you’d just like to add an independently generated suite of tests to make sure you covered all the bases, Breker’s FASTApps might be worth a look.

Remember how the Breker technology works. Building on one or more PSS (or UVM)-compliant test scenario models, the technology generates usage-centric graphs, then automatically builds a suite of test cases as paths traced through those graphs. These include stimulus, coverage models and expected results. Scenarios can be single-threaded or multi-threaded, even on a single core. The Apps are a layer over this fundamental test synthesis capability. These build important tests for load store integrity, random instruction testing, register-to-register hazards, conditionals and branches, exceptions, asynchronous interrupts, privilege level switching, core security, exception testing (memory protection and machine-code integrity), virtual memory/paging and core coherency.
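
To make the graph-and-paths idea concrete, here is a toy Python sketch (my own illustration, not Breker’s implementation and not PSS syntax) in which each synthesized test case is one path through a small directed graph of actions; the action names are invented.

```python
# Toy illustration of graph-based scenario synthesis: each test case is one
# root-to-leaf path through a directed graph of actions. Action names are hypothetical.
graph = {
    "setup":            ["load_store", "custom_mac", "irq_storm"],
    "load_store":       ["check_hazards"],
    "custom_mac":       ["check_hazards"],   # a custom instruction is just another node
    "irq_storm":        ["switch_privilege"],
    "check_hazards":    ["compare_results"],
    "switch_privilege": ["compare_results"],
    "compare_results":  [],                  # terminal node: results are checked here
}

def scenarios(node, path=()):
    """Depth-first enumeration of all paths from `node` to a terminal action."""
    path = path + (node,)
    if not graph[node]:
        yield path
        return
    for nxt in graph[node]:
        yield from scenarios(nxt, path)

for i, test_case in enumerate(scenarios("setup"), start=1):
    print(f"test {i}: " + " -> ".join(test_case))
```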

A noteworthy point here is that custom instructions added to the core become new nodes in these graphs. When you synthesize test suites, custom instructions are added naturally to test suites during scenario development. They will be covered as comprehensively as any other instruction, to the extent possible given the nature of those instructions.

Tests developed through the Breker technology are portable across simulation, emulation, and prototyping platforms and from IP verification to system level verification, maximizing value in the overall test plan. They even have a UVM handle for those allergic to PSS 😊.

SoC Verification

The same approach can be extended to system-level verification apps, here the upper 3 levels of the stack. Breker is already well-known for their dynamic coherency verification, a major consideration at the system level. To this they have added dynamic power management checking. Think of the state machine for a power domain controlling startup and shut down. That can be mapped to a graph, then graphs for each such domain can be squashed together, allowing test synthesis to explore all kinds of interesting scenarios across power switching.
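
As a hedged sketch of what “squashing” per-domain graphs together can mean, the toy code below forms the cross product of two hypothetical power-domain state machines and lists the combined transitions a test generator could then traverse; the domains, states, and interleaving rule are all invented for illustration.

```python
# Toy cross-product of two power-domain state machines.
# Domains, states, and transitions are invented for illustration only.
from itertools import product

cpu_domain = {"off": ["boot"], "boot": ["on"], "on": ["retention", "off"], "retention": ["on"]}
gpu_domain = {"off": ["on"], "on": ["off"]}

# Combined state space: every (cpu_state, gpu_state) pair.
combined_states = list(product(cpu_domain, gpu_domain))

# Combined transitions: step one domain at a time (simple interleaving assumption).
combined_edges = []
for cpu_s, gpu_s in combined_states:
    for nxt in cpu_domain[cpu_s]:
        combined_edges.append(((cpu_s, gpu_s), (nxt, gpu_s)))
    for nxt in gpu_domain[gpu_s]:
        combined_edges.append(((cpu_s, gpu_s), (cpu_s, nxt)))

print(f"{len(combined_states)} combined states, {len(combined_edges)} transitions to explore")
for src, dst in combined_edges[:4]:
    print(src, "->", dst)
```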

For security verification, scenarios can be defined to test access permissions to different parts of the memory map. At a more detailed level, test suites can cover system level interrupts, typically even more layered, complex, and asynchronous than at the IP level. More possibilities: tests for atomic instructions, in the presence of interrupts for example, system memory virtualization and paging tests, and so on.

What about performance bottlenecks, in the network, in interrupt handling, in external memory accesses, in all the places performance can be dragged down? The best way to attack this problem is by running a lot of synthetic tests concurrently. Like all those tests Breker built for you. That’s a great way to increase confidence in your coverage.

Core Verification – Part II

Core developers know how to deliver coverage for the first 4 levels in the stack, but how do they test system-level correctness, the upper 3 levels? Improving coverage here is just as important. Arm especially has excelled at delivering high confidence for integration integrity. If your RISC-V is being developed in parallel with a system application and use cases, obviously that system will be at least one of your platforms. If you don’t yet have a system target or you want to extend system testing further, you might consider the OpenPiton framework from Princeton. This is a framework to build many-core platforms, offering an excellent stress test for RISC-V system verification.

Running system integrity tests against a core isn’t overkill. I attended a talk recently touching on issues found in system software. Software that has been proven to work correctly on a virtual model of hardware often uncovers bugs when run on an emulated model. A significant number of those bugs are attributable to spec ambiguity, where the virtual model developer and the hardware developer made seemingly harmless yet different decisions.  Difficult to point a finger at who was at fault, but either way expensive disconnects emerge late in design. The Breker solution also allows firmware to be executed early in the verification process, on designs where the processor has not yet been incorporated. You might not catch all these problems in the upper 3 layers of the verification stack, but robust system testing will catch more than you would otherwise.

Worth it?

I really like the approach. No verification technology can do more than dent the infinite verification task, but some can do it more intelligently than others, especially at the system level. Breker provides an intuitively reasonable way to scale the RISC-V verification stack with meaningful coverage. Some blue-chip and smaller customers of Breker also seem to think so. Among customers willing to be cited, Breker mentions both the GM of the SiFive IP business unit and the CEO of Nuclei technology (a RISC-V IP and solution provider). You can learn more about Breker FASTApps HERE.


JESD204D: Expert insights into what we Expect and how to Prepare for the upcoming Standard
by Daniel Nenni on 03-14-2023 at 10:00 am


Join our upcoming webinar on JESD204 and get insights into what we predict the upcoming JESD204D standard will contain, based on many years of experience working with JESD204.

Our expert speaker, Piotr Koziuk, has over a decade of experience with JESD204 standards and is a member of the JEDEC Standardization Committee. He will share his prediction of what the features of JESD204D could be and explain how the new architecture could potentially improve the Bit Error Rate (BER) through Reed-Solomon Forward Error Correction (RS-FEC) and new framing and data encoding patterns.

He will also briefly touch upon eXtreme Short Reach (XSR) for Die-to-Die or 2.5D Chip-to-Chip stacking applications, originating from the underlying 112G OIF SerDes specifications, and cover how the standard could potentially target various reach classes for PAM4 and NRZ encoding and the higher line rates.

Don’t wait to register for this must-attend webinar, happening on April 11th and 12th.

Register for April 11th – 11 AM Eastern, 8 AM PST – USA

Register for April 12th – 5 PM China Time, 6 PM Japan & Korea

About Comcores
Comcores is a key supplier of digital IP cores and solutions for digital subsystems, with a focus on Ethernet solutions, Wireless Fronthaul and C-RAN, and chip-to-chip interfaces. Comcores’ mission is to provide best-in-class, state-of-the-art, quality components and solutions to ASIC, FPGA, and system vendors, thereby drastically reducing their product cost, risk, and time to market. Our long-term background in building communication protocols, ASIC development, wireless networks, and digital radio systems has brought a solid foundation for understanding the complex requirements of modern communication tasks. This know-how is used to define and build state-of-the-art, high-quality products used in communication networks.

To learn more about this solution from Comcores, please contact us at sales@comcores.com or visit www.comcores.com

Also Read:

WEBINAR: O-RAN Fronthaul Transport Security using MACsec

WEBINAR: Unlock your Chips’ Full Data Transfer Potential with Interlaken

Bridging Analog and Digital worlds at high speed with the JESD204 serial interface