A Fast Checking Methodology for Power/Ground Shorts
by Tom Dillinger on 12-01-2020 at 10:00 am

The most vexing problem for physical implementation engineers is debugging errors due to power-ground “shorts”, as reported by the layout-versus-schematic (LVS) physical verification flow.  The number of polygons associated with each individual grid is large – an erroneous connection between grids results in a huge number of segments reported by LVS analysis.  These connection issues can arise from a number of sources – e.g., an error in coding the P/G grid segments or signal track locations in the setup file for a block-level place-and-route step, a miscalculation in a via array generation script, or an incorrect metal fill algorithm.

Mentor, a Siemens Business, recently published an app note describing a straightforward methodology for detecting P/G shorts, utilizing a couple of routines released with their Calibre tools.  The goal is to enable a quick turnaround method focused on P/G physical design issues.  The intent is to exercise this flow often – e.g., running at block level after an iteration of place-and-route, or running at the full-chip SoC level with (potentially) partial block data.  Although the error dataset for a P/G short can be large, fortunately, the errors tend to be systematic.  The fixes to the P&R grids, routing tracks, and/or the insertion algorithms are typically relatively easy to re-code.  By simplifying the task of P/G connectivity verification – “shifting left”, in common parlance – debugging the final LVS results with full-chip SoC data will be expedited.

Using Layer-Purpose Pairs to their Fullest

The Mentor flow utilizes a feature of the GDS-II and OASIS physical data formats: each object is represented with both its fabrication layer designation and an additional attribute, a “purpose” (or datatype).  Together, these are known as the (layer-purpose) pair for the object.  The figures below illustrate the proposed flow.

Figure 1.  Overview of P/G connectivity checking flow

At a high level, the physical design data is exported from a block’s P&R results in LEF/DEF format, and mapped to specific layer-purpose pairs for subsequent LVS connectivity checks.

From the LEF/DEF data, it is necessary to identify the power and ground nets from the SPECIALNETS section of the DEF file, as shown in the figure below.

Figure 2.  Example of the SPECIALNETS section in the DEF file from a P&R flow
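For illustration, here is a minimal Python sketch of how the P/G net names could be scraped from the SPECIALNETS section – this is not Mentor’s utility code, and the file name and regex are my own assumptions about a typical DEF:

    # Hypothetical helper: list the net names declared in the SPECIALNETS
    # section of a DEF file (each net entry begins with "- <netname>").
    import re

    def specialnets_nets(def_path):
        text = open(def_path).read()
        m = re.search(r"SPECIALNETS.*?END SPECIALNETS", text, re.DOTALL)
        if m is None:
            return []
        return re.findall(r"^\s*-\s+(\S+)", m.group(0), re.MULTILINE)

    print(specialnets_nets("block.def"))  # e.g. ['VDD', 'VSS']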

The Mentor utility code then assigns the (layer, purpose) pair attribute to all data on each metal level, as shown below.

Figure 3.  Mapping table for different classes of metal nets and pins, using layer-purpose pairs
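As a concrete illustration of such a mapping table (the GDS layer and datatype numbers below are invented for the example, not Mentor’s assignments), the assignment could be driven by something as simple as a Python dictionary:

    # Hypothetical (layer, purpose) assignment, in the spirit of Figure 3:
    # (metal level, net class) -> (GDS layer number, datatype)
    LAYER_MAP = {
        ("M1", "power"):  (31, 1),
        ("M1", "ground"): (31, 2),
        ("M1", "signal"): (31, 0),
        ("M2", "power"):  (32, 1),
        ("M2", "ground"): (32, 2),
        ("M2", "signal"): (32, 0),
    }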

Parenthetically, it is pretty common for complex custom layouts to use many additional layer definitions in the design composition palette – e.g., MnVDD, MnGND, MnVDDIO, MnGNDIO – each with different visual settings for color and stipple pattern.  The layout designer utilizes these distinctions when drawing to try to avoid connection errors.  The same Mentor approach applies here as well, where the individual palette layers would be mapped to the (layer, purpose) pairs in the table above.

Once the exported physical design data is separated into (layer, purpose) subsets, the Mentor Calibre SVRF verification runset exercises the appropriate “shorting” and “bridging” connectivity checks, as depicted below.

Figure 4.  Example of connectivity checks for P/G shorts and bridges
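Conceptually, the shorting check reduces to asking whether any geometry in the power subset touches geometry in the ground subset.  The toy Python sketch below (using the shapely package, with made-up coordinates – the actual Calibre checks are written in SVRF, not shown here) captures the idea:

    # Flag any geometric overlap between shapes mapped to the power and
    # ground layer-purpose subsets; a non-empty intersection is a P/G short.
    from shapely.geometry import box
    from shapely.ops import unary_union

    vdd_shapes = [box(0, 0, 100, 5), box(40, 0, 45, 60)]     # power strap + via stack
    gnd_shapes = [box(0, 30, 100, 35), box(42, 20, 47, 40)]  # ground strap + bad via

    short = unary_union(vdd_shapes).intersection(unary_union(gnd_shapes))
    if not short.is_empty:
        print("P/G short region(s):", short)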

Additionally, there is an interactive Calibre debug feature which helps to quickly identify the cause of the connectivity error.  A shape that is identified as part of the error can be “isolated” in Calibre – still present in the physical data model, but assigned a different electrical net identifier.  An interactive LVS short check routine can be rerun to investigate how the connectivity error is affected.

The Mentor Calibre app note on LVS P/G shorts verification can be accessed at the following link.  The interactive debug method for virtual isolation of a selected shape prior to re-running LVS is shown in this online video demonstration (link).

Also Read:

Mentor Offers Next Generation DFT with Streaming Scan Network

Mentor User2User Virtual Event 2020!

ASIC and FPGA Design and Verification Trends 2020


Noose tightens on SMIC- Dead Fab Walking?
by Robert Maire on 12-01-2020 at 6:00 am

– US Administration to “blacklist” SMIC, cutting off ALL US help
– A slow death versus a quick death (unlike Jinhua)
– There is enough time on the way out the door to leave scorched earth
– Reports in the media about SMIC being “blacklisted”

It has been widely reported that SMIC will be added to the US “blacklist” of Chinese companies with which you can’t do any business – not even invest in their stock. The investment part has little impact, as investors will have more than enough time to get out of the stock, but the blacklist on equipment sales will be the equivalent of a death sentence for SMIC’s future Moore’s Law progress.

Trump to add China’s SMIC and CNOOC to defense blacklist “No license for You”

For a while it sounded like there was a loophole big enough to fit a lot of equipment through… a “license”. Well, a blacklist “Trumps” a license. We were very dubious of companies’ ability to get a license, even though companies acted like a license was no big deal. Licenses for SMIC are obviously moot now. The bigger question is when the blacklist will go into effect.

If it goes into effect immediately, which it should, then the result is clear. If the implementation of the blacklist is delayed for several months (like the stock sale) then the blacklist is toothless as SMIC would have a chance to stock up.

A slow death versus a quick execution

Unlike Jinhua, which was executed virtually overnight and before it was even born as a working fab, SMIC would be essentially starved to death by denial of new technology.

SMIC could stay somewhat “frozen in time” at roughly 14nm technology at best. Some might suggest that they could try multipatterning or other methods to get past that, but those require more deposition and etch capacity, which they won’t have adequate access to – except from AMEC (a local Chinese equipment maker), or perhaps for a short while from Japanese competitors such as TEL and Hitachi, or from ASMI, none of which can supply a complete solution.
It’s slow starvation.

US equipment companies will have to pack up and get out of Dodge

Like Jinhua, SMIC will also be blacklisted from the services, spares and upgrades that keep a fab running. Sooner or later tools will go down and need spare parts, which will no longer be forthcoming from the US.

In Jinhua’s case we saw US company reps get on a plane out of town the next day. Perhaps it may not be as bad at SMIC, which may try to hire local service people, now unemployed, but without parts and support it’s just a matter of time before things stop working altogether.

The reality is that, relatively quickly, the fab will become non-functional, yields will go down and progress will reverse itself.

Collateral impact at other Chinese fabs- two ways

We remain concerned that business with other Chinese fabs is at grave risk.
First, they could be next in line for the blacklist or embargo/license treatment, as the US works its way down from China’s leading fabs.

We think the US would likely strike at leading edge memory fabs in China, as they are also potentially “dual use” and pose a similar threat to US companies like Micron (which itself has been a target of China).

The second way US companies will be impacted is that Chinese chip makers will triple down on looking for any non-US source of equipment – European, Japanese, Korean and domestic Chinese – thinking that they are next on the blacklist.

Older, 8 inch Chinese fabs less impacted

It is a little-known fact that most equipment companies are currently selling more 8 inch equipment than they did when 8 inch was state of the art and the only game in town. 8 inch has become very popular.

Chinese 8 inch fabs, which represent the majority, will likely be able to get equipment and spares through the secondary market, although prices have skyrocketed and supplies have dried up.

If you are sitting on an old fab somewhere, it’s likely worth gold on the secondary market. Not every chip needs to be bleeding edge. In fact, given the high percentage of consumer goods – TVs, appliances, IoT etc. – 8 inch will service that market very nicely. Selling 8 inch tools is probably a pretty good business right now.

Boxing in Biden

If there’s one thing the current administration always has time for and makes a priority, it’s being spiteful and vengeful. In our recent notes we have talked at length about scorched earth on the way out the door. It’s clear that the main motivation for the SMIC blacklist is less about hurting China and more about “boxing in” the Biden administration and handing off a mess to deal with.

There is strong bipartisan support against China in Congress, so the incoming Biden administration will not make undoing the SMIC blacklist a priority; in fact, the blacklisting may become permanent by default, so as not to be seen as weak on China for the 2024 race.

In addition, the current administration doesn’t have to deal with any of the fallout from the blacklist, as that will be the next administration’s problem.

It’s unclear whether this is the final gasp. There is still a lot of time left on the clock to do more damage. We would not be surprised to see more last minute moves focused on keeping promises made, or leaving behind more landmines and damage.

The Stocks

We think much of the negative news is likely already priced into the semiconductor equipment stocks, and they have been like Teflon anyway.

We don’t see the SMIC “blacklist” as incremental or negative enough to seriously impact stocks that have had such strong upward momentum.

Much of SMIC’s business has likely already been taken out of most companies’ plans as well, so we will not see a significant impact on revenues or earnings.

The only thing we see that could have an impact is a larger/wider move against China on trade, which could result in a broader reprisal or a broader impact on Chinese customers of US equipment companies.

While we expect further moves by the outgoing administration we don’t expect them to rise to the level of igniting a full blown trade war as that would reflect negatively on them rather than the incoming administration.

Also Read:

China Semiconductor Bond Bust!

Is Apple the Most Valuable Semiconductor Company in the World?

2021 will be the year of DRAM!


PLDA Brings Flexible Support for Compute Express Link (CXL) to SoC and FPGA Designers
by Mike Gianfagna on 11-30-2020 at 10:00 am


A few months ago, I posted a piece about PLDA expanding its support for two emerging protocol standards: CXL™ and Gen-Z™.  The Compute Express Link (CXL) specification defines a set of three protocols that run on top of the PCIe PHY layer. The current revision of the CXL specification (2.0) runs on the PCIe 5.0 PHY layer at a maximum link rate of 32 GT/s per lane. There are a lot of parts to this specification and multiple implementation options, so a comprehensive support package will significantly help adoption. This is why PLDA brings flexible support for Compute Express Link (CXL) to SoC and FPGA designers.

The Options

The three previously mentioned protocols that make up CXL are:

  • CXL.io: which is very similar to traditional PCIe and is responsible for discovery, configuration, and all the other things that PCIe is responsible for
  • CXL.cache: which gives CXL devices coherent, low latency access to shared host memory
  • CXL.mem: which gives the host processor access to shared device memory

CXL defines three types of devices that leverage different combinations of these protocols depending on the use case.

Three types of CXL devices

As shown in Figure 1 above, a Type 1 device combines CXL.io + CXL.cache channels. Typical Type 1 devices may include PGAS NICs (with shared global address space) or NICs with atomics. Figure 2 illustrates a Type 2 device combining all three channels. Type 2 devices may include accelerators with memory such as GPUs and other dense computation devices. Figure 3 shows a Type 3 device with CXL.io and CXL.mem channels. A typical Type 3 device may be used for memory bandwidth expansion or memory capacity expansion with storage-class memory.
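For quick reference, the channel combinations described above can be summarized as follows (the dictionary naming is mine, not from the CXL specification):

    # CXL device types and the protocol channels each carries
    CXL_DEVICE_TYPES = {
        "Type 1": {"CXL.io", "CXL.cache"},             # e.g., PGAS NICs, NICs with atomics
        "Type 2": {"CXL.io", "CXL.cache", "CXL.mem"},  # e.g., accelerators with memory (GPUs)
        "Type 3": {"CXL.io", "CXL.mem"},               # e.g., memory expansion devices
    }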

The goal of CXL is to maintain memory coherency between the CPU memory space and memory on attached devices, which improves performance and lowers complexity and cost. CXL.cache and CXL.mem support this strategy. To implement CXL into a complex SoC, an interface will be required to transfer packets between the user application and the protocol controller. Various interconnect technologies are available:

AMBA AXI: a parallel, high-performance, synchronous, high-frequency, multi-master, multi-slave communication interface mainly designed for on-chip communication. It has been widely used across the industry and for many projects. The AMBA® AXI™ protocol is typically chosen to reduce time to market and ease integration.

CXL-cache/mem Protocol Interface (CPI): CPI allows mapping of different protocols on the same physical wires. The spec is a public-access protocol defined by Intel that fits naturally with the CXL spec. It is designed for CXL.cache and CXL.mem and allows mapping of both on the same wires. It is a lightweight, low-latency protocol.

AMBA CXS: a streaming protocol that enables the transmission of packets with high bandwidth between the user application and the protocol controller. Via the CXS interconnect, the designer can bypass the controller’s transaction layer, which can reduce latency. The CXS specification was designed by Arm to be implemented seamlessly with Arm-based System-on-Chip solutions.

Each of these interfaces has its own benefits and use cases.

Below are some implementation examples:

CXL implementation examples
  • Option 1 (Figure 4): The designer chooses CPI for cache & mem channels. This is the most generic option providing the lowest latency and highest flexibility. It allows designers to implement custom memory and cache management that may be independent from the CPU architecture
  • Option 2 (Figure 5): The designer chooses CPI for the cache channel and AMBA AXI for the mem channel. This option allows for custom cache management while configuration and memory management are managed by the CPU subsystem via the NoC. It can be an interesting option for prototyping CXL.mem on an SoC or FPGA with built-in AMBA AXI interconnect
  • Option 3 (Figure 6): The designer chooses CXS. This option is specific to Arm-based SoCs and allows seamless connection to the Arm CoreLink Coherent Mesh Network interconnect and Arm CPU subsystems. It allows support for coherent communication via CXL (to the CPU), and CCIX (to the accelerators)
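Summarizing the three options as data (my own shorthand – the option and channel labels are illustrative, and CXL.io is assumed to stay on AMBA AXI in each case, per the support list below):

    # Interface choice per CXL channel for each implementation option
    IMPLEMENTATION_OPTIONS = {
        "Option 1": {"CXL.cache": "CPI", "CXL.mem": "CPI",      "CXL.io": "AMBA AXI"},
        "Option 2": {"CXL.cache": "CPI", "CXL.mem": "AMBA AXI", "CXL.io": "AMBA AXI"},
        "Option 3": {"CXL.cache": "CXS", "CXL.mem": "CXS",      "CXL.io": "AMBA AXI"},
    }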

PLDA has designed a highly flexible IP to meet all the needs of CXL implementation scenarios in a complex SoC or FPGA.  Flexibility is a fundamental part of the DNA of PLDA, and the company has deep domain expertise in PCIe. So, XpressLINK-SOC naturally fits in the roadmap to support designers who need to implement CXL in a complex design. This parameterized soft IP product supports all the device types and many interconnect options.

XpressLINK-SOC supports:

  • The AMBA AXI Protocol Specification for CXL.io traffic
  • Either the Intel CXL-cache/mem Protocol Interface (CPI), the AMBA CXS Interface or the AMBA AXI Protocol Specification for CXL.mem
  • Either a CPI interface or the AMBA CXS Protocol Specification for CXL.cache traffic

You can probe further about XpressLINK and XpressLINK-SOC on the PLDA website to see how PLDA brings flexible support for Compute Express Link (CXL) to SoC and FPGA designers.


ML plus formal for analog. Innovation in Verification
by Bernard Murphy on 11-30-2020 at 6:00 am


Can machine learning be combined with formal methods to find rare failures in analog designs? ML plus formal for analog – neat! Paul Cunningham (GM, Verification at Cadence), Jim Hogan and I continue our series on research ideas. Here, an idea on sampling for analog simulation. Feel free to comment.

The Innovation

This month’s pick is HFMV: Hybridizing Formal Methods and Machine Learning for Verification of Analog and Mixed-Signal Circuits. The paper appeared at DAC 2018. The authors are from Texas A&M University.

The authors propose a novel approach to analog verification: a machine learning model is built over a relatively small number of simulation samples, then SMT-based formal analysis is applied to refine the model.

The paper analyzes three analog circuits with O(20) transistors. Each transistor has multiple process or electrical parameters with statistical variances, making for an O(60) dimensional analysis.

The approach iteratively generates a set of parameter sample points with the goal of spanning a target n-sigma confidence of circuit correctness. At each iteration, they refine an ML-based failure model based on simulating the sample points from the previous iteration. They then linearize this ML model and pass it to an SMT solver, which picks the next batch of sample points to simulate. You can think of the SMT solver as conceptually “inverting” the ML model, picking parameter sample points the model believes have a high probability of causing the circuit to fail.
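To make the loop concrete, here is a heavily simplified sketch – my own, not the authors’ HFMV code – using scikit-learn for the (linear) failure model and Z3 as the SMT solver. The toy simulate() stands in for a real SPICE testbench, and all names and constants are invented:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from z3 import Real, Solver, sat

    DIM, SIGMA_BOUND = 6, 8.0

    def simulate(x):
        # Toy "circuit": fails when a weighted sum of variations is extreme
        return int(np.dot(x, np.linspace(0.1, 1.0, DIM)) > 2.5)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, DIM))            # initial Monte Carlo samples
    y = np.array([simulate(x) for x in X])

    for _ in range(3):
        clf = LogisticRegression().fit(X, y)   # refine the linear failure model
        w, b = clf.coef_[0], clf.intercept_[0]
        xs = [Real(f"x{i}") for i in range(DIM)]
        s = Solver()
        for xi in xs:                          # per-parameter n-sigma bound
            s.add(xi >= -SIGMA_BOUND, xi <= SIGMA_BOUND)
        # "Invert" the model: ask for a point it predicts will fail
        s.add(sum(float(wi) * xi for wi, xi in zip(w, xs)) + float(b) > 0)
        if s.check() != sat:
            break
        m = s.model()
        cand = np.array([float(m[xi].as_fraction()) for xi in xs])
        X = np.vstack([X, cand])               # simulate the suggested point
        y = np.append(y, simulate(cand))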

The authors present results showing their approach achieves 4-sigma confidence with 100X fewer simulations than Monte Carlo, and 10X fewer simulations than a modern importance-based sampling technique (SSS).

Paul’s view

What a fun paper! Combine ML with SMT to find n-sigma process variation bugs in analog circuits. The results data also looks compelling, finding 8-sigma failures with 10X fewer simulations than SSS, a respected importance sampling method for high dimensionality problems.

That said, I don’t think that SSS and HFMV are solving quite the same problem: SSS is focused on picking an efficient and representative set of sample points to measure the failure rate with some target confidence level. HFMV is using ML and SMT to intelligently search for sample points which will cause the circuit to fail.

The magic for me in HFMV is the use of SMT to invert the ML failure model – i.e., to find a process variation point for all the transistors in the circuit which, when applied to the ML failure model, results in a circuit failure. I worry a bit that the SMT formulation given in the paper is being satisfied by picking solutions where all the transistors in the circuit are at the extremes of their process variation points: the paper refers to bounding SMT based on an n-dimensional hyper-cube which I believe means to bound each transistor’s process variation independently to 8-sigma, rather than bounding the composite multi-dimensional distribution of all transistors’ process variations to 8-sigma, which would be an n-dimensional hyper-sphere.

A failure where multiple transistors are at 8-sigma is way beyond 8-sigma for the circuit as a whole and so may not be an interesting failure to detect. I wonder if it would be possible to add an additional constraint to the SMT expression to make it pick solutions inside of the appropriate hyper-sphere denoted by 8-sigma for the whole circuit? Failures found inside this hyper-sphere would be true 8-sigma failures for the circuit.
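In symbols (my paraphrase of the distinction, not notation from the paper): the hyper-cube bound constrains each normalized parameter independently,

    |x_i| \le 8 \quad \forall i \qquad \text{(hyper-cube)}

whereas bounding the circuit as a whole to 8-sigma would constrain the joint deviation,

    \sqrt{\sum_i x_i^2} \le 8 \qquad \text{(hyper-sphere)}

and the latter could in principle be added to the SMT formulation as one additional quadratic constraint.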

Overall, a very innovative and thought-provoking paper.

Jim’s view

This reminds me of Solido. Using Monte Carlo with ML to find the corners without having to do huge numbers of simulations. Back then our challenge was to make sure we were covering those corners with confidence. I talked to some of my expert analog buddies to get their take. Their view was that this is nice, but are they able to do it in less time than other methods? They thought it would take a deeper dive to figure out the added benefit in this method. Which is intriguing but not yet investable in my view.

My view

I would like to see more analysis and characterization of the failures they found: first, validating each failure through some independent method of analysis, to make sure none are false positives; second, discussing how this distribution maps against, say, a multivariate normal distribution, back to Paul’s point.

You can read the previous Innovation in Verification blog HERE.

Also Read

Cadence is Making Floorplanning Easier by Changing the Rules

Verification IP for Systems? It’s Not What You Think.

How ML Enables Cadence Digital Tools to Deliver Better PPA


Applied Materials Will Regain Semiconductor Equipment Lead From ASML in 2020
by Robert Castellano on 11-29-2020 at 10:00 am


On December 2, 2019, I posted a SemiWiki article entitled “ASML Will Take Semiconductor Equipment Lead from Applied Materials in 2019.” After losing its dominance in 2019 for the first time since 1990, Applied Materials is poised to retake the lead in the semiconductor equipment market in 2020. ASML led the global wafer front end market in 2019 on the strength of its shipments of pricey EUV lithography equipment, according to The Information Network’s report “The Global Semiconductor Equipment: Markets, Market Shares, Market Forecasts.” But a lackluster Q1 2020, when revenue decreased 31% quarter on quarter, was largely responsible for the company losing share back to Applied Materials in 2020.

The chart below shows individual market shares for the top five equipment companies. Applied Materials increased its market share to 16.4% in 2020 from 15.9% in 2019. ASML, which held a 16.9% share in 2019, will drop to a 15.4% share in 2020.

Applied Materials competes directly with several companies:

  • ASML in metrology/inspection
  • Lam Research in deposition and etch
  • Tokyo Electron in deposition and etch
  • KLA in metrology/inspection

Lam’s market share will increase to 10.8% in 2020 from 10.6% in 2019, due to the company’s high exposure to memory, and in particular NAND, which is being impacted by low ASPs and high inventory overhang. Tokyo Electron’s market share will increase to 12.3% in 2020 from 11.7% in 2019 because of the company’s dominance in photoresist processing and dielectric etch systems.

KLA’s market share will increase to 6.2% in 2020 from 5.4% in 2019. Metrology/inspection equipment is critical to assuring high yields during semiconductor manufacturing, particularly as new technology nodes are reached. Metrology systems are used to measure parameters such as thin film thickness or linewidths, and inspection systems are used to detect defects and monitor abnormalities in production.

Based on an overall WFE market of $70 billion in 2020, the top five companies will have gained market share in 2020, while the remaining smaller competitors as a whole will see their share of the overall market decrease from 39.6% in 2019 to 38.8% in 2020.


CD-Pitch Combinations Disfavored by EUV Stochastics
by Fred Chen on 11-29-2020 at 6:00 am


Ongoing investigations of EUV stochastics [1-3] have allowed us to map combinations of critical dimension (CD) and pitch which are expected to pose a severe risk of stochastic defects impacting the use of EUV lithography. Figure 1 shows a typical set of contours of fixed PNOK (i.e., the probability of a feature being Not OK due to a stochastic defect [1,2]) in CD-pitch space. In this case, the stochastic failure is a microbridge, i.e., a failure of a trench (space) to open up at a given location.

Figure 1. Stochastic failure (microbridge) probability contours in CD-pitch space. The dotted line represents the half-pitch limit, which has substantially lower failure probability. Based on Figure 11 in [1].

The results of Figure 1 are used to draw PNOK trends for a fixed 18 nm CD in Figure 2. The half-pitch trend is also added from the same reference.

Figure 2. Stochastic failure (microbridge) probability increases for smaller half-pitch, but at fixed CD, it increases with pitch. Half-pitch trend is based on Figure 19 in [1].

A half-pitch in excess of 18 nm is needed here to guarantee a PNOK less than 1e-9. The number of photons is always proportional to dose, but for features attached to dense half-pitch lines, such as line ends and contact/via hole edges, it is also proportional to pitch^2. On the other hand, for fixed CD, the photon number is effectively reduced at larger pitches, because the number of separate coherent images (which are qualitatively different) increases with pitch (Figure 3). As a reminder, a lower photon number results in a larger percentage standard deviation, resulting in naturally occurring random local underexposure and overexposure at the feature edge [4]. It should be noted that Figures 1 and 2 refer only to a particular resist exposure condition, with a particular dose for targeting the particular CD. A higher dose would reduce the photon shot noise but increase the number of secondary electrons, as well as heating.
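As a back-of-envelope reminder of the scaling involved (standard Poisson counting statistics, not a result taken from [1]): if a feature edge collects N photons, the relative exposure fluctuation is

    \sigma_N / N = 1 / \sqrt{N}

so a 3-way split of the dose among coherent images leaves each image with N/3 photons and multiplies its relative photon noise by roughly \sqrt{3}.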

Figure 3. As pitch increases, more light is diverted from two-beam to three-beam images.

The example in Figure 3 for 0.33 NA and 0.2/1.0 annular illumination, a condition for bidirectional layouts over an extended pitch range, shows that increasing pitch toward 60 nm leads to a nearly equal 3-way splitting of the dose among three different coherently illuminated images. This is expected to aggravate the stochastic effect.

When the CD is equal to the half-pitch, the one-sided 3-beam image converges to the 2-beam image, so the 3-way dose split becomes 2-way, offering relief against the stochastic degradation. This explains why PNOK improves as the CD approaches the half-pitch. However, as the pitch decreases, a smaller portion of the pupil accounts for the 2-beam image, while a larger portion produces no image at all (Figure 4). This aggravates the PNOK degradation at smaller half-pitches.

Figure 4. As the pitch decreases, fewer pupil source points support two-beam imaging; instead most points provide no image at all, only background light. For a given mode of illumination, this aggravates the occurrence of stochastic failures, as the fraction of photons producing images has decreased.

The EUV mask also aggravates the stochastic issue, through the effect of shadowing. As shown in Figure 5, the 30 nm pitch is highly susceptible to the significant shadowing difference between illumination points on opposite sides of the pupil [5], whereas for 60 nm pitch, 2-beam imaging is less susceptible due to closer proximity of the points. The difference of shadowing between the two sides divides photons into two groups for the 30 nm pitch, depending on whether they exhibit the larger or smaller shadowing.

Figure 5. 30 nm pitch imaging requires source points which are separated quite far apart, resulting in a large difference of shadowing on the mask. On the other hand, for 60 nm pitch, two-beam imaging uses points which are closer, and thus less affected by significant shadowing differences. Thus, for 30 nm pitch, photons are effectively divided into two groups, depending on which side they are from.

Implications for EUV patterning

The implication is that layouts with a range of line pitches cannot be supported by EUV except with the possible assistance of multipatterning. An easily visualized case is targeting the same CD for 1X, 2X, and 3X pitches. For example, when the 1X pitch is 30 nm, the 2X and 3X pitches of 60 nm and 90 nm can result in dividing the photons primarily among 3 and 8 different coherently produced images, respectively (Figure 6). The stochastics for the 1X 30 nm pitch case are already expected to be bad enough [3], so the 2X and 3X cases would be too severe to allow them to be imaged in a single exposure along with the 1X pitch. Note that changing the illumination from annular to dipole only strengthens the case against EUV single exposure, as forbidden pitches arise [6,7].

Figure 6. 30 nm, 60 nm, and 90 nm pitches split EUV photons into different numbers of coherent images.

Instead, a better alternative would be to have the 2X pitch achieved by EUV blocking of alternate lines (e.g., 30 nm width on 60 nm pitch), while the 3X pitch would be achieved by DUV blocking (30 nm on 90 nm pitch). This entails a total of three exposures: one EUV exposure for the 30 nm pitch lines (or even a DUV exposure with SAQP, due to stochastic risk), one EUV block exposure for 2X, and one DUV block exposure for 3X.

Figure 7. 30 nm, 60 nm, and 90 nm pitch lines cannot be in the same EUV exposure due to stochastic risk, as revealed in Figure 6. The 30 nm pitch lines may first be printed by EUV (or SAQP), followed by a second EUV exposure to block out every other line for 60 nm pitch, and then a third, DUV exposure to block lines so that every third line remains, for 90 nm pitch.

Thus, we see that small CD-large pitch combinations are worst for stochastic failures in EUV lithography. In fact, this implies that cutting could be subject to stochastic failure risk. Larger half-pitches are the preferred case. Hence, EUV may even follow DUV’s path to self-aligned double patterning (SADP) or even self-aligned quadruple patterning (SAQP).

References

[1] P. de Bisschop and E. Hendrickx, “On the dependencies of the stochastic patterning-failure cliffs in EUVL lithography,” Proc. SPIE 11323, 113230J (2020).

[2] P. de Bisschop and E. Hendrickx, “Stochastic printing failures in EUV lithography,” Proc. SPIE 10957, 109570E (2019).

[3] J. Church et al., “Fundamental characterization of stochastic variation for improved single-expose extreme ultraviolet patterning at aggressive pitch,” Proc. SPIE 11323, 113230O (2020).

[4] https://www.linkedin.com/pulse/stochastic-behavior-optical-images-impact-resolution-frederick-chen

[5] S. Das et al., “E-beam inspection of single exposure EUV direct print of M2 layer of N10 node test vehicle,” Proc. SPIE 10959, 109590H (2019).

[6] https://semiwiki.com/lithography/283578-a-forbidden-pitch-combination-at-advanced-lithography-nodes/

[7] https://www.rit.edu/~w-lith/research/imagetheory/SPIE_5040-36_full.pdf

Related Lithography Posts


Webinar: 5 Reasons Why Others are Adopting Hybrid Cloud and EDA Should Too!
by Daniel Nenni on 11-27-2020 at 6:00 am


With transistor complexity at an all-time high and foundry rule decks growing, fabless companies consistently find themselves playing catch-up. Semiconductor designs require additional compute resources to maintain the speed and quality of development. But deploying new infrastructure at this speed is a tall order for IT professionals tasked with supporting development and verification teams. When these resources can’t keep up, engineers become compute constrained rather than compute empowered.

The semiconductor industry is not alone in the struggle to adopt new technologies that can accelerate the pace of science and engineering breakthroughs. For that reason, cloud solutions are increasingly being implemented to empower R&D in a way never before seen. Breakthroughs in aerospace design, new drugs and vaccines, alternative energy solutions and much more are now being realized on cloud or hybrid cloud infrastructures. Because of security and IP concerns, EDA companies have primarily maintained on-premise data centers for their compute needs. However, that preference is changing as manufacturers such as TSMC endorse the cloud. The industry has also seen a rise in startups that do not have infrastructure of their own and are turning to the cloud to compete.

So let’s look at the main benefits of expanding EDA to a hybrid cloud environment. Join Rescale’s webinar to further explore how hybrid cloud will drive new levels of performance and efficiency in semiconductor. Register here.

Security

As companies look to move workloads to the cloud, the primary area of focus is how to protect sensitive information and IP. Recent research by Cloud Vision states that two-thirds of companies consider this the main roadblock to adopting cloud. In light of this, major cloud providers have put substantial focus and investment into reducing risks and safeguarding datacenters from any breach. As you can imagine, with companies like AWS, Microsoft and Google, no expense is spared to ensure they deliver a secure environment. As proof of these security measures, public cloud will experience 60% fewer security incidents than typical data centers this year. For organizations that require full stack compliance and security, platforms such as Rescale cover end-to-end workflows across the hardware and software layers with the highest of industry standards, even going as far as obtaining industry leading certifications to meet the strictest compliance requirements.

Agility

Never in our history has technological agility been more important than in 2020. Facing a pandemic was the ultimate test of our systems, and most companies found themselves unprepared. Being cut off from typical on-premise infrastructure caused delays across the industry. VPNs became overwhelmed as engineers struggled to access the data and resources needed to continue development and run verification. The need to enable remote teams is not the only consideration: systems need the flexibility to scale with project phases and production deadlines. For these reasons, hybrid cloud far outperforms traditional infrastructure. It’s accessible anywhere you can find a Wi-Fi connection, and compute resources scale as needed. The Rescale platform also offers remote desktop solutions and a wide variety of admin controls over budgets and permissions to keep operations running smoothly. With the stability and options of a multi-cloud infrastructure and a variety of core types available on the platform, users can match the ideal core type to their workload and be confident in the stability of the infrastructure, with a service level agreement that your job will run.

Impact and Productivity

Enabling engineers to focus on design means better products at a quicker pace. IT leaders need to look at the ways in which engineers are distracted or slowed from their core responsibilities. Companies spend top dollar to secure engineering expertise and talent, and that talent should be working on the portion of the business where it will make the biggest impact. Distractions can come in the form of queues, slow workflows, license issues and more. Rescale looks to solve these issues with an intelligent control plane and a full stack approach. Having an intelligent control plane for both local and cloud hardware gives R&D the ability to divert workloads to the best infrastructure based on performance and cost. A simple user interface with robust automation allows engineers to easily set up runs without relying on IT. And if they do come across a challenge, the Rescale support team is stacked with HPC and simulation experts who average a 15 minute response time. All of this combines to allow engineers to be hyper-focused on what they do best.

Speed to Market

A major component of gaining competitive advantage is to be first to market with a new product. This allows you to gain brand recognition, build customer loyalty and secure market share before competitors are even in play. A hybrid cloud approach enables semiconductor companies to dial up the number of iterations and accelerate speed to answer. Additionally, verification is expedited with the virtually unlimited resources available. When coupled with automated workflows, templates and continuous optimization from the Rescale platform, companies can make substantial improvements.

pSemi used Rescale to substantially speed up their development process, “We were able to use Rescale’s cloud platform to highly parallelize our simulations and bring the simulation time down from 7 days to 15 hours. We’ve demonstrated a 10x speed improvement on numerous occasions in our EM simulations using Rescale…”

The next wave of semiconductor advancements will be powered by hybrid cloud. The foundries have already started to adopt the technology. It is poised to revolutionize the industry by empowering engineers like never before and reaching new levels of performance and efficiency. Join Semiwiki and Rescale as we take a deeper look into the benefits of hybrid cloud and Rescale’s intelligent control plane approach. Register now!

About Rescale
Rescale is the leader in enterprise big compute in the cloud. Rescale empowers the world’s transformative executives, IT leaders, engineers, and scientists to securely manage product innovation to be first to market. Rescale’s hybrid and multi-cloud platform, built on the most powerful high-performance computing infrastructure, seamlessly matches software applications with the best cloud or on-premise architecture to run complex data processing and simulations.


Low Power SRAM Register Files for IoT, AI and Wearables
by Tom Simon on 11-26-2020 at 10:00 am


SRAM is the workhorse for on-chip memories, valued for its performance and easy integration with standard processes. The needs of wearable, IoT and AI SOCs have put a lot of pressure on the requirements for all on-chip memories. This is perhaps most evident in the area of power. AI chips that rely heavily on SRAM register files are being developed for use in cameras and voice recognition systems that must be always-on. Wearables and IoT devices need SRAM register files for secure communications, analog tuning and rapid real time processing. A white paper by Tony Stansfield, CTO at sureCore Limited, discusses how they have developed a process independent power optimized architecture for delivering register file IP.

To meet the traditional needs of SRAM, the emphasis had been on area and performance optimization. The newer applications now driving the market call for a shift toward placing a much higher priority on power. The keys to lowering power are found in the bit cells and in the peripheral circuits performing read and write operations. sureCore lays out in their white paper the methods they use in their SRAM architecture to achieve significantly lower dynamic and static power.

An important aspect they mention is that register files are much more deeply integrated into logic circuits than larger SRAM blocks. As a result, they ideally need to use the same supply lines as the surrounding logic circuits. Because SRAM bit cells are more voltage sensitive than CMOS logic, an effective power reduction strategy must address this. The approach that sureCore took involves redesigning the bit cells so they can run at lower voltages. They then go further and change their bit cell so that it has separate read and write bit lines. This reduces the need for voltage assist and improves performance. Monte Carlo and High-Sigma simulation are used by sureCore to verify that the bit cells will operate at the necessary PVTs.
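The payoff of a lower operating voltage follows from the textbook dynamic power relation (a generic identity, not a sureCore-specific formula):

    P_{dyn} = \alpha \, C \, V_{DD}^2 \, f

where \alpha is the switching activity, C the switched capacitance, and f the clock frequency; the quadratic V_{DD} term means a 30% supply reduction alone cuts dynamic power roughly in half.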

Using their experience in building low power SRAM blocks, they apply many of the same techniques to their register file design. To reduce capacitance on long wires, such as bit lines, they use hierarchical partitioning. To minimize activity on long wires they apply wide pre-decoding of addresses. Their patented ‘cascode pre-charge sense’ circuits help reduce voltage line swing, which helps reduce active and leakage power. The paper also goes into details on how they take advantage of the separate read and write lines to further optimize circuit performance.

The sureCore paper cites third party benchmarking that shows just how much power can be saved with their IP. They suggest that dynamic power savings of around 75% can be realized. Also, because their IP can operate over a wide range of voltages, low power modes can be more easily used to minimize power.


In many rapidly expanding markets, power saving has moved up to the number one priority. Even when faced with area versus power trade-offs, power is now often the overriding design criterion. The sureCore SRAM IP, especially for use in SRAM register files, provides significant power savings through a well thought out architecture. The white paper makes interesting reading and offers more detail about the offering. It is available for download on their website.

About sureCore

sureCore is the low power leader that empowers the IC design community to meet aggressive power budgets through a portfolio of innovative, ultra-low power memory design services and standard products. sureCore’s low-power engineering methodologies and design flows help you meet your most exacting memory requirements, with customized low power SRAM IP and low power mixed signal design services that create clear marketing differentiation. The company’s low-power product line encompasses a range of silicon proven, process-independent SRAM IP that operates down to near-threshold voltages.

Also Read:

CEO Interview: Paul Wells of sureCore

WEBINAR: Addressing Verification Challenges in the Development of Optimized SRAM Solutions with surecore and Mentor Solido

Low Power Design – Art vs. Science


Folding at Home. The Ultimate in Parallel Acceleration
by Bernard Murphy on 11-26-2020 at 6:00 am


You may have heard of Folding at Home. It’s a very creative way that a bioengineering team, based at Washington University in St Louis, is modeling the process of protein folding. Greg Bowman, an associate professor of biochemistry and biophysics at the university, directs the project and presented at Arm DevSummit this year. Proteins mediate pretty much everything to do with life, including pathologies. They’re long chains of amino acids, which fold up into energetically favorable shapes within milliseconds. The shape is critical. If a good protein folds incorrectly you get a disease, maybe Alzheimer’s. Conversely, the infamous protein spike on the COVID virus hides the site that binds to a cell, to protect it from antiviral therapies. Then it opens up as it approaches a tasty cell target. Proteins, in his words, are molecular machines, dynamically changing structure as needed.

The COVID spike protein structure

Computational challenge

Modeling this behavior to search for effective therapies has been extremely difficult, even on supercomputers. A single protein may contain a thousand or more atoms, and the COVID spike contains three proteins. Trying to model the evolution of folding to a favorable energy state had been limited to nanoseconds to microseconds – not nearly long enough to simulate a complete folding. Also, the process is stochastic, which requires many simulations to build a range of samples and then model the dynamic behavior of these structures. Useful simulations could take hundreds or thousands of years to complete.

The Folding@Home innovation took advantage of a known characteristic of folding. It’s not a continuously changing dynamic system. Instead, the protein evolves in small steps between local energy minima before moving on to a next configuration. It spends the majority of its time in these minima, where nothing interesting happens. So simulation can be decomposed into a Markov (statistical) sequence, where the computation models these short transitions.
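As a cartoon of that decomposition – my own toy sketch in Python, not Folding@Home’s software – many short, independently simulated hops can be pooled into a transition matrix whose powers recover long-timescale behavior that no single short run could reach:

    import numpy as np

    N_STATES = 4                                   # toy energy minima; state 3 = "folded"
    true_T = np.array([[0.90, 0.10, 0.00, 0.00],
                       [0.05, 0.90, 0.05, 0.00],
                       [0.00, 0.05, 0.90, 0.05],
                       [0.00, 0.00, 0.02, 0.98]])

    rng = np.random.default_rng(1)
    counts = np.zeros((N_STATES, N_STATES))
    for _ in range(20000):                         # 20k short "work units", fully parallel
        s = rng.integers(N_STATES)                 # random starting minimum
        s_next = rng.choice(N_STATES, p=true_T[s]) # one short transition
        counts[s, s_next] += 1

    T_est = counts / counts.sum(axis=1, keepdims=True)
    # Long-timescale estimate: probability of being folded after many steps
    print(np.linalg.matrix_power(T_est, 1000)[0, 3])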

Massive parallelism through citizen science

This observation makes massive parallelism a realistic method of attack: just have each engine model a transition, or a bunch of transitions in sequence, orchestrating the whole process from a central site. But again, there’s no budget for a research group to push this into the cloud. And anyway, it’s not clear that any one cloud, no matter how big, could provide enough parallelism for this problem.

Instead this team created Folding@Home to take advantage of all the millions of home computers around the world. Enthusiasts can sign up to become a part of the folding network. Their software will run in the background on your system when you’re not busy doing something else. You run a little bit of the Markov sequence, then maybe another little bit and so on.

What makes Folding@Home so impressive is how many adopters have signed up. Greg showed a map with lots of support across the world. He talked about tens of thousands of Folding@Home volunteers before COVID hit. That rapidly grew to (now) over a million. He estimates that they are now computing at 5X the performance of the world’s fastest supercomputer (he cites IBM Summit), making it the first compute system to break the exaflops barrier.

Modeling COVID spike behavior

With this computational power they are now able to see the spike protein tip opening to reveal the binding site that the virus would use to attach to a cell. Remember, this is a level of modeling that isn’t even possible on a supercomputer. Of course a lot more work has to happen to get to a therapy from that simulation, but now they can provide experimenters with a much more detailed view of what’s happening. From which they can plan much more targeted attacks.

Why did Greg present this at the Arm conference? Because the software is also available on Android and will no doubt become available on other Arm-based platforms. And what better use could any of us imagine for using all that compute technology around the world?

You can learn more about Folding@Home HERE.

China Semiconductor Bond Bust!
by Robert Maire on 11-25-2020 at 10:00 am


– Tsinghua $198M Bond Bust
– Good for memory makers: Samsung, Micron, LG, Toshiba
– Not good for chip equipment
– Could a China credit crunch hit more than the foundry embargo?
– Damage to China memory is positive for other memory makers
– Not good for chip equipment if customers can’t get money

China’s most prestigious leader of the effort to become dominant in semiconductors suffered the embarrassment of defaulting on $198M in bonds that were due Nov 17th. While seemingly a drop in the bucket of its overall debt, and despite the fact that the company was in the midst of negotiating its way out, the default still sends shivers through China’s debt market and sent the bonds plummeting.

Tsinghua is not the only state backed Chinese firm with bond troubles which makes the concerns all the more worrisome.

Chinese tech group joins list of companies to default on bond issue

NAND in Wuhan and DRAM in Chongqing
Tsinghua already has a NAND factory in otherwise famous Wuhan and is planning a DRAM fab in Chongqing. It is a spinoff subsidiary of the prestigious Tsinghua University, and is perhaps the shining star of China’s semiconductor aspirations; though SMIC has been around a long time, it seemed Tsinghua had more potential.

Tsinghua Unigroup default tests China’s chipmaking ambitions

Good for non-Chinese memory makers like Samsung & Micron, LG & Toshiba

Being in the memory market with the specter of China entering it, after watching China annihilate the LED and solar cell markets, was likely quite chilling. China obviously doesn’t care about profitability (at least not in the beginning) and could easily trash pricing and destroy the commodity memory market, just like the commodity LED and solar markets before it.

If I were in Boise I might have a little schadenfreude about the Chinese bond market right now, not unlike TSMC and SMIC.

Anything that slows down China’s aspirations in the memory market is likely positive for other competitors.

Equipment vendors likely between a rock and a hard place in China
Checking accounts receivable.

Semiconductor equipment makers may not be as happy about the bond default and subsequent credit downgrades.

We would bet a lot of money that the equipment makers are likely owed a whole lot more than $198M for equipment purchases, and are looking at many times that in future orders and business. So their exposure far exceeds the bondholders’.

Unlike the bondholders, equipment makers don’t want to stop shipping to their biggest, best and fastest growing market, which is China.

If equipment makers stop shipping due to credit risk, downgrades or fear of not getting paid, then Tsinghua will avoid doing business with them at all costs (it’s not like they aren’t trying to avoid American equipment already, given what happened to their cousins at SMIC).

Equipment vendors have to keep shipping with the hope that the Chinese government will be the backstop, or that the company figures it out.

We can only imagine that some CFOs have to be checking their Tsinghua-related accounts receivable exposure.

Credit is all about faith
Too big/important to fail?

Lest anyone forget, the credit market is all about faith. Faith in getting paid back on the loan. The 2008/2009 market collapse was a collapse in the credit market. Faith in ever getting repaid went to zero.

The semiconductor industry is highly capital intensive and very fickle in its cyclical profitability. In addition, the would-be Chinese chip makers are likely finding out that the semiconductor market is much, much harder than the LED and solar markets, which were relative pushovers.

The cost of an LED “fab” and the complexity of its process are not even a rounding error compared to making a 128-layer NAND chip.

It is likely that getting to yield – meaning getting to revenue, let alone profitability – will take a lot longer and be a lot harder than many in China anticipated after the cakewalk in LED and solar.

This means that many Chinese firms could have miscalculated when they would be able to pay back debt, and could find themselves in a cash crunch, needing to extend credit terms out by months or years.

We don’t know what caused Tsinghua’s issue, but breaking the faith was not good, as their bonds fell all the way down to 68 cents on the dollar at one point (we don’t think equipment vendors would like to take 68 cents on the dollar owed to them).

In the end, Tsinghua, like some US financial firms in 2008/9, is too big and too important to fail, and the Chinese government will step in at some point. The question is when, and how, and who will get hurt in the collateral damage.

Could the US administer a “Coup de Grace”?
Part of the outgoing, “Scorched earth” policy

It is abundantly clear that the outgoing administration has embarked on a scorched earth policy for various reasons. Much of the scorched earth has been directed at international relations such as potentially attacking Iran, recalling troops and trying to make good on other campaign promises. Trade with China has been talked about as one such target.

The SMIC embargo, announced shortly before the election certainly was effective at hurting China’s chip ambition. Could the embargo be extended to memory, which is certainly capable of potential military “dual use technology” as a parting shot on the way out the door? Or maybe a blanket embargo? If there were a time to hurt China, the lame duck session is it.

The stocks

Almost all semi stocks have been super hot as demand continues to be strong. The Tsinghua news is mildly positive for other memory makers, as it will likely weaken or slow China’s memory ambitions and its ability to crush memory pricing.

It is likely not all that negative for equipment companies as they have even survived the SMIC embargo without so much as a scratch.

If anything, it may be a hidden positive, as it will likely moderate memory spending, which drives the notorious boom-bust cycles in memory.

TSMC continues to be a huge winner. Micron seems in fine shape as well and would be happy to see Tsinghua go the way of Jinhua, even though we don’t think that will happen.

Equipment companies may see a hiccup or two in revenue recognition, but likely not more than that unless things really go off the tracks, like the US upping the ante. While that is a possibility, we think the administration seems too preoccupied with other fights, with too little time left on the clock.

Also Read:

Is Apple the Most Valuable Semiconductor Company in the World?

2021 will be the year of DRAM!

Post Election Fallout-Let the Chips Fall / Rise Where They May