
TSMC’s Reliability Ecosystem

by Tom Dillinger on 04-06-2022 at 10:00 am

AC accelerated stress conditions

TSMC has established a leadership position among silicon foundries, based on three foundational principles:

  • breadth of technology support
  • innovation in technology development
  • collaboration with customers

Frequent SemiWiki readers have seen how these concepts have been applied to the fabrication and packaging technology roadmaps, which continue to advance at an amazing cadence.  Yet, sparse coverage has typically been given to TSMC’s focus on process and customer product reliability assessments – these principles are also fundamental to the reliability ecosystem at TSMC.

At the recent International Reliability Physics Symposium (IRPS 2022), Dr. Jun He, Vice-President of Corporate Quality and Reliability at TSMC, gave a compelling keynote presentation entitled:  “New Reliability Ecosystem: Maximizing Technology Value to Serve Diverse Markets”.  This article provides some of the highlights of his talk, including his emphasis on these principles.

Technology Offerings and Reliability Evaluation

The figures above highlight the diverse set of technologies that Dr. He’s reliability team needs to address.  The reliability stress test methods for these technologies vary greatly, from the operating voltage environment to unique electromechanical structures.

Dr. He indicated, “Technology qualification procedures need to be tailored toward the application.  Specifically, the evaluation of MEMS technologies necessitates unique approaches.  Consider the case of an ultrasound detector, where in its end use the detector is immersed in a unique medium.  Our reliability evaluation methods need to reflect that application environment.”

For more traditional microelectronic technologies, the reliability assessments focus on accelerating defect mechanisms, for both devices and interconnect:

  • hot carrier injection (HCI)
  • bias temperature instability (NBTI for pFETs, PBTI for nFETs)
  • time-dependent dielectric breakdown (TDDB)
  • electromigration (for interconnects and vias)

Note that these mechanisms are highly temperature-dependent.
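
This temperature dependence is conventionally captured with an Arrhenius acceleration model, which is what lets a short, hot stress test stand in for years of field use. A minimal sketch, with an illustrative activation energy of 0.7 eV (actual values are mechanism- and technology-specific):

```python
import math

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor between a use and a stress temperature
    for a thermally activated mechanism (Arrhenius model)."""
    k = 8.617e-5  # Boltzmann constant, eV/K
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / k) * (1.0 / t_use - 1.0 / t_stress))

# Example: a mechanism with an assumed 0.7 eV activation energy,
# stressed at 125 C against a 55 C use condition.
af = arrhenius_af(0.7, 55.0, 125.0)
print(f"acceleration factor ~ {af:.0f}x")  # roughly 78x for these numbers
```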

As our understanding of the physics behind these mechanisms has improved, the approaches toward evaluating their impact to product application failure rates have also evolved.

Dr. He commented, “Existing JEDEC stress test standards are often based on mechanism acceleration using a DC Vmax voltage.  However, customer-based product qualification feedback did not align with our technology qualification data.  Typically, the technology-imposed operating environment restrictions were more conservative.”

This is of specific interest to high-performance computing (HPC) applications seeking to employ boost operating modes at increased supply voltages (within thermal limits).

Dr. He continued, “We are adapting our qualification procedures to encompass a broader set of parameters.  We are incorporating AC tests, combining Vmax, frequency, and duty cycle variables.”

The nature of “AC recovery” in the NBTI/PBTI mechanism for device Vt shift has been recognized for some time, and is reflected in device aging models.  Dr. He added, “We are seeing similar recovery behavior for the TDDB defect mechanism.  We are aggressively pursuing reliability evaluation methods and models for AC TDDB, as well.”

The figure above illustrates how the Vt shift due to BTI is a function of the duty cycle of the device input environment, as represented by the ratio of the AC-to-DC Vt difference.  The figure also highlights the newer introduction of a TDDB lifetime assessment for high-K gate dielectrics, as a function of input frequency and duty cycle.
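
The duty-cycle dependence can be illustrated with a deliberately simplified model. Assuming a power-law time dependence for the BTI-induced Vt shift and crediting only the time actually spent under stress, while ignoring the frequency effects and detailed recovery kinetics that production aging models capture, the AC-to-DC ratio reduces to a single power law (the exponent here is an illustrative assumption, not a foundry value):

```python
def bti_shift_ratio(duty_cycle, n=0.16):
    """Illustrative AC/DC BTI Vt-shift ratio.

    Assumes dVt ~ (t_stress)^n, so only the fraction of time spent
    under stress (the duty cycle) contributes:
        ratio = duty_cycle ** n
    Real aging models also capture frequency dependence and partial
    recovery kinetics; n = 0.16 is a commonly assumed exponent.
    """
    return duty_cycle ** n

for d in (1.0, 0.5, 0.1):
    print(f"duty {d:.0%}: AC shift ~ {bti_shift_ratio(d):.2f} of DC")
```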

Parenthetically, Dr. He acknowledged that end application product utilization can vary widely, and that AC reliability testing makes some usage assumptions.  He indicated that TSMC works with customers to establish appropriate margins for their operating environment.

Reliability Evaluation of New Device Types

TSMC has recently added resistive RAM (RRAM) and magneto-resistive RAM (MRAM) IP to their technology offerings.

The unique physical nature of the resistance change in the storage device for these technologies necessitates development of a corresponding reliability evaluation procedure, to establish retention and endurance specifications.  (For the MRAM technology, the external magnetic field immunity specification is also critical.)

For both these technologies, the magnitude and duration of the write current to the storage cell is a key design parameter.  The maximum write current is a crucial reliability factor.  For the MRAM example, a high write current through the magnetic tunnel junction to set/reset the orientation of the free magnetic layer in the storage cell degrades the tunnel barrier.

TSMC collaborates with customers to integrate write current limiting circuits within their designs to address the concern.  The figure below illustrates the write current limiter for the RRAM IP.

TSMC and Customer Collaboration Reliability Ecosystem

In addition to the RRAM and MRAM max write current design considerations, Dr. He shared other examples of customer collaboration, which is a key element of the reliability ecosystem.

Dr. He shared the results of design discussions with customers to address magnetic immunity factors – the figure below illustrates cases where the design integrated a Hall effect sensor to measure the local magnetic field.  The feedback from the sensor can be used to trigger corrective actions in the write cycle.

The customer collaboration activities also extend beyond design for reliability (DFR) recommendations.  TSMC shares defect Pareto data with customers.  Correspondingly, the TSMC DFR and design-for-testability (DFT) teams will partner with customers to incorporate key defect-oriented test screens into the production test flow.

Dr. He provided the example where block-specific test screens may be appropriate, as illustrated below.

Power management design approaches may be needed across the rest of the design to accommodate block-level test screens.

The figure below depicts the collaboration model, showing how customer reliability feedback is incorporated into both the test environment and as a driver for continuous improvement process (CIP) development to enhance the technology reliability.

Summary

At the recent IRPS, TSMC presented their reliability ecosystem, encompassing:

  • developing unique reliability programs across a wide breadth of technologies (e.g., MEMS)
  • developing new reliability methods for emerging technologies (e.g., RRAM, MRAM)
  • sharing design recommendations with customers to enhance final product reliability
  • collaborating closely with customers on DFR issues, and integrating customer feedback into DFT screening procedures and continuous improvement process focus

Reflecting upon Dr. He’s presentation, it is no surprise that these reliability ecosystem initiatives are consistent with TSMC’s overall principles.

-chipguy

Also read:

Self-Aligned Via Process Development for Beyond the 3nm Node

Technology Design Co-Optimization for STT-MRAM

Advanced 2.5D/3D Packaging Roadmap


Synopsys Tutorial on Dependable System Design

by Bernard Murphy on 04-06-2022 at 6:00 am

Dependability

Synopsys hosted a tutorial on the last day of DVCon USA 2022 on design/system dependability, which here they interpret as security, functional safety, and reliability analysis. The tutorial included talks from DARPA, AMD, Arm Research and Synopsys. DARPA and AMD talked about general directions and needs, Arm talked about their PACE reliability analysis technique, and Synopsys shared details on solutions they already have and on aligning standards in safety and security.

DARPA on practical strategies for security in silicon

Serge Leef (PM, DARPA) provided insight into the DARPA focus on scalable defense mechanisms in electronics, particularly the tradeoff between cost and security. They’re targeting not the big semis or consumer electronics guys, but rather mid-sized semis and defense companies, who are most interested in getting help. The ultimate goal of the project that Serge oversees is automatic synthesis of secure silicon (AISS). No one ever accused DARPA of thinking small.

Synopsys, Northrop Grumman, Arm and others have been selected to drive this effort. A security engine, developed in a different part of the program, will be integrated with commercial IP and security-aware tools. In a phased approach to full automation, systems will initially be composed around these various IP blocks. In a second phase, optimization will be configured within a platform-based architecture. In the final phase they aim to be able to specify a cost function around power, area, speed and security (PASS), allowing system teams to dial in the tradeoff they want.

As I said, an ambitious goal, but this is the organization that gave us the Internet after all 😀.

AMD on functional safety

Bala Chavali is a RAS architect at AMD (RAS is reliability, availability and serviceability/maintainability). Her main thesis in this talk was the challenge of meeting objectives across multiple dependability goals, each with its own standards and expectations. She breaks these down into reliability, safety, security, availability and maintainability.

In part the challenges arise from disconnected standards, lifecycle requirements and required compliance work products across these multiple objectives. In part they come from the lack of sufficient standards for IP suppliers, particularly around safety, security, and traceability.

Bala underscored the importance of unifying these objectives as much as possible to minimize duplicated effort. She sees value in aligning common standards efforts, for example in defining a generic dependability development lifecycle that leverages a holistic analysis data set, and in a common data exchange language across applications (automotive, industrial, avionics) and across the system design hierarchy. Bala mentioned the Accellera functional safety working group (on which she serves) as one organization working towards this goal, and IEEE P2581 as another with a similar objective.

Arm Research on PACE reliability analysis

Reiley Jeyapaul, Senior Research Engineer at Arm Research, talked about using formal tools for reliability estimation on a Cortex-R52 using their proof-driven architecturally correct execution (PACE) methodology. The objective is to estimate the fraction of the design which is vulnerable to soft errors (random failures), producing an architecture vulnerability factor (AVF). They suggest this as a model to derate the estimated FIT rate. This factor is needed since not all naïve vulnerabilities are real (an error may not propagate), and some are only conditionally vulnerable. Arm will license PACE models to partners for use in system vulnerability assessments.
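
The derating arithmetic itself is trivial; the value of PACE is in producing a defensible AVF. A sketch with purely hypothetical numbers:

```python
def derated_fit(raw_fit, avf):
    """Derate a raw soft-error FIT estimate by the architectural
    vulnerability factor (AVF): the fraction of raw upsets that can
    actually propagate to an architecturally visible failure."""
    return raw_fit * avf

# Hypothetical numbers for illustration: a block with a raw FIT of
# 200 and a formally derived AVF of 0.35.
print(f"{derated_fit(200.0, 0.35):.1f} FIT")  # 70.0 FIT
```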

Reiley provided detail on their formal technique and how they validated this method against exhaustive fault injection (EFI). The models are a little pessimistic, but not as pessimistic as the naïve model, and run dramatically faster than EFI. It sounds like a valuable capability for SoC designers.

Synopsys on automation for safety and security

Meirav Nitzan, director of functional safety and security solutions at Synopsys, closed the tutorial. She summed up capabilities offered by Synopsys for safety and security, in tools and IP and across both software and hardware design. I’m not going to attempt to summarize that long list. I think their selection for the DARPA AISS program is endorsement enough. I will call out a few points that I found interesting to my predominantly hardware audience.

We all know the tools and IP suites in this area. To these they add SW test libraries for safety and software security analysis for 3rd-party SW running on HW. They also provide virtual prototyping for developing secure SW, and of course fault simulation for FMEDA analyses. I also found it interesting that they support fault modeling for malicious attacks, as well as sample fault simulation in emulation, which I would guess would be valuable in testing vulnerability to probing attacks.

Meirav wrapped up by reiterating the work IEEE P2581 is doing to align security and safety requirements – a worthy goal for all of us. Learn more about Synopsys’ solutions for mission-critical silicon and software development HERE.

Also read:

Synopsys Announces FlexEDA for the Cloud!

Use Existing High Speed Interfaces for Silicon Test

Getting to Faster Closure through AI/ML, DVCon Keynote


The Importance of Low Power for NAND Flash Storage

by Tom Simon on 04-05-2022 at 10:00 am

Low Power for NAND Flash

Even though we all know that reducing power consumption in NAND Flash Storage is a good idea, it is worthwhile to take a deeper dive into the underlying reasons for this. A white paper by Hyperstone, a leading developer of Flash controllers, discusses these topics, providing useful insight into the problem and its solutions. The paper is titled “Power Consumption in NAND Flash Storage.”


It almost goes without saying that the most obvious reason to reduce NAND flash storage power is to reduce operating costs – lower power means lower utility bills. Furthermore, lower power can also mean packaging and cooling system savings, which reduce end product costs. Along with these, there are several other equally important reasons.

Regarding the key issues for NAND flash devices, the Hyperstone white paper mentions thermal stress. Increasing complexity and higher performance translate into more thermal stress for semiconductors. The problem is especially aggravated when devices are located in industrial or automotive settings where heat is already an issue. One property of semiconductors that works to our disadvantage is that when they operate at higher temperatures, they use more power – creating a positive feedback loop. The white paper points out that a temperature rise from 50°C to 100°C leads to a ten-fold increase in leakage current. Thermal stress can lead to multiple kinds of failures, such as device breakdown, electromigration and physical fatigue.
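
That data point implies roughly one decade of leakage growth per 50°C. A toy exponential fit calibrated to the white paper’s observation (an illustration, not a physical device model):

```python
def leakage_factor(t_c, ref_c=50.0, decade_per_c=50.0):
    """Relative leakage vs. a reference temperature, calibrated to the
    white paper's data point: 50 C -> 100 C gives a ten-fold increase,
    i.e. one decade per 50 C. An illustrative fit only."""
    return 10.0 ** ((t_c - ref_c) / decade_per_c)

print(leakage_factor(100.0))         # 10.0  (the quoted ten-fold rise)
print(round(leakage_factor(75.0), 2))  # ~3.16x at the midpoint
```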

Leakage current, an important issue for NAND flash devices, is static current that flows through devices when they are not active. Unfortunately, the percentage of chip power consumption from leakage is growing as chips move to smaller more advanced process nodes. Additionally, there is a tradeoff between performance and leakage power. Lower threshold transistors, which have more leakage, are necessary for higher performance. The Hyperstone paper describes some specialized techniques for applying bias voltages to transistors to vary their leakage current as needed based on time-dependent performance requirements.

Within the NAND flash controller, the different functional elements have their own power requirements, which can change depending on workloads and operational modes. The white paper does a good job of delineating the major functional blocks that can affect power consumption. Likewise, the NAND flash itself requires different levels of power for different operations. These also depend on whether it is single-level cell (SLC) or multi-level cell (MLC) flash.

The Hyperstone white paper describes the power implications of operational modes for the host interface over PCIe, and also discusses the power characteristics of DRAM, which is often used for caching. Error correction can play a hard-to-predict role in power usage. Encoding error correction information is not compute intensive. However, when there are errors – especially multiple errors – extensive computation can be involved, driving up power consumption.

As a result, getting objective power measurements under different workloads and operating speeds can be tricky. Benchmarking power consumption is required to make a realistic determination of low power performance. All of the above factors and more come into play. Many decisions in the total system design influence the result. DC-to-DC power converters need to be properly selected to handle transitions from low power modes to active modes. Interfaces should fully support low power modes. Controller chips and their ancillary chips need to include advanced and complex power saving features, such as clock gating and power gating. Hyperstone includes information about their use of CrystalDiskMark for benchmarking SSDs with Hyperstone flash memory controllers. They even include several graphs of power performance over long intervals that show their relative performance versus a competitor.

Because power consumption is such an important consideration for SSDs, it is good to get a deeper insight into its sources and solutions. Hyperstone does a good job of covering all of this in their white paper. You can download the white paper and other relevant Flash Storage white papers on the website of Hyperstone.


Security Requirements for IoT Devices

by Daniel Nenni on 04-05-2022 at 6:00 am

IoT Product Lifecycles SemiWiki

Designing for secure computation and communication has become a crucial requirement across all electronic products.  It is necessary to identify potential attack surfaces and integrate design features to thwart attempts to obtain critical data and/or to access key intellectual property.  Critical data spans a wide variety of assets, including financial, governmental, and personal privacy information.

The security of intellectual property within a design is needed to prevent product reverse engineering.  For example, an attacker may seek to capture firmware running on a microcontroller in an IoT application to provide competitive manufacturers with detailed product information.

Indeed, the need for a security micro-architecture integrated in the design is applicable not only to high-end transaction processing systems, but to IoT devices as well.

Vincent van der Leest, Director of Product Marketing at Intrinsic ID, recently directed me to a whitepaper entitled, “Preventing a $500 Attack Destroying Your IoT Devices”.  Sponsored by the Trusted IoT Ecosystem Security Group of the Global Semiconductor Alliance (GSA), with contributions from Intrinsic ID and other collaborators, the theme of this primer is that a security architecture is definitely a requirement for IoT devices.  The rest of this article covers some of the key observations in the paper – a link is provided at the bottom.

Types of Attacks

There are three main types of attacks that may yield information to attackers:

  • side-channel attacks: non-invasive listening attacks that monitor physical signals coming from the IoT device
  • fault injection attacks: attempts to disrupt the internal operation of the IoT device, such as attempting to modify program execution of an internal microcontroller to gain knowledge about internal confidential data or interrupt program flow
  • invasive attacks: physical reverse engineering of the IoT circuitry, usually too expensive a process for attackers

For the first two attack types above, current IoT systems offer an increasing number of (wired and wireless) interfaces providing potential access.  Designers must be focused on security measures across all these interfaces, from memory storage access (RAM and external non-volatile flash memory) to data interface ports to test access methods (JTAG).

The paper goes into more detail on the security architecture, Cryptographic Keys and a Digital Signature, plus Key Generation.

Examples of Security Measures

The first step to prevent IoT attacks is taking certain design considerations into account. Here are some examples of design and program execution techniques to consider:

  • firmware storage

An obvious malicious attack method would be to inject invalid firmware code into the IoT platform.  A common measure is to execute a small boot loader program upon start-up from internal ROM (not externally visible), then continue to O/S boot and application firmware from flash.  As the intent of using flash memory is to enable in-field product updates and upgrades, this externally-visible attack surface requires a security measure.  For example, O/S and application code could incorporate a digital signature evaluated for authentication by the security domain.
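
The chain of trust described above can be sketched as follows. Production secure boot verifies an asymmetric signature (e.g., ECDSA or RSA) against a public key anchored in ROM or fuses; a keyed MAC stands in here only to keep the sketch self-contained, and all names are hypothetical:

```python
import hashlib
import hmac

# Key that would live in the immutable security domain (ROM/fuses).
# Real designs verify an asymmetric signature against a ROM-anchored
# public key; an HMAC stand-in keeps this sketch self-contained.
DEVICE_KEY = b"rom-provisioned-secret"

def sign_image(image: bytes) -> bytes:
    """Tag produced at firmware-release time (hypothetical scheme)."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def boot(image: bytes, signature: bytes) -> str:
    """Boot-loader check: authenticate the flash-resident firmware
    before transferring control to it."""
    if hmac.compare_digest(sign_image(image), signature):
        return "boot"   # signature valid: jump to O/S / application
    return "halt"       # reject tampered or mis-signed image

fw = b"\x00application firmware image\xff"
good_sig = sign_image(fw)
print(boot(fw, good_sig))                # boot
print(boot(fw + b"injected", good_sig))  # halt
```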

  • runtime memory security

Security measures may be taken to perform integrity checks on application memory storage during runtime operation.  These checks could be evaluated periodically, or triggered by a specific application event.  Note that these runtime checks really only apply to relatively static data – examples include:  application code, interrupt routines, interrupt address tables.
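
Such a runtime check reduces to comparing a freshly computed digest of the static region against a baseline captured at secure boot. A minimal sketch (the in-memory stand-in for code memory and all names are hypothetical):

```python
import hashlib

def digest(region: bytes) -> bytes:
    return hashlib.sha256(region).digest()

# Baseline taken once, at secure boot, over a static region
# (application code, interrupt routines, interrupt address tables).
code_region = bytearray(b"\x90" * 64)  # stand-in for code memory
baseline = digest(bytes(code_region))

def integrity_check() -> bool:
    """Periodic (or event-triggered) runtime check: recompute the
    digest and compare against the boot-time baseline."""
    return digest(bytes(code_region)) == baseline

print(integrity_check())  # True
code_region[10] ^= 0xFF   # simulate a fault injection / overwrite
print(integrity_check())  # False
```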

  • application task programming

An early design decision involves defining which tasks within the overall application necessitate security measures, and different operating system privileges.  As a result, the application may need to be divided into small subtasks.  This will also simplify the design of the security subsystem, if the task complexity is limited.

  • redundant hardware/software resources

For additional security, it may be prudent to execute critical application code twice, to detect an attack method that attempts to inject a glitch fault into the IoT system.

  • attestation

An additional measure to integrate into the design is the means by which the security subsystem attests (through an external interface) to confirm the IoT product is initialized as expected.  If runtime security checks are employed, the design also needs to communicate that operational security is being maintained.

IoT Product Development and Lifecycle

The figure above provides a high-level view of the IoT product lifecycle.  During IC fabrication, initial configuration and boot firmware are typically written to the device.  Cryptographic keys would be included as part of the security subsystem.  During IoT system integration, additional security data may also be added to the SoC.

Summary

The pervasive and diverse nature of IoT applications, from industrial IoT deployments to consumer (and healthcare) products, necessitates a focus on security of critical economic and personal privacy assets.  The whitepaper quotes a survey indicating that security concerns continue to be a major inhibitor to consumer IoT device adoption.  A recommendation in the whitepaper is for the IoT product developer to pursue an independent, third-party test and certification of the security features.

Here is the whitepaper: Preventing a $500 Attack Destroying Your IoT Devices


Also read:

WEBINAR: How to add a NIST-Certified Random Number Generator to any IoT device?

Enlisting Entropy to Generate Secure SoC Root Keys

Using PUFs for Random Number Generation


Designing a FinFET Test Chip for Radiation Threats

by Tom Simon on 04-04-2022 at 10:00 am

Charged Particles

Much of the technology that goes into aerospace applications is some of the most advanced technology that exists. However, these same systems must also offer the highest level of reliability in what is arguably an extremely difficult environment. For semiconductors, a major environmental risk in aerospace applications is the effect of charged particles in the upper atmosphere and space.  Charged particles can lead to soft errors, where an IC is not permanently damaged, but its operation is temporarily disrupted, leading to unpredictable results. Of course, ICs used in aerospace are hardened to limit the effects of charged particles, but their underlying process technology plays a big role in their susceptibility, especially in the case of FinFETs.

To lower the risk from charged particles, aerospace ICs have tended to use trailing process nodes. However, new nodes offer many attractive characteristics, such as reduced power and higher levels of integration. According to a Siemens EDA white paper, the European Space Agency (ESA) was interested in evaluating various FinFET nodes to determine which ones were the most promising candidates for use in their flight systems. The white paper titled “IROC tapes out aerospace SoC with Aprisa place-and-route” talks about ESA’s work with IROC Technologies to design test chips for evaluating the effects of charged particles on these advanced nodes.

IROC has broad capabilities for designing and testing ICs for quality and reliability. Yet, when ESA approached them to design self-contained test vehicles for FinFET processes, they needed to put in place a complete FinFET SoC design flow for 16nm. The test chip design called for standard cells, memories, complex IP and more. Along with the structures being tested, the chip also had to include support circuitry to measure, record and analyze the performance of the test structures. The monitoring blocks had to be completely immune to radiation.

Like many companies looking for tools to set up their flow, IROC started looking at the tools qualified for the targeted TSMC N16 FinFET process. They were of course familiar with Calibre, but also saw that Siemens’ Aprisa place and route solution was included as well. IROC decided to proceed with using Aprisa for this project. Their initial decision, guided by Aprisa’s integration with other Siemens tools and tools from other vendors, was affirmed once they saw how intuitive and easy Aprisa was to set up and use. IROC used the reference flow provided by the Aprisa support team to go through their rapid learning process. One example of this smooth bring-up was the ease of setting up Aprisa to use the TSMC PDK for their process.

IROC started with the design partitioning step using a combination of custom and RTL based blocks. Aprisa let them generate the Liberty and layout views necessary for floor planning and chip assembly. Within a few days of setting up the flow they had completed first pass placement. The built-in timing analysis engine only required a few lines of Tcl to give them timing feedback that correlated well with their final sign off tools.

In the white paper IROC said they were pleased with the quick turnaround of the detail-route-centric design results provided by Aprisa’s multithreading capability. According to the white paper, IROC saw high quality results that reduced the need for iterations to achieve design closure. In only three months IROC went from tool installation all the way through to a successful tapeout. For a FinFET process this is a remarkable achievement. They were able to use the silicon to perform the targeted reliability analysis for ESA. The aggressive schedule for the project meant that IROC made the decision to proceed with Aprisa without a traditional evaluation. Their assessment that Aprisa would perform well for their project appears to have been well founded. No doubt the reputation that Siemens has earned for its EDA tools played a large factor in this decision. Aprisa is also certified down to TSMC N6 and is in the process of gaining certification for TSMC N5 and N4. The white paper is available for download on the Siemens EDA website.

Also read:

Path Based UPF Strategies Explained

Co-Developing IP and SoC Bring Up Firmware with PSS

Balancing Test Requirements with SOC Security


Bridging Analog and Digital worlds at high speed with the JESD204 serial interface

by Daniel Nenni on 04-04-2022 at 6:00 am

Comcore JESD204 Webinar SemiWiki

We are delighted to showcase our “Bridging Analog and Digital worlds at high speed with the JESD204 Serial Interface” webinar on April 20th, in case you missed the live webinar back in February 2022.

To meet the increased demand for converter speed and resolution, JEDEC proposed the JESD204 standard, describing a new, efficient serial interface for data converters. Introduced in 2006, the JESD204 standard offered support for multiple data converters over a single lane, with the subsequent revisions A, B, and C successively adding features such as support for multiple lanes, deterministic latency, and error detection and correction, while constantly increasing lane rates. The JESD204D revision is currently in the works and aims to once more increase the lane rate, to 112Gbps, with a change of lane encoding and a switch of the error correction scheme to Reed-Solomon. Most of today’s high-speed converters make use of the JESD standard, and the applications include, but are not limited to, Wireless, Telecom, Aerospace, Military, Imaging, and Medical – in essence, anywhere a high-speed converter can be used.

Watch the replay here

 

The JESD204 standard is dedicated to the transmission of converter samples over serial interfaces. Its framing allows for mapping M converters, each providing S samples with a resolution of N bits, onto L lanes carrying F-octet frames that, in succession, form larger Multiframes or Extended Multiblock structures described by the K or E parameters. These frames allow for high- or low-density placement of samples (the HD parameter), and for each sample to be accompanied by CS control bits, either within a sample container of N’ bits or at the end of a frame (CF). These symbols, describing the sample data and frame formatting, paired with the mapping rules dictated by the standard, give both parties engaged in the transmission a shared understanding of how the transmitted data should be mapped and interpreted.
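
Under the simplifying assumption that samples pack evenly, ignoring HD packing and the control-bit corner cases, these parameters are tied together by a bookkeeping identity: the L lanes must carry M × S samples of N’ bits in every F-octet frame. A sketch:

```python
def octets_per_frame(M, S, N_prime, L):
    """Simplified JESD204 link-parameter relation.

    Per frame, the payload carried on L lanes (L * F octets) must
    hold M converters times S samples of N' bits each, so
        F = M * S * N' / (8 * L)
    This ignores HD packing and control-bit (CS/CF) corner cases.
    """
    bits = M * S * N_prime
    assert bits % (8 * L) == 0, "parameters do not pack evenly"
    return bits // (8 * L)

# Example: 4 converters, 1 sample per frame, 16-bit sample
# containers, 2 lanes.
print(octets_per_frame(M=4, S=1, N_prime=16, L=2))  # F = 4
```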

The 8b10b encoding scheme of JESD204, JESD204A and JESD204B, paired with Decision Feedback Equalizers (DFEs), may not work efficiently above 12.5Gbps, as it may not offer adequate spectral richness. For this reason, and for better relative power efficiency, 64b66b encoding was introduced in JESD204C, targeting applications up to 32Gbps. JESD204D follows in its footsteps with even higher line rates planned, up to 112Gbps utilizing PAM4 PHYs, and demands a new encoding to efficiently encapsulate the Reed-Solomon Forward Error Correction (RS-FEC) 10-bit symbol-oriented mapping.
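
The coding overhead being traded off here is easy to quantify: 8b10b spends 20% of the line rate on the code, while 64b66b spends about 3%. A quick comparison:

```python
def payload_rate(lane_rate_gbps, payload_bits, coded_bits):
    """Usable payload throughput after line-coding overhead."""
    return lane_rate_gbps * payload_bits / coded_bits

# 8b10b at a 12.5 Gb/s lane rate vs. 64b66b at 32 Gb/s
print(round(payload_rate(12.5, 8, 10), 2))   # 10.0  (80% efficiency)
print(round(payload_rate(32.0, 64, 66), 2))  # 31.03 (~97% efficiency)
```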

Deterministic latency, introduced in JESD204B, allows the system to maintain constant latency throughout reset and power-up cycles, as well as re-initialization events. This is accomplished in most cases by providing a system reference signal (SYSREF) that establishes a common timing reference between the Transmitter and Receiver and allows the system to compensate for any latency variability or uncertainty.

The main traps and pitfalls of system design around the JESD204 standard would deal with system clocking in subclass 1 where deterministic latency is achieved with the use of SYSREF as well as SYSREF generation and utilization under different system conditions. Choosing the right frame format and SYSREF type to match system clock stability and link latency can also prove challenging.

Watch the replay here
Also read:

CEO Interview: John Mortensen of Comcores

WEBINAR: Unlock your Chips’ Full Data Transfer Potential with Interlaken

WEBINAR: O-RAN Fronthaul Transport Security using MACsec


EUV Resist Absorption Impact on Stochastic Defects

by Fred Chen on 04-03-2022 at 10:00 am


Stochastic defects continue to draw attention in the area of EUV lithography. It is now widely recognized that stochastic issues not only come from photon shot noise due to low (absorbed) EUV photon density, but also the resist material and process factors [1-4].

It stands to reason that resist absorption of EUV light, which is depth-dependent, will also have an important bearing on the occurrence of stochastic defects. EUV resists have so far been mainly chemically amplified, with a typical absorption coefficient of 5/um [5], meaning that exp(-5) ~ 0.67% represents the fraction of light that is transmitted by a 1 um thick layer of the resist. For a layer only 20 nm thick, on the other hand, 90% of the light is transmitted, meaning that only 10% of the light is absorbed. For a 40 nm thick resist layer, it would be interesting to compare the absorption in the top half vs. the bottom half (Figure 1).

Figure 1. EUV photon absorption in the top half (left) vs. the bottom half (right) of a chemically amplified resist. The threshold to print here is taken to be 24 photons absorbed per 2 nm x 2 nm pixel. The assumed dose (averaged over the displayed 80 nm x 80 nm area) is 60 mJ/cm2. The oval outline is for reference to visually assist observing the stochastic absorption profile.
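The transmission figures quoted above follow directly from the Beer-Lambert law, exp(-alpha*t); a few lines of Python (a sketch using the ~5/um coefficient cited from [5]) reproduce them and split the absorption between the two halves of a 40 nm layer:

```python
import math

def transmitted(alpha_per_um: float, t_um: float) -> float:
    """Beer-Lambert transmitted fraction through thickness t."""
    return math.exp(-alpha_per_um * t_um)

alpha = 5.0  # /um, typical for a chemically amplified EUV resist [5]

t_1um  = transmitted(alpha, 1.0)    # ~0.0067, i.e., ~0.67% transmitted
t_20nm = transmitted(alpha, 0.020)  # ~0.905, i.e., ~90% transmitted

# Split a 40 nm layer into 20 nm halves: the bottom half sees only the
# light the top half passed, so it absorbs a smaller share of the dose.
absorbed_top    = 1 - t_20nm             # ~9.5% of incident photons
absorbed_bottom = t_20nm * (1 - t_20nm)  # ~8.6% of incident photons
```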

The bottom half receives less light than the top (about 90% of the incident intensity), since some light has already been absorbed above it; of that, it again absorbs only ~10%, so it captures a smaller share of the incident photons than the top half does. Consequently, areas in the bottom half of the resist layer are more likely to fall below the nominal threshold photon absorption density to print. This leads to a higher probability of stochastic defects forming from underexposure in the lower half of the resist.
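This sensitivity is easy to demonstrate with a small Monte Carlo sketch (the mean photon counts and pixel count below are illustrative assumptions, not values from the figures): under Poisson statistics, a ~10% drop in the mean absorbed photons per pixel roughly doubles the fraction of pixels missing a threshold of 24.

```python
import math
import random

def sub_threshold_fraction(mean_photons: float, threshold: int,
                           pixels: int = 20_000, seed: int = 0) -> float:
    """Fraction of pixels whose Poisson-sampled photon count misses the threshold."""
    rng = random.Random(seed)
    limit = math.exp(-mean_photons)
    def poisson() -> int:
        # Knuth's multiplication method; adequate for small means.
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1
    return sum(poisson() < threshold for _ in range(pixels)) / pixels

top = sub_threshold_fraction(mean_photons=30.0, threshold=24)  # brighter top half
bot = sub_threshold_fraction(mean_photons=27.0, threshold=24)  # ~10% dimmer bottom half
# bot comes out substantially larger than top
```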

Given that lower resist absorption aggravates stochastic effects, we would expect metal oxide resists, with their higher absorption [5], to fare much better. Yet in Figure 2, we see something significantly different.

Figure 2. EUV photon absorption in the top half (left) vs. the bottom half (right) of a metal oxide resist. The threshold to print here is taken to be 24 photons absorbed per 2 nm x 2 nm pixel. The assumed dose (averaged over the displayed 80 nm x 80 nm area) is 60 mJ/cm2. The oval outline is for reference to visually assist observing the stochastic absorption profile.

The lower half of the resist layer receives significantly less light to begin with, so the absorbed photons ultimately define a smaller region at the bottom of the resist. Since the metal oxide resist is a negative-tone resist, the photon absorption determines where the resist remains after development. This means toppling of resist features (or missing resist features) from a narrower bottom than top (‘undercutting’) can be a stochastic defect specific to negative-tone resists, particularly metal oxide resists.

In common use, DUV photoresists would never have this problem, even with a lower dose and a lower absorption coefficient on the order of 1/um (Figure 3), because the resist is thicker and the pixels are effectively larger.

Figure 3. ArF photon absorption in the top half (left) vs. the bottom half (right) of a chemically amplified resist. The threshold to print here is taken to be 1200 photons absorbed per 7 nm x 7 nm pixel. The assumed dose (averaged over the displayed 280 nm x 280 nm area) is 30 mJ/cm2. The oval outline is for reference to visually assist observing the absorption profile.

A fairer EUV vs ArF comparison would use realistically difficult scenarios. Figure 4 compares the photons per pixel absorbed in the bottom half of the resist for 40 nm square pitch EUV (0.5 nm x 0.5 nm pixel) with 40 nm metal oxide resist thickness vs. 80 nm square pitch ArF (1 nm x 1 nm pixel) with 100 nm chemically amplified resist thickness, using a negative tone hole pattern. The EUV image assumes an ideal binary mask with quadrupole illumination, while the ArF image assumes a 6% attenuated phase shift mask with cross dipole illumination.

Figure 4. EUV (left) vs. ArF (right) photon absorption in the bottom half of the resist. The EUV resist thickness is 40 nm, while the ArF resist thickness is 100 nm. The pixel size is 0.5 nm x 0.5 nm for the EUV case, and 1 nm x 1 nm for the ArF case. The threshold to print here is taken to be 1.6 photons/pixel for EUV and 3.3 photons/pixel for ArF, to target the half-pitch. The assumed dose (averaged over the pitch) is 60 mJ/cm2.

At the finer pixel size, the edge roughness becomes more apparent for both wavelengths. However, the EUV case shows many spots in the background where the exposure is sub-threshold, which can lead to resist removal during development, whereas the ArF case is free of such spots. The reason, in fact, has to do with the higher contrast of phase-shift masks compared to binary masks. In the binary case, the bright background intensity is closer to that of the central dark spot, i.e., there is less contrast, giving the background noise variation more opportunity to reach levels comparable to the dark spot.

Acid generation (in chemically amplified resists) and electron release (following EUV exposure) lead to smoothing effects, which are simulated here using 4x Gaussian smoothing (sigma=2 pixels). This corresponds to an effective resist blur of 4 times the pixel size, i.e., 2 nm for the EUV case and 4 nm for the ArF case.
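Such a smoothing step can be sketched as a separable Gaussian convolution over a shot-noise count map (illustrative only; this is not the simulator used for the figures, and the map size and mean count below are assumptions):

```python
import numpy as np

def gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian blur (rows then columns, zero-padded edges)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

rng = np.random.default_rng(0)
counts = rng.poisson(lam=16.0, size=(160, 160)).astype(float)  # noisy absorbed-photon map
smoothed = gaussian_blur(counts, sigma=2.0)  # sigma = 2 pixels, as in the text
# Pixel-to-pixel noise drops sharply while the interior mean is preserved,
# so isolated sub-threshold pixels are suppressed but edge roughness remains.
```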

Figure 5. EUV (left) vs. ArF (right) latent image in the bottom half of the resist, after 4x Gaussian smoothing (sigma=2 pixels). The EUV resist thickness is 40 nm, while the ArF resist thickness is 100 nm. The pixel size is 0.5 nm x 0.5 nm for the EUV case, and 1 nm x 1 nm for the ArF case. The threshold to print here is taken to target the half-pitch. The assumed dose (averaged over the pitch) is 60 mJ/cm2.

With smoothing, the general spottiness of the image is removed, leaving residual edge roughness in both the ArF and EUV cases (Figure 5). However, since the EUV case had more random background counts, its edge looks relatively rougher, and there is a tendency for defects to occur at and near the edge. These can also impact the effective edge placement.

To conclude, a higher absorption coefficient by itself does not prevent stochastic defects from occurring near the bottom of the resist layer, so a higher dose to compensate for the lower absorbed photon density there is still necessary.

References

[1] https://www.jstage.jst.go.jp/article/photopolymer/31/5/31_651/_pdf

[2] http://www.lithoguru.com/scientist/litho_papers/2019_Metrics%20for%20stochastic%20scaling%20in%20EUV.pdf

[3] https://www.spiedigitallibrary.org/journals/journal-of-micro-nanopatterning-materials-and-metrology/volume-20/issue-01/014603/Contribution-of-EUV-resist-counting-statistics-to-stochastic-printing-failures/10.1117/1.JMM.20.1.014603.full?SSO=1

[4] http://euvlsymposium.lbl.gov/pdf/2014/6b1e6ae745cd40aba5940af61c0c908e.pdf

[5] http://euvlsymposium.lbl.gov/pdf/2015/Posters/P-RE-06_Fallica.pdf

This article was first published in LinkedIn Pulse: EUV Resist Absorption Impact on Stochastic Defects

Also read:

Etch Pitch Doubling Requirement for Cut-Friendly Track Metal Layouts: Escaping Lithography Wavelength Dependence

Horizontal, Vertical, and Slanted Line Shadowing Across Slit in Low-NA and High-NA EUV Lithography Systems

Pattern Shifts Induced by Dipole-Illuminated EUV Masks


A Blanche DuBois Approach Won’t Resolve Traffic Trouble

by Roger C. Lanctot on 04-03-2022 at 6:00 am


Near the end of Tennessee Williams’ “A Streetcar Named Desire,” the Blanche DuBois character, who has suffered a mental breakdown following an implied rape, tells the doctor and matron who have come to take her to the hospital: “Whoever you are – I have always depended on the kindness of strangers.” Sadly, the same mentality prevails among municipalities turning to Waze to resolve their traffic troubles.

More than 3,000 municipalities around the world – according to Waze – have turned to the popular navigation app to better understand their own traffic woes and resolve challenges. The irony here, of course, is that Waze often CREATES traffic problems by routing vehicles through neighborhoods abutting major highways – or even navigating users into dangerous areas such as favelas in Brazil, occupied territories in Israel, or forest fire zones in California.

Waze’s traffic insights are derived from crowd-sourced “probe” data from users of the app. The location data from users’ mobile phones traces the pace and direction of travel and users can report road hazards or the location of law enforcement vehicles for the benefit of other drivers.

The result – when calibrated to convert real-time data into predictive traffic models – is a compelling navigation tool that has put pressure on car makers that offer built-in navigation systems that often use outdated maps. Waze claims 140M users worldwide and that kind of ubiquity is hard to ignore.

It’s also hard to ignore the fact that Waze has yet to define a profit-generating business model. The sheer desperation of this effort is reflected in the increasingly distracting array of display advertising that shows up in the app while in use. Oddly, you can’t enter a destination into the app while driving, but the app will happily try to distract you with impossible-to-read advertisements while it is navigating.

The source of Waze’s strength, though, is also its greatest weakness. Waze has no reliable means to validate the user inputs regarding hazards and other observations – and on multiple occasions users have punked Waze into re-routing drivers away from particular neighborhoods or locations.

Waze has also been known to use its negative impact on local traffic patterns as a door opener with local municipalities. Waze creates the traffic problem – blamelessly! – and then begins collaborating with local traffic authorities to “hack” the app to overcome the routing snafu that is riling citizens.

In the end, Waze is still relying on the “kindness” of strangers, and the company runs its Waze for Cities program almost like a protection racket – mesmerizing cities into relying on Waze to fix their traffic problems by using it as a communication conduit. In one such case, officials in Sandy Springs, Georgia, joined forces with Waze to communicate confusing traffic patterns to drivers through the app. On its face, this isn’t such a horrible idea, but failing to communicate the same information via 511 services, embedded navigation systems from Telenav, NNG, TomTom, and HERE, or even radio stations simply drives even more users to Waze.

The strangest thing about cities that have turned to Waze as a partner and a communications tool is that most cities have their own traffic information resources. In fact, cities have access to traffic cameras – thousands of them – mostly trained on predictable traffic hotspots.

In the U.S., the leading provider of nationwide traffic camera information is TrafficLand. TrafficLand is responsible for nearly half of all server-side traffic camera installations and is able to deliver still images or streaming video on demand. The company also performs digital analytics on the images it gathers.

With a proper front-end interface, built-in vehicle navigation systems could tap into TrafficLand content to allow drivers to make better-informed navigation decisions. There’d be no need to “trust” Waze, in this instance. Still images or video would confirm the reality of traffic conditions on the road ahead.

Better yet, what about auto makers as a source of front-facing camera data? Mercedes-Benz and the Netherlands’ Ministry of Infrastructure and Water Management have announced a two-year project whereby Mercedes-Benz will share anonymized sensor data from its vehicles with the agency for the identification of road safety hotspots and maintenance issues. The key difference here is the reliance on vehicle sensors – which include cameras – that generate verifiable and reliable inputs.

Between fixed cameras (TrafficLand) and vehicle-mounted cameras, local municipalities ought to be able to identify and resolve their traffic issues – without the assistance of Waze and its user-generated inputs. Cities can then share their data resources with any and all traffic information and navigation providers – not just Waze.

In fact, every city should have a data exchange program in place in which auto makers could participate. That could be a job for Telenav or NNG or HERE or TomTom. But please, not Waze. We shouldn’t rely entirely on strangers for our traffic insights when we can observe reality directly in real time.

Also read:

Auto Safety – A Dickensian Tale

No Traffic at the Crossroads

GM’s Super Duper Cruise


Cadence and DesignCon – Workflows and SI/PI Analysis

by Daniel Nenni on 04-02-2022 at 6:00 am


DesignCon 2022 is back to a live conference, from Tuesday, April 5th through Thursday, April 7th, at the Santa Clara Convention Center.

Introduction

DesignCon is a unique gathering in our industry.  Its roots lie in the complex design and analysis requirements of (long-reach) high-speed interfaces.  Technical presentations and vendor exhibits on the conference Expo floor have spanned a wide variety of topics – e.g., SerDes Tx/Rx design methods;  PCB interconnect and dielectric materials;  cables and connectors for high-speed signals;  and EDA tools for PCB design, plus extraction and simulation of (strongly frequency-dependent) interconnect parameters.

World-renowned signal integrity (SI) and power integrity (PI) experts offered their insights into how to lay out PCB busses and place power decoupling and DC blocking capacitors for improved SI/PI fidelity.

And, the conference Expo highlighted the latest in technical equipment for high-speed interface analysis (on reference boards), from oscilloscopes and signal generators to bit-error rate testers to spectrum analyzers and vector network analyzers.

As the complexity of system designs has grown, and as high-speed interface design has expanded to encompass short, medium, and long-range topologies, so too has DesignCon expanded.

The emphasis on the materials and properties of boards, cables, and connectors is still strong, to be sure.  The introduction of 56Gbps and 112Gbps data rate standards (and pulse-amplitude modulated signal levels) has added to the focus on model extraction accuracy.  The transition from simple IBIS Tx/Rx electrical elements to more complex IBIS-AMI functional plus electrical models has enabled comprehensive “end-to-end” simulations.
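For context on the PAM4 signaling mentioned above, here is a minimal sketch (the Gray-coded level mapping is the conventional one; the helper function is hypothetical): four amplitude levels carry two bits per symbol, doubling throughput at a given symbol rate while shrinking the eye height, which is what drives the tighter demands on model extraction accuracy.

```python
# Conventional Gray-coded PAM4 mapping: adjacent levels differ by one bit,
# so a single-level decision error corrupts only one bit.
PAM4_GRAY = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map a flat bit sequence (even length) onto PAM4 amplitude levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [PAM4_GRAY[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

symbols = pam4_encode([0, 0, 0, 1, 1, 1, 1, 0])  # -> [-3, -1, 1, 3]
```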

In addition to the evolution of these more demanding tasks, there are three key focus areas evident in this year’s DesignCon program.

  • advanced 2.5D/3D packaging technology necessitates “full 3D” electromagnetic model extraction

Traditional rules-of-thumb used for early PCB layout definition of high-speed interface signals no longer apply to today’s system-in-package integration.  Insertion loss guidelines in “dB per inch per GHz” have no significance in a 2.5D package with heterogeneous functionality, spanning very wide high bandwidth memory (HBM) busses to source-synchronous short-reach interfaces between die.  The nature of this advanced packaging technology requires a full 3D extraction model for accurate analysis of insertion, reflection, and crosstalk behavior.

  • tight integration between design and analysis tools is absolutely required

Traditional PCB and backplane-centric high-speed interface design didn’t offer many degrees of freedom – e.g., board layer materials and thicknesses, via topologies and types (through vias or backdrilled vias, for impedance control).

The development of a 2.5D package requires focus on interface planning, as an integral part of the initial design flow.  The sheer number of interface connections and the disparity in their clocking and IL/RL/Xtalk requirements necessitates a tight design and analysis optimization loop.  A PCB may be able to accommodate the insertion of a high-speed repeater (re-driver or re-timer) component later in the design cycle to address a failing signal spec.  A 2.5D package design offers no such flexibility – it has to be “first time right”.  The design and analysis workflow must offer fast, accurate results. In addition, there is typically very limited SI/PI expertise available to design companies – these workflows need to be available to many designers, based on a familiar EDA platform environment.

  • IP offerings are critical to accelerating product introductions integrating advanced interface standards

The pace at which new interface definitions are being introduced is rapid.  DesignCon has expanded to provide system developers with information on (silicon-proven, qualified) IP available for SoC integration.

Cadence at DesignCon

I recently had the opportunity to chat briefly with Sherry Hess at Cadence, to learn what new technologies Cadence will be presenting at this year’s DesignCon.  Indeed, their focus is on new workflows, improved 2.5D/3D modeling accuracy, and advanced IP.

Workflows      

Here are some of the workflow-related presentation sessions.

  • NO Exit Ramps Needed – Cadence’s System Design Workflow Delivers Seamless In-Design Analysis, Reducing Turnaround Time and Minimizing Risk
  • Mainstream Signal Integrity Workflow for PCI 6.0 PAM4 Signaling
  • Amphenol: 112G Connector and Board Design/Analysis Workflow
  • Meta (Facebook): MIPI-C Board and Camera Interface Design/Analysis Workflow
  • Microsoft: Interconnect Optimization of Wearables with an In-Design Analysis Workflow

Note that these presentations include collaborations with various other firms using these workflows, from component providers to systems companies.

Clarity 3D Solver Enhancements

The parametric model extraction of a 3D structure involves a tradeoff in accuracy versus computational resources.  The complex nature of 3D geometries requires an intricate finite element mesh, whether for a system-in-package or multiple packages on a combination of rigid and/or flex substrates.  The electromagnetic solver for the mesh is computationally demanding.

At DesignCon, Cadence will be demonstrating significant enhancements to their Clarity 3D solver, including:

  • a new distributed meshing algorithm, with significant reduction in simulation runtimes
  • a new machine-learning based algorithm for optimizing a “sweep” of design parameters
  • workflow integration with Cadence Allegro (and Allegro Package Designer), Integrity 3D-IC, and Virtuoso RF platforms

IP Strategy

It wasn’t long ago that 28Gbps was the emerging standard interface.  Cadence will also be presenting their IP development for 224Gbps.

  • The Future of 224G Serial Links

Appended below are DesignCon links of interest – at a minimum, if you are involved in functional and/or electrical interface design, from system-in-package to long-reach signaling, you should definitely get a FREE Expo pass.  And, be sure to stop by the Cadence Expo booth for product demonstrations and more technical information.

Cadence sessions at DesignCon:

https://events.cadence.com/event/df7b2870-50d2-48bd-846b-bd8e3c2ea7b2/summary

Cadence Clarity 3D Solver:

https://www.cadence.com/en_US/home/tools/system-analysis/em-solver/clarity-3d-solver.html

DesignCon 2022 Registration:

https://www.designcon.com/en/conference/passes-pricing.html

Also read:

Symbolic Trojan Detection. Innovation in Verification

Leveraging Virtual Platforms to Shift-Left Software Development and System Verification

Using a GPU to Speed Up PCB Layout Editing


Podcast EP69: Ayar Labs and the Future of Optical I/O

by Daniel Nenni on 04-01-2022 at 10:00 am

Dan is joined by Hugo Saleh, senior VP of commercial operations and managing director of Ayar Labs, UK. Hugo discusses the technology and application of optical I/O, its use and impact now and in the future.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.