
Closing the Communication Chasms in the SoC Design and Manufacturing Supply Chain

by Kalar Rajendiran on 06-08-2022 at 6:00 am


In sports, we’re all familiar with how even a team with the best individual players for every role needs to be coordinated to win a championship. In healthcare, a patient is better served when a well-trained primary care physician coordinates with the various medical specialists. The field of semiconductors involves a series of complex functional steps, from architecting a chip to designing it, all the way through manufacturing, quality control and logistics. Driven by increasing complexity, the semiconductor supply chain has become disaggregated over the last few decades.

This transformation has led to specialization by individual players in the ecosystem and enabled rapid advances within each functional area. Whether it is chip design, package design, packaging and test, or even logistics, subcontractors are involved. While these functional specializations have opened up tremendous opportunities for System-on-Chips (SoCs) to push performance, power and area (PPA) benefits, they have also introduced some vulnerabilities. The modern SoC design and manufacturing supply chain can lead to communication chasms between different players and missteps in various phases of the process. In a recent press announcement, Sondrel describes how these communication chasms can ripple through the entire supply chain, creating delays and huge cost overruns.

With chip development costs running in the millions of dollars and time to market schedules ever so tight, even a single misstep can be disastrous for a chip company. Extending the sports and healthcare analogy, chip development needs an entity to oversee all phases of the entire process. This entity should possess not only deep consulting capabilities but also have in-house expertise to deliver complete turnkey services to transform designs into tested, volume-packaged semiconductor chips. That’s why Sondrel offers a complete turnkey service from concept to shipping silicon so there is no possibility of any communication chasms. Sondrel takes total responsibility for the smooth running of every stage and every subcontractor in the supply chain. In addition, Sondrel offers many market-specific, reusable and customizable, reference platforms, which it calls Architecting the Future. These enable it to rapidly develop differentiated chip products for its customers as designing does not start from scratch every time. It also offers chip development consulting services including chip architectural study reports.

Sondrel has been serving fabless and systems companies in this capacity for a long time. It serves the Automotive, AI at the Edge, 8K Video, Smart Homes/Smart Cities, Consumer Devices, and Wearables markets and more. Sondrel’s designs have been incorporated into mobile phones, game consoles, security systems, AR/VR systems, network switches and routers, cameras, computer systems and many more. With a long successful track record of customer products in various end markets, Sondrel has mastered a holistic approach to developing chips for its customers. It deploys its deep understanding of all aspects of chip development to produce designs optimized for PPA and time to market requirements.

Sondrel’s Offerings

Full Turnkey Service

Sondrel’s full turnkey service manages every stage of the process from chip concept to final silicon. Its Operations Team manages all downstream stages after the design is done, from liaising with the fabs through to selecting the most appropriate OSAT, test development and logistics partners. Sondrel’s software team develops the required software up to the Board Support Package level and is also experienced in producing drivers and validation tests for the whole SoC. For more details about Sondrel’s Full Turnkey Service, please see here.

Architecting the Future® Reusable IP Platforms

Sondrel has developed a family of reference designs for major applications to help reduce risk and time to market for its customers. Customers can easily add their own IP as well as third party IP to rapidly create a differentiated solution. Then, Sondrel’s manufacturing service team provides a total unit cost estimate based on foundry, test, qualification and packaging choices. This is a very important part of viability analysis. For more details about the various Reusable IP Platforms offered, please visit here.

Architecture Study Service

The world of electronics is full of creative ideas, and the biggest challenge for a customer is to understand which ones can be turned into reality. Naturally, one wants to establish commercial viability before starting the whole process of building a chip. Sondrel’s architects explore different options using abstracted what-if modelling to test alternatives and arrive at a candidate architecture. Modelling and analysis of key system behaviors validate that the architecture will behave as intended. With its experience designing hundreds of chips, Sondrel provides a detailed architectural study report that includes a high-accuracy cost analysis for building the chip. For more details about its Architecture Study Service, visit here.

Also Read:

SoC Application Usecase Capture For System Architecture Exploration

Sondrel explains the 10 steps to model and design a complex SoC

Build a Sophisticated Edge Processing ASIC FAST and EASY with Sondrel


RISC-V embedded software gets teams coding faster

by Don Dingee on 06-07-2022 at 10:00 am


RISC-V processor IP is abundant. Open-source code for RISC-V is also widely available, but project-based code typically solves one specific problem. With only pieces of code, it’s often up to a development team to integrate a complete, application-ready stack for creating an embedded device. A commercial embedded software development stack puts proven tools together and gets teams to application coding faster. For RISC-V embedded software development, a new port of Siemens’ Nucleus ReadyStart provides that complete solution.

Five areas RISC-V embedded software stacks should cover

Embedded software development is a bit different than enterprise software or EDA software development. There’s a host-target model, where the target machine is very different from the host where coding is done. Target devices are often resource-constrained in size, memory and storage, connectivity, or power consumption. Performance is usually real-time aware in some way, with deadlines that must be met. Sometimes, visibility to what’s going on inside a device is more challenging because a user interface isn’t part of its deployed configuration.

Those differences call for a RISC-V embedded software development stack optimized for the job. State-of-the-art tools like Nucleus ReadyStart set the bar for such an environment in at least these five areas:

  • Toolchain and debugging tools – A good stack ties its tools into an integrated development environment (IDE), with easy editing and project management features. A performance-optimized C/C++ compiler is a must. JTAG or BDM debugging provides local or remote capability (with a debug agent on the target), with awareness of threads.
  • System-level trace and analysis tools – Being able to see performance bottlenecks in processes and threading is one capability. Another is the ability to see how power consumption relates to code activity, helping developers adjust efficiency.
  • RTOS kernel – Nucleus RTOS is a proven hard real-time operating system (RTOS) kernel for embedded devices. The RISC-V port brings the same small footprint and security features developers have come to expect. It is multicore enabled, has a power management API, and offers a path to safety certification for mission critical devices.
  • Connectivity protocols – For deeply embedded devices, support for USB and wireless protocols like Bluetooth and Zigbee enables many applications. Support for IPv4 and IPv6 helps add heavier protocols like Ethernet and Wi-Fi.
  • User interface framework – Not every embedded device has a user interface, but when one is called for, a robust framework like Qt significantly reduces development and debug time. One innovation is footprint management capability, helping to reduce target code size by configuring which library modules are included in the run-time image.

For embedded devices at the edge, integration with the cloud is becoming more common.  An optional add-on to Nucleus ReadyStart for RISC-V is the Nucleus IoT Framework. It adds support for connecting devices to Amazon Web Services (AWS), Microsoft Azure, and MindSphere from Siemens.

The entire idea is providing foundational value

Many users are turning to RISC-V because they see open-source hardware as the future. For them, it may seem strange to turn to commercial software running on their solution. But the entire idea behind open sourcing is providing foundational value that development teams can leverage to create project-specific value at higher levels of an application.

Yes, it may be possible for an embedded development team to piece together an open-source software development stack for RISC-V. It would take precious time and resources to create and maintain that effort. Teams without hard real-time requirements have more choices, for example, a RISC-V Linux distribution. Layering on requirements for multicore and threading, low power consumption, user interface, and connectivity might complicate that effort.

Meanwhile, competitors adopting a commercial solution like Nucleus ReadyStart would be up and running, with support from the teams at Siemens and the confidence gained from a solution deployed on millions of other devices. The Nucleus RTOS kernel is royalty-free, so there is no financial trade-off compared with deploying open-source RTOS or Linux offerings.

Most RISC-V hardware developers are using commercial EDA tools to create their designs, right? Why? Because there’s a huge amount of foundational value a team doesn’t have to recreate to design a chip successfully. The same thinking applies to tools for RISC-V embedded software development. The foundational value in Nucleus ReadyStart for RISC-V brings teams value they can build on right away in creating an embedded device application successfully.

For RISC-V developers evaluating embedded software options, a good place to start is with this short video demonstrating Nucleus RTOS for RISC-V.


Advanced Packaging Analysis at DesignCon

Advanced Packaging Analysis at DesignCon
by Tom Dillinger on 06-07-2022 at 10:00 am

meshing

The slogan for the DesignCon conference has been “where the chip meets the board”.  Traditionally, the conference has provided a breadth of technical presentations covering the design and analysis of high-speed communication interfaces and power integrity evaluations between chip, board, and system.

The recent DesignCon event at the Santa Clara Convention Center conveyed a noticeably different theme.  The emergence of 2.5D and 3D advanced packaging has necessitated the development of new tools and techniques for the extraction and channel simulation of the disparate interface topologies provided with these packages.

New classes of design requirements have emerged.  Whereas interfaces were typically denoted as long reach (LR), medium reach (MR), or short reach (SR), designers are now addressing the unique electrical requirements associated with very short reach (VSR), extra short reach (XSR), and ultra short reach (USR) connections.  Each of these interface types has stringent allowable signal loss and crosstalk constraints, at ever higher transmission frequencies.

In addition to the growing diversity of interface types, there is growing demand for quick design and analysis iterations, while concurrently managing the increasing physical data volume.  The interconnect density available on these advanced packages, combined with the options for (clock-forwarded) parallel and (NRZ, PAM) serial data transmission, requires extensive focus on initial package route planning.  A “shift left” approach to advanced packaging design closure demands fast analysis evaluation throughput.

At DesignCon, Feng Ling, CEO of Xpeedic Technology Inc., gave an insightful presentation entitled “High-Performance EM Simulation Solution for Advanced Packaging”.  He highlighted how their advanced packaging analysis platform development is addressing these capacity, accuracy, and throughput challenges.

To frame the problem, Feng used the figure below to illustrate the range of physical dimensions which the electromagnetic solver must encompass.

Examples of the types of elements in a 2.5D model to be extracted are shown below, from extremely dense parallel wires associated with HBM die stack signals to embedded routes in a 2.5D interposer to through silicon vias (TSVs) and package substrate traces.

Feng focused on three key features of the Xpeedic Metis EM extraction approach:

    • support for data input formats that encompass the disparate die, interposer, and package substrate design representations
    • an optimal coupling of boundary element method (BEM, aka “method of moments”, for linear piecewise-isotropic materials) and finite element method (FEM) solution algorithms for different domains
    • an optimized meshing strategy of the advanced packaging material properties and geometries – e.g., a combined rectangular and triangular surface decomposition

The figures below highlight these features:

To illustrate the efficiency and accuracy of the Metis EM solver, Feng presented comparisons of the computational resources and the insertion loss plus return loss versus frequency results for several advanced package elements (against a reference tool).

An example of these comparisons is illustrated below, for the case of an HBM channel on a 2.5D package:

    • CoWoS-R package from TSMC, with an organic interposer
    • co-planar signals in the channel in an interspersed supply-signal (GSGSG) configuration
    • 2um signal width with 3um spacing

Representative return loss (S-parameter S11) and crosstalk (S13) curves for Metis versus a reference tool are shown, along with the computational resources required.

Specifically, the Metis solver evaluation time efficiencies support the need for fast “pathfinding” design and analysis iterations, to achieve an optimal physical implementation that confirms signal loss budgets are met.

Advanced package design technology has enabled a diverse set of electrical interfaces to be integrated, with high interconnect density provided over short distances.  The target frequency of the data rates across these interfaces and the tight signal losses allowed necessitate accurate EM analysis.  The design complexity of these packages means that tools must support large dataset size and simultaneously provide fast analysis throughput for signal and power implementation planning.  The Metis extraction solution from Xpeedic addresses these requirements.

For more information on Xpeedic Metis, please follow this link.

PS. Perhaps DesignCon could update their slogan to “where heterogeneous chips integrated on an advanced package meet the board”.

Also read:

3D IC Update from User2User

Semiconductor Packaging History and Primer

Advanced 2.5D/3D Packaging Roadmap



The Electron Spread Function in EUV Lithography

by Fred Chen on 06-07-2022 at 6:00 am


To the general public, EUV lithography’s resolution can be traced back to its short wavelengths (13.2-13.8 nm), but the true printed resolution has always been affected by the stochastic behavior of the electrons released by EUV absorption [1-5].

A 0.33 NA EUV system is expected to have a diffraction-limited point spread function (minimum spot size) represented by an Airy disk [6] with a full width at half-maximum level of over 20 nm. On the other hand, the electron spread function, which represents how far the EUV-released electrons migrate before driving chemical reactions in the resist, is typically fit as an exponential function [3,7]. The convolution of the electron spread function with the optical point spread function, shown in Figure 1, mathematically adds up the effects of electron spread from each point of the optical point spread function.

Figure 1. The optical point spread function of a 0.33 NA 13.5 nm wavelength system (blue) is fit well with a Gaussian with sigma=8.5 nm (orange). The electron spread function is typically fit with an exponential function (gray) with lambda as a fitting parameter. Here lambda corresponds to a decay length of 3 nm. The resulting final spread function (yellow) has a peak that is not at zero radius but at some few nm radial distance.
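The convolution in Figure 1 can be reproduced numerically. Below is a minimal sketch using the caption’s parameter values (sigma = 8.5 nm, decay length = 3 nm); the grid size and normalization are my own choices, not from the article:

```python
import numpy as np

# 1 nm pixels over a 256 x 256 nm field
n = 256
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

sigma = 8.5   # nm, Gaussian fit to the optical point spread function
decay = 3.0   # nm, exponential decay length of the electron spread

psf = np.exp(-R**2 / (2 * sigma**2))   # optical PSF (Gaussian approximation)
esf = np.exp(-R / decay)               # electron spread function
psf /= psf.sum()
esf /= esf.sum()

# Convolve via FFT; ifftshift moves the kernel centers to the origin first
final = np.fft.fftshift(
    np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(psf)) *
                 np.fft.fft2(np.fft.ifftshift(esf))).real)

def fwhm_pixels(profile):
    """Count of pixels at or above half maximum (1 nm per pixel)."""
    return int(np.count_nonzero(profile >= profile.max() / 2))

center = n // 2
print(fwhm_pixels(psf[center]), fwhm_pixels(final[center]))
```

The combined spread function comes out broader than the optical PSF alone, which is exactly the resolution penalty described in the text.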

The electron spread inevitably worsens the resolution of EUV compared to the optical-only expectation. The overlap of optical+electron spread functions degrades the ability to resolve the gap between two features, as the deposited energy in between is at least doubled compared to an isolated feature (Figure 2).

Figure 2. Overlap of optical+electron spread functions degrades the ability to resolve the gap between two features. Here the feature separation is 40 nm. (Note: there is a local minimum in the peaks at 0 and 40 nm, so the feature pitch does not change in the image.)

The degradation is aggravated further by the inevitable stochastic variation, which causes the value of lambda to be random. This causes significant gap CD variation (Figure 3).

Figure 3. Stochastically varying electron spread (random values of lambda) causes CD variation in closely paired features. The blue curve indicates the expected image from two points separated by 40 nm in an 0.33 NA 13.5 nm wavelength system. The orange curves are the expected images from three random values of lambda, which reflect random degrees of electron spread. The black dashed line indicates the threshold level for printing in the resist. The grey curves indicate the extent of electron spread, with r=0 being the location of photon absorption.

For a 40 nm separation, the gap CD in Figure 3 spans from 14 nm to 18.5 nm as the decay length decreases from 3 nm to 1.25 nm. Since the CD impact is quite significant (>10%), applications of EUV lithography must include stochastic electron spread functions in the algorithms for optimization and resolution enhancement, such as OPC and SMO.
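The trend in Figure 3 can be illustrated with a simple 1D toy model (my own construction, not the author’s simulation): blur two point features 40 nm apart with a Gaussian optical PSF plus an exponential electron spread, and measure the unprinted gap at a fixed threshold. The 0.75 threshold fraction is an arbitrary choice, so the absolute gap values differ from the article’s; only the direction of the trend is meaningful.

```python
import numpy as np

x = np.linspace(-80, 80, 3201)   # nm grid, 0.05 nm steps
dx = x[1] - x[0]
sigma = 8.5                      # nm, Gaussian fit to the optical PSF

def profile(decay):
    """Dose from two point features at -20 and +20 nm (40 nm apart),
    blurred by the optical PSF and an exponential electron spread."""
    psf = np.exp(-x**2 / (2 * sigma**2))
    esf = np.exp(-np.abs(x) / decay)
    blur = np.convolve(psf, esf, mode="same")
    blur /= blur.max()                       # normalize peak to 1
    return np.interp(x + 20, x, blur) + np.interp(x - 20, x, blur)

def gap_cd(decay, thresh_frac=0.75):
    """Width of the unprinted gap: the region between the features where
    the dose stays below an (arbitrary) print threshold."""
    prof = profile(decay)
    below = (x > -20) & (x < 20) & (prof < thresh_frac * prof.max())
    return below.sum() * dx

print(gap_cd(1.25), gap_cd(3.0))   # shorter decay length -> wider gap
```

A longer decay length fills in more of the gap between the features, shrinking the printed gap CD, consistent with Figure 3.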

References

[1] https://www.linkedin.com/pulse/blur-wavelength-determines-resolution-advanced-nodes-frederick-chen/; https://semiwiki.com/lithography/303429-blur-not-wavelength-determines-resolution-at-advanced-nodes/

[2] https://www.linkedin.com/pulse/adding-random-secondary-electron-generation-photon-shot-chen/; https://semiwiki.com/lithography/311874-adding-random-secondary-electron-generation-to-photon-shot-noise-compounding-euv-stochastic-edge-roughness/

[3] https://www.linkedin.com/pulse/demonstration-dose-driven-photoelectron-spread-euv-resists-chen/; https://semiwiki.com/lithography/312476-demonstration-of-dose-driven-photoelectron-spread-in-euv-resists/

[4] https://www.spiedigitallibrary.org/journals/Journal-of-MicroNanolithography-MEMS-and-MOEMS/volume-19/issue-2/024601/Cascade-and-cluster-of-correlated-reactions-as-causes-of-stochastic/10.1117/1.JMM.19.2.024601.pdf

[5] https://www.spiedigitallibrary.org/journals/journal-of-micro-nanolithography-mems-and-moems/volume-18/issue-1/013503/Localized-and-cascading-secondary-electron-generation-as-causes-of-stochastic/10.1117/1.JMM.18.1.013503.pdf

[6] https://en.wikipedia.org/wiki/Airy_disk

[7] M. Kotera et al., Jpn. J. Appl. Phys. 47, 4944 (2008).

This article first appeared in LinkedIn Pulse: The Electron Spread Function in EUV Lithography


Also read: 

Double Diffraction in EUV Masks: Seeing Through The Illusion of Symmetry

Demonstration of Dose-Driven Photoelectron Spread in EUV Resists

Adding Random Secondary Electron Generation to Photon Shot Noise: Compounding EUV Stochastic Edge Roughness


DesignDash: ML-Driven Big Data Analytics Technology for Smarter SoC Design

by Kalar Rajendiran on 06-06-2022 at 10:00 am


With time-to-market pressures ever increasing, companies continually seek enhanced designer productivity, faster design closure and improved project management efficiency. To accomplish this, organizations invest heavily in both standardized approaches and proprietary techniques. With ever-increasing product complexity, more and more engineers are needed to implement the designs, so teams are continually onboarding a mix of fresh and experienced engineers.

A typical chip project includes thousands of tool-flow runs with different setup configurations. A problem that gets introduced in one flow could have ramifications later on within the same flow or in a different flow. Understanding the relationships and dependencies is critical for rapidly debugging an issue, better still for avoiding such issues in the first place. An SoC design produces a massive amount of data that gets archived away and doesn’t see the light of day after a project completes. The learnings generally don’t get documented as the team members are off to work on the next project. So, critical knowledge essentially resides among the various team members.

Being able to leverage the learnings from one project is not only helpful at the start of future projects but even more useful when facing problems that need to be debugged. This is where and why the institutional knowledge developed over time becomes very valuable. Everything is fine until one or more team members leave and/or many fresh engineers start working on a project; one of the biggest gripes at many companies is the loss of talent along with the institutional learnings. What if there were a way to leverage the massive amount of archived data to enhance the productivity and efficiency of SoC design, a way for engineering teams to benefit from the valuable insights hidden within that big data? Until recent years, compute power and machine learning technology were not available at a commercially viable scale to mine it.

Earlier this week, Synopsys launched an ML-driven big data analytics technology which helps solve this long-standing problem of lost knowledge. You can access the entire press release here. I had a chat with Mark Richards, Sr. Staff Product Marketing at Synopsys, to gain more insights about the recently announced technology. This article is a summary of the salient points from our discussion.

Synopsys DesignDash Solution

The Synopsys DesignDash solution is an EDA-tuned big data analytics tool that leverages machine-learning techniques to yield enhanced designer productivity and faster design closures. It brings immense value, particularly in the context of high system complexities, shrinking time to market windows and challenging talent resource landscape. The tool delivers a real-time, unified, 360-degree view of all design activities for faster decision making, a deeper understanding of run-to-run, design-to-design and project-to-project trends, and enhanced collaboration within an SoC development environment.

Some salient features and benefits offered by the tool include:

  • Extensive real-time design status through powerful visualizations and interactive dashboards
  • Actionable insights from structured and unstructured EDA metrics and tool-flow data
  • Classification of trends and identification of design limitations
  • Guided root-cause analysis and delivery of flow consumable, prescriptive resolutions
  • Team-wide dashboard for consistent and comprehensive data views to make status comparisons easy, informative, and actionable
  • Simple metric tracking (e.g., machine, license, and other KPIs) to optimize project management

The cloud-optimized DesignDash solution is natively integrated with the Synopsys Digital Design Family of tools and also offers easy third-party tool support. The solution complements the Synopsys SiliconDash product, enabling valuable data analysis across the complete design-to-silicon lifecycle.

You can visit the DesignDash product page for more details.

Customer Experience

Customer demand for the DesignDash solution is very strong, and a number of customers have already benefitted from using it on their projects. IDMs, established fabless companies and startups alike have leveraged the automated insights and prescriptive guidance provided by the tool.

For more details, visit Synopsys.com or call your local Synopsys representative.

DesignDash and DSO.ai: A Powerful Interplay

Synopsys is already known for its DSO.ai solution, an AI-based design space optimization tool. Together with DesignDash, customers get the benefit of both: while DesignDash recommends a range of suitable system architectures to consider given the product requirements and constraints, DSO.ai helps deliver the best implementation for each of those architectures.

Also Read:

Coding Guidelines for Datapath Verification

Very Short Reach (VSR) Connectivity for Optical Modules

Bigger, Faster and Better AI: Synopsys NPUs


An Update on In-Line Wafer Inspection Technology

by Tom Dillinger on 06-06-2022 at 6:00 am


From initial process technology development (TD) to high volume manufacturing (HVM) status for a new node, one of the key support functions to improve and maintain yield is the in-line wafer inspection technology.  Actually, there are multiple inspection technologies commonly employed, with tradeoffs in pixel resolution, defect classification methods, and throughput.

At the recent SPIE Advanced Lithography + Patterning conference, Timothy Cummins, Senior Principal Engineer, and Kale Beckwitt, Principal Engineer, from the Yield, Logic Technology Development group at Intel, collaborated on a thorough review of the current status of inspection technology, in their talk, “SEM inspection for logic yield development and control”.  (Although incoming bare wafer inspection is also a key function, the focus of their presentation and this article is on patterned wafer inspection employed in the fab.)

Metrology versus Inspection

As a precursor to the discussion, Tim elaborated upon the difference between metrology and inspection.  The figure below highlights the key features of the techniques.

Critical-dimension scanning electron microscope (CD-SEM) metrology data collection and analysis provide process development engineers with systematic, high-resolution measures of patterned wafer images.  Additionally, metrology systems can assist with the defect density assessments over a very small field-of-view (FOV).

Inspection systems evaluate patterned wafers for the presence of systematic and random defects.  In HVM, the goal is to assess whether the wafer-level defect density represents an outlier to the expected value.

Tim indicated, “Both the scale of defects and defect size are factors in achieving process maturity.  For example, current targets are ~5-10 defects per 300mm diameter wafer, with a critical defect size of 10nm.” (e.g., for a 200 mm**2 die size, 5K wafer starts per week in HVM).  The figure below illustrates several facets of inspection.

Specific comments to highlight about the figure above include:

    • the source of defects includes both fab operations and materials

(In addition to in-line wafer evaluation, incoming material inspection is also a key step.)

    • inspection image acquisition methods are described in “giga-pixels per second” sampling rate
    • the “signal-to-noise ratio” (SNR) in the image varies among different inspection technologies

The contrast between the foreground features of interest and the background “noise” may be limited, necessitating additional software processing of the pixel data to identify potential defects.

Operational Requirements for In-Line Wafer Inspection

Tim provided the following guidelines for wafer inspection procedures:

    • classification of both systematic and random defects
    • a runtime target of “one hour per selected wafer” for yield control
    • “full wafer” evaluation on a sufficient number of wafer samples to sustain HVM yield, in a fab running tens of thousands of wafers per month

Process line yield control requires measuring sufficient wafer area (e.g., 100’s of cm**2 per hour) to detect excursions from the target HVM defect density (e.g., 0.01 defects per cm**2).
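As a sanity check (my own arithmetic, not from the talk), the quoted targets are mutually consistent: ~5-10 defects on a 300mm wafer works out to roughly the 0.01 defects per cm**2 HVM density, and a simple Poisson yield model shows what that means for a 200 mm**2 die:

```python
import math

defects_per_wafer = 7        # midpoint of the ~5-10 target quoted above
wafer_cm2 = math.pi * 15**2  # 300mm wafer -> ~706.9 cm^2

d0 = defects_per_wafer / wafer_cm2        # defect density, ~0.01 per cm^2
die_cm2 = 200 / 100                       # 200 mm^2 die
yield_est = math.exp(-d0 * die_cm2)       # Poisson yield model

print(f"D0 = {d0:.3f} defects/cm^2, die yield ~ {yield_est:.1%}")
```

At these targets, random-defect-limited die yield sits within a couple of percent of perfect, which is what process maturity looks like.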

OPWI and EBI Technologies

The two major categories of inspection equipment are:

    • optical pattern wafer inspection (OPWI)
    • electron beam inspection (EBI)

Tim highlighted the relative market share of these approaches, shown in the figure below (by equipment revenue, circa 2018).

Parenthetically, OPWI systems scan a light source across the patterned wafer.  There will be a combination of reflected and scattered light from the wafer surface – a defect will contribute significantly to the scattered intensity.  The light detector generates an image, which is compared against a reference, subtracting the two – a defect will be identified by the difference.

The goal for OPWI is to generate a high-resolution, high-contrast wafer scan image – the defect-related optical “signal” to background “noise” ratio (SNR) should be high.  A brightfield inspection system places the detector in the reflected light path; the light source and detector are typically within ~45 degrees of normal incidence.  A large field-of-view (FOV) and low required light intensity make brightfield analysis an attractive option.  The loss of intensity due to scattering from a defect will show up as a set of dark pixels in the bright background.  Conversely, a darkfield inspection system places the detector away from the reflected beam to receive the scattered light; the light source is typically at an oblique angle to the wafer surface, between 0 and 45 degrees.

The scattering cross-section (SCS) of a defect increases steeply with the defect size, while decreasing steeply with the source wavelength.  To provide high-resolution images for smaller defect sizes, OPWI systems have had to incorporate shorter wavelength sources, transitioning from visible to UV to deep UV illumination.
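The steepness of this dependence can be made concrete: in the Rayleigh (small-particle) regime, the scattering cross-section scales roughly as d**6 / lambda**4. A quick illustration of why wavelength scaling alone struggles to keep up with shrinking defect sizes (the Rayleigh scaling is my assumption here; real defect scattering also depends on material and shape):

```python
def relative_scs(d_nm, wavelength_nm):
    """Relative Rayleigh scattering cross-section, ~ d^6 / lambda^4.
    Dimensionless; only useful for comparing configurations."""
    return d_nm**6 / wavelength_nm**4

# Halving the defect size (20 nm -> 10 nm) cuts the signal by 2^6 = 64x ...
loss = relative_scs(20, 266) / relative_scs(10, 266)
# ... while moving from UV (266 nm) to DUV (193 nm) recovers only ~3.6x
gain = relative_scs(10, 193) / relative_scs(10, 266)
print(f"signal loss: {loss:.0f}x, wavelength gain: {gain:.1f}x")
```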

The three main characteristics of the OPWI system are:

    • the image resolution of the light source and detector
    • the light scattering properties of the materials, and
    • the resulting image contrast

The scattered light is a function not only of the presence of a defect, but also the refractive index and reflectivity of the (background) patterned materials.  The (higher-order) optical diffraction angles from the background reflection increase as the CD of the patterned shapes is reduced.  As a result, the noise in the image from valid patterns increases.  Line edge roughness of a (valid) patterned line also contributes to noise, and a reduced SNR for defect detection.

Yield engineers strive to optimize the illumination wavelength(s), the illumination pattern (e.g., full-field, annular), and the light collection pupil plus detector design to improve the generated image for defect identification.  (Note that a range of wavelengths may be used, given the strong wavelength dependence of the refractive index and reflectivity for the wafer surface materials.)

There are two OPWI approaches toward deriving the reference image for the difference calculation:

    • comparison to an adjacent die

The reference could simply be the image from a die location adjacent to the die site being evaluated.  A random defect present at the test site will almost assuredly be absent from the neighboring site’s image.  (Systematic defects present in both images will cancel out and go undetected.)

    • comparison to a reference model

At the expense of producing a model image for an expected “good” die, the test site image would be subtracted from this reference, known as a die-to-database comparison.
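Whichever reference is used, the core computation is the same: subtract the reference from the test image and flag pixels whose absolute difference exceeds a noise threshold. A minimal sketch (the toy arrays and threshold value are hypothetical):

```python
import numpy as np

def defect_map(test: np.ndarray, reference: np.ndarray, threshold: float) -> np.ndarray:
    """Flag pixels where |test - reference| exceeds the noise threshold.

    Note: a defect common to both images cancels in the subtraction and
    goes undetected -- the systematic-defect blind spot noted above.
    """
    diff = np.abs(test.astype(float) - reference.astype(float))
    return diff > threshold

# Toy 4x4 "images": identical except one bright defect pixel in the test die.
ref = np.full((4, 4), 100.0)
test = ref.copy()
test[2, 1] = 160.0           # defect: excess scattering at one location

mask = defect_map(test, ref, threshold=30.0)
print(mask.sum())            # 1 flagged pixel
print(np.argwhere(mask))     # [[2 1]]
```

A die-to-database flow differs only in where `reference` comes from: a modeled “good” die image rather than a neighboring die.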

The alternative to optical reflection plus scattering technology is electron beam imaging, where an incident beam on a surface results in the emission of secondary electrons to be detected.

Tim described some of the characteristics of OPWI and EBI technologies for in-line yield monitoring applications.

    • throughput

OPWI has maintained a prevalent presence for in-line defect detection due to the faster throughput:

      • OPWI:  ~20 gigapixels/sec @ 100nm pixel size
      • CD-SEM EBI:  ~20 megapixels/sec @ 1nm pixel size

Clearly, EBI would not provide sufficient throughput to be a complete yield development and control technology.  Increasing the e-beam inspection current (and/or applying multiple adjacent e-beams) to improve throughput is limited by the Coulomb interaction between primary and secondary electrons.
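A quick back-of-the-envelope calculation, taking the throughput figures above at face value and ignoring stage overhead, shows why: the area coverage rates differ by seven orders of magnitude.

```python
import math

def wafer_scan_time_s(pixel_rate_per_s: float, pixel_size_m: float,
                      wafer_diameter_m: float = 0.3) -> float:
    """Idealized full-wafer scan time: wafer area / area coverage rate."""
    wafer_area = math.pi * (wafer_diameter_m / 2) ** 2
    area_rate = pixel_rate_per_s * pixel_size_m ** 2
    return wafer_area / area_rate

opwi = wafer_scan_time_s(20e9, 100e-9)   # ~353 s: minutes per 300mm wafer
ebi  = wafer_scan_time_s(20e6, 1e-9)     # ~3.5e9 s: on the order of a century
print(opwi, ebi, ebi / opwi)             # throughput gap: 10,000,000x
```

The idealized numbers overstate both tools (real scans add stage moves and alignment), but the 10-million-fold area-rate gap explains why full-wafer EBI is impractical and e-beam inspection is reserved for sampled regions.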

    • resolution

EBI offers improved direct resolution over OPWI, due to the optical diffraction limit.  EBI also provides the opportunity for better contrast and defect detection across greater changes in wafer surface topography.

OPWI system developers are pursuing methods to improve effective resolution.  Deep learning image post-processing algorithms may flag defect images that are sufficiently large excursions from an image training set.  One option is to compute, over a large sample of images, the median value of each pixel, and use a small band around that median as the threshold for defect identification.  As the wafer defects of interest are difficult to capture, a synthetic training set of images containing both defects and low-SNR contrast may be generated to guide the deep learning training phase.
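The per-pixel median idea can be sketched directly: collect many images of nominally identical sites, learn a per-pixel median and spread, and flag pixels in a new scan that fall outside the learned band. The data below is synthetic and the noise levels are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 200 scans of the same 8x8 site with sensor noise.
train = 100.0 + rng.normal(0.0, 2.0, size=(200, 8, 8))

median = np.median(train, axis=0)
# Robust per-pixel spread estimate: median absolute deviation (MAD).
mad = np.median(np.abs(train - median), axis=0)

def outlier_pixels(image: np.ndarray, k: float = 8.0) -> np.ndarray:
    """Flag pixels further than k * MAD from the per-pixel median."""
    return np.abs(image - median) > k * mad

# A new scan with one defect-like excursion injected.
scan = 100.0 + rng.normal(0.0, 2.0, size=(8, 8))
scan[3, 5] += 50.0

print(np.argwhere(outlier_pixels(scan)))  # the defect at [3 5] is flagged
```

A deep learning approach generalizes this: instead of a fixed per-pixel band, a trained network learns what “normal” pattern variation looks like and scores excursions from it.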

Another option being pursued to improve defect identification with OPWI systems relates to the nature of the incident light.  Rather than using the pixel intensity amplitude difference of two images, a polarized incident source beam will undergo a phase change upon reflection from the topography of the patterned surface.  The presence of a defect will alter that phase change, which could be detected by darkfield ellipsometry.

Kale shared the figure below, depicting the OPWI and EBI tradeoffs, with a gap region for the combination of HVM defect density and small defect size.

Although OPWI remains the workhorse inspection method, Kale highlighted applications where EBI techniques provide complementary support.  The key technology that Kale focused on was voltage-contrast SEM inspection.

Briefly, VC-SEM utilizes the principle that a conducting element on the wafer surface with a voltage bias will alter the distribution of emitted secondary electrons when the beam is scanned.  For example, a passive VC scan would electrically connect the wafer substrate to ground.  A node on the surface that should be floating would acquire charge from the incident (low energy, <1keV) electron beam, achieving a negative potential.  The floating node would thus emit more secondary electrons, and appear brighter in the VC-SEM scan.  If a node that should be floating is instead shorted to ground, few secondary electrons would be emitted, resulting in a dark area in the VC-SEM image.  Differences between the images for an unconnected wafer and a grounded wafer connection can provide a means of defect inspection.

Additionally, a VC-SEM node which should be grounded but shows high scan intensity is indicative of a floating node, likely due to a defective contact or via.  Indeed, VC-SEM provides a class of defect identification which OPWI cannot.

Parenthetically, note that passive VC-SEM does not pinpoint the defect.  It highlights an electrical issue associated with a circuit node.  Further analysis of the physical layout is required for defect identification.
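The passive VC comparison reduces to a simple truth table: compare each node’s expected state (from the netlist) with the state implied by its scan intensity, and queue mismatches for layout analysis. A toy sketch, with invented node names and observations:

```python
# Toy passive VC-SEM triage.
# Per the article: bright = charging (floating) node, dark = grounded node.

EXPECTED = {"n1": "grounded", "n2": "floating", "n3": "grounded"}
OBSERVED = {"n1": "bright", "n2": "bright", "n3": "dark"}  # hypothetical scan

STATE_FOR_INTENSITY = {"bright": "floating", "dark": "grounded"}

def vc_mismatches(expected: dict, observed: dict) -> list:
    """Nodes whose scanned state contradicts the netlist expectation."""
    return [node for node, state in expected.items()
            if STATE_FOR_INTENSITY[observed[node]] != state]

# n1 should be grounded but scans bright -> likely open contact or via.
print(vc_mismatches(EXPECTED, OBSERVED))  # ['n1']
```

As the article notes, this only localizes the electrical symptom to a node; finding the physical defect still requires layout analysis.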

Active VC scanning goes further, applying patterns to the device under test (DUT) to generate real-time SEM images.  More sophisticated cabling/probing is required in the SEM vacuum chamber.  Nodes at a non-varying potential during the pattern application will not change intensity in the scan.  As before, differences between the sequence of time-based VC-SEM images of the DUT and a reference device are indicative of a defect.

About VC-SEM technology, Kale said, “VC inspection is limited to specific process layer and structure types.  With careful experimental design, inline VC can have a 1:1 correlation to end-of-line fails.” 

The throughput of VC-SEM relative to OPWI is still a concern, yet Kale offered the following comment about its applicability.  “Test structures designed for defect mode-specific VC visibility give rapid yield and process window visibility.”

As shown in the figure below, he described a “design for inspection” methodology, with a unique standard cell layout that would be inserted as a filler cell during block placement, offering a significant amount of defect data per wafer.

Correspondingly, VC-SEM equipment manufacturers would provide multiple beam systems that can localize image capture over a subset of the full FOV, improving throughput.

Summary

Incremental advances to both OPWI and EBI technologies continue to enable in-line defect inspection for technology development and HVM:

    • improved optical and (multiple) e-beam sources
    • computational advances benefit SNR and throughput (e.g., die-to-database modeling, deep learning algorithms)
    • VC scan has high value for specific applications

However, Tim and Kale ended their presentation with the following observation:

  “Throughput is the biggest need.  A huge opportunity exists for the industry to provide disruptive solutions for higher throughput.” 

A huge opportunity, indeed.

-chipguy

Also Read:

0.55 High-NA Lithography Update

Intel and the EUV Shortage

Can Intel Catch TSMC in 2025?


Ecomotion: Engendering Change in Transportation

Ecomotion: Engendering Change in Transportation
by Roger C. Lanctot on 06-05-2022 at 6:00 am

Engendering Change in Transportation

Prioritizing gender-related behavior in the analysis of transportation patterns may change planning priorities and accelerate a transition to more sustainable mobility solutions. That is the finding of a report compiled by the Ministry of Environmental Protection in Tel Aviv, Israel and shared at the Women in Mobility summit as part of Ecomotion Week.

Speaking at the Tel Aviv summit, Tamar Keinan, of the Department of Economics of Bar Ilan University, described important differences between male and female transportation behaviors as identified from studies conducted in Israel and globally. Focusing on issues ranging from vehicle ownership, to driving styles, and the use of micromobility and public transit, Keinan shared surprising insights suggesting that women may serve as an important fulcrum for evolving transportation toward a more sustainable future.

Among the findings shared by Keinan were:

  • Men are more dependent on the use of privately owned vehicles and tend to travel continuously for long distances
  • Women tend to make multiple stops when driving a privately owned vehicle and prioritize slower and less expensive means of travel over cars
  • Men are more secure when traveling
  • Men are more likely to ride a bicycle
  • Men have a low awareness of environmental concerns and are less inclined to prioritize sustainability
  • Men are more inclined to work further from home
  • Women report a higher degree of satisfaction with public transportation
  • Private car ownership by men is 1.5x greater than women for most age groups
  • Men account for three quarters of employer-provided vehicles

The findings point to differences in transportation behaviors that are important enough to influence decision making. In fact, the studies suggest that ongoing analysis of this nature ought to be prioritized to foster a deeper understanding of transportation client behaviors in order to create policies intended to influence user decision making to achieve preferred outcomes.

The report concludes:

  • Improving public transportation is a tool for achieving greater gender equality
  • There is a lack of continuous information gathering to allow the measurement of gaps in transportation service delivery
  • Female transportation patterns are not necessarily tied to the upbringing of children
  • There is a need to redefine the peak and low hours of public transportation demand
  • Women may play an essential role as change agents for achieving greater sustainability

The report suggests action items for transportation authorities:

  • An expansion and improvement of public transportation including gender-specific marketing
  • Road signs depicting both male and female figures
  • The encouragement of “collaborative” travel
  • More frequent travel behavior studies

The report also had recommendations for parents of daughters:

  • Do not drive your daughters to school. Help them plan a safe route. Do not get them used to being passengers next to the driver.
  • Encourage them to ride a bike at a young age.
  • Set a personal example, maintain gender balances in family transportation
  • Experience public transportation, walking, and cycling

And recommendations for statisticians:

  • Recognize the importance of differences in estimating satisfaction with public transportation among frequent users
  • Recognize that sharing information helps reduce gender gaps in transportation
  • Collect and analyze gender data regarding toll roads and vehicles from the workplace

And for municipal authorities:

  • Create short walking distances
  • Foster the use of multiple transportation modes
  • Provide safe and continuous bike paths to help increase the percentage of female riders
  • Strengthen the sense of security in the public space to foster the independent mobility of women

The transportation priorities and concerns of women are better aligned with societal sustainability goals – relative to the behaviors and preferences of men. By better addressing the transportation needs of women, municipalities and public authorities generally can accelerate their simultaneous progress toward transportation equity and ecological and economic responsibility.

While the individual study findings seem deceptively obvious in some cases, the powerful implications are worthy of note. Women can serve as global change agents for achieving unmet global mobility goals.

Also read:

Connecting Everything, Everywhere, All at Once

Radiodays Europe: Emotional Keynote

Why Traceability Now? Blame Custom SoC Demand

 


Podcast EP83: The ESD Alliance and SEMI – Mission and Strategy with Bob Smith

Podcast EP83: The ESD Alliance and SEMI – Mission and Strategy with Bob Smith
by Daniel Nenni on 06-03-2022 at 10:00 am

Dan is joined by Bob Smith, executive director of the ESD Alliance, a SEMI technology community. The ESD Alliance represents the electronic system and semiconductor design ecosystems. It is the first design-oriented community in SEMI.

Bob explores the mission of the ESD Alliance and how the organization leverages the worldwide footprint of SEMI. Security, software piracy, encryption and tracking worldwide events impacting the design and manufacturing ecosystems are some of the topics Dan and Bob discuss in this far-reaching and informative podcast.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Leveraging Simulation to Accelerate the Design of Plasma Reactors for Semiconductor Etching Processes

Leveraging Simulation to Accelerate the Design of Plasma Reactors for Semiconductor Etching Processes
by Kalar Rajendiran on 06-03-2022 at 6:00 am

Etching Processes

There is no shortage of reporting on the many technological advances happening within the semiconductor industry. But sometimes it feels like we hear less about semiconductor manufacturing equipment than about the design and product arenas. That doesn’t mean that less is happening there, or that what is happening is of any less value. In fact, the advances in equipment, and the hurdles it faces, determine the course of the entire semiconductor industry.

The never-ending push to pack more transistors into a square millimeter of wafer continues to drive the need for smaller feature sizes. Advanced manufacturing technology is what makes these ever-finer feature sizes possible. Naturally, advanced manufacturing equipment is needed to produce these advanced chips. With more end-markets than ever creating demand for advanced-process chips, wafer fab equipment sales are growing rapidly. According to SEMI, worldwide sales of semiconductor manufacturing equipment in 2021 rose 44% to an all-time record of $102.6 billion.

While fab equipment vendors see huge market opportunities, they also have to address many challenges to deliver cost-effective equipment optimized for mass production use. I recently conversed with Richard Cousin, an Industry Process Expert in Electromagnetics within the SIMULIA brand line of Dassault Systèmes, headquartered in France. Richard currently leads the Charged Particle Dynamics products and has strong expertise in vacuum electron devices and plasma physics. The conversation focused on the etching process, which is one of the many steps involved in the manufacture of semiconductor chips. Richard discussed the challenges of the etching process, the design of advanced plasma reactors, and the plasma reactor device workflow. He closed the conversation with a teaser of how simulation tools at the fabrication process level can help address plasma reactor design challenges. This article is a summary of that conversation. For those interested in learning more specifics about the simulation tools, you may want to register for a June 8th webinar, in which Richard will dive into the details of simulation and optimization techniques that designers of semiconductor manufacturing equipment will find useful.

Etching Process: Wet vs Dry

The wet etching process is commonly used for isotropic etching, whereas the dry etching process allows anisotropic as well as isotropic etching. While the wet etching process is still in use, its chemical reactions can cause issues when radicals spread over the device.

With a dry etching process using argon as the neutral gas, there are no chemical reactions. The process is controlled by the plasma uniformity and offers better selectivity of the material to be etched. The resulting anisotropic etching process is well suited for nanoscale features. Also, more physical parameters can be controlled to characterize the plasma, such as the input power and the pressure of the neutral gas, which controls the plasma density. The operating frequency is tuned by a so-called matching box.

Thus, the dry etching technique offers better flexibility for delivering accurate and uniform profiles, which becomes ever more critical for small feature sizes.

Designing a Plasma Reactor for Dry Etching

The design of a plasma reactor has to balance many different parameters to accommodate various requirements and plasma characteristics. The plasma characteristics include the density profiles, the ionization rate, the effect of the pressure and the type of gas used as well as the influence of the geometry to prevent damages. One key parameter is the matching network connected to the plasma reactors in order to maximize the input power to be transmitted to the device. This is a critical requirement to maintain the discharge and for better control of the plasma formation for the type of etching to be performed.

Referring to the figure below, the different types of dry etching processes broadly fall into the IBE and RIE categories. The IBE process removes material purely physically, much like sputtering, whereas the RIE process adds chemical reactions to remove material from the substrate. The pressures involved in RIE are therefore higher, and higher plasma densities are required to etch the substrates. Designing a plasma reactor is thus challenging, as it needs to support different types of dry etching processes.

3DS SIMULIA Space Charge Dynamics Simulation Tool

SIMULIA possesses a unique capability of calculating space charge dynamics to predict experimental results when no diagnostics are available. The SIMULIA approach allows for virtual prototyping, which helps with designing real, new plasma reactors, from the space charge dynamics computation to the appropriate RF matching network.

When simulating, SIMULIA can take a microscopic or a macroscopic approach. Under the microscopic approach, it uses a time-domain kinetic approach with a Poisson-based Particle-In-Cell (PIC) code. Several particle interactions are taken into account in a global Monte-Carlo Collision (MCC) type model. The ionization of the neutral gas, its excitation, and the elastic collisions are considered simultaneously to compute the plasma kinetics.  Under the macroscopic approach, SIMULIA treats the plasma as a bulk. The plasma parameters are extracted from the MCC models to feed an equivalent dispersion model. This enables it to accurately characterize the matching network (the so-called matching box) connected to the plasma reactor.
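The MCC step in a PIC code boils down to: for each particle and timestep, compute a collision probability from the neutral density, total cross-section, and particle speed, then draw a random number to decide whether a collision (elastic, excitation, or ionization) occurs. A stripped-down sketch of that decision step, where the cross-section values are placeholders rather than real energy-dependent argon data:

```python
import math
import random

random.seed(42)

# Placeholder cross-sections in m^2 -- real codes use energy-dependent
# tables for argon (elastic, excitation, ionization).
PROCESSES = {"elastic": 1e-19, "excitation": 2e-20, "ionization": 1e-20}

def mcc_step(speed, neutral_density, dt):
    """Decide whether a particle collides this timestep, and via which process.

    P_total = 1 - exp(-n * sigma_total * v * dt); on a collision, the
    process is chosen in proportion to its cross-section.
    """
    sigma_total = sum(PROCESSES.values())
    p_collision = 1.0 - math.exp(-neutral_density * sigma_total * speed * dt)
    if random.random() >= p_collision:
        return None                      # no collision this step
    r = random.random() * sigma_total    # pick process by cross-section weight
    for name, sigma in PROCESSES.items():
        r -= sigma
        if r <= 0.0:
            return name
    return name

# Tally outcomes over many steps for a fast electron in a dilute neutral gas.
counts = {}
for _ in range(100_000):
    outcome = mcc_step(speed=1e6, neutral_density=1e21, dt=1e-9)
    counts[outcome] = counts.get(outcome, 0) + 1
print(counts)  # mostly None; elastic dominates among the collisions
```

A production PIC-MCC code wraps this decision inside the field solve and particle push, and updates each colliding particle’s velocity (and spawns an electron-ion pair on ionization), but the probabilistic core is the same.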

The SIMULIA space charge dynamics simulation tool simulates all phases of the plasma reactor device workflow except phase 7 (Refer to Figure below).

For those interested in learning more specifics about the simulation tools, you may want to register for a June 8th Webinar.
Also Read:

Executable Specifications Shorten Embedded System Development

Fully Modeling the Semiconductor Manufacturing Process

Optimizing High Performance Packages calls for Multidisciplinary 3D Modeling


Flexible prototyping for validation and firmware workflows

Flexible prototyping for validation and firmware workflows
by Don Dingee on 06-02-2022 at 10:00 am

The Prodigy S7-19P Logic System can help speed up validation and firmware workflows

The quest for bigger FPGA-based prototyping platforms continues, in lockstep with each new generation of faster, higher capacity FPGAs. The arrival of the Xilinx Virtex UltraScale+ VU19P FPGA takes capacity to new levels and adds transceiver and I/O bandwidth. When these slot into S2C’s Prodigy S7-19P Logic System, the result is flexible ASIC and SoC prototyping for validation and firmware workflows. Let’s look at FPGA-based prototyping for these use cases.

Automated partitioning and realistic high-speed I/O for validation

Hardware teams get concerned when designs must be modified to be tested. Modifications take time and introduce the possibility for errors. FPGA-based prototyping is now mature technology. Firms like S2C have put in years of research on accurately partitioning logic into interconnected FPGAs with automated tools. Improvements in FPGA transceiver bandwidth – up to 4.5Tb/s total in the VU19P – streamline SerDes interconnects between parts for minimal latency.

With four VU19Ps, the 7th-generation Prodigy S7-19PQ can take on ASIC and SoC designs of up to 196M gates. Teams can scale down with smaller configurations using one or two VU19P FPGAs or scale up by stacking systems and interconnecting logic modules. All interconnects between FPGAs use the same high-speed transceivers, so stacking is as simple as cabling. On-board DDR4 memory keeps the FPGAs fed at speed. Advanced clock management keeps both standalone and multi-system configurations in sync.

During system validation, realistic high-speed I/O for stimulus and output evaluation is a requirement. Often, the burden falls on teams to create their own I/O testing hardware; instead, S2C has standardized its I/O and created over 90 daughtercards with pre-tested, proven designs. I/O voltages are adjustable in software in steps from 1.2V to 1.8V. Using the same interface, teams can turn to S2C or design their own if a full-custom I/O card is required.

Instead of leaving validation to the very end of the project, and perhaps getting surprised, teams can move to a more continuous validation workflow using FPGA-based prototyping. This also allows room for expanding validation testing, since it’s not compressed into a small window. System-level faults can be exposed earlier and corrected in the ASIC or SoC design.

More power and control in the hands of firmware developers

One of the more powerful use cases for FPGA-based prototyping technology is firmware development for SoCs. For enhanced productivity, software teams need a dedicated system an individual developer can configure and control.

With daughter cards and cables plugged in, the Prodigy S7-19P auto-detects its configuration. Designs load into the FPGAs through Ethernet, USB, JTAG, or a microSD card. Developers can kick the box remotely when they need to, powering it on or off or resetting it over Ethernet. A virtual UART allows a developer into the firmware for debugging. Virtual switches and LEDs make it easy to change settings and see status remotely.

This kind of power can put software teams on their own workflow track, out of the way of ASIC or SoC hardware teams. Conflicts early in the design, when hardware is in flux and access is tight, can set software schedules behind. Without the right firmware for initializing registers across the chip, hardware progress may slow as well. Firmware developers can start code on a working configuration, reset the platform when they need to, and even reconfigure the platform from a design file stored on the network without bothering the hardware teams.

Win by speeding up surrounding validation and firmware workflows

Most semiconductor IP is thoroughly wrung out before it gets into the hands of ASIC and SoC developers. And the tools for combining IP into a design and debugging it are much better. The remaining long poles in the tent are now validation and firmware workflows – and speeding those up with FPGA-based prototyping technology can be a huge win. Platforms like the S2C Prodigy S7-19P Logic System give validation and firmware teams their own environment, working in parallel with hardware development. A short S2C video highlights this new platform.

Also read:

White Paper: Advanced SoC Debug with Multi-FPGA Prototyping

Prototype enables new synergy – how Artosyn helps their customers succeed

S2C’s FPGA Prototyping Solutions