
Yep – It’s Still an Analog World in this Digital Age
by Daniel Nenni on 11-02-2021 at 6:00 am


With all the advances in digital technology to serve mankind’s insatiable appetite for automation, it’s easy to lose sight of the reality that we still live in an analog world. It’s easy to take for granted that somewhere in the chain of events that applies state-of-the-art computational technology to real-world applications, data conversion must take place: real-world information is converted into a digitally encoded representation, and often back again to an analog one. Some now claim that we have evolved past the “digital age” and are in the “age of data” – big data.

Sentient beings are largely algorithmic engines driven by the need to survive in a hostile environment, relying on touch, vision, hearing, smell, taste (and sometimes proprioception). The human system converts these real-world senses into electrochemical “data” for processing by the brain’s neocortex to render our perception intelligence. As Artificial Intelligence (a.k.a. “AI”) promises to reshape our way of life beyond recognition, more data will be the key to better-performing AI. And, yes – I am referring to real-world data: from online user data for better commercial transactions, to biosensor data for better healthcare, to real-time physical environment data for autonomous vehicle navigation. In Kai-Fu Lee’s book “AI Superpowers”, the use of real-world physical environment data by AI leads to what Kai-Fu refers to as perception AI – where AI-driven machines can “perceive their environment”. And, clearly, high-performance data conversion will be a key ingredient for perception AI to achieve its full potential.

Omni Design Technologies is a leading provider of high-performance, ultra-low-power data-conversion IP cores for SoCs that enable environment perception. Specifically, Omni Design provides production-proven data-conversion IP more commonly known as ADCs (analog-to-digital converters), DACs (digital-to-analog converters), and AFEs (analog front-ends). As you can easily imagine, ADCs, DACs, and AFEs are essential architectural components for any system that needs to interact with the real world for environment perception – including systems employing Light Detection and Ranging (LiDAR) technology. LiDAR is making its way into a wide assortment of applications, from the iPhone 12 Pro and iPhone 12 Pro Max with built-in LiDAR scanners for better photos and AR, to enabling vehicles with environment perception for autonomous navigation.
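
To make the data-conversion idea concrete, here is a minimal sketch of ideal quantization and reconstruction – a toy model of what an ADC/DAC pair does, not a description of Omni Design’s IP. The reference voltage and resolution are arbitrary illustrative values.

# Toy model of ideal N-bit data conversion; illustrative only, not Omni Design IP.
def adc_ideal(v_in, v_ref=1.0, bits=12):
    """Quantize an analog voltage in [0, v_ref) to an N-bit code."""
    levels = 2 ** bits
    code = int(v_in / v_ref * levels)
    return max(0, min(levels - 1, code))      # clamp to the valid code range

def dac_ideal(code, v_ref=1.0, bits=12):
    """Map an N-bit code back to an analog voltage (ideal reconstruction)."""
    return code / (2 ** bits) * v_ref

v = 0.618                                     # an arbitrary "real-world" sample
code = adc_ideal(v)
print(f"input={v:.4f} V -> code={code} -> reconstructed={dac_ideal(code):.4f} V")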

If the use of LiDAR in autonomous vehicles is your thing, Omni Design has recently published a white paper entitled “LiDAR Implementations for Autonomous Vehicle Applications”.  The white paper introduces LiDAR technology and discusses pulse-detection LiDAR as well as continuous-wave LiDAR.

Figure 1: Pulse-Detection LiDAR

Figure 2: FMCW LiDAR

The white paper goes on to discuss several different implementations of LiDAR systems, SoC design requirements, and a block diagram example of a pulse-based LiDAR system SoC with available Omni Design IP.

Figure 3: Pulse-Based LiDAR Block Diagram

Finally, the white paper wraps up with a summary of Omni Design’s IP offering for implementing SoCs for LiDAR applications.

About Omni Design Technologies
Omni Design Technologies is a leading provider of high-performance, ultra-low power IP cores in advanced process technologies that enable highly differentiated systems-on-chip (SoCs) in applications ranging from wired and wireless communications to automotive, imaging, sensors, and the internet-of-things (IoT). Omni Design, founded in 2015 by semiconductor industry veterans, has an excellent track record of innovation and collaboration with customers to enable their success. The company is headquartered in Milpitas, California, with additional design centers in Fort Collins, Colorado; Billerica, Massachusetts; and Bangalore, India. For more information, visit www.omnidesigntech.com.


Latest Updates to Altair Accelerator, the Industry’s Fastest Enterprise Job Scheduler
by Mike Gianfagna on 11-01-2021 at 10:00 am


Altair is a broad-based company that delivers critical enabling technology across many disciplines that will be familiar to SemiWiki readers. According to its website, Altair delivers open-architecture solutions for data analytics & AI, computer-aided engineering, and high-performance computing (HPC). You can learn more about Altair projects covered on SemiWiki here. Enterprise-level job scheduling is one example of the critical enabling technology Altair delivers. Recent, significant enhancements to the product are the subject of this post. Read on to learn about the latest updates to Altair Accelerator, the industry’s fastest enterprise job scheduler.

What It Is

Accelerator is a high-throughput, enterprise-grade job scheduler. Its application is focused on complex semiconductor design. The architecture is flexible and can support a variety of infrastructures from small, dedicated server farms to complex, distributed high-performance cluster environments.

Designers need a tool like this for quick scheduling and resource management of design tasks across CPU, memory, and EDA license utilization. As compute infrastructures become more complex and distributed, there is also a growing need to manage resources to keep throughput high while controlling overall costs. From a management perspective, there can be many thousands of jobs to schedule and prioritize each day – a tool with high visibility and low latency is needed to operate successfully in such an environment.
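
As a rough illustration of the kind of decision such a scheduler makes, the sketch below matches pending jobs against available CPU, memory, and license counts in priority order. The class and function names and the policy are hypothetical; they do not reflect Altair Accelerator’s actual interfaces.

# Hypothetical sketch of a scheduler's resource-matching decision;
# names and policy are illustrative, not Altair Accelerator's actual API.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cpus: int
    mem_gb: int
    license: str       # e.g. an EDA tool license feature
    priority: int

def dispatch(jobs, free_cpus, free_mem_gb, free_licenses):
    """Dispatch the highest-priority jobs whose resource needs can be met now."""
    started = []
    for job in sorted(jobs, key=lambda j: -j.priority):
        if (job.cpus <= free_cpus and job.mem_gb <= free_mem_gb
                and free_licenses.get(job.license, 0) > 0):
            free_cpus -= job.cpus
            free_mem_gb -= job.mem_gb
            free_licenses[job.license] -= 1
            started.append(job.name)
    return started

print(dispatch(
    [Job("sim_regress", 4, 16, "vcs", 10), Job("synth", 8, 64, "dc", 5)],
    free_cpus=16, free_mem_gb=128, free_licenses={"vcs": 2, "dc": 0}))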

What’s New

This year, a native Kafka interface was added to enhance visualization across a broad range of information regarding batch scheduling for Accelerator. Information like this is notoriously difficult and costly to capture, so this enhancement is significant. Apache Kafka is an open-source tool for processing multiple sources of streaming data in real-time. Kafka is quite popular, with more than 80% of all Fortune 100 companies using it.

Monitoring a high-performance batch system can be difficult. Most approaches slow the system down because the reporting tasks compete with the tasks that process jobs.

It gets more difficult when there are several consumers of the monitored data. Lowering the refresh rate of monitor data can help, but then the information is no longer accurate or real-time.

A system such as Kafka helps by supporting many consumers. Data is published once but many consumers can read the message, so there is only one extra load on the batch system instead of many. For multiple clusters, multiple batch systems can be configured that publish to a single Kafka instance.

The frequency of data publishing does require care, even in this setting. How often should the system publish and how is the data extracted from the batch system? It’s important to understand how fast things are changing in the batch system. For example, Altair Accelerator can dispatch several hundred jobs per second, and each dispatch may change the state of 20-30 metrics. That’s a huge volume of data for just some basic measurements.

With its internal metrics system, Accelerator accumulates data over a short time window — around 10 seconds. While this loses resolution at the individual dispatch loop level, the resulting data is often more useful because of the high variance between dispatch loop iterations. Even with such accumulation, there’s still the overhead of getting the data out of the inner loop. Altair chose to directly code the Kafka publisher routines into the batch system core for lower overhead.
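
A minimal sketch of this publish-once pattern is shown below, using the open-source kafka-python client: per-dispatch metric deltas are accumulated for roughly ten seconds and then sent as a single message that any number of consumers can read. The broker address, topic name, and the get_dispatch_events callback are placeholders; none of this reflects Altair’s actual implementation.

# Illustrative sketch (not Altair's code): accumulate per-dispatch metric
# deltas for ~10 seconds, then publish one message to Kafka so many consumers
# can read it without adding extra load to the batch system.
import json
import time
from collections import Counter

from kafka import KafkaProducer  # pip install kafka-python

WINDOW_S = 10
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed broker address
    value_serializer=lambda d: json.dumps(d).encode("utf-8"),
)

def publish_metrics(get_dispatch_events):
    """Aggregate dispatch-loop events into one Kafka message per time window."""
    window = Counter()
    deadline = time.time() + WINDOW_S
    while True:
        for metric, delta in get_dispatch_events():  # e.g. ("jobs_dispatched", 1)
            window[metric] += delta
        if time.time() >= deadline:
            producer.send("accelerator.metrics", dict(window))
            window.clear()
            deadline = time.time() + WINDOW_S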

To Learn More

The results of the enhancements to Altair Accelerator are significant. You can get more information and view sample reports here. You can also see a short demonstration of the new Accelerator dashboard here. There are also great examples of enhanced data streams that are now possible with the new release here.  This information will help you learn about the latest updates to Altair Accelerator, the industry’s fastest enterprise job scheduler.

Also Read

Six Essential Steps For Optimizing EDA Productivity

Chip Design in the Cloud – Annapurna Labs and Altair

Webinar: Annapurna Labs and Altair Team up for Rapid Chip Design in the Cloud


Highlights of the TSMC Open Innovation Platform Ecosystem Forum
by Tom Dillinger on 11-01-2021 at 8:00 am


TSMC recently held their 10th annual Open Innovation Platform (OIP) Ecosystem forum.  The talks included a technology and design enablement update from TSMC, as well as specific presentations from OIP partners on the results of recent collaborations with TSMC.  This article summarizes the highlights of the TSMC keynote from L.C. Lu, TSMC Fellow and Vice-President, Design and Technology Platform, entitled: “TSMC and Its Ecosystem for Innovation”.  Subsequent articles will delve more deeply into specific technical innovations presented at the forum.

TSMC OIP and Platform Background

Several years ago, TSMC defined four “platforms”, to provide specific process technology and IP development initiatives aligned with the unique requirements of the related applications.  These platforms are:

  • High-Performance Computing (HPC)
  • Mobile (including RF-based subsystems)
  • Automotive (with related AEC-Q100 qualification requirements)
  • IoT (very low power dissipation constraints)

L.C.’s keynote covered the recent advances in each of these areas.

OIP partners are associated with five different categories, as illustrated in the figure below.

EDA partners develop new tool features required to enable the silicon process and packaging technology advances.  IP partners design, fabricate, and qualify additional telemetry, interface, clocking, and memory IP blocks, to complement the “foundation IP” provided by TSMC’s internal design teams (e.g., cell libraries, general purpose I/Os, bitcells).  Cloud service providers offer secure computational resources for greater flexibility in managing the widely diverse workloads throughout product design, verification, implementation, release, and ongoing product engineering support.  Design center alliance (DCA) partners offer a variety of design services to assist TSMC customers, while value chain aggregation (VCA) partners offer support for test, qualification, and product management tasks.

The list of OIP partners evolves over time – here is a link to an OIP membership snapshot from 2019.  There have been quite a few recent acquisitions, which has trimmed the membership list.  (Although not an official OIP category, one TSMC forum slide mentioned a distinct set of “3D Fabric” packaging support partners – perhaps this will emerge in the future.)

As an indication of the increasing importance of the OIP partner collaboration, TSMC indicated, “We are proactively engaging with partners much earlier and deeper (my emphasis) than ever before to address mounting design challenges at advanced technology nodes.”       

Here are the highlights of L.C.’s presentation.

N3HPC

In previous technical conferences, TSMC indicated that there will be (concurrent) process development and foundation IP releases focused on the HPC platform for advanced nodes.

The figures below illustrate the PPA targets for the evolution of N7 to N5 to N3.  To that roadmap, TSMC presented several design technology co-optimization (DTCO) approaches that have been pursued for the N3HPC variant.  (As has been the norm, the implementation of an ARM core block is used as the reference for the PPA comparisons.)

Examples of the HPC initiatives include:

  • taller cells, “double-high” standard cells

N3HPC cells adopt a taller image, enabling greater drive strength.  Additionally, double-high cells were added to the library.  (Complex cells often have an inefficient layout, if confined to a single cell-height image – although double-high cells have been used selectively in previous technologies, N3HPC adopts a more diverse library.)

  • increasing the contacted poly pitch (CPP)

Although perhaps counterintuitive, increasing the cell area may offer a performance boost by reducing the Cgs and Cgd parasitics between gate and S/D nodes, with M0 on top of the FinFET.

  • an improved MiM decoupling capacitance layout template (lower parasitic R)
  • greater flexibility – and related EDA auto-routing tool features – to utilize varied (wider width/space) pitches on upper-level metal layers

Traditionally, any “non-default rules” (NDRs) for metal wires were pre-defined by the PD engineer to the router (and often pre-routed manually);  the EDA collaboration with TSMC extends this support to decisions made automatically during APR.

Note in the graph above that the improved N3HPC performance is associated with a slight power dissipation increase (at the same VDD).

N5 Automotive Design Enablement Platform (ADEP)

The requirements for the Automotive platform include a more demanding operating temperature range, and strict reliability measures over an extended product lifetime, including:  device aging effects, thermal analysis including self-heating effects (SHE), and the impact of these effects on electromigration failure.  The figure below illustrates the roadmap for adding automotive platform support for the N5 node.

Cell-aware internal fault models are included, with additional test pattern considerations to reduce DPPM defect escapes.

RF

RF CMOS has emerged as a key technology for mobile applications.  The figure below illustrates the process development roadmap for both the sub-6GHz and mmWave frequency applications.  Although N16FFC remains the workhorse for RF applications, the N6RF offering for sub-6GHz will enable significant DC power reduction for LNAs, VCOs, and power amplifiers.

As for the Automotive platform, device aging and enhanced thermal analysis accuracy are critical.

N12e sub-Vt operation

A major initiative announced by L.C. related to the IoT platform.  Specifically, TSMC is providing sub-Vt enablement, reducing the operating supply voltage below device Vt levels.

Background – Near-Vt and Sub-Vt operation

For very low power operation, where the operating frequency requirements are relaxed (e.g., Hz to kHz), technologists have been pursuing aggressive reductions in VDD – recall that active power dissipation is proportional to the square of VDD.

Reducing the supply to a “near-Vt” level drops the logic transition drive current significantly;  again, the performance targets for a typical IoT application are low.  Static CMOS logic gates function at near-Vt in a conventional manner, as the active devices (ultimately) operate in strong inversion.  The figure below illustrates the (logarithmic) device current as a function of input voltage – note that sub-Vt operation implies that active devices will be operating in the “weak inversion” region.
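
The back-of-the-envelope sketch below illustrates both points: quadratic dynamic power savings from a lower VDD, and the exponential weak-inversion current model that governs sub-Vt operation. All device parameters (Vt, I0, subthreshold slope factor) are made-up illustrative values, not TSMC N12e data.

# Illustrative numbers only: dynamic power scales with VDD**2, while the
# drive current falls off exponentially once Vgs drops below Vt.
import math

def dynamic_power(c_eff, vdd, freq, activity=0.1):
    """Dynamic (switching) power: P = alpha * C * VDD^2 * f."""
    return activity * c_eff * vdd**2 * freq

def weak_inversion_current(vgs, vt=0.35, i0=1e-7, n=1.5, v_therm=0.026):
    """Subthreshold drain current ~ I0 * exp((Vgs - Vt)/(n*kT/q)), for Vgs <= Vt."""
    return i0 * math.exp((vgs - vt) / (n * v_therm))

for vdd in (0.8, 0.5, 0.3):                      # nominal, near-Vt, sub-Vt supplies
    print(f"VDD={vdd:.1f} V  P_dyn={dynamic_power(1e-12, vdd, 1e6):.2e} W")
print(f"I_D at Vgs=0.35 V (= Vt):    {weak_inversion_current(0.35):.1e} A")
print(f"I_D at Vgs=0.30 V (sub-Vt):  {weak_inversion_current(0.30):.1e} A")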

Static, complementary CMOS gates will still operate correctly at sub-Vt levels, but the exponential nature of weak inversion currents introduces several new design considerations:

  • beta ratio

Conventional CMOS circuits adopt a (beta) ratio of Wp/Wn to result in suitable input noise rejection and balanced RDLY/FDLY delays.  Commonly, this ratio is based on the strong inversion carrier mobility differences between nFET and pFET devices.  Sub-Vt circuit operation depends upon weak inversion currents, and likely requires a different approach to nFET and pFET device sizing selections.

  • sensitivity to process variation

The dependence of the circuit behavior on weak inversion currents implies a much greater impact of (local and global) device process variation.

  • high fan-in logic gates less desirable

Conventionally, a high ratio of Ion/Ioff is available to CMOS circuit designers, where Ioff is the leakage current through inactive logic branches.  In sub-Vt operation, Ion is drastically reduced;  thus, the robustness of the circuit operation to non-active leakage current paths is reduced.  High fan-in logic gates (with parallel leakage paths) are likely to be excluded.

  • sub-Vt SRAM design considerations

In a similar manner, the leakage paths present in an SRAM array are a concern, both for active R/W cell operation and inactive cell stability (noise margins).  In a typical 6T-SRAM bitcell, with multiple dotted cells on a bitline, leakage paths are present through the access transistors of inactive word line rows.

A read access (with pre-charged BL and BL_bar) depends on a large difference in current on the complementary bitlines through only the active word line row array locations.  In sub-Vt operation, this current difference is reduced (and also subject to process variations, as SRAMs are often characterized to a high-sigma tail of the statistical distribution curve).

As a result, the number of dotted cells on a bitline would be extremely limited.  The schematic on the left side of the figure below illustrates an example of a modified (larger) sub-Vt SRAM bitcell design, which isolates the read operation from the cell storage.

  • “burst mode” operation for IoT

IoT applications may have very unique execution profiles.  There are likely long periods of inactivity, with infrequent “burst mode” operations requiring high performance for a short period of time.  In conventional CMOS applications, the burst mode duration is comparatively long, and a dynamic-voltage frequency-scaling (DVFS) approach is typically employed by directing a DC-to-DC voltage regulator to adjust its output.  The time required for the regulator to adapt (and the related power dissipation associated with the limited regulator efficiency) are rather inconsequential for the extended duration of the typical computing application in burst mode.

Such is not the case for IoT burst computation, where power efficiency is paramount and the microseconds required for the regulator to switch are problematic.  The right-hand side of the figure above depicts an alternative design approach for sub-Vt IoT CMOS, where multiple supplies are distributed and switched locally to specific blocks using parallel “sleep FETs”.  A higher VDD would be applied during burst mode, returning to the sub-Vt level during regular operation.

TSMC is targeting their initial sub-Vt support to the N12e process.  The figure below highlights some of the enablement activities pursued to provide this option for the IoT platform.

TSMC hinted that the N22ULL process variant will also receive sub-Vt enablement in the near future.

L.C. also provided an update on the TSMC 3D Fabric advanced packaging offerings – look for a subsequent article to review these technologies in more detail.

Summary

TSMC provided several insights at the recent OIP Ecosystem forum:

  • HPC-specific process development remains a priority (e.g., N3HPC).
  • The Automotive platform continues to evolve toward more advanced process nodes (e.g., N5A), with design flow enhancements focused on modeling, analysis, and product lifetime qualification at more stringent operating conditions.
  • Similarly, the focus on RF technology modeling, analysis, and qualification continues (e.g., N6RF).

and, perhaps the most disruptive update,

  • The IoT platform announced enablement for sub-Vt operation (e.g., N12e).

-chipguy

Also read: Design Technology Co-Optimization for TSMC’s N3HPC Process


Lecture Series: Designing a Time Interleaved ADC for 5G Automotive Applications
by Kalar Rajendiran on 11-01-2021 at 6:00 am


A recent educational virtual event with the above title was jointly sponsored by Synopsys and GlobalFoundries. The objective was to bring awareness to state-of-the-art mixed-signal design practices for automotive circuits. The two-day event comprised lectures delivered by engineering professors and doctoral students from Wayne State University. The lecture series focused on designing a time-interleaved ADC for 5G V2X automotive applications.

If you were not available to attend the live virtual event, you can now listen to the lectures on-demand. This blog provides a backdrop for the lecture series and a brief overview of the lectures.

Automotive Industry

The automotive industry ecosystem has been working on a vehicle-to-everything (V2X) communication system to support autonomous vehicles. The primary purpose of a V2X system is to improve road safety, enhance road traffic efficiency and bring energy savings to automobiles. As such, in addition to the usual requirements of performance, power, area and cost, field reliability of the implemented integrated circuits becomes imperative. As a result, IC designers must ensure that their designs are fail-operational, have low defect rates, and operate reliably over a long period of time.

Lectures Series Overview

You will learn how to design and verify a time-interleaved ADC targeted for 5G V2X applications using GlobalFoundries’ 22nm FDSOI process technology, and you will also learn about the FDSOI process itself, which is used to implement the ADC during this analog/mixed-signal (AMS) lecture series. The lectures cover radio transceiver architecture for V2X systems, ADC architecture, ADC design and related challenges, layout and related challenges, accounting for post-layout parasitics, and aging effects.

Why Focus on Time Interleaved SAR ADC?

A complete V2X communication system has four subsystems: vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P) and vehicle-to-network (V2N) communications. The complete system requires transceivers with two different specifications, which would lead to high cost, large chip area, and high power consumption if implemented separately. The WINCAS Research Center at Wayne State University was already working on a single transceiver that can support all V2X applications.

The transceiver architecture is based on frequency planning and programmable ADCs. This means that the ADC has to deal with two different channel bandwidths. For V2V/V2P/V2I, the ADC input bandwidth is 75MHz with sampling frequency at 150MHz. For V2N, the ADC input bandwidth is 400MHz with sampling at 800MHz.

This frequency planning approach offers many benefits, some of which are:

    • maximum hardware sharing while covering all bands of V2X
    • PLLs achieving good phase noise at much reduced current consumption

While the benefits are very attractive, the architecture is the first of its kind, bringing with it its share of challenges to overcome. The ADC plays a critical role in this architecture, making it a suitable circuit to showcase state-of-the-art AMS circuit techniques through a lecture series.

Time Interleaved SAR ADC

The time-interleaved ADC designed during this lecture series uses 8 channels, each of which is a SAR ADC. Each SAR ADC contains a DAC, a comparator, and a track-and-hold circuit. Before diving into the ADC design, the lectures present an improved track-and-hold circuit that avoids the typical issues with traditional implementations. Conventional track-and-hold implementations can lead to breakdowns due to overstress voltage conditions, and can also lead to incorrect bit conversions as a result of voltage changes on the top plate of the DAC at the comparator input. The lectures also discuss the selection of capacitors for the DAC in order to handle thermal noise.
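
For readers who want a feel for the architecture, here is a behavioral sketch of a time-interleaved SAR conversion: consecutive samples are distributed round-robin across the channels, and each channel binary-searches its DAC code with a comparator. The resolution, reference voltage, and channel count are arbitrary illustrative values, not the lecture-series design.

# Behavioral sketch of an N-channel time-interleaved SAR ADC (illustrative only).
def sar_convert(v_sample, v_ref=1.0, bits=10):
    """One SAR channel: binary-search the internal DAC code using a comparator."""
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)                 # tentatively set this bit
        v_dac = trial / (1 << bits) * v_ref       # internal DAC output for the trial code
        if v_sample >= v_dac:                     # comparator keeps or clears the bit
            code = trial
    return code

def time_interleaved_adc(samples, channels=8, bits=10):
    """Distribute consecutive samples round-robin across the SAR channels."""
    codes = [[] for _ in range(channels)]
    for i, v in enumerate(samples):
        codes[i % channels].append(sar_convert(v, bits=bits))
    return codes

print(time_interleaved_adc([0.1, 0.25, 0.5, 0.75], channels=2, bits=8))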

Design and Layout Challenges

Time-interleaved SAR ADCs are exposed to common-mode noise and are sensitive to many types of errors. One such error is clock skew. The lecture discusses how to mitigate time skew through the use of dummy logic circuits and offers design techniques to eliminate common-mode noise.

AMS designs experience noisy interaction between the digital and analog portions of the circuitry. Major sources of this noise are capacitive coupling and the power supply lines; another is lateral coupling between parallel metal lines. Floorplanning techniques are presented to help minimize these effects during physical routing.

Robustness of SAR ADC

Process, voltage and temperature (PVT) variations do affect the performance of an ADC. Given that this circuit is for an automotive application, validating the design’s robustness across PVT is critical. Robustness in this regard means the circuit has a high probability of operating correctly even under conditions outside the specification. Results of corner simulations are presented showing that the performance of the SAR ADC is robust across all corners at different temperatures and meets the AEC-Q100 Grade 1 standard.

Aging and Electromigration

Aging- and electromigration-triggered faults are key concerns for automotive applications. The choice of an FD-SOI process for implementing the SAR ADC seems to be a good one in this regard. Not only does this process offer FinFET-like performance with energy efficiency, it also allows for electrostatic control of the channel via body biasing. This allows compensation for static process variations as well as dynamic temperature and aging variations.

FD-SOI is the only process technology to bring together three substantial characteristics of CMOS transistors:

  • 2D planar transistor structure
  • Fully depleted operation
  • Capability to dynamically modify the threshold voltage of transistors after manufacturing

Summary

It may be worthwhile for anyone involved in the development of V2X systems to register and listen to these lectures to learn some new circuit techniques. Below is a screenshot showing the different topics covered over a series of nine lectures. You can listen to the lectures in any order or just the ones that are of interest to you. Access the lecture series on-demand from here.

Also read:

Synopsys’ ARC® DSP IP for Low-Power Embedded Applications

Synopsys’ Complete 800G Ethernet Solutions

Safety + Security for Automotive SoCs with ASIL B Compliant tRoot HSMs


SISPAD – Cost Simulations to Enable PPAC Aware Technology Development
by Scotten Jones on 10-31-2021 at 10:00 am


I was invited to give a plenary address at the SISPAD conference in September 2021. For anyone not familiar with SISPAD, it is a premier TCAD conference. This year, for the first time, SISPAD wanted to address cost, and my talk was “Cost Simulations to Enable PPAC Aware Technology Development”.

For many years the standard in technology development has been Power, Performance and Area (PPA). For example, on TSMC’s 2020-Q4 earnings call, N3 was described as delivering 30% lower power at the same performance (Power), 15% greater performance at the same power (Performance) and 70% greater density (Area).

More recently, increasing wafer costs are driving the need to add cost, extending PPA to PPAC: Power, Performance, Area and Cost. Companies such as TSMC at IEDM 2019 [1], Imec at their technology forum in 2020 [2], Applied Materials at SEMICON West in 2020 [3], and many others are all talking about PPAC.

The current practice when developing a new technology is to define initial PPA targets, identify designs for PPA evaluation, select a transistor architecture, develop an initial process flow, simulate transistor performance and extract a SPICE model, then select a standard cell architecture and generate a cell library. The cell library and process flow are fed into a Design Technology Co-Optimization (DTCO) simulation suite, such as the one offered by Synopsys, to simulate the process, generate a 3D structure, and extract the parasitic netlist. The library can then be characterized, a physical design can be done, and PPA can be evaluated. Designed-experiment iterations are then run to achieve the PPA targets, all in a simulation environment. What is missing in this process is any cost awareness. If the ability to simulate cost is added to a DTCO suite, the process can target PPAC instead, and iterations can be done in a simulation environment to achieve the PPAC targets; a sketch of that loop follows.
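
The sketch below captures that loop in schematic form: candidate flows are evaluated for power, performance, area, and cost, and the first one meeting all four targets is kept. The evaluation callables and the toy numbers are placeholders; nothing here represents Synopsys or IC Knowledge software.

# Schematic sketch of a cost-aware DTCO loop; callables and numbers are placeholders.
def dtco_loop(candidates, evaluate_ppa, estimate_cost, targets):
    """Return the first process/library candidate meeting all PPAC targets."""
    for flow in candidates:
        power, perf, area = evaluate_ppa(flow)
        cost = estimate_cost(flow)                    # the missing "C" in PPA
        if (power <= targets["power"] and perf >= targets["perf"]
                and area <= targets["area"] and cost <= targets["cost"]):
            return flow, (power, perf, area, cost)
    return None, None

# Toy usage: each candidate is (name, power, perf, area, cost), normalized units.
cands = [("flowA", 1.0, 1.10, 1.00, 1.20), ("flowB", 0.9, 1.15, 0.95, 1.05)]
flow, result = dtco_loop(
    cands,
    evaluate_ppa=lambda f: (f[1], f[2], f[3]),
    estimate_cost=lambda f: f[4],
    targets={"power": 0.95, "perf": 1.10, "area": 1.00, "cost": 1.10})
print(flow[0] if flow else "no candidate meets PPAC targets")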

To accurately simulate costs both the facility running the process and the process must be considered. The same process in two different facilities will have different costs, sometimes significantly different. Two different processes run in the same facility will have different costs, sometimes significantly different.

Facility Cost

The designed capacity of a fab has a significant impact on cost. There is a wide variety of throughputs for fab equipment, and the higher the fab’s design capacity, the better the capacity matching of the equipment set that can be achieved. This results in higher capital efficiency and therefore lower cost per wafer for higher-capacity fabs. Figure 1 illustrates the normalized wafer cost versus capacity for a greenfield fab running a 5nm process in Taiwan.

Figure 1. Wafer cost versus fab capacity.

 The country a fab is in also impacts the cost. Figure 2 compares the same fab described above designed for 40,000 wafers per month in six different countries. The costs in figure 2 are operating costs only and do not include any incentives.

Figure 2. Wafer cost versus country.

Another critical cost factor is the age of the fab. For a new fab depreciation can represent over 60% of the cost of making a wafer. Figure 3 illustrates the same fab previously described for five different time frames:

  1. The first year ramping up (assuming 50% utilization on average).
  2. Years two through five, when the fab is ramped up but the equipment is still depreciating.
  3. Year six, when the equipment is depreciated.
  4. Year eleven, when the facility systems are depreciated.
  5. Year sixteen, when the building shell is depreciated.

Figure 3. Wafer cost versus fab age.

Accurate cost modeling requires the ability to define the fab capacity, country, and age.

Process Cost

Process costs begin with the cost of the starting wafer or wafers. Modeling needs to account for whether the starting wafer is a polished wafer, an epi wafer, or a specialty wafer such as some kind of SOI. Modeling also needs to allow for more than one wafer, for example for processes where two wafers are used and then bonded together.

Direct labor costs are the cost for operators to process the wafers. In current-generation 300mm fabs there are very few operators, because the wafer transport systems lower the front opening unified pods (FOUPs) right onto the tool, but there are some. The labor hours required for a particular flow must be calculated and the appropriate labor rate applied, depending on the country where the fab is located.

Depreciation is the largest single cost in wafer fabrication, for new processes representing over 60% of the wafer cost (see figure 6 below). Accurate depreciation estimates require determining the equipment required and the throughput for every step in the process flow. An accurate model needs to determine the appropriate generation of equipment for a process, its throughput, its cost, and the physical space it needs, then build up a complete equipment set for a target capacity. An accurate model should have background tables of equipment cost and configuration by node, and construction costs for cleanroom space, to enable detailed capital cost calculations.
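
A highly simplified sketch of the depreciation arithmetic is shown below, with made-up numbers: capital is spread over the wafers produced during the depreciation period. A real model such as IC Knowledge’s builds the equipment set step by step from the process flow, which this toy calculation skips.

# Simplified depreciation-per-wafer sketch with made-up numbers; a real model
# derives the tool set and throughput per process step, which this skips.
def depreciation_per_wafer(equipment_capex, cleanroom_capex,
                           wafer_starts_per_month, utilization=0.9,
                           equip_years=5, facility_years=10):
    """Spread capital over the wafers produced during the depreciation period."""
    monthly = (equipment_capex / (equip_years * 12)
               + cleanroom_capex / (facility_years * 12))
    return monthly / (wafer_starts_per_month * utilization)

# Toy example: $4B of tools, $1B of cleanroom, 40k wafer starts per month.
print(f"${depreciation_per_wafer(4e9, 1e9, 40_000):,.0f} per wafer")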

Equipment maintenance costs include the costs for equipment parts that are consumed during processing, such as quartz rings used in etch chambers, repair parts to replace equipment subsystems that break during operation, and finally equipment service contracts. All of these costs need to be estimated for the equipment set determined during the depreciation calculations.

Indirect labor costs encompass engineers and technicians that maintain the process and equipment, supervisors that manage the direct labor and managers that oversee everything. Headcounts need to be estimated and salaries by country and year applied.

Facility costs include electricity, water and sewer, ultrapure water generation, natural gas, facility maintenance, occupancy costs and insurance. Many of these costs depend on the country as well as year. An accurate model needs to have background tables by country and year and algorithms to perform the calculations.

Consumables are made up of hundreds of different materials consumed by the process (these are distinct from the equipment parts consumed during processing, which are accounted for in equipment maintenance). Process materials include things like bulk gases, CVD and ALD precursors, CMP consumables, PVD targets, photoresist, reticles and many other items. An accurate model needs to have costs by year for thousands of target materials and calculate material usage by process step.

Commercial Implementation

IC Knowledge is the world leader in cost and price modeling for semiconductors and has recently developed process simulation technology to enable step-by-step process definition and cost estimation (Cost Explorer). Synopsys is a world leader in TCAD tools for technology development and simulation. IC Knowledge and Synopsys have partnered to embed IC Knowledge’s Cost Explorer in the Synopsys Process Explorer tool that is used to simulate the physical structure produced by a target process flow. The Cost Explorer plug-in for Process Explorer will enable users of the Synopsys DTCO suite to define PPAC targets and design processes to meet those targets in a virtual environment, utilizing designed experiments to optimize for all four elements of PPAC simultaneously.

Figure 4 illustrates the IC Knowledge – Synopsys solution.

Figure 4. Commercial PPAC TCAD Solution.

The current timeline for this solution:

  • Current status – beta testing at one customer with a customer-developed script to automatically populate Cost Explorer from Process Explorer. Beginning to show the capability to select customers.
  • End of 2021 – external cost model with script (Synopsys script) to populate Cost Explorer from Process Explorer.
  • Mid 2022 – fully implemented Process Explorer plug-in and commercial availability.

Customer Examples

As mentioned in the previous section, we have a customer beta testing the solution. The customer is a large OEM who uses Synopsys’ DTCO solution for technology development. The customer is developing Complementary FET (CFET) processes as a next-generation solution beyond FinFETs and Horizontal Nanosheets (HNS).

Figure 5 illustrates the wafer cost broken out by category for a possible process flow. In the actual model the results are all in dollars and represent a specific fab and process configuration.

Figure 5. Wafer Cost by Category.

The OEM wanted to evaluate how CFET costs compare to FinFETs. They compared a standard FinFET, a FinFET with a Buried Power Rail (BPR) (BPR enables better density), a monolithic CFET with BPR, and a sequential CFET where the CFET process is split between two wafers that are then bonded together. Once again, in the actual model the results are all in dollars.

Figure 6. Normalized Wafer Cost Versus Process.

The key conclusion from figure 6 is that the OEM-developed CFET process with BPR is cost-competitive with a FinFET process with BPR.  Because CFETs stack the nFET and pFET devices, they offer significant density improvements over FinFETs.

Another conclusion from figure 6 is that the monolithic CFET process is less expensive than the sequential CFET process. The monolithic CFET process developed by the OEM is highly self-aligned and cost optimized.

While doing this work the OEM also evaluated lithography options for local interconnect comparing two solutions:

  1. EUV local interconnect mandrel mask with EUV cut, and EUV via mask.
  2. EUV local interconnect mandrel mask with multipatterned DUV cut, and EUV via mask.

Because the multipatterned cut can be implemented with a relatively simple multi-patterning scheme, they found they could save $52, although there would be some cycle time impact.

Conclusion

The accelerating cost of fabricating leading-edge wafers is driving the need to switch from PPA-based technology development to PPAC-based technology development. The partnership of IC Knowledge and Synopsys will for the first time provide the industry with the ability to design for PPAC in a virtual environment before ever running wafers. This capability will be a game changer for the industry and enable the continued evolution of Moore’s Law.

References

[1] Geoffrey Yeap of TSMC, during the Applied Materials IEDM 2019 panel “Logic: EUV is Here, Now What?”: “Power Performance Area Cost Time – PPACT, where new technologies need to be on-time”.

[2] Luc Van Den Hove, President and CEO of Imec, Imec Technology Forum 2020, “Technologies for People in the New Normal,” slide 45, “Scaling Roadmap” “Power – Performance – Area – Cost”.

[3] Applied Materials, “Selective Gap Fill Announcement,” SEMICON West 2020, slide 2, “Power, Performance, Area-Cost” also including t for time to market.

Also Read:

TSMC Arizona Fab Cost Revisited

Intel Accelerated

VLSI Technology Symposium – Imec Alternate 3D NAND Word Line Materials


Losing Lithography: How the US Invented, then lost, a Critical Chipmaking Process
by Craig Addison on 10-31-2021 at 8:00 am


Lithography is arguably the most important step in semiconductor manufacturing. Today’s state-of-the-art EUV scanners are incredibly complex machines that cost as much as a new Boeing jetliner.

From humble beginnings in 1984 as a joint venture with Philips, ASML has grown to become the world’s second largest chip equipment maker – and the only supplier of EUV machines.

“Losing Lithography”, an episode in The Chip Warriors podcast series, provides a first-hand account of how the US invented, then lost, this critical part of the chipmaking process. The episode is based on interviews with pioneers at Fairchild Semiconductor, David W. Mann Co, Cobilt, GCA, Nikon and Silicon Valley Group (SVG), among others.

Early attempts to print images onto silicon wafers were undertaken at Bell Labs in the mid 1950s. Later that decade, Fairchild improved the process in order to make transistors.

“We decided to use photo resist in order to delineate the areas,” said Jay Last, one of the original eight co-founders of Fairchild, along with Bob Noyce.

“Bell Labs had made some efforts there and thought this was just impossible to work with so they never pursued it. Bob [Noyce] and I worked with Kodak and they gave us the best resists they had at the time and we gradually had a working relationship with them that resists kept steadily improving.

“There were a lot of technical problems and technical setbacks, but we just said we are going to use this and we have to make it work — and we did.”

In the 1960s contact mask aligners were used for wafer printing, with Kulicke & Soffa the first to introduce them commercially. Later, Kasper Instruments became the dominant supplier, but when three former Kasper engineers formed their own company, called Cobilt — and it was acquired by Boston-based CAD giant Computervision in 1972 — a new paradigm for wafer printing emerged.

“Cobilt made mechanical aligners that printed the semiconductor wafer with somewhat superior technology to the standard of the day. And Computervision had a package of automatic alignment which would allow you to align the layers more exactly,” said Sam Harrell, who moved from Computervision to the West Coast to be Cobilt’s vice president of engineering.

“We sold hundreds of machines all over the world. It really reigned until the period of the projection printers became dominant.”

Ed Segal, who sold aligners at Kasper before joining Cobilt, saw how Cobilt lost its lead when Perkin-Elmer developed the projection mask aligner.

“When the mask aligner went to the next stage from a contact mask aligner to what was called a projection aligner, or projecting the image of the mask on the wafer, Perkin-Elmer just absolutely came in and took that market over,” Segal said. “Cobilt attempted to build one and it was really a very big failure. And the company eventually was sold to Applied Materials in 1981.”

Jim Gallagher ran the semiconductor equipment business at GCA, which was the world leader in lithography before ceding the market to Japanese companies in the 1980s. In the podcast, he recounts the eventual demise of the company after Japanese suppliers like Nikon and Canon became market leaders.

“We started to sell off operations as best we could. But when you’re going downhill, so to speak, that’s not the time to start selling because what you’re doing is, everybody knows your problem and they’re going to give the lowest, lowest prices. So that was the beginning of our slide,” Gallagher said.

By the late 1980s, the dominance of Japanese stepper suppliers was a worry for American chipmakers. In an effort to develop an alternative source, Intel worked with Censor, a European company. However, the effort failed and Censor was sold to Perkin-Elmer in 1984.

Intel co-founder Gordon Moore recalls the concern at the time. “The big steppers were coming out of Canon and Nikon. There wasn’t a comparable piece of equipment in the US, and that was such a critical part of the entire process.

“We had a major program with a Liechtenstein company [Censor] to make a stepper. Very sophisticated but also very expensive, and the development went too slowly for them to really make an impact on the market. We ended up buying Japanese equipment because it was the best available, and there wasn’t really an alternative source for that.”

Shoichiro Yoshida, who would later become CEO of Nikon, designed the company’s first step-and-repeat camera for semiconductor manufacturing. In the podcast, hear him describe (in English) the early development of steppers at Nikon.

In the 1990s, SVG expanded into lithography under newly appointed CEO Papken Der Torossian. SVG had tried to buy GCA but the deal never materialized, and GCA was sold to General Signal in 1988.

However, Der Torossian was successful in acquiring a next generation step-and-scan system, Micrascan, developed by Perkin-Elmer in association with IBM — but he said it required tens of millions in R&D and two-and-a-half years to fix bugs in the system. The result was the Micrascan II.

“The machine that they had didn’t work — had a mean time between failure of less than one hour. IBM couldn’t use it. But it had very good basic technology,” he said.

Der Torossian explains how a shortage of cash led to a missed opportunity to keep advanced lithography in the US.

“In ‘92 ASML was bleeding. Philips owned them and came to me to buy ASML for $60 million. I didn’t have $60 million. I told them, ‘I’ll give you equal number of shares so let’s have a joint venture.’ They said, ‘No, Philips needs cash.’”

By 2001, ASML had turned the business around and it ended up buying SVG — the last major US lithography company — for $1.6 billion. The deal was delayed by several months over national security concerns but eventually approved by the George W. Bush administration after ASML agreed to divest SVG’s Tinsley Labs unit.

The Chip Warriors podcast series, written and produced by Craig Addison, is based on SEMI oral history interviews he conducted between 2004 and 2008. The interviews have been used under license from SEMI, which is not affiliated with the podcast.

Lithography pioneers are depicted in this painting commissioned by SEMI in 1980. From left, the team behind Perkin-Elmer’s projection mask aligner (Abe Offner, Jere Buckley, David Markel and Harold Hemstreet), and on the right, Burt Wheeler, principal inventor of the photo repeater at David W. Mann Co.

 



Intel – “Super” Moore’s Law Time warp-“TSMC inside” GPU & Global Flounders IPO
by Robert Maire on 10-31-2021 at 6:00 am


“Super” Moore’s Law- 5 nodes in 4 years- Too good to be true?
Gelsinger said “Intel will be advantaged with High NA EUV”
Ponte Vecchio better with “TSMC Inside”
Global Flounders IPO as price drops on public debut

Let’s do the time warp again… (apologies to Riff Raff)

It’s just a jump to the left
And then a step to the right…

Pat Gelsinger is talking about Intel’s ability to bend time (and Moore’s Law) to Intel’s will and go through 5 nodes in 4 years. This translates to a cadence of less than a year per node versus the original Moore’s Law two years and Intel’s recent 3 to 4 or more years per node. Even TSMC can’t do that, as its yearly advances are less than a full node and usually more of an incremental tuning.

What’s more interesting is that Pat seems dead serious about what he coined as “Super Moore’s Law“.  We didn’t hear much hedging in the statements. He is dead serious.

This is going to be very binary as it will either be the world’s greatest success and comeback or it will prove very embarrassing.

Gelsinger: “we’re going to be advantaged at High NA (EUV)”

We have previously pointed out that Intel is at a huge disadvantage in EUV tool count versus TSMC and even Samsung. We also suggested that Intel likely struck some sort of understanding with ASML to be the first large supporter of High NA, much as TSMC was the first out of the gate with EUV.

What we don’t see is how Intel will have an advantage. It’s not like ASML will sell its High NA tools only to Intel and forsake its biggest and bestest customer TSMC. That’s not gonna happen.

Intel has also not gone through all the pain and learning process of EUV and has a minuscule amount of real-world experience compared to TSMC’s years of experience running many, many wafers on EUV tools.

Much of the EUV learning that Intel has yet to do is a prerequisite for figuring out High NA.

It’s quite clear that with all the experience gained from its huge lead in EUV, TSMC will enter High NA EUV with an advantage over Intel, and not the other way around.

This suggests that “Ribbon” transistors are the main advantage that Intel can bring to bear and we just don’t see it.

Backside power has been around for a while and is not unique to Intel.

So we still want to understand how Intel goes from 3-4 years per Moore’s Law node to less than a year per node virtually overnight (not counting the fact that High NA is years away).

With a bit of a mind flip
You’re into the time slip

Intel’s Ponte Vecchio better than expected with “TSMC Inside”

It appears the performance of Ponte Vecchio is better than originally planned. Could it be that using some TSMC silicon inside made the difference? TSMC’s N5 process seems to be quietly boosting performance that Intel’s 7nm couldn’t deliver.

We think that Intel’s use of TSMC silicon and “tiles” will be much higher than anticipated as Intel needs the performance of TSMC’s silicon process to be competitive versus others in the space.

We find it mildly hypocritical that while Intel management bashes global dependence on TSMC it is on a path to increase just that.

Global Flounders in its IPO debut.

Global Foundries priced its stock offering at $47, only to have it drop on its first trading day. At one point it was down to almost $44 before closing in the aftermarket at $46 (even after some end-of-day trading “support”). Not a very auspicious start, as most IPOs tend to “pop” on the first day of trading. Perhaps investors were expecting something different from a company that can’t make money in the strongest industry conditions.

At roughly 5 times trailing revenue, GloFo is valued similarly to a foundry that actually makes money at similar revenue levels: SMIC in China… which is likely a better stock buy if you want a trailing, smaller foundry.

Also Read:

Intel- Analysts/Investor flub shows disconnect on Intel, Industry & challenges

LRCX- Good Results Despite Supply Chain “Headwinds”- Is Memory Market OK?

ASML- Speed Limits in an Overheated Market- Supply Chain Kinks- Long Term Intact


Podcast EP45: Designer, IP and Embedded Tracks at DAC
by Daniel Nenni on 10-29-2021 at 10:00 am

Dan and Mike are joined by Ambar Sarkar, the chair of the designer, IP and embedded tracks at DAC this year. Ambar talks about the breadth of these programs, including what topics are hot, along with some exciting new formats for presentation and interaction this year.

https://www.dac.com/

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Ashish Darbari of Axiomise
by Daniel Nenni on 10-29-2021 at 6:00 am


Dr. Ashish Darbari is the founder and CEO of Axiomise. He has led the company to successfully deploy a unique combination of training, consulting, services, and verification IP to a range of customers. Dr. Darbari has expertise in all aspects of formal methods, including theorem proving, property checking, and equivalence checking. A keen innovator in formal verification, he has numerous papers in top conferences and 44 US, UK, and EU patents in formal verification.

Although he has a doctorate in formal verification from the University of Oxford, to learn formal verification from him you don’t need a Ph.D.! He is a Fellow of the British Computer Society and the IETE, and a senior member of the ACM and IEEE.

Describe the formal verification landscape and challenges companies are facing today?

Before I describe the landscape for formal or its challenges, let me first say why we need formal methods.

Why formal methods?

It is not easy to catch all the bugs using simulation, as design complexity makes it very hard for anyone to conceive of all possible ways of catching bugs by driving all interesting combinations of stimulus. Humans should not build stimulus generators; they should describe what needs to be verified by writing checks and capturing environmental constraints. Stimulus generation should be free, and checking should be exhaustive. This is what you get with formal verification – no stimulus to write, and proofs that are built for you automatically. Sounds amazing, doesn’t it? And yet, formal verification has not become mainstream.

Harry Foster’s report in 2020 should be an eye-opener for all of us. 68% of ASIC designs and 83% of FPGA designs fail in their first attempt, while for processor design houses the ratio of verification to designer head count is 5:1. If you’re putting five times more verification engineers than designers on a project and still failing to spin out correctly on the first attempt, what does it say about the quality of verification and the cost of investment? We need to find corner-case bugs earlier, prove that they don’t exist, and ensure that the silicon in our planes and cars is safe and secure. We cannot afford an Ariane 5 explosion, a Meltdown-type security scenario or an FDIV, as there is much more silicon in our lives nowadays. Formal verification is the only way to build proofs of bug absence, and formal property checking is the only way to guarantee that.

Formal verification landscape

Okay, so let’s talk about the FV landscape by giving you a 30,000-ft perspective on the history of formal first, if I may.

Historical perspective

Formal verification use in the industry started at least as early as the late ’80s to early ’90s, with most of the work done at IBM. When Intel hit the FDIV roadblock, they made significant investments in formal methods, and at one point Intel’s Strategic CAD Labs was one of the best places to be to do formal. Throughout this period, the focus was on in-house proprietary tool development with bespoke languages, compilers and methodology that was kept secret. However, in the early 2000s things started to change and more companies started to look at using formal property checking. People may remember 0in, IFV and Magellan as the tools of the early-2000s era.

A seismic shift happened when Jasper Design Automation, now part of Cadence, started making breakthroughs in the adoption of formal apps through their JasperGold platform, and this in my view changed the game for formal in a significant way. For many design engineers struggling with problems such as connectivity checking, CDC, X-checking, and unreachable code coverage waivers, formal tools became the answer.

I don’t know of any semiconductor design house that does not use a formal tool to solve these problems in 2021. While this was a great advancement for formal applications, the bulk of the verification work was still left in the hands of dynamic simulation.

Dynamic simulation is still widely used as the de-facto verification technology, not formal verification.

Main players

There are three main EDA providers of commercial formal tools, and each of them is driving the increase in tool adoption through apps. But what is not covered by the apps is the mainstream functional verification that is currently done by simulation. Simulation quality depends upon stimulus quality, and humans should not be writing stimulus generators; tools should.

We are currently the only company in the world that has over two decades of experience using formal and offers fully dedicated formal verification solutions, shrinking schedules for our customers and helping them find bugs earlier through a shift-left paradigm while at the same time providing them with guarantees of bug absence through formal proofs. The best part is that they can use any formal tool they like.

Why did you start Axiomise?

I started Axiomise because I love formal, and we can help customers use it to find bugs earlier and overcome the problems they face with formal verification adoption.

Let me describe the challenges with formal adoption and this will help us understand why I needed to start Axiomise.

Challenges with formal adoption – The main reason for lack of formal adoption is a lack of methodology know-how and lack of vendor-neutral custom solutions. I know so many design houses have formal tools that are only used for apps. Though property checking adoption has picked up, it does not mean property checking is used to formally verify designs in all the cases. In many cases, assertions are written for simulation.

The main problem with formal verification adoption is that when formal properties are sent to formal tools for execution, the tools cannot always guarantee proof convergence. What this means is that the outcome of running a proof may be unknown because we ran out of time or compute power. The formal tool usually reports a number which defines the extent to which the design was explored. What do you do in that case?

You need a sign-off method that is accurate, predictable, and reusable to ensure you can trust your explored results. Formal verification use in the industry has not always yielded predictable results in predictable time. This makes it very hard for any management to embrace formal fully. People understand simulation is incomplete and will miss bugs, and they will cover their bases with functional coverage, but they are happy with that because they know what can be done. With formal methods this know-how has not been consistent, nor widely shared amongst the community. Although bespoke solutions were designed sporadically, a lot of these were tied to a specific vendor’s solution.

Axiomise – Enabling predictable formal verification

I started Axiomise in February 2018 to make formal predictable for everyone in design verification, including designers as well as architects. We are formal verification methodology experts. Our expertise is in executing formal verification for design verification in a holistic manner – right from the very first hour of design bring-up, all the way to tape-out with the physical design teams, and beyond to post-silicon debug. We believe if you start well, you will end well. We captured this methodology at a high level.

By starting well – in many cases using powerful abstractions and problem reduction strategies – we can avoid proof convergence issues altogether, guarantee a known outcome, and make schedules predictable. By offering a multi-dimensional sign-off methodology, such as our vendor-neutral six-dimensional coverage solution, we can guarantee that bugs are not missed and that proofs are not obtained for trivial reasons, and when proofs are bounded, we can deploy a systematic method to close the gaps.

How does Axiomise differentiate?

No two customers are the same, and that's why we have different solutions for different customers. Not only do we offer different solutions, our solutions are also differentiated.

We are currently the only provider offering this unique combination of formal verification solutions:

  • We are the only company in the world fully dedicated to formal methods that offers a complete solution, from training to consulting and services to custom software, all of it vendor-neutral.
  • We have been using formal methods for over two decades, and we teach what we practice and practice what we teach.
  • We shrink your DV schedule by helping you roll out formal earlier and find more bugs.
  • You will need fewer DV engineers per project if you adopt formal with our help, reducing your costs and increasing verification ROI.
  • We provide you with the secret recipes and methodology that allow you to obtain high proof convergence and sign off your verification with our six-dimensional coverage solution.
  • We provide a consistent, predictable FV methodology that your entire DV team can use, and the knowledge our customers acquire remains with them.

Our solutions

Training – Customers who have DV engineers can learn the art of scalable formal methodology from us through our dedicated instructor-led training programmes; we have already taught over a hundred people so far. Individuals who work in organizations where formal adoption is not yet on the agenda can still learn formal through our online, on-demand courses and can then request an instructor-led course through their organizations. Please check out our training page to see how to sign up and what some of our customers have to say.

Educational Podcasts – We have also been recording podcasts throughout last year and this year, covering interesting verification topics with a strong focus on formal verification. I believe we were among the first to start podcasting on verification topics. If you haven't heard them yet, tune in to the Axiomise podcasts on our website, on YouTube, or on your favourite podcast app.

Consulting & services – Many organizations simply do not have enough dedicated verification engineering resource, so we work with their designers to not only teach them what we are doing but also carry out the work at the same time. This way, they can see formal in its full glory on their own designs and get the opportunity to learn.

Training, Consulting & Services – In many cases, we have delivered instructor-led courses as well as consulting and service work so customers can get the complete experience and build expertise in their own teams. We are helping design houses build their own dedicated FV teams this way.

Custom solutions – formalISA – We have noted that some engineering companies cannot make a sustained human-resource investment in formal for cost reasons but would still like its benefits. This is typically the case with many companies building processors on the open-source RISC-V architecture: not every company has deep pockets to invest in a dedicated FV team, and that is where we can help by giving them a push-button, automated custom FV solution such as our formalISA app. Within a few hours, we can verify RISC-V processors exhaustively. Axiomise is a strategic member of RISC-V International and a member of the OpenHW Group. You can find out more about formalISA by visiting our site or reading our latest blog on SemiWiki.

What real world problems are you solving today?

We are tackling the challenges of formal proof convergence, sign-off, and adoption on a wide variety of multi-million-gate designs: high-speed 10G/100G Ethernet switches, processors (RISC-V being our focus), GPUs, AI/ML hardware, and designs used in automotive and medical diagnostics.

Where is Axiomise going tomorrow?

We will be building more momentum in the industry by driving formal adoption so that formal becomes the mainstream verification choice, which will allow us to collectively build a safer and more secure ecosystem. We will do this through custom training, consulting, and service arrangements, as well as by building new automated custom solutions for our customers.

I saw you have a paper at the upcoming Design Automation Conference, can you outline what you will present?

Yes, indeed, I have a paper on security verification titled "Comprehensive processor security verification: A CIA problem". I will talk about addressing the security verification challenge in the context of processor verification and will present results on security issues found in several RISC-V cores.

How would a company engage with Axiomise?

Email us at info@axiomise.com or contact us through www.axiomise.com, and we can get on a call to figure out what you need. You can also follow us on our LinkedIn (https://www.linkedin.com/company/axiomise) and Twitter pages, and don't forget to subscribe to our YouTube channel for podcasts, talks, and webinars. We are here to help.

Also Read:

CEO Interview: Jothy Rosenberg of Dover Microsystems

CEO Interview: Mike Wishart of Efabless

CEO Interview: Maxim Ershov of Diakopto


Semiconductor CapEx too strong?

Semiconductor CapEx too strong?
by Bill Jewell on 10-28-2021 at 1:00 pm

Oct 2021 capex2

Semiconductor capital expenditures (CapEx) are on track for strong growth in 2021. For many companies the increase should continue into 2022. TSMC, the dominant foundry company, expects to spend $30 billion in CapEx in 2021, a 74% increase from 2020. TSMC announced in March it plans to invest $100 billion over the next three years, primarily for CapEx. Our Semiconductor Intelligence (SC-IQ) estimate is that TSMC CapEx will be $35 billion in 2023, but it could go higher.

Samsung is also expected to spend about $30 billion on semiconductor CapEx in 2021. Samsung Group announced a plan to invest 240 trillion won (US$210 billion) over the next three years to expand its businesses. An analyst at Kiwoom Securities in Korea expects about 110 trillion won (US$97 billion) will be semiconductor CapEx. We have estimated Samsung 2022 CapEx at $32 billion, but as with TSMC it could be higher.

Intel surprised analysts in its third-quarter 2021 earnings announcement last week with a plan to spend $25 billion to $28 billion on CapEx in 2022, following $18 billion to $19 billion in 2021. Intel will use the funds in an effort to become a major foundry as well as to expand and advance capacity for its own products. Based on the mid-points of these ranges, Intel CapEx should increase 30% in 2021 and 43% in 2022.
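The mid-point arithmetic quoted above is easy to reproduce. A minimal sketch, assuming a 2020 Intel CapEx baseline of roughly $14.2 billion (implied by the 30% figure but not stated in the article):

```python
# Rough check of the Intel CapEx growth percentages quoted above (all figures in $B).
capex_2020 = 14.2                   # assumed baseline implied by the ~30% growth figure
capex_2021_mid = (18.0 + 19.0) / 2  # mid-point of the $18B-$19B 2021 range -> 18.5
capex_2022_mid = (25.0 + 28.0) / 2  # mid-point of the $25B-$28B 2022 plan  -> 26.5

growth_2021 = (capex_2021_mid / capex_2020 - 1) * 100      # ~30%
growth_2022 = (capex_2022_mid / capex_2021_mid - 1) * 100  # ~43%
print(f"2021: {growth_2021:.0f}%, 2022: {growth_2022:.0f}%")
```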

TSMC, Samsung and Intel combined account for over half of total semiconductor industry capital spending. Gartner’s July forecast for 2021 industry CapEx was $141.9 billion, a 28% increase from 2020. Other companies that have projected significant CapEx increases in 2021 include foundries UMC and GlobalFoundries; memory companies Micron Technology and SK Hynix; and integrated device manufacturers (IDMs) STMicroelectronics, Infineon Technologies, and Renesas Electronics.

The semiconductor industry is currently experiencing substantial demand improvement from increased automotive electronic content, 5G smartphones and infrastructure, the internet of things (IoT), data centers, and accelerated PC growth due to pandemic-driven home-based work, education, and entertainment. Gartner’s forecast of 28% CapEx growth in 2021 would be the highest since 29% growth in 2017. Should 2021 CapEx grow 30% or more, it will be the highest since 118% in 2010, 11 years ago.

The big question is: how much CapEx is too much? The semiconductor industry has a long history of strong CapEx leading to over-capacity, followed by price collapses and a declining semiconductor market. The chart below illustrates the relationship between semiconductor CapEx and the semiconductor market. The green line on the left axis is the annual change in CapEx from 1984 through the forecast for 2021. The blue line on the right axis is the annual change in the semiconductor market. Our analysis at Semiconductor Intelligence has modeled the levels of CapEx change that have a major impact on the semiconductor market. The red Danger line is set at 56%: in years when CapEx growth has exceeded 56%, the semiconductor market in the following year has declined or seen a significant deceleration. The orange Warning line is set at 27%: when CapEx growth has been over 27% but less than 56%, the semiconductor market has experienced a decline within the next two to three years.
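Expressed as code, the model described above is just two thresholds. A minimal sketch (the 56% and 27% levels are from the analysis above; the function itself is only an illustration):

```python
def capex_signal(capex_growth_pct: float) -> str:
    """Classify annual CapEx growth against the Danger (56%) and Warning (27%) lines."""
    if capex_growth_pct > 56:
        return "Danger: market decline or major deceleration expected the following year"
    if capex_growth_pct > 27:
        return "Warning: market decline expected within two to three years"
    return "Below the Warning line"

print(capex_signal(28))  # Gartner's 2021 forecast of 28% growth lands in the Warning zone
```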

The table below illustrates this pattern. There have been six years since 1984 in which CapEx growth exceeded 56%. The most extreme cases were 1984, 1995 and 2000. In 1984, 106% CapEx growth and 46% semiconductor market growth were followed by a 17% market decline in 1985, a growth deceleration of 63 percentage points. In 1995, CapEx grew 75% and the market grew 42%; the next year the market declined 9%, a 50-point deceleration. In 2000, 82% CapEx growth and 37% semiconductor market growth were followed by a 2001 decline of 32%, a 69-point deceleration. In the three other cases (1988, 2004, and 2010) the market did not decline the following year, but growth decelerated by at least 21 points.
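The deceleration figures are simply the change in market growth from one year to the next, measured in percentage points. A quick check using the numbers quoted above (the 1995 case differs by a point because of rounding in the quoted growth rates):

```python
# Market growth in the CapEx spike year and the following year, in percent, as quoted above.
cases = {1984: (46, -17), 1995: (42, -9), 2000: (37, -32)}

for year, (growth, next_year_growth) in cases.items():
    decel = growth - next_year_growth  # percentage-point swing
    print(f"{year}: {growth}% -> {next_year_growth}% = {decel}-point deceleration")
# 1984: 63 points, 1995: 51 points (quoted as 50), 2000: 69 points
```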

Since 1984 there have been two instances where CapEx growth was greater than 27% but less than 56%. In 2006, CapEx growth was 27% and market growth was 9%. Each of the following three years showed decelerating market increases culminating in a 9% decline in 2009. From 2006 to 2009, growth decelerated a total of 18 points. In 2017, CapEx growth was 29% and market growth was 22%. Over the next two years growth decelerated a total of 34 points, with a 12% decline in 2019.

Gartner’s latest forecast is 28% CapEx growth in 2021 and the WSTS forecast is 25% market growth in 2021. Based on the model above, we should see decelerating growth in the next two to three years, with a possible decline in 2023.

While the model shows consistent trends, the rate of CapEx increases is only one factor affecting semiconductor market growth. High CapEx growth years are usually substantial market growth years, as companies use robust current growth as justification for higher CapEx. The vigorous market rise is usually not sustainable, resulting in major growth deceleration or a decline in the following year. End demand changes also are a major factor in semiconductor market declines. In 1985, the emerging PC market had its first decline. In 2001, the internet bubble burst, leading to a collapse in demand for internet infrastructure and other equipment. In 2017, the smartphone market had its first decline, with declines continuing through 2020.

Although substantial CapEx gains can be a predictor of semiconductor market growth deceleration, the relationship is not necessarily cause and effect. Nevertheless, strong increases in CapEx bear watching when forecasting the semiconductor market.

Also Read:

Auto Semiconductor Shortage Worsens

Electronics Recovery Mixed

Electronics Recovery Mixed