TSMC ISSCC 2021 Keynote Discussion
by Daniel Nenni on 03-01-2021 at 6:00 am

Now that semiconductor conferences are virtual, there are better speakers since they can prerecord, and we have extra time to do a better job of coverage. Even when conferences go live again, I think they will also be virtual (hybrid), so our in-depth coverage will continue.

ISSCC is one of the conferences we covered live since it’s in San Francisco so that has not changed. We will however be able to cover many more sessions as they come to our homes on our own time.

First off is the keynote by TSMC Chairman Mark Liu:  Unleashing the Future of Innovation:

Given the pandemic related semiconductor boom that TSMC is experiencing, Mark might not have had time to do a live keynote so this was a great opportunity to hear his recorded thoughts on the semiconductor industry, the foundry business model, and advanced semiconductor technologies. Here are some highlights from his presentation/paper intermixed with my expert insights:

  • The semiconductor industry has been improving transistor energy efficiency by about 20-30% for each new technology generation and this trend will continue.
  • The global semiconductor market is estimated at $450B for 2020.
  • Products using these semiconductors represent 3.5% of GDP ($2T USD).
  • From 2000 to 2020 the overall semiconductor industry grew at a steady 4%.
  • The fabless sector grew at 8% and foundry grew 9% compared to IDM at 2%.
  • In 2000 fabless revenue accounted for 17% of total semiconductor revenue (excluding memory).
  • In 2020 fabless revenue accounted for 35% of total semiconductor revenue (excluding memory).
  • Unlike IDMs, innovators are only limited by their ideas not capital.

Nothing like a subtle message to the new Intel CEO. It will be interesting to see if the Intel – TSMC banter continues. I certainly hope so. The last one that started with Intel saying that the fabless model was dead did not end so well.

Mark finished his IDM message with:

“Over the previous five decades, the most advanced technology had been available first to captive integrated device manufacturers (IDMs). Others had to make do with technologies that were one or several generations behind. The 7nm logic technology (mass production in 2017) was a watershed moment in semiconductor history. In 2017, 7nm logic was the first time that the world’s most advanced technology was developed and delivered by a pure-play foundry first, and made available broadly to all fabless innovators alike. This trend will likely continue for future technology generations…”

As we all now know Intel will be expanding TSMC outsourcing at 3nm. TSMC 3nm will start production in Q4 of this year for high volume manufacturing beginning in 2H 2022. The $10B question is: Will Intel get the Apple treatment from TSMC (early access, preferred pricing, and custom process recipes)?

I’m not sure everyone understands the possible ramifications of Intel outsourcing CPU/GPU designs to TSMC so let’s review:

  • Intel and AMD will be on the same process so architecture and design will be the focus. More direct comparisons can be made.
  • Intel will have higher volumes than AMD so pricing might be an issue. TSMC wafers cost about 20% less than Intel’s if you want to do the margins math (see the sketch after this list).
  • Intel will have designs on both Intel 7nm and TSMC 3nm so direct PDK/process comparisons can be made.
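To make that margins point concrete, here is a minimal back-of-the-envelope sketch of the kind of math the wafer-cost bullet alludes to. Every number below (wafer cost, die per wafer, yield) is a hypothetical placeholder for illustration only, not an Intel or TSMC figure; the only relationship taken from the text is the "about 20% less" wafer cost assumption.

```python
# Hypothetical cost-per-good-die comparison (illustrative placeholder numbers only).
def cost_per_good_die(wafer_cost_usd, gross_die_per_wafer, yield_fraction):
    """Cost of one good die given wafer cost, gross die count, and yield."""
    return wafer_cost_usd / (gross_die_per_wafer * yield_fraction)

intel_wafer_cost = 10_000                 # assumed internal wafer cost (placeholder)
tsmc_wafer_cost = intel_wafer_cost * 0.8  # "about 20% less", per the bullet above
die_per_wafer, yield_frac = 300, 0.85     # placeholders

print(f"Intel: ${cost_per_good_die(intel_wafer_cost, die_per_wafer, yield_frac):.2f} per good die")
print(f"TSMC : ${cost_per_good_die(tsmc_wafer_cost, die_per_wafer, yield_frac):.2f} per good die")
```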

Bottom line: 2023 will be a watershed moment for Intel manufacturing, absolutely!


How SerDes Became Key IP for Semiconductor Systems
by Eric Esteve on 02-14-2021 at 10:00 am

We have seen that the interface IP category has enjoyed an incredibly high growth rate over the last two decades, and we expect it to remain a strong source of IP revenue for at least another decade. But if we dig into the various successful protocols like PCI Express, Ethernet or USB, we find a common function in the physical layer (PHY): the Serializer/Deserializer (SerDes).

In 1998, advanced interconnects used in telecom applications were based on 622 MHz LVDS I/O. Telecom chip makers were building huge chips integrating 256 LVDS I/Os running at 622 MHz to support networking fabrics. Today, advanced PAM4 SerDes run at 112 Gbps over a single connection to support 100G Ethernet. In twenty years, SerDes lane rates jumped by a factor of 180! For a quick comparison with CPU technologies: in 1998 Intel released the Pentium II Dixon processor, running at 300 MHz; in 2018, an Intel Core i3 ran at 4 GHz. CPU frequencies have grown by a factor of roughly 13 over a span of twenty years, while SerDes speeds have exploded by a factor of 180.
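The speed ratios above follow directly from the endpoints quoted in the paragraph; the quick sanity check below treats the 622 MHz LVDS I/O as roughly 0.622 Gbps per lane, which is the reading the 180x figure implies.

```python
# Sanity check on the speed ratios quoted above.
serdes_1998_gbps = 0.622   # 622 MHz-class LVDS I/O, ~0.622 Gbps per lane
serdes_today_gbps = 112.0  # PAM4 SerDes lane rate
cpu_1998_ghz = 0.3         # Pentium II "Dixon"
cpu_2018_ghz = 4.0         # Core i3

print(f"SerDes lane rate gain: {serdes_today_gbps / serdes_1998_gbps:.0f}x")  # ~180x
print(f"CPU clock gain:        {cpu_2018_ghz / cpu_1998_ghz:.0f}x")           # ~13x
```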

SerDes are now used in many more applications than just telecom, to interface chips and systems. At the end of the 2000s, smartphones integrated USB3, SATA and HDMI interfaces, while telecom and PC/server chips integrated both PCIe and Ethernet. These trends turned the interface IP market into a sizeable IP category, growing above $200 million at that time. It was small compared to the CPU category, which was four or five times larger. But since 2010, the interface category has grown at least 15% year over year. It was the fastest-growing category compared with all other semiconductor IP categories, such as CPU, GPU, DSP, library, etc. The reason is directly linked to the number of connected devices growing every year, each exchanging more data (more movies, pictures, etc.). Connectivity is the beginning of the communication chain, to the internet modem or base station, the Ethernet switch and the datacenter network.

Figure 1: Long Term Ethernet Switch Forecast (source: Dell’Oro)

During the 2010s the worldwide community became almost completely connected. Ethernet became the backbone of this connectivity as both connectivity rates and the number of datacenters quickly increased over the decade. If we use SerDes rates as an indicator, they were 10 Gbps in 2010, 28 Gbps in 2013, 56 Gbps in 2016 (supporting 10G, 25G and 50G Ethernet, respectively) and 112 Gbps in 2019.

Then, in 2017, the exploding high-speed connectivity needs of emerging data-intensive compute applications such as machine learning and neural networks started to appear, adding to the growing need for high-bandwidth connectivity. At the same time, analog mixed-signal architectures, which had been the norm for SerDes design since its inception, became extremely difficult to manage and much more sensitive to process, voltage, and temperature variations, due to the evolution of CMOS technology toward advanced FinFET. In modern nanometer FinFET technologies, transistor dimensions are so small that device behavior is governed by a handful of atoms and electrons. Thus, constructing precise analog circuits that can sustain stressful environmental variations is extremely difficult.

But the positive point with an advanced technology like 7nm is that you can integrate an incredible number of transistors per sq. mm (a density of roughly 100 million transistors per sq. mm), so it is now possible to develop new digital-based architectures leveraging Digital Signal Processing (DSP) to do the vast majority of the physical-layer work. A DSP-based architecture enables the use of higher-order Pulse Amplitude Modulation (PAM) schemes compared to the Non-Return-to-Zero (NRZ, or PAM-2) signaling used by previous analog mixed-signal approaches. PAM-4 doubles the data throughput of a channel without having to increase the bandwidth of the channel itself. As an example, a channel with 28 GHz of bandwidth can support a maximum data throughput of 56 Gbps using NRZ signaling. With the PAM-4 DSP technique, this same 28 GHz bandwidth channel can support a data rate of 112 Gbps! When you consider that analog SerDes architectures are limited to a maximum of 56 Gbps for physical reasons (and maybe less…), DSP SerDes are the approach to scale rates to 200 Gbps and beyond, with the use of more sophisticated modulation schemes (e.g., PAM-6 or PAM-8).
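As a quick illustration of why higher-order PAM raises throughput without widening the channel, the sketch below computes data rate as symbol rate times bits per symbol (bits per symbol is log2 of the number of amplitude levels). The 56 GBaud symbol rate is the maximum the 28 GHz channel supports, per the paragraph above; everything else is just the formula.

```python
import math

def data_rate_gbps(symbol_rate_gbaud, pam_levels):
    """Data rate = symbol rate x bits per symbol (log2 of the PAM level count)."""
    return symbol_rate_gbaud * math.log2(pam_levels)

symbol_rate = 56  # GBaud, the maximum a 28 GHz channel supports per the text
for levels, name in [(2, "NRZ (PAM-2)"), (4, "PAM-4"), (8, "PAM-8")]:
    print(f"{name:12s}: {data_rate_gbps(symbol_rate, levels):.0f} Gbps")
# At the same 56 GBaud: 56, 112 and 168 Gbps; reaching 200G+ also needs a higher
# symbol rate on top of the higher-order modulation.
```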

Using DSP-based SerDes is not only required for building robust interfaces in FinFET technologies; it is also the only way to double data rates above 56 Gbps, e.g., 112 Gbps with PAM-4 and 200 Gbps with PAM-8. This need for more bandwidth is driven by emerging data-intensive applications like AI (to interconnect CPUs and accelerators) and ADAS, and by the data-centric trend of the connected human community, and it is expected to grow steadily over the next decade.

Figure 2: Top 5 Interface IP Forecast & CAGR (source: IPnest 2020)

In the “Interface IP Survey,” IPnest has ranked IP vendor revenues and market share by protocol since 2009. In the 2020 version of the report, we showed that the interface IP category will grow at a 15% CAGR from 2020-2024 to reach $1.57 billion, as shown in Figure 2. This is a wide IP market including PCIe, Ethernet and SerDes as well as USB, MIPI, HDMI, SATA and memory controller IP. In 2019, Synopsys was a strong leader with 53% share of the estimated $870 million market, followed by Cadence with 12%. Both EDA companies have defined a one-stop-shop business model addressing the mainstream market. This strategy is successful for these large companies as it targets a wide swath of various segments (smartphone, consumer, automotive or datacenter), but not the most demanding high-end portion of those segments.

Nevertheless, another strategy can be successful in the IP market, which is to focus strongly on one segment (e.g., high-end) and provide the best experience to very demanding hyperscaler customers. If you can build an excellent engineering team able to develop top-quality products on the most advanced technologies, focusing on the high end of the market, the resulting business model can be rewarding.

We have seen that SerDes IP is the key to the interface IP market. Furthermore, if we concentrate on the PCIe and Ethernet protocols, Figure 3 illustrates the 2020-2025 IP revenue forecast, limited to high-end PCIe (Gen 5 and Gen 6) and high-end Ethernet (PHYs based on 56G, 112G and 224G SerDes), including the D2D protocol for a reason that will be described shortly.

 

Figure 3: High-End Interface IP Forecast & CAGR (source: IPnest 2021)

This high-end interface IP forecast shows a 28% CAGR from 2020-2025 (to be compared with 15% for the total interface IP market), and a TAM of $806 million in 2025. One young company has demonstrated strong leadership in this high-end interface IP segment, thanks to its focus on high-end SerDes (112G since 2017 and soon 200G) targeting the most advanced technology nodes (7nm in 2017, then 5nm in 2019) offered by the two leading foundries, TSMC and Samsung. Alphawave, founded in 2017, is rumored to have booked $75 million in orders in 2020, thanks to its positioning at the most advanced rates and applications of the high-end segment of PCIe and Ethernet. In this portion of the market, the company enjoyed 28% market share in 2019 and 36% in 2020. If Alphawave can keep its lead in the high-end SerDes market, it is not unrealistic to foresee $300-400 million in IP revenues… by 2024-2025!
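For readers who want to check the compounding behind these forecasts, CAGR simply relates a start value, an end value, and a number of years. The sketch below replays the 28% CAGR and $806M-by-2025 figures quoted above; the 2020 base value it prints is back-calculated from those two numbers, not an IPnest figure.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over a number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

tam_2025 = 806.0                       # $M, high-end interface IP TAM in 2025 (from the text)
implied_2020 = tam_2025 / 1.28 ** 5    # back-calculated 2020 base at a 28% CAGR

print(f"Implied 2020 high-end TAM: ${implied_2020:.0f}M")           # ~ $235M
print(f"Check: CAGR back to 2025 = {cagr(implied_2020, tam_2025, 5):.0%}")  # 28%
```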

Since 2019, a new sub-segment, the D2D interface, has emerged and is expected to grow at a 46% CAGR from 2020-2024. By definition, D2D protocols are used between two chips or die within a common silicon package. Briefly, we consider two cases for D2D: i) disaggregation of the main SoC, so that die area does not hurt yield or exceed the maximum reticle size, or ii) SoC interconnect with a “service” chiplet (an I/O chip, FPGA, accelerator…).

At this point (February 2021), there are several protocols being used, with the industry trying to build formalized standards for many of them. Current leading D2D standards include: i) the Advanced Interface Bus (AIB, AIB2), initially defined by Intel, which has offered royalty-free usage; ii) High Bandwidth Memory (HBM), where DRAM dies are stacked on each other on top of a silicon interposer and connected using TSVs; and iii) two interfaces defined by the Open Domain-Specific Architecture (ODSA) subgroup, an industry group: Bunch of Wires (BoW) and OpenHBI. All of these D2D standards are based on a DDR-like protocol, a parallel group of single-ended data wires accompanied by a forwarded clock, currently operating in the 2 GHz to 4 GHz range. By using literally hundreds of parallel wires over very short distances, these interfaces compete with VHS NRZ SerDes, usually defined around 40 Gbps, while offering a strong advantage in much lower latency and lower power consumption compared to SerDes.
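To see why hundreds of slow, parallel, DDR-style wires can rival multi-lane 40 Gbps NRZ SerDes, the sketch below multiplies wire count, forwarded-clock frequency, and bits per clock. The 2 GHz clock and the DDR-like (two bits per clock) behavior come from the paragraph above; the 256-wire count is a hypothetical example, not a figure from any of the named standards.

```python
def parallel_bus_gbps(num_wires, clock_ghz, bits_per_clock=2):
    """Aggregate bandwidth of a DDR-style parallel D2D interface (2 bits per clock)."""
    return num_wires * clock_ghz * bits_per_clock

# Hypothetical 256-wire D2D link with a 2 GHz forwarded clock, DDR signaling
aggregate = parallel_bus_gbps(256, 2.0)
print(f"{aggregate:.0f} Gbps aggregate")                 # 1024 Gbps
print(f"~{aggregate / 40:.0f} lanes of 40G NRZ SerDes")  # ~26 equivalent lanes
```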

There is now a consensus in the industry that a maniacal focus on Moore’s Law is no longer valid at advanced technology nodes, e.g., 7nm and below. Chip integration is still happening, with more transistors being added per sq. mm at every new technology node. However, the cost per transistor is growing at every new node. Chiplet technology is a key initiative to drive increased integration for the main SoC while using older mainstream nodes for service chiplets. This hybrid strategy decreases both the cost and the design risk associated with integrating the service IP directly into the main SoC. IPnest believes this trend will have two main effects on the interface IP business: one is the strong near-term growth of D2D IP revenues (2021-2025), and the other is the creation of a heterogeneous chiplet market that augments the high-end SerDes IP market.

We have forecast the growth of the D2D interface IP category for 2020-2025, passing from less than $10 million in 2020 to $171 million in 2025 (87% CAGR). This forecast is based on the assumption that the service chiplet market should explode in 2023, when most advanced SoCs will be designed in 3nm. This will make integration of high-end IP like SerDes far too risky, leading to externalizing this functionality into a chiplet designed in a more mature node like 7nm or 5nm. While interface IP vendors will be major actors in this revolution, the silicon foundries that address the most advanced nodes, like TSMC and Samsung, and that manufacture the main SoC will also have a key role. We don’t think they will design chiplets, but they could decide to support IP vendors and push them to design chiplets to be used with 3nm SoCs, as they do today when they support advanced IP vendors in marketing their high-end SerDes as hard IP in 7nm and 5nm. Intel’s recent move toward third-party foundries is expected to also leverage third-party IP, as well as heterogeneous chiplet adoption by the semiconductor heavyweight. In that case, there is no doubt that hyperscalers like Microsoft, Amazon and Google will also adopt chiplet architectures… if they don’t precede Intel in chiplet adoption.

By Eric Esteve (PhD.) Analyst, Owner IPnest

Also Read:

Interface IP Category to Overtake CPU IP by 2025?

Design IP Revenue Grew 5.2% in 2019, Good News in Declining Semi Market

#56thDAC SerDes, Analog and RISC-V sessions


Will EUV take a Breather in 2021?
by Robert Maire on 02-07-2021 at 6:00 am

-KLAC- Solid QTR & Guide but flat 2021 outlook
-Display down & more memory mix
-KLAC has very solid Dec Qtr & guide but 2021 looks flattish
-Mix shift to memory doesn’t help- Display weakness
-Despite flat still looking at double digit growth
-EUV driven business may see some slowing from digestion

As always, KLAC came in at the high end of the guided range, with revenues of $1.65B and non-GAAP EPS of $3.24 versus the guided range of $2.82 to $3.46. Guidance is for $1.7B +/- $75M and a non-GAAP EPS range of $3.23 – $3.91. By all financial and performance metrics, a very solid quarter.

A “flattish” 2021 while WFE grows “mid teens”

Management suggested that WFE, which exited 2020 at $59-$60B, would grow double digits in 2021, but the year would look a bit flatter for KLAC as its acquired display group is expected to shrink and there is an expected mix shift towards memory, which is less process-control intensive.

Foundry has been strong, which has been very good for KLA, and the current quarter is expected to see roughly 68% of business from foundry.

Will EUV take a breather?

KLA obviously sells process management tools to companies working on new processes such as EUV. TSMC has bought so many EUV tools it probably has problems finding the space for more. TSMC has also clearly gotten well over the hump of getting EUV to work, likely does not need as much process control, and could slow its EUV scanner purchases a bit given that it’s so far ahead.

Intel is obviously still coming up the learning curve and purchasing curve, and Samsung is in between the two. We would not expect either Samsung or Intel to be as EUV intensive as TSMC has been, at least not in the near term. All this being said, it is not unreasonable to expect EUV-related process management to slow slightly.

Memory not as intensive as Foundry/logic

The industry is expecting memory makers to increase capex spend in 2021 as supply and demand have been in reasonable balance and supply is expected to get tighter.

Most of the expectation is on the DRAM side, which is slightly less process-control intensive compared to NAND and likely lower in overall spend. This mix shift towards memory is obviously better for memory poster child Lam than for foundry poster child KLA. However, it’s not like foundry is falling off a cliff, with TSMC spending a record of between $26B and $28B in capex.

Service adding nice recurring revenue

As we have seen with KLA’s competitors, the service business continues its rise in importance to the company. The recurring revenue stream counterbalances new equipment cyclicality and lumpiness. Having 25% or more of your revenue coming from service is very attractive.

Wafer inspection positive while reticle inspection negative

EUV “print check” has obviously been very good for KLA and a way to play the EUV transition given the issues in reticle inspection. Patterning (AKA reticle inspection) was down significantly after a nice bump in prior quarters, where KLA managed to take back some business from Lasertec (which now sports a $10B market cap).

Obviously “missing the boat” on EUV reticle inspection is toothpaste that can’t be put back in the tube. We expect Lasertec to get the lion’s share of Intel’s business as it ramps up EUV.

The stock

If we assume roughly $7B in revenues for 2021 ($1.75B/Q) with roughly $15 in EPS ($3.75/Q) we arrive at roughly 19X forward EPS, at the current stock price. This is likely a pretty good valuation for a company with stellar/flawless execution in a slowing, but still strong, market.
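The valuation arithmetic above is easy to replay; the sketch below just combines the quarterly run-rates and the forward multiple quoted in the paragraph. The implied share price it prints is derived from those assumptions, not a market quote.

```python
quarterly_revenue_b = 1.75   # $B per quarter, assumed above
quarterly_eps = 3.75         # $ per quarter, assumed above
forward_pe = 19              # forward multiple quoted above

annual_revenue = quarterly_revenue_b * 4   # ~$7B
annual_eps = quarterly_eps * 4             # ~$15
implied_price = annual_eps * forward_pe    # price consistent with 19x forward EPS

print(f"Revenue ~${annual_revenue:.0f}B, EPS ~${annual_eps:.0f}, "
      f"implied share price ~${implied_price:.0f} at {forward_pe}x forward EPS")
```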

Investors will likely get turned off by the “flattish” commentary despite the good numbers. It also doesn’t help that the chip stocks have been feeling a bit like they are turning over here. Despite any weakness, KLA remains the top financial performer in the industry.

Also Read:

New Intel CEO Commits to Remaining an IDM

ASML – Strong DUV Throwback While EUV Slows- Logic Dominates Memory

2020 was a Mess for Intel


A Research Update on Carbon Nanotube Fabrication
by Tom Dillinger on 12-22-2020 at 10:00 am

It is quite amazing that silicon-based devices have been the foundation of our industry for over 60 years, as it was clear that the initial germanium-based devices would be difficult to integrate at a larger scale.  (GaAs devices have also developed a unique microelectronics market segment.)  More recently, it is also rather amazing that silicon field-effect devices have found a new life, through the introduction of topologies such as FinFETs, and soon, as nanosheets.  Research is ongoing to bring silicon-based complementary FET (CFET) designs to production status, where nMOS and pMOS devices are fabricated vertically, eliminating the lateral n-to-p spacing in current cell designs.  Additionally, materials engineering advances have incorporated (tensile and compressive) stress into the silicon channel crystal structure, to enhance free carrier mobility.

However, the point of diminishing returns for silicon engineering is approaching:

  • silicon free carrier mobility is near maximum, due to velocity saturation at high electric fields
  • the “density of free carrier states” (DoS) at the conduction and valence band edges of the silicon semiconductor is reduced with continued dimensional scaling – more energy is required to populate a broader range of carrier states
  • statistical process variation associated with fin patterning is considerable
  • heat conduction from the fin results in increased local “self-heat” temperature, impacting several reliability mechanisms (HCI, electromigration)

A great deal of research is underway to evaluate the potential for a fundamentally different field-effect transistor material than silicon, yet which would also be consistent with current high volume manufacturing operations.  One option is to explore monolayer, two-dimensional semiconducting materials for the device channel, such as molybdenum disulfide (MoS2).

Another promising option is to construct the device channel from carbon nanotubes (CNT).  The figure below provides a simple pictorial of the unique nature of carbon bonding.  (I’m a little rusty on my chemistry, but “sp2” bonding refers to hybrid orbitals formed from one s and two p orbitals, so that each carbon atom shares electrons with three in-plane neighbors.  There are no “dangling bonds”, and the carbon material is inert.)

Note that graphite, graphene, and CNT structures are similar chemically – experimental materials analysis with graphite is easier, and can ultimately be extended to CNT processing.

At the recent IEDM conference, TSMC provided an intriguing update on their progress with CNT device fabrication. [1]  This article summarizes the highlights of that presentation.

CNT devices offer some compelling features:

  • very high carrier mobility (> 3,000 cm**2/V-sec, “ballistic transport”, with minimal scattering)
  • very thin CNT body dimensions (e.g., diameter ~1nm)
  • low parasitic capacitance
  • excellent thermal conduction
  • low temperature (<400C) processing

The last feature is particularly interesting, as it also opens up the potential for integration of silicon-based, high-temperature fabrication with subsequent CNT processing.

Gate Dielectric

A unique process flow was developed to provide the “high K” dielectric equivalent gate oxide for a CNT device, similar to the HKMG processing of current silicon FETs.

The TEM figure above illustrates the CNT cross-section.  Deposition of an initial interface dielectric (Al2O3) is required for compatibility with the unique carbon surface – i.e., suitable nucleation and conformity of this thin layer on carbon are required.

Subsequently, atomic layer deposition (ALD) of a high-K HfO2 film is added.  (These dielectric experiments on material properties were done with a graphite substrate, as mentioned earlier.)

The minimum thicknesses of these gate dielectric layers are constrained by the requirement for very low gate leakage current – e.g., <1 pA/CNT, for a gate length of 10nm.  The test structure fabrication for measuring gate-to-CNT leakage current is illustrated below.  (For these electrical measurements, the CNT structure used a quartz substrate.)

The “optimal” dimensions from the experiments result in t_Al2O3 = 0.35nm and t_HfO2 = 2.5nm.  With these extremely thin layers, Cgate_ox is very high, resulting in improved electrostatic control.  (Note that these layers are thicker than the CNT diameter, the impact of which will be discussed shortly.)
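As a rough illustration of why such thin Al2O3/HfO2 layers yield a high gate capacitance, one can compute an equivalent oxide thickness (EOT) by scaling each layer by the ratio of the SiO2 dielectric constant to its own. The dielectric constants below are textbook bulk values (SiO2 ≈ 3.9, Al2O3 ≈ 9, HfO2 ≈ 25) assumed for illustration, not values reported in the TSMC paper, and thin ALD films can deviate from them.

```python
# Equivalent oxide thickness (EOT) of a stacked gate dielectric:
# each layer contributes t_layer * (k_SiO2 / k_layer), since the layer capacitances add in series.
K_SIO2, K_AL2O3, K_HFO2 = 3.9, 9.0, 25.0   # assumed bulk dielectric constants

def eot_nm(layers):
    """layers: list of (thickness_nm, dielectric_constant) tuples."""
    return sum(t * K_SIO2 / k for t, k in layers)

stack = [(0.35, K_AL2O3), (2.5, K_HFO2)]   # "optimal" thicknesses from the text
print(f"EOT ~ {eot_nm(stack):.2f} nm")      # roughly 0.54 nm SiO2-equivalent
```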

Gate Orientation

The CNT devices evaluated by TSMC incorporated a unique “top gate plus back gate” topology.

The top gate provides the conventional semiconductor field-effect device input, while the (larger) back gate provides electrostatic control of the carriers in the S/D extension regions, to effectively reduce the parasitic resistances Rs and Rd.  Also, the back gate influences the source and drain contact potential between the CNT and Palladium metal, reducing the Schottky diode barrier and associated current behavior at this semiconductor-metal interface.

Device current

The I-V curves (both linear and log Ids for subthreshold slope measurement) for a CNT pFET are depicted below.  For this experiment, Lg = 100nm, 200nm S/D spacing, CNT diameter = 1nm, t_Al2O3 = 1.25nm, t_HfO2 = 2.5nm.

For this test vehicle (fabricated on a quartz substrate), a single CNT supports Ids in excess of 10uA.  Further improvements would be achieved with thinner dielectrics, approaching the target dimensions mentioned above.

Parallel CNTs in production fabrication will ultimately be used – the pertinent fabrication metric will be “the number of CNTs per micron”.  For example, a CNT pitch of 4nm would be quoted as “250 CNTs/um”.
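The pitch-to-density conversion, and what it implies for drive current per micron of device width, is simple arithmetic. The sketch below combines the 4nm-pitch example with the ~10uA single-CNT current measured on the relaxed-geometry test vehicle above, so the resulting mA/um figure is only an illustrative estimate, not a reported device result.

```python
def cnts_per_um(pitch_nm):
    """Number of parallel CNTs per micron of device width for a given pitch."""
    return 1000.0 / pitch_nm

pitch_nm = 4.0          # example pitch from the text
ids_per_cnt_ua = 10.0   # ~10 uA per CNT from the quartz-substrate test vehicle

density = cnts_per_um(pitch_nm)                      # 250 CNTs/um
drive_ma_per_um = density * ids_per_cnt_ua / 1000.0  # convert uA to mA

print(f"{density:.0f} CNTs/um -> ~{drive_ma_per_um:.1f} mA/um (illustrative)")
```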

Challenges

There are certainly challenges to address when planning for CNT production (to mention but a few):

  • regular/uniform CNT deposition, with exceptionally clean surface for dielectric nucleation
  • need to minimize the carrier “trap density” within the gate dielectric stack
  • optimum S/D contact potential materials engineering
  • device modeling for design

The last challenge above is especially noteworthy, as current compact device models for field-effect transistors will definitely not suffice.  The CNT gate oxide topology is drastically different from that of a planar or FinFET silicon channel.  As the gate-to-channel electric field is radial in nature, there is not a simple relation for the “effective gate oxide”, as with a planar device.

Further, the S/D extensions require unique Rs and Rd models.  Also, the CNT gate oxide is thicker than the CNT diameter, resulting in considerable fringing fields from the gate to the S/D extensions and to the (small pitch separated) parallel CNTs.  Developing suitable compact models for CNT-based designs is an ongoing effort.

Parenthetically, a CNT “surrounding gate” oxide – similar to the gate-all-around nanosheet – would be an improvement over the deposited top gate oxide, but difficult to manufacture.

TSMC is clearly investing significant R&D resources, in preparation for the “inevitable” post-silicon device technology introduction.  The results on CNT fabrication and electrical characterization demonstrate considerable potential for this device alternative.

-chipguy

References

[1]  Pitner, G., et al, “Sub-0.5nm Interfacial Dielectric Enables Superior Electrostatics:  65mV/dec Top-Gated Carbon Nanotube FETs at 15nm Gate Length”, IEDM 2020.


Advanced Process Development is Much More than just Litho
by Tom Dillinger on 12-16-2020 at 10:00 am

The vast majority of the attention given to the introduction of each new advanced process node focuses on lithographic updates.  The common metrics quoted are the transistors per mm**2 or the (high-density) SRAM bit cell area.  Alternatively, detailed decomposition analysis may be applied using transmission electron microscopy (TEM) on a lamella sample, to measure fin pitch, gate pitch, and (first-level) metal pitch.

With the recent transition of the critical dimension layers from 193i to extreme ultraviolet (EUV) exposure, the focus on litho is understandable.  Yet, process development and qualification encompasses many more facets of materials engineering to achieve robust manufacturability, so that the full complement of product goals can be achieved.  Specifically, process development engineers are faced with increasingly stringent reliability targets, while concurrently achieving performance and power dissipation improvements.

At the recent IEDM conference, TSMC gave a technical presentation highlighting the development focus that enabled the N5 process node to achieve (risk production) qualification.  This article summarizes the highlights of that presentation. [1]

An earlier SemiWiki article introduced the litho and power/performance features of N5. [2]  One of the significant materials differences in N5 is the introduction of a “high mobility” device channel, or HMC.  As described in [2], the improved carrier mobility in N5 is achieved by the introduction of additional strain on the device channel region.  (Although TSMC did not provide technical details, the pFET hole mobility is also likely improved by the introduction of a moderate percentage of Germanium into the Silicon channel region, or Si(1-x)Ge(x).)

Additionally, the optimized N5 process node incorporates an optimized high-K metal-gate (HKMG) dielectric stack between gate and channel, resulting in a stronger electric field.

A very significant facet of this “bandgap engineering” for carrier mobility and the gate oxide stack materials selection is to ensure that reliability targets are satisfied.  Several of the N5 reliability qualification results are illustrated below.

TSMC highlighted the following reliability measures from the N5 qualification test vehicle:

  • bias temperature instability (BTI)
      • both NBTI for pFETs and PBTI for nFETs, manifesting in a performance degradation over time from a device Vt shift (increasing in absolute value) due to trapped oxide charge
      • may also result in a degradation of VDDmin for SRAM operation
  • hot carrier injection (HCI)
      • an asymmetric injection of charge into the gate oxide near the drain end of the device (operating in saturation), resulting in degraded carrier mobility
  • time-dependent gate oxide dielectric breakdown (TDDB)

Note that the N5 node is targeted to satisfy both high-performance and mobile (low-power) product requirements.  As a result, both performance degradation and maintaining an aggressive SRAM VDDmin are important long-term reliability criteria.

TDDB

The figure above illustrates that the TDDB lifetime is maintained relative to node N7, even with the increased gate electric field.

Self-heating

The introduction of FinFET device geometries substantially altered the thermal resistance paths from the channel power dissipation to the ambient.  New “self-heating” analysis flows were employed to more accurately calculate local junction temperatures, often displayed as a “heat map”.  As might be expected with the aggressive dimensional scaling from N7 to N5, the self-heat temperature rise is greater in N5, as illustrated below.

Designers of HPC products need to collaborate with both their EDA partners for die thermal analysis tools and their product engineering team for accurate (on-die and system) thermal resistance modeling.  For the on-die model, both active and inactive structures strongly influence the thermal dispersion.

HCI

Hot carrier injection performance degradation for N7 and N5 are shown below, for nFETs and pFETs.

Note that HCI is strongly temperature-dependent, necessitating accurate self-heat analysis.

BTI

The pMOS NBTI reliability analysis results are illustrated below, with the related ring oscillator performance impact.

In both cases, reliability analysis demonstrates improved BTI characteristics of N5 relative to N7.

SRAM VDDmin

The SRAM minimum operating voltage (VDDmin) is a key parameter for low-power designs, especially with the increasing demand for local memory storage.  Two factors that contribute to the minimum SRAM operating voltage (with sufficient read and write margins) are:

  • the BTI device shift, as shown above
  • the statistical process variation in the device Vt, as shown below (normalized to Vt_mean in N7 and N5)

Based on these two individual results, the SRAM reliability data after HTOL stress shows improved VDDmin impact for N5 versus N7.

Interconnect

TSMC also briefly described the N5 process engineering emphasis on (Mx, low-level metal) interconnect reliability optimization.  With an improved damascene trench liner and a “Cu reflow” step, the scaling of the Mx pitch – by ~30% in N5 using EUV – did not adversely impact electromigration fails, nor line-to-line dielectric breakdown.  The figure below illustrates the line-to-line (and via) cumulative breakdown reliability fail data for N5 compared to N7 – N5 tolerates the higher electric field with the scaled Mx pitch.

Summary

The majority of the coverage associated with the introduction of TSMC’s N5 process node related to the broad adoption of EUV lithography to replace multipatterning for the most critical layers, enabling aggressive area scaling.  Yet, process engineers must also optimize materials selection and many individual fabrication steps, to achieve reliability targets.  TSMC recently presented how these reliability measures for N5 are superior to prior nodes.

-chipguy

References

[1]  Liu, J.C., et al, “A Reliability Enhanced 5nm CMOS Technology Featuring 5th Generation FinFET with Fully-Developed EUV and High Mobility Channel for Mobile SoC and High Performance Computing Application”, IEDM 2020.

[2]  https://semiwiki.com/semiconductor-manufacturers/tsmc/282339-tsmc-unveils-details-of-5nm-cmos-production-technology-platform-featuring-euv-and-high-mobility-channel-finfets-at-iedm2019/

 

Related Lithography Posts


Design Considerations for 3DICs
by Tom Dillinger on 12-14-2020 at 6:00 am

The introduction of heterogeneous 3DIC packaging technology offers the opportunity for significant increases in circuit density and performance, with corresponding reductions in package footprint.  Yet, the implementation of a complex 3DIC product requires a considerable investment in methodology development for all facets of the design:

  • system architecture partitioning (among die)
  • I/O assignments for all die, both for signals and the power distribution network (PDN)
  • die floorplanning, driven by the I/O assignments
  • probe card design (with potential reuse between individual die and 3DIC assembly)
  • critical timing path analysis, assessing the tradeoffs between timing paths on-die versus the implementation of vertical paths between stacked die
  • IR drop analysis, a key facet of 3DIC planning due to the power delivery to stacked die using through-silicon or through-dielectric vias
  • a DFT architecture, suitable for 3DIC testing using individual known good die (KGD)
  • reliability analysis of the composite multi-die thermal package model
  • LVS physical verification of the multi-die connectivity model

Whereas 2.5D IC packaging technology has pursued “chiplet-based” die functionality (and potential electrical interface connectivity standards), the complexity of 3DIC implementations requires early and extensive investment in the design and analysis flows listed above – a higher risk than 2.5D IC implementations, for sure, but with a potentially greater reward.

At the recent IEDM 2020 conference, TSMC presented an enlightening paper describing their recent efforts to tackle these 3DIC implementation tradeoffs, using a very interesting testchip implementation.  This article summarizes the highlights of their presentation. [1]

SoIC Packaging Technology

Prior to IEDM, TSMC presented their 3DIC package offering in detail at their Technology Symposium – known as “System on Integrated Chip”, or SoIC.

A (low-temperature) die-to-die bonding technology provides the electrical connectivity and physical attach between die.  The figure below depicts available die attach options – i.e., face-to-face, face-to-back, and a complex combination including side-to-side assembly potentially integrating other die stacks.

For the face-to-face orientation, the backside of the top die receives the signal and PDN redistribution layers.  Alternatively, a third die on the top of the SoIC assembly may be used to implement the signal and PDN redistribution layers to package bumps – a design testcase from TSMC using the triple-stack will be described shortly.

A through-silicon via (TSV) in die #2 provides electrical connectivity for signals and power to die #1.  A through-dielectric via (TDV) is used for connectivity between the package and die #1 in the volumetric region outside of the smaller die #2.

Planning of the power delivery to the SoIC die requires consideration of several factors:

  • estimated power of each die (especially where die #1 is a high-performance, high-power processing unit)
  • TSV/TDV current density limits
  • distinct power domains associated with each die

The figure below highlights the design option of “number of TSVs per power/ground bump”.  To reduce IR drop and observe current density limits through a TSV, an array of TSVs may be appropriate – as an example, up to 8 TSVs are shown in the figure.  (Examples from both FF and SS corners are shown.)

The tradeoff of using multiple, arrayed TSVs is the impact on interconnect density.
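A minimal sketch of the kind of sizing trade-off described above: given a per-bump current, a per-TSV current-density limit, and a per-TSV resistance, how many parallel TSVs are needed to meet both the current-density and IR-drop constraints. All numeric values here are hypothetical placeholders for illustration, not figures from the TSMC paper.

```python
import math

def tsvs_required(bump_current_a, max_current_per_tsv_a,
                  tsv_resistance_ohm, max_ir_drop_v):
    """Smallest TSV array size that meets both current-density and IR-drop limits."""
    n_for_current = math.ceil(bump_current_a / max_current_per_tsv_a)
    # Parallel TSVs divide the resistance, so IR drop = I * R / n  ->  n >= I * R / V_max
    n_for_ir = math.ceil(bump_current_a * tsv_resistance_ohm / max_ir_drop_v)
    return max(n_for_current, n_for_ir)

# Hypothetical numbers for illustration only
n = tsvs_required(bump_current_a=0.4, max_current_per_tsv_a=0.1,
                  tsv_resistance_ohm=0.05, max_ir_drop_v=0.005)
print(f"TSVs needed per power/ground bump: {n}")
```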

As an illustration, TSMC pursued a unique SoIC implementation – a quad-core ARM A72 processor (die #1) where the L2$ cache arrays commonly integrated with each core have been re-allocated to die #2.  The CPU die in process node N5 maintains an L3$ array, while the SRAM die in process node N7 contains the full set of L2$ arrays.  A third die on top of die #2 provides the redistribution layers.  A total of 2700 connections are present between CPU die #1 and the L2$ arrays in die #2.

This is an example of how SoIC technology could have a major impact on system architectures, where a (large) cache memory is connected vertically to a core, rather than integrated laterally on a monolithic die.

PDN Planning

A key effort in the development of an SoIC is the concurrent engineering related to the assignment of bump, pad, and TSV/TDV locations throughout, for both signals and the PDN.

The figures above highlight the series of planning steps to develop the TSV configuration for the PDN – a face-to-face die attach configuration is used as an example.  The original “dummy” bond pads between die (for mechanical stability) are replaced with the signal and PDN TDV and TSV arrays.  (TSMC also pursued the goal of re-using the probe card, between die #1 testing and the final SoIC testing – that goal influenced the assignment of pad and TSV locations.)

The TSV implementations for the CPU die and SRAM die also need to be carefully chosen so as to meet IR goals, without adversely impacting overall die interconnect density.

LVS

Briefly, TSMC also highlighted the (multi-phase) LVS connectivity verification methodology, and unique DFT architecture selected for this SoIC test vehicle, as depicted below.

DFT

Another major consideration is the DFT architecture for the SoIC, and how connectivity testing will be accomplished using cross-die scan, as illustrated below.

 

TSMC demonstrated that the resulting (N5 + N7) SoIC design achieved a 15% performance gain (with suitable L2$ and L3$ hit rate and latency assumptions), leveraging a significant reduction in point-to-point distance afforded by the vertical connectivity between die.  The package areal footprint for the SoIC is reduced by ~50% from a monolithic 2D implementation.

3D SoIC packaging technology will offer system architects unique opportunities to pursue design partitioning across vertical die configurations.  The density and electrical characteristics of the vertical bond connections may offer improved performance over lateral (monolithic or 2.5D chiplet-based) interconnects.  (The additional power dissipation of “lite I/O” driver and receiver cells between die versus on-chip signal buffering is typically small.)

The tradeoff is the investment required to develop the SoIC die floorplans for TSV and TDV vias to provide the requisite signal count and low IR drop PDN.  Although 2.5D chiplet-based package offerings have been aggressively adopted, the performance and footprint advantages of a 3DIC are rather compelling.  The TSMC test vehicle demonstrated at IEDM will no doubt generate considerable interest.

-chipguy

References

[1]  Cheng, Y.-K., et al., “Next-Generation Design and Technology Co-optimization (DTCO) of System on Integrated Chip (SoIC) for Mobile and HPC Applications”, IEDM 2020.

 


How Intel Stumbled: A Perspective from the Trenches
by Daniel Nenni on 12-07-2020 at 6:00 am

Bloomberg did an interview with my favorite semiconductor analyst Stacy Rasgon on “How the Number One U.S. Semiconductor Company Stumbled” that I found interesting. Coupled with the Q&A Bob Swan did at the Credit Suisse Annual Technology Conference I thought it would be good content for a viral blog.

Stacy Rasgon and Bob Swan

Stacy Rasgon is an interesting guy and a lot like me when it comes to offering blunt questions, observations, and opinions that sometimes throw people off. As a result, Stacy is not always the first to ask questions during investor calls, and sometimes he is not called on at all, which was the case for the most recent Intel call.

Stacy is the Managing Director and Senior Analyst, US Semiconductors, for AB Bernstein here in California. Interestingly, Stacy has a PhD in Chemical Engineering from MIT, not the usual degree for a sell side analyst. Why semiconductors? Stacy did a co-op at IBM TJ Watson Research Center during his post graduate studies and that hooked him.

I thought it was funny back when Brian Krzanich (BK) was CEO of Intel. BK has a Bachelor’s degree in Chemistry from San Jose State University, and he was answering questions from an analyst with a PhD from MIT. The current Intel CEO Bob Swan is a career CFO with an MBA, so maybe that explains the communication issues.

In the Bloomberg interview the focus was on the delays in Intel processes starting with 14nm, 10nm, and now 7nm. Unfortunately they missed the point. In the history of the semiconductor industry, leading edge processes were more like wine where, in the words of the great Orson Welles, “We will sell no wine before its time.” Guided by Moore’s Law, Intel successfully drove down the bumpy process road until FinFETs came along.

The first FinFET process was Intel 22nm, which was the best kept secret in semiconductor history. We don’t know if it was early or late since it was not discussed before it arrived. 14nm followed, which was late due to defect density/yield problems. We talked about that on SemiWiki quite a bit, and I had a bit of a squabble with BK at a developer conference. I knew 14nm was not yielding and he said it was, only to retract that comment at the next investor call. Intel 10nm is probably the most tardy process in the history of Intel, and now 7nm is in question as well.

The foundries historically have been 1-2 nodes behind Intel, so they got a relative pass on being late with new processes up until 10nm, when TSMC technically caught Intel 14nm.

Bottom line: Leading edge processes use new technology and materials which challenges yield from many different directions. This is a very complex business so it’s extremely difficult to predict schedules because “you never know until you know”. So, try as one might, abiding by Moore’s Law in the FinFET era is a fool’s errand, absolutely.

The other major Intel disruption is the TSMC / Apple partnership. Apple requires a new process each year, which started at 20nm (iPhone 6). As a result TSMC now does half steps with new technologies. At 20nm TSMC introduced double patterning, then added FinFETs at 16nm. At 7nm TSMC later introduced limited EUV and called it 7nm+. At 5nm TSMC implemented full EUV (half steps).

This is a serious semiconductor manufacturing paradigm shift that I call “The Apple Effect”: TSMC must have a new process ready for the iProduct launch every year without fail. This means the process must be frozen at the end of Q4 for production starting in the following Q2. The net result is a serious amount of yield learning, which results in shorter process ramps and superior yield.

The other interesting point is that during Bob Swan’s Credit Suisse interview he mentioned the word IDM 33 times, emphasizing the IDM advantage over being fabless. Unfortunately this position is a bit outdated. Long gone are the days when fabless companies tossed designs over the foundry wall to be manufactured.

TSMC, for example, has a massive ecosystem of partners and customers who together spend trillions of dollars on research and development for the greater good of the fabless semiconductor ecosystem. There is also an inner circle of partners and customers that TSMC intimately collaborates with on new process development and deployment. This includes Apple of course, AMD, Arm, Applied Materials, ASML, Cadence, and Synopsys just to name a few.

Bottom line: The IDM underground silo approach to semiconductor design and manufacture is outdated. It’s all about the ecosystem and Intel will learn this first hand as they increasingly outsource to TSMC in the coming process nodes.

 

 


No, Intel and Samsung are not passing TSMC
by Scotten Jones on 12-02-2020 at 6:00 am

Seeking Alpha just published an article about Intel and Samsung passing TSMC for process leadership. The Intel part seems to be a theme with them; they have talked in the past about how Intel does bigger density improvements with each generation than the foundries, but they forget that the foundries are doing 5 nodes in the time it takes Intel to do 3. They also make a big deal about Horizontal Nanosheets (HNS) versus FinFETs, and yes, that is impressive, but at the end of the day what you deliver for power, performance and area (PPA) is what really matters.
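The cadence argument is easy to quantify: density advantages compound per node, so doing more nodes in the same window can matter more than the size of each jump. The per-node scaling factors below are round, illustrative assumptions chosen only to show the compounding effect, not measured Intel or foundry figures.

```python
# Illustrative compounding of density improvements over the same time window.
intel_nodes, intel_jump = 3, 2.4       # fewer nodes, bigger assumed jump per node
foundry_nodes, foundry_jump = 5, 1.8   # more nodes, smaller assumed jump per node

print(f"Intel cumulative density gain:   {intel_jump ** intel_nodes:.1f}x")     # ~13.8x
print(f"Foundry cumulative density gain: {foundry_jump ** foundry_nodes:.1f}x") # ~18.9x
```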

I have written about this before here.

In this article I will briefly review where each company is currently and where I expect them to be over the next five years. I do not want to go into too much detail in this article because I will be presenting on leading edge logic at the ISS conference in January and covering this in more depth then.

Intel

Figure 1 illustrates Intel’s node introductions starting at 45nm. After many nodes on a 2-year cadence, Intel slipped to 3 years at 14nm and 5 years at 10nm. 10nm has been particularly bad, with yield and performance issues; even today it is hard to get 10nm parts. Intel has recently announced 10+, now known as 10SF (SuperFin). The SuperFin provides a 17-18% performance improvement, like a full node. There is also a rumor that Intel is using EUV for M0 and M1, although I have not confirmed this. M0 and M1 on the original 10nm process are the most complex metal patterning scheme I have ever seen, so this might make sense for yield reasons.

Figure 1. Intel Node Introductions.

Intel’s 7nm was scheduled for 2021 and was supposed to get Intel back on track. At 7nm they are doing a smaller 2x density improvement and the implementation of EUV was supposed to solve their yield issues, but the process is now delayed until 2022.

Seeking Alpha makes an argument that Intel will be back on a 2-year cadence for their 5nm process. I am not sure I believe this given their 14nm, 10nm and 7nm history, but even if they are, I don’t think this puts them in the lead, as I will describe below.

14nm/16nm

14nm was Intel’s second generation FinFET and they took a big jump in density. Intel’s 14nm process came out in 2014, Samsung’s 14nm process also came out in 2014 and TSMC’s 16nm process came out in 2015. Intel’s 14nm process was significantly denser than Samsung or TSMC’s 14nm/16nm processes.

Foundry 10nm

In 2016 both foundries came out with 10nm processes and they both passed Intel for the process density lead.

Foundry 7nm/Intel 10nm

In 2017 TSMC released their 7nm process, moving further ahead of Intel, and in 2018 Samsung released their 7nm process, also moving further ahead of Intel. In 2019 Intel finally started shipping 10nm, and the Intel 10nm process was slightly denser than TSMC’s or Samsung’s 7nm, but in 2018 TSMC’s 7+ process (half node) and in 2019 Samsung’s 6nm (half node) passed Intel 10nm density. Samsung’s 7nm is also notable as the industry’s first process with EUV, although TSMC soon had EUV running on their 7+ process and is in my opinion the EUV leader today; in fact, TSMC claims to have half of all EUV systems in the world currently.

Foundry 5nm

In 2019 the foundries began risk starts on 5nm, pulling further ahead of Intel. TSMC 5nm took a much bigger density jump than Samsung’s 5nm, and TSMC opened a lead over both Samsung and Intel. TSMC 5nm also introduced a high-mobility channel. 5nm has ramped throughout 2020 and utilizes EUV for more layers than 7nm.

Foundry 3nm/Intel 7nm

Risk starts for foundry 3nm are due in 2021, and TSMC will pull further ahead of both Intel and Samsung. Samsung will introduce the industry’s first HNS, and that is a great accomplishment that positions them well for the future, but we expect TSMC’s 3nm process to be much denser with better power and performance.

Intel’s 7nm process is currently expected around 2022 and is slated to be their first EUV based process (although there may be some EUV use on 10nm as discussed above). Based on their announced density improvements and the announced density improvements for TSMC and Samsung we expect Intel 7nm and Samsung 3nm to have similar densities but TSMC will be much denser than either company.

Foundry 2nm/Intel 5nm

If Intel gets back onto a two-year node interval, then Intel 5nm using HNS will be due in 2024. I am not sure I believe that, but for the sake of argument I will go with it. There is also a question as to whether Intel even does 5nm; they are looking at outsourcing, and depending on how that goes they may not go beyond 7nm and may use foundries instead.

TSMC’s 2nm node is now expected to be available for risk starts in 2023 and production in 2024. TSMC has said it will be a full node, and even with modest density improvements it will be denser than Intel’s 5nm process based on announced density improvements; Intel will likely pass Samsung but not TSMC. This would be Intel’s and TSMC’s first HNS. Because it would be Samsung’s second-generation HNS, maybe they will take a bigger density jump, but I don’t see them catching TSMC, who is taking bigger jumps at both 5nm and 3nm.

Conclusion

The bottom line is that Intel may be doing bigger density jumps at each node than the foundries, but from the 14nm nodes in 2014 through the Intel 7nm node expected in 2022, the foundries will have done 5 full nodes while Intel has done 3, and TSMC in particular has opened up a big process lead.

Also Read:

Leading Edge Foundry Wafer Prices

VLSI Symposium 2020 – Imec Monolithic CFET

SEMICON West – Applied Materials Selective Gap Fill Announcement


Webinar: 5 Reasons Why Others are Adopting Hybrid Cloud and EDA Should Too!
by Daniel Nenni on 11-27-2020 at 6:00 am

With the complexity of transistors at an all-time high and foundry rule decks growing, fabless companies consistently find themselves in a game of catch-up. Semiconductor designs require additional compute resources to maintain speed and quality of development. But deploying new infrastructure at this speed is a tall order for IT professionals tasked with supporting development and verification teams. When these resources can’t keep up, engineers become compute constrained rather than compute empowered.

The semiconductor industry is not alone in the struggle to adopt new technologies that can accelerate the pace of science and engineering breakthroughs. For that reason, cloud solutions are increasingly being implemented to empower R&D in a way never before seen. Breakthroughs in aerospace design, new drugs and vaccines, alternative energy solutions and much more are now being realized on cloud or hybrid cloud infrastructures. Because of security and IP concerns, EDA companies have primarily maintained on-premise data centers for their compute needs. However, that preference is changing as manufacturers such as TSMC endorse the cloud. The industry has also seen a rise in startups that do not have infrastructure of their own and are turning to the cloud to compete.

So let’s look at the main benefits of expanding EDA to a hybrid cloud environment. Join Rescale’s webinar to further explore how hybrid cloud will drive new levels of performance and efficiency in the semiconductor industry. Register here.

Security

As companies look to move workloads to the cloud, the primary area of focus is how to protect sensitive information and IP. Recent research by Cloud Vision states that two-thirds of companies consider this the main roadblock to adopting cloud. In light of this, major cloud providers have put substantial focus and investment into reducing risks and safeguarding datacenters from any breach. As you can imagine, with companies like AWS, Microsoft and Google, no expense is spared to ensure they deliver a secure environment. As proof of these security measures, public cloud will experience 60% fewer security incidents compared to typical data centers this year. For organizations that require full-stack compliance and security, platforms such as Rescale cover end-to-end workflows across the hardware and software layers with the highest of industry standards, even going as far as obtaining industry-leading certifications to meet the strictest compliance requirements.

Agility

Never in our history has technological agility been more important than in 2020. Facing a pandemic was the ultimate test of our systems, and most companies found themselves unprepared. Being cut off from typical on-premise infrastructure caused delays across the industry. VPNs became overwhelmed as engineers struggled to access the data and resources needed to continue development and run verification. The need to enable remote teams is not the only consideration. Systems need the flexibility to scale with project phases and production deadlines. For these reasons, hybrid cloud far outperforms traditional infrastructures. It’s accessible anywhere you can find a wifi connection, and compute resources scale as needed. The Rescale platform also offers remote desktop solutions and a wide variety of admin controls over budgets and permissions to keep operations running smoothly. With the stability and options of a multi-cloud infrastructure and a variety of core types available on the platform, users can match the ideal core type to their workload and be confident in the stability of the infrastructure, with a service level agreement that their jobs will run.

Impact and Productivity

Enabling engineers to focus on design means better products at a quicker pace. IT leaders need to look at the ways in which engineers are distracted or slowed from their core responsibilities. Companies spend top dollar to secure engineering expertise and talent, and those engineers should be working on the portion of the business where they will make the biggest impact. Distractions can come in the form of queues, slow workflows, license issues and more. Rescale looks to solve these issues with an intelligent control plane and full-stack approach. Having an intelligent control plane for both local and cloud hardware allows R&D to divert workloads to the best infrastructure based on performance and cost. A simple user interface with robust automation allows them to easily set up runs without relying on IT. And if they do come across a challenge, the Rescale support team is stacked with HPC and simulation experts that average a 15-minute response time. All of this combines to allow engineers to be hyper-focused on what they do best.

Speed to Market

A major component of gaining competitive advantage is to be first to market with a new product. This allows you to gain brand recognition, build customer loyalty and secure market share before competitors are even in play. A hybrid cloud approach enables semiconductor companies to dial up the number of iterations and accelerate speed to answer. Additionally, verification is expedited with the virtually unlimited resources available. When coupled with automated workflows, templates and continuous optimization from the Rescale platform, companies can make substantial improvements.

pSemi used Rescale to substantially speed up their development process: “We were able to use Rescale’s cloud platform to highly parallelize our simulations and bring the simulation time down from 7 days to 15 hours. We’ve demonstrated a 10x speed improvement on numerous occasions in our EM simulations using Rescale…”

The next wave of semiconductor advancements will be powered by hybrid cloud. The foundries have already started to adopt the technology. It is poised to revolutionize the industry by empowering engineers like never before and reaching new levels of performance and efficiency. Join Semiwiki and Rescale as we take a deeper look into the benefits of hybrid cloud and Rescale’s intelligent control plane approach. Register now!

About Rescale
Rescale is the leader in enterprise big compute in the cloud. Rescale empowers the world’s transformative executives, IT leaders, engineers, and scientists to securely manage product innovation to be first to market. Rescale’s hybrid and multi-cloud platform, built on the most powerful high-performance computing infrastructure, seamlessly matches software applications with the best cloud or on-premise architecture to run complex data processing and simulations.


China Semiconductor Bond Bust!

China Semiconductor Bond Bust!
by Robert Maire on 11-25-2020 at 10:00 am

China Semiconductor Bond Bust

– Tsinghua $198M Bond Bust
– Good for memory: Samsung, Micron, LG, Toshiba
– Not good for chip equipment
– Could China Credit Crunch hit more than foundry embargo?
– Damage to China memory positive for other memory makers
– Not good for chip equip if customers can’t get money

Tsinghua Unigroup, China’s most prestigious champion of the effort to become dominant in semiconductors, suffered the embarrassment of defaulting on $198M in bonds that were due November 17th. While this is seemingly a drop in the bucket relative to its overall debt, and while the company was in the midst of negotiating its way out, the default still sent shivers through China’s debt market and sent the bonds plummeting.

Tsinghua is not the only state-backed Chinese firm with bond troubles, which makes the concerns all the more worrisome.

Chinese tech group joins list of companies to default on bond issue

NAND in Wuhan and DRAM in Chongqing
Tsinghua already has a NAND factory in otherwise famous Wuhan and is planning a DRAM fab in Chongqing. The company is a spinoff of the prestigious Tsinghua University in Beijing and is perhaps the shining star of China’s semiconductor aspirations. Though SMIC has been around a long time, it seemed Tsinghua had more potential.

Tsinghua Unigroup default tests China’s chipmaking ambitions

Good for non-Chinese memory makers like Samsung & Micron, LG & Toshiba

Being in the memory market and facing the specter of China entering your market, after watching China annihilate the LED and solar cell markets, was likely quite chilling. China obviously doesn’t care about profitability (at least not in the beginning) and could easily trash pricing and destroy the commodity memory market just as it did the commodity LED and solar markets before it.

If I were in Boise, I might have a little schadenfreude about the Chinese bond market right now, not unlike how TSMC feels about SMIC.

Anything that slows down China’s aspirations in the memory market is likely positive for other competitors.

Equipment vendors likely between a rock and a hard place in China
Checking accounts receivable.

Semiconductor equipment makers may not be as happy about the bond default and subsequent credit downgrades.

We would bet a lot of money that the equipment makers are owed a whole lot more than $198M in equipment purchases and are looking at many times that in future orders and business. So their exposure far exceeds that of the bondholders.

Unlike the bondholders, equipment makers don’t want to stop shipping to their biggest, best and fastest-growing market: China.

If equipment makers stop shipping due to credit risk, downgrades or fear of not getting paid, then Tsinghua will avoid doing business with them at all costs (it’s not like they aren’t already trying to avoid American equipment, given what happened to their cousins at SMIC).

Equipment vendors have to keep shipping and hope that the Chinese government will be the backstop, or that the company figures it out.

We can only imagine that some CFOs are checking their Tsinghua-related accounts receivable exposure.

Credit is all about faith
Too big/important to fail?

Lest anyone forget, the credit market is all about faith. Faith in getting paid back on the loan. The 2008/2009 market collapse was a collapse in the credit market. Faith in ever getting repaid went to zero.

The semiconductor industry is highly capital-intensive and its profitability is fickle and cyclical. In addition, the would-be Chinese chip makers are likely finding out that the semiconductor market is much, much harder than the LED and solar markets, which were relative pushovers.

The cost of an LED “fab” and the complexity of its process are not even a rounding error compared to making a 128-layer NAND chip.

Getting to yield, meaning getting to revenue, let alone profitability, will likely take a lot longer and be a lot harder than many in China anticipated after the cakewalk in LED and solar.

This means that many Chinese firms could have miscalculated when they would be able to pay back debt and could find themselves in a cash crunch, needing to extend credit terms out by more months or years.

We don’t know what caused Tsinghua’s issue, but breaking the faith was not good: their bonds fell all the way down to 68 cents on the dollar at one point. (We don’t think equipment vendors would like to take 68 cents on the dollar for what is owed to them.)

In the end, Tsinghua, like some US financial firms in 2008/9, is too big and too important to fail, and the Chinese government will step in at some point. The question is when, how, and who gets hurt in the collateral damage.

Could the US administer a “Coup de Grace”?
Part of the outgoing, “Scorched earth” policy

It is abundantly clear that the outgoing administration has embarked on a scorched-earth policy for various reasons. Much of the scorched earth has been directed at international relations, such as potentially attacking Iran, recalling troops and trying to make good on other campaign promises. Trade with China has been talked about as one such target.

The SMIC embargo, announced shortly before the election, certainly was effective at hurting China’s chip ambitions. Could the embargo be extended to memory, which certainly has potential military “dual use” applications, as a parting shot on the way out the door? Or maybe a blanket embargo? If there were a time to hurt China, the lame-duck session is it.

The stocks

Almost all semi stocks have been super hot as demand continues to be strong. The Tsinghua news is mildly positive for other memory makers, as it will likely weaken and/or slow China’s memory ambitions and its ability to crush memory pricing.

It is likely not all that negative for equipment companies, as they have survived even the SMIC embargo without so much as a scratch.

If anything, it may be a hidden positive as it will likely moderate memory spending which drives the notorious boom bust cycles in memory.

TSMC continues to be a huge winner. Micron seems in fine shape as well and would be happy to see Tsinghua go the way of Jinhua, even though we don’t think that will happen.

Equipment companies may see a hiccup or two in revenue recognition, but likely not more than that unless things really go off the rails, such as the US upping the ante. While that is a possibility, we think the administration is too preoccupied with other fights and has too little time left on the clock.

Also Read:

Is Apple the Most Valuable Semiconductor Company in the World?

2021 will be the year of DRAM!

Post Election Fallout-Let the Chips Fall / Rise Where They May