
Chip Channel Check: Semi Shortage Spreading Beyond Autos, Will Impact Earnings

by Robert Maire on 03-07-2021 at 10:00 am


– Semiconductor shortage is like toilet paper shortage in early Covid
– Panic buying, hoarding, double ordering will cause spike
– Could cause a year+ of dislocation in chip makers before ending
– Investors, Govt & Mgmt will get a wake up call from earnings hit

The auto industry is just the prominent tip of the chip-crunch iceberg. We believe the chip shortage is spreading across other industries

The automotive industry is just a very prominent, in-your-face example of the semiconductor industry problem because it involves the highest financial impact ratio: a 25-cent chip can stop the revenue associated with a $50,000 car.

Wait until Ford reports Q1 earnings with a significant revenue and earnings shortfall due to the production halts, which they will blame on those tech guys in California’s Silicon Valley.

From an investment perspective we think we will see similar revenue and earnings impact across a number of industries…not just tech related.

In the past, delays in laptops and servers were relatively common. Last year I ordered a laptop that was delayed two months due to “production problems” (AKA chip shortage).

We would expect chip shortages to hit telecommunications equipment makers, everything from 5G gear to routers. Video cards have always been in short supply due to chip shortages. It could roll downhill to consumer goods from TVs to washers (don’t laugh, large appliances have already been in short supply). We would bet that earnings season will see a whole bunch of diverse companies missing numbers due to component shortages. It’s just hard to predict who because everything has a chip in it.

Being a Big BFF with a long history helps

In this type of situation it pays to be a long time, big, close customer to the chip makers, like Apple. They are so tight with TSMC there is no light between them. You can rest assured that Apple will get all the chips it needs, both expensive and cheap from TSMC and they will always be first in line. Apple is TSMC’s number one customer so it will be no other way.

On the other end of the spectrum you likely have auto makers who are notoriously tough with their suppliers buying 25 cent chips at low margins. What are the odds of their orders being sped up? Zero.

Auto makers have only themselves to blame, as they cut orders early in Covid and shouldn’t be shocked that they had to get back in line, at the end of the line, to re-order. It’s called supply chain management.

Tom Caulfield, CEO of GlobalFoundries, said that his phone is ringing off the hook with auto manufacturers asking for wafers and that he is “everybody’s new best friend”.

Broadcom’s CEO, Hock Tan, said on their call last night that Broadcom is pretty much booked up for the year and he doesn’t know when the shortage will subside. Broadcom is a big customer of TSMC and it doesn’t sound like they are getting extra wafer capacity.

Panic buying, hoarding & double ordering. The toilet paper shelves are empty.

Perhaps the biggest physical evidence of the panic Covid caused was the shortage of toilet paper in supermarkets early in the pandemic.

Consumers probably thought they were going to be locked in their homes for months, or that paper factories would be shut down for months, because it seemed like a year’s worth of TP was sold in days.

We think we are now seeing similar evidence of panic buying of chips, double ordering, and stocking up.

We think there has already been hoarding by Chinese customers for well over a year who were concerned, rightfully so, about being cut off. Now add to that, hoarding by more customers currently experiencing supply problems. If I were in the auto industry supply chain I would be double and triple ordering and stocking up lest I lose my job.

Coming down off the “sugar high” may be problematic- Is this the high point in the cycle?

Right now chip makers are everyone’s best friends and popular on speed dial, but the hangover from the current party could create a headache. As we know from a very long history, the chip industry is cyclical, and those cycles are driven by supply and demand and therefore pricing. Right now supply is short and demand is high, maybe artificially high due to hoarding and double ordering, and maybe supply is tight in the short term due to the Texas power problem and other issues. It seems a bit like a “perfect storm”.

A year or two from now chip makers could be swiped left and ghosted by those currently in desperate need of a chip fix. Poetic justice would be for chip equipment to suffer shortages. Not likely.

It would be very funny cosmic karma if chip equipment companies were impacted by the current chip shortages. After all, semiconductor equipment happens to have a lot of semiconductors in it, and the supply chain goes directly through China. The equipment controllers are basically souped-up PCs, and deposition and etch tools have a myriad of sub-system suppliers: robots, RF generators, gas boxes, etc. An EUV lithography tool is such a Rube Goldberg machine it likely has hundreds of chips.

We don’t expect a problem from chip equipment makers, but it could happen. In general, most everybody in the chip industry understands and is on guard for supply issues, obviously unlike the auto industry.

Channel checks say it’s not just chips

From what we can tell the shortage issues seem to go beyond chips. Other components and discrete semiconductors are also short in some cases. However, this is likely due to panic buying and ordering from nervous customers and not systemic supply issues as in the mainstream chip industry.

Is the Panic worse than the Problem?

Much as with toilet paper, the problem is likely less severe than the issues caused by the surrounding panic. The semiconductor industry making the news is far from normal. If I made a $50 consumer good with chips in it, I might get freaked out when I hear Ford has to shut down factories because they can’t get chips.

The only good thing that has come out of this is that this long term issue has finally risen to the level where it has hit the White House and they are talking about the industry and doing something about it (which we have never seen before…)

Could the chip shortage hit economic growth and Covid recovery?

The dislocation in the chip industry does not come at a good time as we are looking at climbing out of the hole that Covid has put us in. Having car factories shut down and revenue and earnings hits at some companies certainly will not help the recovery.

It just creates more friction and resistance to the recovery. We think we could very easily see two to three quarters of direct impact on companies with some residual impact even further out. What remains to be seen is whether the lessons learned will actually be adopted or forgotten once it leaves our immediate memory, a year down the road.

The stocks
Chip companies in general are obviously doing very well due to near term demand. Equipment companies are also doing very well as capital spending is high and will remain high while chip companies business is so good.

After a yearlong or more strong run, it has been feeling like the semiconductor stocks want to roll over. We have had some days of stumbles. Valuation multiples are at all-time highs. Some suggest a “re-pricing”, but we saw a similar re-pricing at the last cyclical peak only to pull back.

2021 is shaping up to be a very good year as momentum seems strong for business with little probability of a downturn. But the stocks don’t always follow earnings step for step and the semi stocks have always turned before business turned.

The chip shortage will eventually end and the real question is what happens after?

Also Read:

Semiconductor Shortage – No Quick Fix – Years of neglect & financial hills to climb

“For Want of a Chip, the Auto Industry was Lost”

Will EUV take a Breather in 2021?


What’s Wrong with Car Connectivity

by Roger C. Lanctot on 03-07-2021 at 8:00 am


I have run into far too many clever automotive executives lately who seem to believe that “we” as an industry have solved the car connectivity challenge. Consumers love built-in car connections and that’s the end of the story – or so they believe.

Sadly, this is not true. Consumers surveyed by Strategy Analytics in China, North America, and Europe, do report increased interest in car connectivity and some willingness to pay for it – but resistance or, worse, apathy, and worse still, hostility remains.

Consumers don’t want to pay. They don’t want to be tracked. They may not want to share their data – that is still somewhat unclear. And they are aware that there is a hacking problem of some kind – ransomware anyone?

Consumer ambivalence regarding connectivity is a reflection of auto maker ambivalence. Car makers don’t want to pay either. Car makers don’t want to pay for the hardware. They don’t want to pay for the cost of transmitting data. They don’t even want to pay the taxes on direct 911 calls to emergency response centers.

This car maker ambivalence or outright resistance – which can be translated as car makers viewing a built-in connection on a car as a cost center – manifests in promotional and marketing messages that either omit or de-emphasize connectivity. The in-vehicle experience furthers this impression as there is no direct communication in the car reflecting the vehicle connection.

Even worse than this ambivalence is the car maker determined to “monetize” the in-vehicle connection – most likely at the expense of the customer. Let’s be clear, any data extraction from the car ought to contribute to the safe operation of the vehicle and ought not to occur at the customer’s expense. In fact, the data sharing customer ought to be compensated in some fashion for sharing his or her vehicle data.

Car makers go to a massive amount of trouble at millions of dollars expense to deliver in-vehicle wireless connections. The original justification was to support automatic crash notification to summon emergency responders in the event of a crash. Today, the focus has shifted to software updates, remote diagnostics, remote start, vehicle finders, and remote door lock/unlock.

But after all the expense and trouble, actually communicating the status of vehicle communications remains an afterthought. With so much concern regarding cybersecurity and hacking, one would expect the car to start with a splash screen message that the car is: “Connected and Secure.”

Not only that, the car should also announce, perhaps on the same screen: “User Data Protected by XYZ.” This message could include a link to vehicle settings in case the user wants to make a change.

Nearly every car sold in the U.S., and more than half of the cars shipped worldwide starting in 2021, comes equipped with a built-in connection. The issue is that the industry has not actually “closed the deal” with the average consumer.

The industry is ambivalent and customers have picked up on this ambivalence. This is a big problem because the average customer is perfectly content to connect his or her phone in the car rather than pay a subscription for the built-in telematics system.

What’s missing? Three things. Transparency, control, and trust.

Car companies are less than transparent about the data they are collecting from vehicles and do not provide clear disclosures of this practice. Nor do they provide simple and transparent access to the vehicle data being collected so that customers can see it for themselves.

In addition to this lack of transparency, there is a lack of control. There is no simple means for a customer to protect, delete, or transfer their data. Without transparency or control there is no trust. Without trust there is a shaky value proposition regarding vehicle connectivity.

Tesla Motors has been a leader in vehicle connectivity. Tesla has established this leadership by initially making vehicle connectivity free – later charging $10/month for most Tesla owners. Tesla is different from other auto makers because of its frequent software updates which help establish regular communications between the company and its vehicle owners.

No other car company has this level of customer engagement. Tesla is no paragon, though. If you don’t want to pay the monthly fee you may cut yourself off from vital software updates. Tesla makes it difficult for customers to opt out.

Tesla also does not provide any means for the customer to view or control the data being extracted from the vehicle. Tesla also fails to provide a means of cutting off or erasing that data.

There is a substantial trust delta in the automotive industry. In 2020, the Reader’s Digest identified Toyota as the most trusted passenger car brand and Ford as the most trusted pickup truck brand. But these surveyed results reflect – as noted in participant quotes – vehicle reliability.

In a world increasingly defined by software, wireless connectivity, and automated driving capabilities, trust will take on an entirely new meaning. Car makers must come to grips with the need for transparency and customer control of vehicle connectivity.

Car owners should know when their vehicles are connected and what their vehicles are communicating to the surrounding world and when. Those communications should be highlighted and communicated in the car in real-time and consumers should have the ability to limit or stop those communications.

The customer owns the car. Therefore the customer owns the data. That ought to be reflected in appropriate and non-distracting user interfaces.

We won’t close the trust gap or eliminate consumer ambivalence regarding connectivity until we improve in-vehicle communications regarding connectivity and enable and enhance customer control. We’ve done this with smartphones. We need to deliver an equivalent experience in cars.


Digital Filters for Audio Equalizer Design

by Rhishikesh Agashe on 03-06-2021 at 6:00 am


Equalizers were initially designed and developed for movie theaters, amphitheaters, and other outdoor venues, but now they have become ubiquitous. Equalization is essential for creating professional sound and lifelike sound effects. Equalizers are used for controlling the energy/loudness of a particular frequency or a specific frequency range/band within an audio signal.

Introduction
Each musical note contains multiple frequencies. The base note of the instrument is the ‘Fundamental Frequency’, and all the other frequencies in that particular musical note are the ‘Harmonics’ of the ‘Fundamental Frequency’. The existence of these harmonics is what causes the sounds of instruments to differ. Hence, a change in the Equalizer triggers a change in sound.

The Equalizer is usually used as an element of audio post-processing. Once audio processing is implemented on a PCM signal, a post-processing algorithm (e.g. an equalizer) can be applied for quality enhancements in the audio signal to increase listening pleasure.

Most home audio systems have 5-, 7- or 10-band equalizers. Professional music equipment uses 20- to 30-band equalizers.

Figure 1: The Graphic Equalizer (Courtesy: https://www.waves.com/plugins/geq-graphic-equalizer)

Types of Equalizers
Graphic Equalizers: Graphic equalizers are the most commonly used equalizers for music systems. They work by allocating a range of frequencies to a certain number of bands. The energy in each frequency band is either attenuated or boosted, depending on the requirement. The more bands, the more precision, and vice versa. However, graphic equalizers do not allow control over the shape of the filter for each band. Audio filters are used to isolate bands, usually in a bell shape around the center frequency.

Parametric Equalizers: These are the most frequently used equalizers in high-end home audio systems and in some recording studios as well. The Parametric Equalizer lets you control the Center Frequency, its Gain and the Range of each frequency band.

Dynamic Equalizers: This equalizer provides all the facilities of a parametric equalizer and, on top of that, gives the user control over compression and expansion of the audio signal.

Shelving Equalizers: A shelving equalizer works similarly to a High Pass or a Low Pass Filter. Here, the frequencies at the higher or lower end of the spectrum are boosted or attenuated. The boost or attenuation is independent of center frequency for a Shelving filter.

Equalizer Fundamentals
In an Equalizer, the audio filters are used to isolate bands around the center frequency, usually, in a bell shape (Band Pass Filter). Analyzing the individual bands of an Equalizer (EQ) yields the filter characteristics of that particular EQ. These are important parameters as they help establish the spectral range in which an equalizer will operate (affect the sound). The filter characteristics are classified into:

Center Frequency: The center frequency for a band establishes the frequency around which the boost or cut in the sound energy affects the audio signal.

Filter Type: Filter Type determines the general shape of the EQ band. The most common filter types used to design an Equalizer are the High Pass Filter (HPF), Low Pass Filter (LPF), Notch Filter, High Shelf Filter (HSF) and Low Shelf Filter (LSF).

Filter Slope: The slope of a filter indicates the rate of attenuation of sound beyond the cut-off frequency. The filter slope is a term usually associated with band pass, high pass and low pass filters.

Filter Q: It helps determine the bandwidth of a filter band. Q is known as the Quality Factor of an EQ.

Filter Gain: The filter gain is measured in dB (deciBels) and indicates the amount of boost or cut that is applied to a frequency band.

The Architecture of a Typical Equalizer
Almost all modern equalizers are based on octave-based center frequencies and use a ‘frequency warped’ filter design (i.e. the center frequencies are not equidistant) to minimize inter-band frequency interference. This results in a smooth transition between bands. The most commonly used audio EQ was designed by Robert Bristow-Johnson (RBJ). The RBJ EQ also uses frequency-warped, time-varying digital filters. In a time-varying digital filter the filter coefficients change over time, and care must be taken in designing the coefficients to avoid any noticeable artifacts due to the changes.

Switching instantaneously from one filter response to another causes unwanted artifacts in the audio output, so output crossfading is applied to the nearby bands to reduce the artifacts. Crossfading requires the signal to be filtered with both the old and the new filters in parallel for consecutive bands. Thereafter, to smooth out the transition in the waveform, time-domain crossfading is applied. The goal of crossfading is to keep the difference between bands to less than 3 dB.
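As a rough illustration, the parallel-filtering-plus-crossfade step described above could be sketched as follows (a minimal sketch; the function name and the linear fade ramp are assumptions for this example, not part of the RBJ design):

```python
def crossfade(old_out, new_out, fade_len):
    """Blend two versions of the same signal, sample by sample.

    `old_out` and `new_out` are the outputs of the old and new filters,
    run in parallel over the same input. The first `fade_len` samples
    ramp linearly from the old output to the new one; after that only
    the new output is heard.
    """
    out = []
    for n, (old, new) in enumerate(zip(old_out, new_out)):
        w = min(n / fade_len, 1.0)  # 0.0 -> old filter only, 1.0 -> new filter only
        out.append((1.0 - w) * old + w * new)
    return out
```

In a real EQ the fade length would be chosen long enough (a few milliseconds of samples) that the transition is inaudible.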

The RBJ filter is an implementation of 2nd order filters with octave-based bandwidth. Since it is an octave-based filter, the bands can be divided only with respect to the octave frequencies (i.e. 1/3 octave, 2/3 octave, 1 octave, 2 octaves and so on), and the number of bands of the EQ depends on the same.

RBJ Cookbook formulae for designing an EQ
A second order filter is also known as a Biquad Filter. The transfer function for a digital biquad filter (which contains two poles and two zeros) is:

H(z) = (b0 + b1·z⁻¹ + b2·z⁻²) / (a0 + a1·z⁻¹ + a2·z⁻²)

The above equation contains six coefficients (namely a0, a1, a2, b0, b1, and b2). The coefficients are usually normalized such that a0 = 1, leaving only 5 coefficients to work with. Since this is an IIR filter, quantization error in the coefficients can lead to instability. To avoid that, cascaded second order filters are used in the design. For the filter to be stable, all the poles must be inside the unit circle in the Z-domain.
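The stability condition can be checked numerically: the poles of the biquad are the roots of the denominator polynomial a0·z² + a1·z + a2, and both must have magnitude strictly less than 1. A small sketch (the function name is an assumption for this example):

```python
import cmath

def biquad_is_stable(a0, a1, a2):
    """Return True if both poles of the biquad denominator
    a0 + a1*z^-1 + a2*z^-2 lie strictly inside the unit circle.
    The poles are the roots of a0*z**2 + a1*z + a2."""
    disc = cmath.sqrt(a1 * a1 - 4.0 * a0 * a2)  # handles complex-conjugate poles
    poles = ((-a1 + disc) / (2.0 * a0), (-a1 - disc) / (2.0 * a0))
    return all(abs(p) < 1.0 for p in poles)
```

This kind of check is useful as a sanity test after coefficient quantization, which is exactly the failure mode the cascaded structure guards against.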

Direct Form 1 implementation is typically used for implementing the above transfer function.
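The Direct Form 1 structure can be written in a few lines of Python (an illustrative sketch only; the function name is an assumption, and the coefficients are assumed pre-normalized so that a0 = 1):

```python
def biquad_df1(x, b0, b1, b2, a1, a2):
    """Direct Form 1 biquad, assuming a0 has been normalized to 1:
        y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    """
    x1 = x2 = y1 = y2 = 0.0  # delayed input and output samples
    y = []
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, xn      # shift the input delay line
        y2, y1 = y1, yn      # shift the output delay line
        y.append(yn)
    return y
```

With b0 = 1 and all other coefficients zero the filter passes the input through unchanged, which makes a convenient sanity check.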

Figure 2: Digital Biquad Filter (Image Courtesy: https://en.wikipedia.org/wiki/Digital_biquad_filter)

Using the following “User defined” parameters the appropriate filters can be designed for an EQ.

Fs – Sampling frequency of the audio signal

fc – Center frequency of the band

dBGain – Gain in dB (used for Peaking and Shelving Filters)

Q – The Quality Factor

The coefficients for various filters are calculated using the above data and the RBJ cookbook formulae given below:

Low Pass Filter: (Removes all frequencies above a specified cut-off frequency. It lets the low frequencies pass through and attenuates the higher frequencies)

High Pass Filter: (Removes all frequencies below a specified cut-off frequency. It lets the high frequencies pass through and attenuates the lower frequencies)

Band Pass Filter: (Removes all frequencies above and below the specified cut-off frequencies. It only lets frequencies in a particular band, between a lower and a higher cut-off frequency, pass through, and attenuates the frequencies outside the specified band)

Notch Filter: (Works on a very narrow band of frequencies, removing all frequencies within it. Notch filters are a subset of Band-Stop filters with a very narrow band)

Peaking Filter: (A Peaking filter is used to boost or attenuate a range of frequencies around specified frequencies, to form a bell shape, by a ‘user defined’ value)

Low-Shelf Filter: (A Low-Shelf Filter is used to boost or attenuate a range of frequencies below the specified frequency, by a ‘user defined’ value)

High-Shelf Filter: (A High-Shelf Filter is used to boost or attenuate a range of frequencies above the specified frequency, by a ‘user defined’ value)

Where:
A (gain) = sqrt( 10^(dBgain/20) ) = 10^(dBgain/40)     (for peaking and shelving EQ filters only)

w0 (angular frequency) = 2*pi*fc/Fs

alpha = sin(w0)/(2*Q)
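Putting the intermediate quantities together, the coefficients for a peaking (bell) filter from the referenced RBJ cookbook can be computed as below (a sketch; the function name and the returned-list convention are assumptions for this example):

```python
import math

def rbj_peaking(fs, fc, db_gain, q):
    """Peaking-EQ biquad coefficients per the RBJ Audio EQ Cookbook,
    normalized so that a0 = 1. Returns (b, a) coefficient lists."""
    A = 10.0 ** (db_gain / 40.0)          # amplitude gain
    w0 = 2.0 * math.pi * fc / fs          # angular center frequency
    alpha = math.sin(w0) / (2.0 * q)
    cos_w0 = math.cos(w0)
    a0 = 1.0 + alpha / A                  # normalize everything by a0
    b = [(1.0 + alpha * A) / a0, -2.0 * cos_w0 / a0, (1.0 - alpha * A) / a0]
    a = [1.0, -2.0 * cos_w0 / a0, (1.0 - alpha / A) / a0]
    return b, a
```

Two handy checks fall out of the algebra: with dBGain = 0 the numerator and denominator coincide (unity gain everywhere), and for any gain the response at DC is exactly 1, since the boost is confined to the bell around fc.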

Once the coefficients are calculated using the above formulae, appropriate filter functions are called for each band. Usually, Shelving filters (Low-Shelf/High-Shelf) are used for the first and the last band of the EQ and Peaking filters (Bell Filters) are used for all the filters lying in between.

*These are just the fundamental building blocks of an equalizer and in no way sufficient for designing the equalizer in totality.

Conclusion
Understanding the basics of filter shapes in an equalizer is fundamental to mixing or creating appropriate sound effects. One needs to know how to use an equalizer properly to shape one’s own curves and completely change the way music and movies are heard. Most home audio systems use simple filters for controlling/adjusting bass (low/very low frequencies), mid-range, and treble (high frequencies). When it comes to recording studios, the equalizers tend to be more sophisticated and are capable of finer adjustments. These high-end equalizers can eliminate unwanted noise/sounds, and either suppress certain musical instruments or magnify particular frequencies to make some instruments sound more spectacular.

eInfochips is a CMMi Level 3 & ISO 9001:2008 certified Product Engineering Services company. We at eInfochips create value across the Software Development Life Cycle (SDLC) by providing DSP middleware software development, porting, optimization, support, and maintenance services for various RISC and CISC SoCs. We help our customers set up Offshore Development Centers, supplementing the right teams and appropriate execution models. For more information contact us today.

About the Author
Rhishikesh Agashe has nearly 19 years of experience in the IT industry: 4 years as an entrepreneur and 15 years in the embedded domain, mostly in embedded media processing, where he was involved in the implementation of audio and speech algorithms on various microprocessors/DSPs (ARM/MIPS/TI/CRADLE/CevaDSP/Meta).

References:

  1. https://www.musicdsp.org/en/latest/_downloads/3e1dc886e7849251d6747b194d482272/Audio-EQ-Cookbook.txt
  2. https://en.wikipedia.org/wiki/Digital_biquad_filter
Also read:

Understanding BLE Beacons and their Applications

Sign Off Design Challenges at Cutting Edge Technologies

Techniques to Reduce Timing Violations using Clock Tree Optimizations in Synopsys IC Compiler II

 


Podcast EP10: The M&A Landscape for Semis and EDA

by Daniel Nenni on 03-05-2021 at 10:00 am

Dan and Mike are once again joined by Dr. Walden Rhines for an overview of the M&A scene for semiconductors and EDA. Wally discusses the periodic expansion and contraction of these markets along with the factors that cause these trends. Wally concludes with a view of the future.

Wally Rhines is widely recognized as an expert in business value creation and technology for the semiconductor and electronic design automation (EDA) industries. https://en.wikipedia.org/wiki/Wally_Rhines

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Webinar: Samtec Teams with Otava and Avnet to Tame mmWave Design

by Mike Gianfagna on 03-05-2021 at 8:00 am


mmWave design has traditionally been a boutique technology used in satellite and defense applications. Lately that’s changing. It turns out the complex, high frequency capabilities of mmWave technology are a key enabler for the 5G wireless networks being deployed today. I discussed some of this backstory in a recent post about a new member of the Silicon Catalyst incubator. The design challenges associated with mmWave are substantial. Optimizing the RF signal path is a requirement of course. Dealing with advanced phased-array processing is also a requirement, as is support for multiple simultaneous beams to deliver low latency with decreasing size, weight, power and cost. That’s why a recent article on Samtec’s website caught my eye. Read on to learn how Samtec teams with Otava and Avnet to tame mmWave design.

There’s a lot of good information on Samtec’s website about mmWave support across multiple applications. There is also an informative webinar coming with Otava, Avnet and Samtec to discuss their collaboration. There are links coming. First, let’s examine the players, the application and the challenges.

Most SemiWiki readers should be familiar with Samtec by now. They provide connectors, cable assemblies and active optical modules across a broad range of applications and performance levels. If you want an introduction to the company, you can find it here. Otava is focused on end-to-end development of technologies used by advanced 5G commercial and DoD applications. Avnet is a leading global technology distributor and solutions provider. The company can support a product at each stage of its lifecycle, from idea to design and from prototype to production.

This is a very interesting collaboration in my view. It is a step above the typical ecosystem work we hear about. The breadth of applications and support offered by these three companies speaks to the daunting requirements for building effective mmWave systems. The beamforming part of the system design is a particularly daunting aspect. Beamforming uses multiple antennas to control the direction of a wavefront by weighting the magnitude and phase of individual antenna signals in an array of multiple antennas. Getting all this right is quite a challenge. Components, interconnect and algorithms all need to work in tight harmony to get the desired result. This kind of technology is what unlocks the high bandwidth and low latency of 5G and its performance is mission critical to many applications, including those found in autonomous driving systems.

The presenters for the webinar include Matt Burns, technical market manager at Samtec. SemiWiki readers should be familiar with Matt. You can hear a podcast with Matt discussing signal integrity challenges here. From Otava, Steve Fireman, senior VP of engineering is presenting. Steve is a co-founder of Otava and has experience designing state-of-the-art RFICs, ICs, module packaging and PCBs related to microwave and millimeter wave phased array systems at Lockheed Martin. Presenting for Avnet is Luc Langlois, director of products and emerging technology at Avnet. Luc has held a variety of roles at Avnet for 15 years and has a broad background in digital signal processing.

The webinar covers bleeding-edge beamformer technology and precision RF interconnect. New evaluation and development platforms that shorten development cycles are also discussed. The webinar was held on March 10, 2021 at 11AM Eastern Standard Time, but the good news is that a replay is available. If you’re doing any type of 5G development, check it out to see how Samtec teams with Otava and Avnet to tame mmWave design. You can access the webinar replay here. You can also get a broad view of all the 5G network application solutions available from Samtec here.

 

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Shafy Eltoukhy of OpenFive 

by Daniel Nenni on 03-05-2021 at 6:00 am

Shafy Eltoukhy

Dr. Shafy Eltoukhy has over 35 years of experience in the semiconductor industry. He served as VP and BU manager of the Analog Mixed Signal Group at Microsemi. He was the VP of Operations and Technology Development at Open-Silicon. He was the VP of Technology at Lightspeed Semiconductor where he joined the founding team that invented structured ASIC technology. Shafy was the Director of Technology Development at Actel Corporation, where he participated in the development of the first generation of FPGA products. He also held engineering positions at Intel Corporation. Shafy holds a Ph.D. from the University of Waterloo, Canada.

What brought you to semiconductors?
As an electrical engineering student studying for my master’s, I attended a class on semiconductor physics. I found the course to be very exciting and it sparked the passion I now have for the semiconductor industry. Once I earned my Ph.D. from the University of Waterloo, Canada, I became increasingly enthusiastic about semiconductors and their impact on the future. I was offered a job as a professor at the university but turned it down because I really wanted to be active in the industry.

My first job was at Intel where I was a device engineer working on DRAM technology. After a couple of years with Intel, some of my colleagues and I came up with an idea for a startup company, and Actel was born (now known as Microsemi). We launched it as an FPGA company, which was a new technology at that time. This was a turning point in my career, where I came to the realization that I loved to work for startup companies. It’s important to note that the fabless model was nonexistent at that time, so we talked to Chartered Semiconductor (now GlobalFoundries) and a few others about being our foundry. After the launch and success of our first product, the company went public. A few years later, I decided to start up another company, Lightspeed Semiconductor, which focused on ASIC technology. Since then, I’ve remained very active in ASICs and recently spent time as the general manager at Microsemi where I focused on analog mixed-signal product development.

The semiconductor industry is very exciting to me because every day there are changes and new advancements. ASICs have been especially exciting because of the collaboration with many different customers who have varied and innovative ideas, as well as many different target applications in a variety of vertical markets.

I can sense the excitement when you talk about startups. There are not that many people who are still in the startup arena and going strong.
That is true. I’ve found that I really enjoy small companies because, unlike larger ones, it’s much easier to implement key decisions and also to change the organization’s direction very quickly. It’s a fast-paced environment and the energy in the company is contagious. I think it’s one of the main reasons I still enjoy working for startups to this day.

What is OpenFive’s back story? Why was the business unit formed?
It started when Open-Silicon was acquired by SiFive, Inc. in 2018.  SiFive was focused on processor cores based on the RISC-V ISA. The addition of custom silicon capabilities to the SiFive portfolio helped us accelerate the IP integration and SoC design cycles and bring silicon to the market at a faster pace. The custom silicon BU built a successful business model that combined customization of SoCs with RISC-V cores. To drive further business growth, we launched the OpenFive brand and expanded into providing custom silicon solutions with differentiated IP, while being agnostic to processor architecture. This distinct OpenFive brand provides clarity on our ability to produce custom SoCs, from spec-to-silicon design, customizable IP, and manufacturing. The current emphasis in the industry on scalable silicon architectures makes Die-to-Die and Chip-to-Chip interconnects integral for disaggregated die and chiplet based SoC solutions, and has created strong requirements for experience in advanced packaging, test, and production in leading-edge process nodes such as 5nm. This need and opportunity drove the creation of OpenFive as you see it today.

What do you mean when you say OpenFive is processor agnostic? What is your core processor strategy?
OpenFive is very neutral as to which processor is used because we are an independent silicon business unit and we’re ultimately measured based on how many acres of silicon we sell. We have expertise in implementing SoCs with all relevant ISAs.

What customer challenges and business models is OpenFive addressing?
The challenges always depend on the type of customer, and we strive to offer each customer an optimized solution. Let’s take for example a system company; they may not be familiar with the design process of a chip or with the manufacturing part of it, but they really want to get to the market quickly. OpenFive offers them a complete solution from spec-to-silicon. A lot of these customers are not semiconductor experts, and they simply want a chip that works based on their unique specifications.

OpenFive also has customers that have their own design teams and front-end architecture. This is their bread and butter, and they know how to proceed, but they don’t have a physical design team or the tools to do the physical design. OpenFive supports these customers by using a netlist or RTL handoff model to deliver working silicon to them. This group doesn’t need to be concerned with acquiring physical design tools such as those from Cadence or Synopsys, and they don’t have to be concerned about their foundries, as OpenFive will take care of this for them.

The third type of customer, which is also a sizable portion of our business, has a team that has already developed a prototype chip. However, they are not experienced in working with the supply chain and dealing with the foundry and all the things that come with the operational side of it. They come to us with their design, and we handle the testing, production ramp, supply chain management, and so on. At OpenFive we are committed to offering customers our end-to-end expertise from SoC design, IP and manufacturing to deliver high-quality silicon in advanced nodes down to 5nm.

What do the next 12 months have in store for OpenFive? 
OpenFive’s goal with all of our customers is to add more value through our engagement, and with that in mind, we are moving in two major directions. The first is spec-to-silicon, where OpenFive will focus on a few vertical markets where we can add more value to the customer and take advantage of the platforms that OpenFive builds to reduce time-to-market and the solution cost for the customer. The second goal is to establish more investment in delivering IP for these vertical markets.

For example, we are delivering a lot of HBM solutions for the high performance computing market, going down to 7nm, 5nm, 3nm and so on. We’re also  staying ahead of the game by investing in More-than-Moore solutions with die-to-die (D2D) interfaces, chiplet technology and 2.5D packaging. By mixing and matching different technologies, we can offer chiplets that enable partitioning of the design into different functions, and the option to choose a process optimized for that particular function. The overall cost of the solution will be lower than going to a finer geometry process node that is very expensive. This area is very important to us moving forward. In the coming months, you will see many exciting new initiatives from OpenFive ranging from AI-enabled sub-systems to customizable D2D IP and chiplets with advanced 2.5D packaging, and we look forward to enabling customers to create domain-specific SoCs that are highly optimized for power, performance and cost.

Also Read:

CEO interview: Graham Curren of Sondrel

CEO Interview: Mark Williams of Pulsic

CEO Interview: Sathyam Pattanam


Perforce Embedded DevOps Summit 2021 and the Path to Secure Collaboration on the Cloud

Perforce Embedded DevOps Summit 2021 and the Path to Secure Collaboration on the Cloud
by Mike Gianfagna on 03-04-2021 at 10:00 am

Perforce Embedded DevOps Summit 2021 and the Path to Secure Collaboration on the Cloud

Perforce recently held their virtual Embedded DevOps Summit. There were a lot of great presentations across many disciplines. Of particular interest to me, and likely to the SemiWiki readership as well, was a presentation by Warren Savage entitled Secure Collaboration on a Cloud-based Chip Design Environment. I’ll provide a quick overview of the event and then dive into Warren’s presentation to illuminate the Perforce Embedded DevOps Summit 2021 and the path to secure collaboration on the cloud.

I’ve been to A LOT of virtual events this past year. I’m sure you have as well. After attending so many you start to get a sense of what works best. Live presentations and a robust way to interact with the speakers are two elements I find appealing. The Perforce DevOps Summit had both. The presentations were live, and each session had a live moderator to keep things moving. This makes the event a lot more interesting in my experience. Interaction with the speakers was done through Slack, a robust and reliable platform. All good.

Warren Savage

Back to secure collaboration on the cloud. I’ve known the presenter, Warren Savage, for a long time. Warren has a very relaxed and effective style. He seems to be able to explain any complex topic in a way that everyone can understand. Warren is a Silicon Valley veteran who hails from places like Fairchild, Tandem Computers and Synopsys. In 2004 he founded IPextreme with the goal of simplifying IP access. Silvaco acquired the company in 2016, and Warren has now been recruited by DARPA to work on cybersecurity.

The organization he’s working with is called ARLIS (Applied Research Laboratory for Intelligence and Security). The organization has a broad charter in research and development for artificial intelligence, information engineering and human systems. Warren’s talk had two parts – one that outlined the significant security risks that exist in the devices and networks we use every day and one that detailed the work DARPA is doing with Perforce to address these risks. Let’s take a look at both parts.

Vulnerabilities – Be Worried

Warren began with an overview of the incredibly connected world around us – how we got there. Most of us know this story quite well, so let’s skip to what’s wrong with it. If we look at all the interconnected devices around us, they communicate with the cloud and with each other. IoT is a good example. The issue with this massive network of devices is the uneven security that exists across the spectrum. Simple IoT devices can have weak security. Warren used the digital picture frame you give your grandmother so you can stream photos of her grandchildren to her as an example. This sounds innocent enough, but it turns out these devices present a meaningful attack vector.

Warren recounted the incident in 2016 when a denial-of-service attack on a major DNS server took the internet down on the east coast of the US and Europe for about a day. This attack was accomplished with a weapon called the Mirai botnet. Essentially a massive network of vulnerable devices (like that picture frame) all co-opted to perform a specific attack protocol. These botnets can be assembled in a hierarchical fashion, creating formidable processing power.  The recent SolarWinds hack is another example of how large the scope of these efforts can be. 

DARPA has created an Attack Surface Reference Model that catalogs the various methods to misappropriate chip technology. There are four primary vectors:

  • Side channel: extraction of sensitive data by observing external chip characteristics, such as power consumption or network traffic
  • Reverse engineering: extraction of algorithms and design details from illegally obtained representations
  • Malicious hardware: insertion of secretly triggered disruptive functions in the device
  • Supply chain: cloning, counterfeiting or re-marking devices
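To make the first vector concrete, here is a minimal, illustrative Python sketch (my own example, not from the talk) of a timing side channel: a naive byte-by-byte secret comparison leaks information through its early exit, while the standard-library `hmac.compare_digest` compares in time independent of where the first mismatch occurs.

```python
import hmac

SECRET = b"s3cret-key"  # stand-in for any on-chip or on-device secret

def naive_check(guess: bytes) -> bool:
    # Early-exit comparison: runtime depends on how many leading bytes
    # of the guess are correct, a classic timing side channel that lets
    # an attacker recover the secret one byte at a time.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
    return True

def constant_time_check(guess: bytes) -> bool:
    # hmac.compare_digest is designed to take the same time regardless
    # of where the inputs differ, closing the timing channel.
    return hmac.compare_digest(guess, SECRET)
```

The same principle, observing a physical quantity (time, power, emissions) rather than the data itself, is what makes side channels hard to catch with functional verification alone.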

It turns out the semiconductor supply chain represents an enormous attack surface; some of the opportunities are detailed in the figure below.

Semiconductor Supply Chain Vulnerabilities

The Work Ahead – Be Less Worried

Warren described a rather ambitious program underway at DARPA to address these threats. Called Automatic Implementation of Secure Silicon (AISS), the goal is to embed security capabilities into the design flow. New tools and new IP will play a part. There are already approaches to address some of the threats mentioned. EDA tools can add additional logic to a chip that modifies its behavior. Only by entering a key can the chip’s original function be restored. This basically makes reverse engineering very difficult. The program’s goal is to add security as a fourth parameter to the familiar power, performance and area (PPA) metrics.
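As a rough illustration of the key-based idea, here is a toy sketch of XOR-style logic locking in Python (my own simplification, not the actual AISS tooling): a combinational function’s inputs pass through XOR key gates, and only the correct key makes the locked netlist behave like the original.

```python
def original(a: int, b: int, c: int) -> int:
    # The intended function: a 3-input majority vote.
    return (a & b) | (b & c) | (a & c)

CORRECT_KEY = (1, 0, 1)  # hypothetical activation key

def locked(a: int, b: int, c: int, key) -> int:
    # Each input passes through an XOR key gate. Where the designer
    # chose key bit 1, an inverter is baked into the locked netlist
    # (modeled here by the extra XOR with CORRECT_KEY), so the gate is
    # transparent only when the matching key bit is supplied.
    a ^= key[0] ^ CORRECT_KEY[0]
    b ^= key[1] ^ CORRECT_KEY[1]
    c ^= key[2] ^ CORRECT_KEY[2]
    return original(a, b, c)
```

With `CORRECT_KEY` loaded, `locked` matches `original` on every input; with a wrong key, some inputs produce scrambled outputs. An attacker examining the gates cannot tell the pass-through gates from the inverting ones, which is what frustrates reverse engineering.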

Warren went into some advanced work using blockchain technology to track the chain of custody of material in the semiconductor supply chain. This will help to close many attack surfaces. Back to AISS. There is a mandate that this system must operate entirely in the cloud. There is a lot of work going on to host a familiar-looking, easy-to-use design flow in the cloud to deliver new security technology.

This design flow is where Perforce technology is used. A key goal of the system is to facilitate controlled access to assets from the multiple companies participating in a design project. Certain assets need to be accessible to specific users across multiple companies. Something like a crossbar switch is required to implement a system like this, and Helix Core from Perforce is an excellent match for this need. The architecture of the system is shown below.

Perforce Helix Core Asset Control

The complete agenda of the Embedded DevOps Summit 2021 can be found here. I suspect there will be an opportunity to watch replays of the event. Keep watching here to get more information on the Perforce Embedded DevOps Summit 2021 and the path to secure collaboration on the cloud.

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Also Read:

Single HW/SW Bill of Material (BoM) Benefits System Development

A Brief History of Perforce

Conference: Embedded DevOps


Maximizing ASIC Performance through Post-GDSII Backend Services

Maximizing ASIC Performance through Post-GDSII Backend Services
by Kalar Rajendiran on 03-04-2021 at 6:00 am

Panel 1 Alchip – HPC ASIC Manufacturing Done Your Way 1030x579 1

ASICs by definition are designed to meet the respective applications’ requirements. ASIC engineers deploy various design techniques to maximize performance, minimize power and reduce chip size. But is there more that can be done after the GDSII is taped out? A recent press release from Alchip Technology dated Feb 4, 2021 claims “High Performance Computing Demand Puts Premium on Backend Engineering Expertise.” The subheading of the same press release states “Once Mundane Service Now Prized for Squeezing Out Last nth of Performance.” From the subheading it is clear that the backend services offered by Alchip Technology are not new. But it is worthwhile to understand why and how these same services have become prized. Is it just a temporary phenomenon due to fluctuating market demand, or is it a permanent shift in how the services are and will continue to be valued? What criteria would make one company better than another in rendering these services? The following is an attempt to answer these questions by taking a look at the evolution of the industry, the technologies and the supply chain ecosystem.

In the beginning semiconductor companies were vertically integrated and had their own foundries. There were dedicated departments that handled design, layout optimizations for process, packaging, test and manufacturing related aspects. With that vertical integration breaking down over the years (EDA tools, packaging, test, foundries), these capabilities needed to move out as well. Subsequently these capabilities started getting highly specialized with advances in the respective technologies.

The following diagram captures what is involved in backend services. With the introduction of every new process node and every new packaging and substrate technology come opportunities for higher ASIC performance. But along with the opportunities come complexities and challenges too. The result is an increase in the outsourcing of packaging, test, assembly and production responsibilities to companies that are far more experienced in these capabilities.

To master today’s advanced manufacturing technologies and fully extract the performance, power and area benefits offered by them, specialists need to be deployed. To use an analogy from the software domain, an optimized code written in a high-level language may be further optimized at the assembly language level by an expert in that assembly language. And an optimized piece of code at the assembly language level may be further optimized at the machine code level by that machine code expert. Backend specialists are like machine code experts of the semiconductor domain.

As an example, let’s look at packaging technology. This is an area where there have been tremendous advances that directly impact the performance of a semiconductor application in terms of speed, signal and power integrity. Chip-on-Wafer-on-Substrate (CoWoS®) is one technology that enables increased performance bandwidth, reduced power consumption and smaller form factor.

Following are some excerpts from the press release.

“Packaging isn’t packaging anymore,” declares Leo Cheng, Senior Vice President of Engineering at Alchip. “With today’s design complexity, packaging has become the most cost/efficient route to increasing performance, lowering power consumption and meeting real estate constraints.”

Alchip has elevated its packaging capabilities to include Chip-on-Wafer-on-Substrate (CoWoS®) first developed by TSMC and this spring is expected to announce a true 2.5D INFO capability. 

Alchip’s CoWoS process runs on dedicated tooling and demonstrates IP performance equivalent to that of an original design. The process also includes online debugging and active thermal control. The company’s in-house substrate design capabilities assure compliance with all system requirements and establish the framework for the critical foundry-to-final-test flow.

Packaging is just one of the many areas within backend services. There is value to be maximized by customers within each and every area of backend services by leveraging a specialist service provider.

Alchip, with its HQ in Taipei, a dedicated team in Hsinchu and its well-honed backend services, stands to bring tremendous value to its customers. It’s understandable that demand for Alchip’s post-GDSII backend services has increased exponentially across all high-performance computing ASIC applications. Any customer looking to squeeze out the last nth of performance from their semiconductor device may want to have exploratory discussions with Alchip.

Also Read:

Alchip at TSMC OIP – Reticle Size Design and Chiplet Capabilities

Alchip moves from TSMC 7nm to 5nm!

Alchip Delivers Cutting Edge Design Support for Supercomputer Processor


NetApp Enables Secure B2B Data Sharing for the Semiconductor Industry

NetApp Enables Secure B2B Data Sharing for the Semiconductor Industry
by Mike Gianfagna on 03-03-2021 at 10:00 am

NetApp approach to security

Data sharing between semiconductor companies and EDA software companies has been critical to the advancement of the industry.  But it’s had security issues and associated loss of trust along the way.  For instance, there have been cases of customer designs shared as a testcase finding their way into a product demo without the consent of the customer. How did this happen? There was no malicious intent. The primary cause was that the shared data was not controlled within a secure vault and there was no tracking of how the data was used and by whom.  There was also no clear way to return the data that was sent or ensure that all instances of the data were deleted. This has led to major B2B trust issues which then leads to longer bug fix cycles because data is not easily shared. A new approach is needed. Read on to see how NetApp is working to improve secure B2B data sharing for the semiconductor industry.

Why the Industry Needs Secure and Trusted B2B Data Sharing

As I have shared in previous articles, data is the ever-growing lifeblood of semiconductor design.  Double digit data growth between 7, 5 and 3nm design nodes is straining design infrastructure.  At the same time the value of that data is increasing. Data once deleted after successful or failed analysis is being saved so AI/ML models can train or learn from past design runs. Data shared for the joint development of AI/ML models is just one example of the importance of robust secure B2B data sharing solutions.

Let’s examine some of the key reasons for B2B data sharing in the semiconductor industry. These items won’t necessarily make big headlines, but they represent a crucial process to advance chip design. The following points highlight some scenarios of interest.

EDA vendor debug

EDA vendors will always require access to customer designs for software debug – this need will never go away. Concerns around sharing testcase data result in delays in gaining access to the data, creating longer debug and resolution times. I have even heard stories of EDA teams trying to guess the cause of a problem when access to data was not an option. Rapid access to data is critical for fast resolution of issues and for meeting time-to-market goals.

AI development

EDA tools are rapidly building AI-enabled solutions. Machine learning (ML)/deep learning (DL) can reduce algorithm complexity, increase design efficiency and improve design quality. Training complex ML and DL models requires massive amounts of data. And in most cases, it is data EDA vendors don’t have. The data EDA vendors need is their customers’ design data. Secure data sharing is critical to the rapid advancement of AI in the semiconductor industry. The volume and proprietary nature of the data further complicate sharing.

NDA compliance

We have an NDA in place, so we’re covered, right? Most data sharing NDAs require that data be returned and/or deleted once it is no longer needed. Verifying that all copies of sensitive data were fully deleted in compliance with an NDA is difficult at best.

Collaboration

Modern chip design is a team sport.  IP providers, library vendors, tool vendors and design services teams all work together to meet critical design timelines and design goals.  Secure data sharing to facilitate collaboration is critical for this process to work.

Can we change the way we think about secure data sharing?

Let’s talk about the roles and responsibilities of Data Owners and Data Users. 

  • Data Owners should be able to share data into a data user’s secure walled off datacenter while still retaining complete visibility and control over WHO can access the data and WHAT systems can access the data. There should be visibility into how often the data is accessed with the ability to highlight anomalous data access patterns. Data Owners should be able to monitor the security attributes of the systems that have access to the data

Data Owners should also be able to securely revoke (or even securely wipe) the data from the system including removing key access.  Data Owners should not find data sitting on a data user’s system unused or after the terms of use have expired or the data has turned cold.  Data Owners should have full visibility of their data at any time even when it is in the Data Users’ datacenter or cloud environment.

  • Data Users should be able to use or share data in their own secure walled off datacenter where they have access to their own resources and tools. They should be able to access the data for approved processes such as test case debug, AI model development and for design collaboration.  Data sets are often so large that it is impractical to expect the Data Owners to host the compute and storage resource for development.  So, it is often critical to have access to the data in Data User’s own datacenter.
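The owner/user split described above can be sketched as a toy access-control object (purely illustrative Python, not a NetApp API or product behavior): the Data Owner grants and revokes access and keeps the audit trail, while the Data User can read only while a grant is live.

```python
from datetime import datetime, timezone

class SharedDataset:
    """Toy model of owner-controlled data sharing (illustrative only):
    the Data Owner grants, audits and revokes access; the Data User
    can read only while a grant is live."""

    def __init__(self, owner: str, payload: bytes):
        self.owner = owner
        self._payload = payload
        self._grants = set()   # users with a live grant
        self.audit_log = []    # (timestamp, user, action) tuples

    def grant(self, user: str) -> None:
        self._grants.add(user)
        self._log(user, "grant")

    def revoke(self, user: str) -> None:
        # Revocation takes effect immediately; subsequent reads fail.
        self._grants.discard(user)
        self._log(user, "revoke")

    def read(self, user: str) -> bytes:
        # Every access attempt is logged, including denied ones, giving
        # the owner visibility into anomalous access patterns.
        self._log(user, "read")
        if user not in self._grants:
            raise PermissionError(f"{user} has no active grant")
        return self._payload

    def _log(self, user: str, action: str) -> None:
        self.audit_log.append((datetime.now(timezone.utc), user, action))
```

A real deployment adds encryption at rest and in flight, key management and secure wipe, but the core contract is the same: access is revocable and every access is visible to the owner.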

The NetApp Approach

NetApp’s ONTAP storage operating system is used by all of the top semiconductor and EDA companies. ONTAP is also used in all of the 3-letter acronym government facilities today for data sharing. This means that secure B2B data sharing is most likely already a possibility. Because NetApp’s ONTAP storage operating system runs in all of the commercial clouds, B2B data sharing can be done datacenter-to-datacenter, datacenter-to-cloud or cloud-to-cloud, all with the same controls and monitoring. You can learn more about ONTAP from this prior post.

You can also get a broad view of NetApp’s approach to security here. There is a very useful technical report available from NetApp. A link is coming.

First, let’s take a look at some of the capabilities that allow NetApp to enable secure B2B data sharing for the semiconductor industry.

  • Support for Zero-Trust security architectures
  • Storage Virtual Machine (SVM) – this enables data to be walled off on a shared storage system. This is effectively a secure multi-tenant data storage environment. SVM provides role-based access control, allowing Data Owners to monitor the storage environment, even inside the Data User’s datacenter, for real-time auditing
  • Secure data transfer via SnapMirror or FlexCache means no more downloading and untar’ing data. Data is automatically transferred from one ONTAP filer to another with data encryption both at rest and in flight. An added benefit is the data is always up to date in the case of rapidly changing data sets
  • Data encryption is supported on both encrypted and unencrypted drives, with an external key manager
  • Secure data shredding is supported
  • NFS and SMB security with Kerberos is supported
  • Military grade data security credentials are supported. ONTAP is EAL 2+ and FIPS 140-2 certified
  • File-level granular event monitoring with integration with security information and event management (SIEM) partners is available and supports:
    • Log management and compliance reporting
    • Real-time monitoring and event management. This provides visibility of WHO is accessing the data, what systems are accessing the data and how often the data is being accessed.
  • Integration into third party security tools like:
    • Splunk-based system monitoring to report changes to the system
  • Cloud Secure technology also monitors for anomalous access patterns alerting the Data Owners of suspicious access patterns

The B2B Data Owner has the ability to securely transmit data, revoke data, monitor the usage and access pattern of data, monitor and alert when the secure Zero-Trust infrastructure has been changed, etc. 

I’ve only scratched the surface here. NetApp offers a lot of capability to create a trusted, secure environment. NetApp is working to improve secure B2B data sharing for the semiconductor industry.

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Also Read:

NetApp’s FlexGroup Volumes – A Game Changer for EDA Workflows

Concurrency and Collaboration – Keeping a Dispersed Design Team in Sync with NetApp

NetApp: Comprehensive Support for Moving Your EDA Flow to the Cloud


Semiconductor Shortage – No Quick Fix – Years of neglect & financial hills to climb

Semiconductor Shortage – No Quick Fix – Years of neglect & financial hills to climb
by Robert Maire on 03-03-2021 at 8:00 am

Tamagotchi Semiconductor shortage

– Semi Situation Stems from long term systemic neglect
– Will require much more than money & time than thought
– Fundamental change is needed to offset the financial bias
– Auto industry is just the hint of a much larger problem

Like recognizing global warming when the water is up to your neck

The problem with the semiconductor industry has finally been recognized but only after it stopped the production of the beloved F150 Pick Up truck and Elon’s Tesla. Many analysts and news organizations wrongly blame the Covid pandemic and its many consequences and assume this is just another example of the Covid fallout. Wrong! This has been a problem decades in the making. It’s not new. The fundamental reasons have been in the works for years. The only thing the pandemic did was bring the issue to the surface more quickly.

The issue could have been brought to the surface just as easily and with worse consequences by a conflict between China and Taiwan. Or perhaps another trade spat between Japan and Korea.

The semiconductor industry is perhaps not as robust as one would otherwise think, given that it hasn’t faced a significant problem like this before.

The reality is that the “internationalization” of both the industry and its supply chain have opened it up to all manner of disruption coming at any point along that long chain.

The consolidation has further concentrated the points of failure into a small handful of players, and perhaps one, TSMC, that holds 50+% of the non-memory chip market.

Tamagotchi Toys were the Canary in a Coal Mine

Most people may not remember those digital pets called Tamagotchi that were a smash hit in the late 90’s. Many in the semiconductor industry in Taiwan do remember them. In the summer of 1997 they sucked up a huge amount of semiconductor capacity in Taiwan and whacked out the entire chip industry for the entire summer causing delays and shortages of all types of chips.

Tamagotchi Tidal Wave Hits Taiwan

In essence, a craze over a kids’ toy created shortages of critical semiconductor chips. Semiconductor capacity is much greater now than it was 20 years ago, but the industry remains vulnerable to demand spikes and slowdowns.

The memory industry is an example of the problem

Perhaps the best example of the chip industry’s vulnerability is the memory semiconductor market. The market lives on the razor’s edge of supply and demand and the delicate balance maintained between the two.

Too much demand and not enough supply and prices skyrocket….too little demand and excess supply and prices collapse.

The memory industry is clearly the most cyclical and volatile in the semiconductor universe. One fab going offline for even a short while, due to a power outage or similar event, causes the spot market for memory chips to jump.

Kim Jong-Un should buy memory chips futures

All it would take is one “accidentally” fired artillery round from North Korea hitting a Samsung fab in South Korea and taking it out of commission. Memory prices would go through the roof for a very long time, as the rest of the industry could never hope to make up for the resulting shortage in any reasonable amount of time.

Other industries, such as oil, do not have the same problem

When you look at other industries in which the product is a commodity, like memory is, you do not see the same production problem. The oil industry, which also runs a razor’s-edge balance between supply and demand, does not have the same issue because there is a huge amount of excess capacity ready to come on line at a moment’s notice.

The cost of oil pumps and derricks sitting around idle waiting to be turned on is very low compared to the commodity they pump. This means the oil industry can flex up and down as demand requires and easily make up for the shortage if someone goes offline (like Iran).

Imagine if the oil industry kept pumping, at full output, never slowing, for each new oil field drilled.

In the semiconductor industry the capital cost is essentially the whole cost, so fabs never, ever go offline, as the incremental cost to produce more chips is quite low. This means there is no excess capacity of any consequence in the chip industry, and fabs run 24×7. Capacity is booked out months in advance and capacity planning is a science (perfected by TSMC).
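A back-of-the-envelope model makes the point (all numbers are made up for illustration): when annual depreciation dwarfs variable cost, per-wafer cost balloons at low utilization, so idling a fab is financially irrational.

```python
# Toy fab economics with illustrative, made-up numbers. The capital
# cost dominates, so running flat-out minimizes cost per wafer and
# idle capacity is ruinously expensive.
CAPEX_PER_YEAR = 2_000_000_000       # hypothetical depreciation, $/yr
VARIABLE_COST_PER_WAFER = 500        # hypothetical materials + labor, $
CAPACITY_WAFERS_PER_YEAR = 1_200_000

def cost_per_wafer(utilization: float) -> float:
    # Fixed cost is spread over however many wafers actually ship;
    # the variable cost per wafer is constant.
    wafers = CAPACITY_WAFERS_PER_YEAR * utilization
    return (CAPEX_PER_YEAR + VARIABLE_COST_PER_WAFER * wafers) / wafers
```

With these numbers, a fully loaded fab ships wafers at roughly $2,167 each, while the same fab at 50% utilization pays about $3,833 per wafer, which is why nobody builds standby capacity.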

The semiconductor industry has all the maneuverability of a super tanker that takes many miles to slow down or speed up….you just can’t change capacity that easily.

There is no real fix to the capacity issue due to financials

To build capacity that could be brought on line in a crisis or time of high demand would require an “un-natural” act: spending billions to build a fab only to have it sit there unused, waiting for the capacity to be needed. This scenario is not going to happen… even the government isn’t dumb enough to spend billions on a “standby” factory that needs constant spending to keep up with Moore’s Law.

Its just not going to happen

Moving fabs “on shore” just reduces supply risk not demand risk

Rebuilding fabs in the US would be a good thing as it would mean fabs that are no longer an artillery shell away from a crazy northern neighbor or an hour boat ride away from a much bigger threat that still claims to own you.

That will certainly help reduce the supply side risk assuming we don’t build the new fabs on fault lines or flood zones. The demand side variability will still exist but could be managed better.

Restarting “Buggy Whip” manufacturing

The other key thing that most people do not realize is that most semiconductors used in cars, toys and even defense applications are made in very old fabs. All those older fabs that used to make 386 and 486 chips and 1 megabit memory parts have long ago been sold for scrap by the pound and shipped off to Asia (China) and are now making automotive and toaster oven chips.

Old fabs never die…they just keep making progressively lower-value parts. As mentioned in a prior note, you don’t make a 25-cent microcontroller for a car in a $7B, 5nm fab….the math simply doesn’t work.
This ability to keep squeezing value out of older fabs has worked because demand for trailing-edge parts had not exceeded capacity.
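The fab math above is easy to sketch out. The figures below are purely illustrative assumptions (fab cost from the article; depreciation period, wafer capacity and die-per-wafer counts are hypothetical round numbers), but they show why a 25-cent part can never pay for leading-edge capacity:

```python
# Back-of-envelope sketch: why cheap chips can't live in a $7B fab.
# All numbers except the $7B fab cost and $0.25 chip price are
# illustrative assumptions, not actual fab economics.

FAB_COST = 7e9              # leading-edge fab cost from the article, USD
DEPRECIATION_YEARS = 5      # assumed straight-line depreciation period
WAFERS_PER_MONTH = 30_000   # assumed wafer starts per month

# Spread the capital cost over every wafer the fab will ever produce
wafers_total = WAFERS_PER_MONTH * 12 * DEPRECIATION_YEARS
depreciation_per_wafer = FAB_COST / wafers_total

# A small automotive microcontroller: assume ~5,000 good die per wafer
DIE_PER_WAFER = 5_000
CHIP_PRICE = 0.25           # the 25-cent part from the article
revenue_per_wafer = DIE_PER_WAFER * CHIP_PRICE

print(f"depreciation per wafer: ${depreciation_per_wafer:,.0f}")
print(f"revenue per wafer at $0.25/chip: ${revenue_per_wafer:,.0f}")
# Revenue per wafer doesn't come close to covering depreciation alone,
# before a single dollar of operating cost. In a fully depreciated old
# fab, that capital charge is zero, which is why cheap chips live there.
```

Under these assumptions the depreciation charge alone is roughly $3,900 per wafer against about $1,250 of chip revenue, which is the whole argument in two numbers.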

For a typical chip company, the leading-edge fab makes the highest-value CPU, the next-oldest fab maybe makes a GPU, the one after that some I/O or comms chips, an older fab makes consumer chips, and the oldest fabs make chips for TV remotes.

In bleeding-edge fabs the equipment costs are the vast majority of the total, with labor a rounding error. In older fabs, with fully depreciated equipment, labor starts to become a factor, so many older fabs are better suited to being packed up and shipped off to a low-labor-cost country.

The biggest problem is that demand for older chip technology now appears to exceed the world’s supply of older capacity, as chips are in everything and IOT doesn’t need bleeding edge.

Equipment makers for the most part don’t make 6-inch (150mm) tools anymore, and only some still make their old 8-inch (200mm) tools. As we have previously mentioned, demand for 200mm now exceeds what it was at its peak.

Old Tools are being Hoarded

With makers no longer building 150mm tools, and 200mm demand above its former peak, owners of old equipment are holding on to it, and the used-tool market has tightened accordingly.

Summary
Fixing not only the shortage but the underlying risk will take a lot of time and a lot of money. The problem is systemic, dictated by financial math that has incentivized exactly what we currently have in place.

To change the behavior of anyone who runs a chip company and can do the math, we need to put in place financial incentives, legal decrees and legislative measures, pulling multiple levers to change the current dynamics of the industry.

Even with all the written motivation in place, it will still take years to physically implement the incentivized changes.

TSMC has been under enormous pressure for years to build a fab in the US. Now they are planning one in Arizona that is still years away, will be trailing technology when it comes online, and will be barely a rounding error in capacity…..all that from a multi-billion-dollar effort…..but it’s a start.

A real effort would likely be well north of $100B and 10 to 20 years in the making before the US could get back to where it stood in the semiconductor industry 20 years ago.

The Stocks
As the saying goes, buying semiconductor equipment company stocks is like buying a basket of the semiconductor industry. They can also be viewed as the “arms merchants” in an escalating war.

It doesn’t matter who wins or loses in the chip industry but building more chip factories is obviously good for the equipment makers, in general.

In the near term, foreign makers such as Tokyo Electron, ASM International, Nova Measuring and others may make for an interesting play.

There is plenty of time: we are sure that no matter what happens we will see zero impact from government-sponsored activities in 2021, and it will likely take a very long time for any of it to trickle down, so beware of “knee jerk” reactions that may drive the stocks near term.

Also Read:

“For Want of a Chip, the Auto Industry was Lost”

Will EUV take a Breather in 2021?

New Intel CEO Commits to Remaining an IDM