The old phrase that the cure is worse than the disease is apropos when discussing MBIST for large SOCs, where running many MBIST tests in parallel can exceed the capabilities of the power distribution network (PDN). Memory Built-In Self-Test (MBIST) usually runs automatically during power-on events. To speed up test and chip boot times, these tests are frequently run in parallel. The problem is that they can easily produce switching activity an order of magnitude above the levels found during regular chip operation. These higher switching activity levels can cause supply droop that affects test results, and the heat generated can damage chips. These effects can lead to incorrect binning or even direct and latent failures.
The solution is to simulate MBIST activity to predict the load on the PDN and the related thermal effects. With simulation results in hand, designers can correctly decide how many and which memory blocks can be tested in parallel. However, this is not always feasible in large SOCs with many memory blocks because the simulation times may be prohibitive. With gate-level simulation, or even with less accurate RTL simulation, it may not be possible to run enough cycles to get the information needed.
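Once per-block power estimates are available, the parallelism decision itself is a small packing problem: group memories into test sessions whose combined switching power stays under the PDN budget. A minimal sketch of that idea in Python (block names, power numbers and the budget are illustrative, not from the white paper):

```python
def schedule_mbist(blocks, budget):
    """Greedily pack memory blocks into parallel MBIST sessions.

    blocks: dict mapping block name -> estimated peak test power (W)
    budget: maximum switching power the PDN can sustain (W)
    Returns a list of sessions; blocks within a session run in parallel.
    """
    sessions = []
    # First-fit decreasing: place the hungriest blocks first
    for name, power in sorted(blocks.items(), key=lambda kv: -kv[1]):
        if power > budget:
            raise ValueError(f"{name} alone exceeds the PDN budget")
        for session in sessions:
            if session["power"] + power <= budget:
                session["power"] += power
                session["blocks"].append(name)
                break
        else:
            sessions.append({"power": power, "blocks": [name]})
    return sessions

# Illustrative per-block peak power, as a power analysis tool might report it
blocks = {"l2_bank0": 1.8, "l2_bank1": 1.8, "rom": 0.4,
          "dcache": 1.1, "icache": 1.0, "fifo": 0.3}
for i, s in enumerate(schedule_mbist(blocks, budget=3.0)):
    print(f"session {i}: {s['blocks']} ({s['power']:.1f} W)")
```

Real flows would also weigh thermal coupling between neighboring blocks, but the same budget-driven grouping applies.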
In a white paper titled “Analyzing the power implications of MBIST usage”, Siemens EDA looks at how designers can run sufficient simulation to make informed decisions on the testing strategy before tapeout. Siemens worked with ARM on one of their test chips to create a test case where they could apply hardware emulation with the DFT and Power apps for the Siemens hardware emulator Veloce. First, the Veloce DFT app is used to output the internal activity during MBIST emulation. The app uses the Standard Test Interface Language (STIL) and produces industry standard output files.
The Veloce Power app takes the activity information from the MBIST runs to generate waveforms, power profiles and heat maps that can indicate when there are power spikes above specified limits. With this information test engineers can make informed decisions about the sequencing of MBIST.
MBIST power emulation
The ARM test case described in the Siemens white paper contains 176 million gates. Siemens used a Veloce system with 6 Veloce Strato boards for this test case. The Veloce emulator run took only 26 hours, which is 15,600 times faster than gate level simulation. Another benefit of the Veloce flow is that the activity information is streamed by the Power app to the power tools in the flow, saving disk space and time. The results from the test case showed several power spikes that violated the SOC design specifications. The output from the Veloce Power app shows the total power levels through the simulation along with the separate power contributions for the clock, combinational logic and memory. Likewise, there is information on where on the die the power is being used. This information makes it easy to determine where there are problems.
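To put that speedup in perspective, the numbers above imply an equivalent gate-level simulation runtime measured in decades:

```python
emulation_hours = 26
speedup = 15_600                      # emulation vs. gate-level simulation
sim_hours = emulation_hours * speedup # implied gate-level runtime
sim_years = sim_hours / (24 * 365)
print(f"Equivalent gate-level simulation: {sim_hours:,} hours (~{sim_years:.0f} years)")
```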
Finding problems such as these requires running millions or billions of clock cycles. The limitations of software simulators make it prohibitive to perform the necessary analysis. Emulation offers a unique avenue to closely examine the power impacts of MBIST and other test operations long before silicon. The Siemens white paper offers insight into the power method used on a real test case. The white paper is available for download on the Siemens website.
Traceability as an emerging debate around hardware is gaining a lot of traction. As a reminder, traceability is the disciplined ability to trace from initial OEM requirements down through the value chain to implementation support and confirmed verification in software and hardware. Demand for traceability appears most commonly in safety-critical applications: automotive and rail, mil-aero, process industries such as petroleum, petrochemical and pharmaceutical, and safety-related controls for power plants and machinery. These are today’s applications. The more we push the envelope of IoT, Industry 4.0, smart cities and homes, the more cannot-fail products we will inevitably create.
Why bother?
Traceability requirements in our world started in software to ensure that what an OEM wanted was actually built and tested. Now that application-specific hardware plays a bigger role in many modern designs, effective traceability must look inside some aspects of hardware as much as it does in software.
But why is traceability support important? Is this a temporary business fad we’ll soon forget? Or is it an unavoidable secular shift? And what kind of investment will it require? Tools and servers, probably, but added engineering effort is a bigger concern. Will you need to add more staff and more time to schedules? Let’s start with potential market downsides to ignoring traceability.
Locking out regulated markets
It’s easy to understand any case where regulation requires traceability. For medical devices covered by ISO 14971, one source asserts, “Auditing capabilities are also critical for regulatory compliance traceability. The FDA has been known to close down a product or prevent it from being shipped, or even shut down a whole division until you are in compliance.”
Jama Software – who know a thing or two about traceability – add, “even if everything goes according to plan, there’s no guarantee that the traceability workflows in use will account for all relevant requirements and risks. For instance, in the case of a medical device, a matrix created within Excel won’t come with frameworks aligned with industry standards like ISO 14971, making it more difficult to ensure coordinated traceability and ensure successful proof of compliance.”
You shouldn’t assume this only applies to medical devices. OEMs are now working more closely with SoC builders and expect more detailed evidence that devices meet all requirements. Coverage reports won’t satisfy this need. They’d rather have a traceability path from their requirement to a point in the RTL and test plans where you can defend your implementation.
Locking out a geography
Problems don’t only arise in highly regulated industries. We all know of cases where features requested in one geographical region are not important in others. In theory, detailed specifications and lists of requirements capture all needs and variations between clients and geographies. In practice, some escapes slip through. Remember the call from a client, after the spec was finalized, saying they forgot one very important feature? The local apps guy takes careful notes and promises the spec will reflect this requirement. But it never appears in the official spec.
In one case I heard of, an Asia-Pac client had such a requirement – perhaps a control/status register extension they needed which no other geography had asked for. Seemed harmless enough. But the R&D team didn’t see that requirement in the specs. They built the chip without that feature, and the customer rejected the product. That client was going to be the reference account for the region but became the wrong kind of reference. They lost the whole geography for that product.
The point here is that specs are not quite structured enough to serve as a formal agreement on what you are going to build. Which is why software product builders now depend heavily on formal requirements documentation and traceability. Specs are a good way to elaborate on and explain requirements, but requirements are becoming the definitive statement of what will be built. Clients won’t find this to be a problem; they are often already very familiar with requirements traceability and related tools. You will just need to build the same awareness and discipline in your team, from the field to R&D.
Won’t that create a big overhead for you?
That depends. If you are going to use a general-purpose requirements tool with no understanding of hardware design, then probably yes, you will have to put quite a bit of work into traceability bookkeeping. It might be time to learn more about the Arteris® Harmony Trace platform, which links intimately to hardware design on one end and traceability standards on the other, with design-semantic know-how to greatly reduce the burden on engineering teams.
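Conceptually, the bookkeeping any traceability tool automates is a link graph from requirements to implementation and verification artifacts, plus queries over it: which requirements have no RTL behind them, and which are implemented but never verified? A toy sketch in Python (requirement IDs, file and test names are made up for illustration):

```python
# Hypothetical requirement IDs, RTL files and test names for illustration
req_to_rtl = {
    "REQ-101": ["uart_tx.sv"],
    "REQ-102": ["uart_rx.sv"],
    "REQ-103": [],            # the "forgotten feature" case: no implementation
}
req_to_tests = {
    "REQ-101": ["test_tx_basic"],
    "REQ-102": [],            # implemented but never verified
    "REQ-103": [],
}

def trace_gaps(rtl_links, test_links):
    """Return requirements with no implementation, and implemented ones
    that have no verification evidence."""
    unimplemented = {r for r, mods in rtl_links.items() if not mods}
    unverified = {r for r, tests in test_links.items()
                  if not tests and r not in unimplemented}
    return unimplemented, unverified

unimpl, unver = trace_gaps(req_to_rtl, req_to_tests)
print("No implementation:", sorted(unimpl))
print("No verification:", sorted(unver))
```

Dedicated platforms maintain these links against live design data instead of hand-edited tables, which is where the engineering-effort savings come from.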
As the oldest and largest EDA conference, the Design Automation Conference (DAC) brings the best minds together to present, discuss, showcase and debate the latest and greatest advances in EDA. It accomplishes this in the form of technical papers, talks, company booths, product pavilions and panel discussions.
A key aspect of driving advances in design automation is to discuss evolving EDA requirements, so the industry can develop the solutions as the market demands. At DAC 2021, Cadence sponsored an interesting panel session that gets to the heart of this. The session was titled “How System Companies are Re-shaping the Requirements for EDA,” with participants representing Arm, Intel, Google, AMD and Meta (Facebook). The discussion was organized and moderated by Frank Schirrmeister from Cadence Design Systems.
The following is a synthesis of the panel session on EDA requirements to support the upcoming era of electronics.
Frank Sets the Stage
Referencing a Wired magazine article, Frank highlights how data center workloads increased six-fold from 2010 to 2018. Internet traffic increased ten-fold and storage capacity rose 25x over that same time period. Yet data center power usage increased only 6% over the same period and we have semiconductor technology, design architectures and EDA to thank for it.
The electronics industry is entering an era of domain-specific architectures and languages, as predicted by John Hennessy and David Patterson back in 2018. The primary factors driving this move are hyperscale computing, high-performance edge processing and the proliferation of consumer devices. The next generation of hyperconnected, always-on consumer devices is expected to deliver user experiences never imaginable even a few years ago.
The Global DataSphere quantifies and analyzes the amount of data created, captured and replicated each year across the world. Endpoint data creation is estimated to grow at a CAGR of 85% from 2019 to 2025, reaching 175 zettabytes in 2025. That is roughly as many bytes as there are grains of sand on all the world’s beaches. That’s quite a bit of data to be processed and dealt with.
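Those two figures imply a 2019 baseline: compounding at 85% per year over six years multiplies volume roughly 40-fold. A quick sanity check of my own arithmetic on the quoted numbers:

```python
cagr = 0.85
years = 2025 - 2019             # six compounding periods
endpoint_2025_zb = 175          # zettabytes, per the quoted estimate
growth = (1 + cagr) ** years    # roughly 40x over the period
baseline_2019_zb = endpoint_2025_zb / growth
print(f"Implied 2019 endpoint creation: ~{baseline_2019_zb:.1f} ZB ({growth:.0f}x growth)")
```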
The companies on the panel are all involved in creating, analyzing, capturing and/or replicating this humongous amount of data. The discussion covered what they see as their requirements of the EDA industry.
Arm – Chris Bergey
From an infrastructure perspective, Arm’s involvement spans HPC, the data center, 5G and edge gateways. Specialty computing is a big focus area now. System validation is key when customers are committing to large R&D expenses. When dealing with chiplet architectures leveraging 2D/2.5D/3D implementations, things are relatively easy when all the dies and design rules are owned by a single company.
For heterogeneous implementations, multi-chip packaging is generally used in markets where the margins are high enough to accommodate the extra design effort, yield fallout and margin stacking. In reality, hybrid chiplet implementations will help the market grow faster. The EDA industry is expected to play a big role in making heterogeneous chiplet implementation easier and more robust.
Intel – Rebecca Lipon
High Bandwidth Memory (HBM) and high-speed servers drove development of critical IP that opened the floodgates for a whole bunch of new applications and products. The industry has to maintain its determination to continue on similar journeys and try to push the envelope. For example, IP level innovation at the packaging level.
The Open Compute Project (OCP) is a foundation started by Facebook (now Meta) a decade ago. Many companies, including all of those represented on the panel today, are members. It works on initiatives that allow you to use open firmware and software, which speeds up development and extends the life of products.
One of the initiatives OCP is focused on is composable computing and supporting domain-specific architectures. The EDA industry should look into this and look to Linux as a model for an open-source community.
Google – Amir Salek
The number of categories of workflows that run in our global data centers is in the thousands. Google Cloud adds a whole new dimension to the demand for serving and processing data, supporting different workloads. Each workload has its own characteristics, and while many of them can run on general-purpose hardware, many more need customized hardware.
Testing and reliability are primary areas of concern. I think this plays a major role in understanding the causes of marginality and deciding how to deal with them. Looking at TPU pods, we’re talking thousands and thousands of chips stitched together to work in coordination as a supercomputer. So any little reliability issue during testing, or any test escape, basically gets magnified. And then after many days, you find out that the whole effort was basically useless and you have to repeat the job again.
FPGA prototyping is a tremendous platform for testing and validation. We are doubling down on emulation and prototyping every year to make sure that we close the gap between the hardware and the software.
AMD – Alex Starr
From the data center all the way to the consumer, the whole software stack needs to run on whatever the solution is. Many of our designs are implemented using chiplet architectures, and that brings up different types of complexity to deal with. The thing that keeps me up at night is how to verify and validate these complex systems and get to market quickly.
The hardware emulator and FPGA prototyping systems market is booming and is probably the highest-growth area within EDA. Today’s emulators can fit very large designs and help prototype bigger devices. The hardware acceleration platforms that can hold large designs are tremendously expensive and difficult to get working at that scale. And as designs grow to five-plus billion gates, emulators are not going to scale; emulation as used for prototyping is at its limit. We are looking at hybrid modeling-based approaches. We are refining these internally and in collaboration with external standards bodies. We really want to extend out to our OEM customers and their ecosystems as well.
Meta (Facebook) – Drew Wingard
We are working on chips to enable our vision for the Metaverse. Metaverse involves socially acceptable all-day wearables such as augmented reality glasses. This new computing platform puts an enormous amount of processing resources right on one’s face. The result is that it demands very tight form factors, low power usage and very minimal heat dissipation.
We need to put different parts of processing in software and hardware. We need to think a lot about the tradeoffs between latencies vs throughputs and cost of computation vs cost of communication. We need a mix of options around different classes of heterogeneous processing and a whole lot of support around modeling. And we have to balance the desire for optimizing requirements versus offering optionality because nobody knows what the killer app is going to be.
As a consumer firm, privacy is incredibly important as it relates to our products’ usage. Our products should be socially acceptable for the person wearing them as well as for the people around them.
When we roll all the above together, availability of system models and design cycle times become incredibly important. Many challenges revolve around availability of models and interoperability between models. This is where continuing to closely work with the EDA industry opens up opportunities.
To meet the increased demand for converter speed and resolution, JEDEC proposed the JESD204 standard, describing a new, efficient serial interface for data converters. Introduced in 2006, the JESD204 standard offered support for multiple data converters over a single lane; revisions A, B and C successively added features such as support for multiple lanes, deterministic latency, and error detection and correction, while constantly increasing lane rates. The JESD204D revision is currently in the works and aims to once more increase the lane rate, to 112 Gbps, with a change of lane encoding and a switch of the error correction scheme to Reed-Solomon. Most of today’s high-speed converters make use of the JESD standard, and the applications include, but are not limited to, wireless, telecom, aerospace, military, imaging and medical: in essence, anywhere a high-speed converter can be used.
The JESD204 standard is dedicated to the transmission of converter samples over serial interfaces. Its framing allows for mapping M converters of S samples each, with a resolution of N bits, onto L lanes with F-octet frames that, in succession, form larger multiframes or extended multiblocks described by the K or E parameters. These frames allow samples to be placed in high-density (HD) or low-density layouts, and allow each sample to be accompanied by CS control bits within a sample container of N′ bits, or at the end of a frame (CF). These symbols, describing the sample data and frame formatting, paired with the mapping rules dictated by the standard, give both parties engaging in the transmission a shared understanding of how the transmitted data should be mapped and interpreted.
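The parameters above are tied together arithmetically: each frame carries M × S sample containers of N′ bits spread over L lanes, so F = (M × S × N′) / (8 × L) octets per lane per frame, and the serial lane rate follows from the sample rate and the encoding overhead. A sketch in Python, using an illustrative configuration rather than any specific device:

```python
def jesd204_link(M, S, N_prime, L, sample_rate_hz, encoding=(10, 8)):
    """Compute octets per frame per lane (F) and the serial lane rate.

    M: converters, S: samples per converter per frame,
    N_prime: bits per sample container, L: lanes,
    encoding: (line bits, payload bits), e.g. (10, 8) for 8b10b,
              (66, 64) for 64b66b as used in JESD204C.
    """
    bits_per_frame_per_lane = M * S * N_prime / L
    F = bits_per_frame_per_lane / 8        # octets per frame per lane
    frame_rate = sample_rate_hz / S        # one frame per S samples
    line_bits, payload_bits = encoding
    lane_rate = bits_per_frame_per_lane * frame_rate * line_bits / payload_bits
    return F, lane_rate

# Illustrative config: 4 converters, 16-bit containers, 250 MSPS,
# 2 lanes, 8b10b encoding
F, lane_rate = jesd204_link(M=4, S=1, N_prime=16, L=2, sample_rate_hz=250e6)
print(f"F = {F:.0f} octets/frame, lane rate = {lane_rate / 1e9:.2f} Gbps")
```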
The 8b10b encoding scheme of JESD204, JESD204A and JESD204B, paired with Decision Feedback Equalizers (DFEs), may not work efficiently above 12.5 Gbps because it may not offer adequate spectral richness. For this reason, and for better relative power efficiency, 64b66b encoding was introduced in JESD204C, targeting applications up to 32 Gbps. JESD204D follows in its footsteps with even higher line rates, planned up to 112 Gbps using PAM4 PHYs, and demands a new encoding to efficiently encapsulate the 10-bit symbol-oriented mapping of Reed-Solomon Forward Error Correction (RS-FEC).
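The efficiency argument is easy to quantify: 8b10b spends 20% of the line rate on coding overhead while 64b66b spends only about 3%, so at the same line rate a 64b66b link delivers substantially more payload. A quick comparison (the line rate is illustrative):

```python
# (payload bits, line bits) for each encoding scheme
encodings = {"8b10b (JESD204B)": (8, 10), "64b66b (JESD204C)": (64, 66)}
line_rate_gbps = 12.5   # illustrative line rate near the JESD204B ceiling

for name, (payload, line) in encodings.items():
    efficiency = payload / line
    payload_gbps = line_rate_gbps * efficiency
    print(f"{name}: {efficiency:.1%} efficient, "
          f"{payload_gbps:.2f} Gbps payload at {line_rate_gbps} Gbps line rate")
```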
Deterministic latency introduced in JESD204B allows for the system to maintain constant system latency throughout reset, and power up cycles, as well as re-initialization events. This is accomplished in most cases by providing a system reference signal (SYSREF) that establishes a common timing reference between the Transmitter and Receiver and allows the system to compensate for any latency variability or uncertainty.
The main traps and pitfalls of system design around the JESD204 standard involve system clocking in subclass 1, where deterministic latency is achieved with the use of SYSREF, as well as SYSREF generation and utilization under different system conditions. Choosing the right frame format and SYSREF type to match system clock stability and link latency can also prove challenging.
Comcores is a key supplier of digital IP cores and design services for digital subsystems, with a focus on Ethernet solutions, wireless fronthaul and O-RAN, and chip-to-chip interfaces. Comcores’ mission is to provide best-in-class, state-of-the-art, quality components and design services to ASIC, FPGA and system vendors, and thereby drastically reduce their product cost, risk and time to market. Our long-term background in building communication protocols, ASIC development, wireless networks and digital radio systems has provided a solid foundation for understanding the complex requirements of modern communication tasks. This know-how is used to define and build state-of-the-art, high-quality products used in communication networks.
Soitec is a semiconductor materials company known for its smart cut and Silicon on Insulator (SOI) technologies, which are critical in 5G, Silicon Photonics, and Silicon Carbide (EV) end-markets.
Yesterday, they announced that current CEO Paul Boudre will retire and be replaced by Pierre Barnabé in July of 2022. Barnabé is currently an SVP at Atos, a French IT consulting company. This seemingly routine CEO transition sank company shares by over 15%. It’s not what it seems.
First, the entire existing management team is opposed. The management team sent a letter to the board that does not mince words. Translated to English, it reads:
Soitec’s Management Committee deplores the takeover of Soitec by the Chairman of the Board of Directors for 3 years, which culminates today with the incomprehensible appointment of a new CEO.
What exactly is happening at Soitec? Well first let’s just start with the current story of the now outgoing CEO, Paul Boudre.
The Story of Soitec (and Paul Boudre)
Soitec, like most firms, was no overnight success. This timeline of the company’s history is the best place to start. Soitec originally pursued other markets but failed to gain traction. In 2015, Soitec shuttered its solar business as part of a hard pivot away from its existing failures.
The situation was dire. The company had more debt than the value of its market capitalization.
For context, Soitec today is a ~€5.8 billion company.
In January of 2015, with shares trading at a measly $200 million market capitalization, Paul Boudre was appointed CEO. Paul had initially joined the company from KLA as Executive VP of Sales before transitioning to COO in 2008. Drastic changes were needed.
Soitec’s rather bleak 2015
In the midst of significant layoffs in the solar division and a hefty financing package to keep the company afloat, Paul focused the company on SOI and made it what it is today.
The details of the turnaround are unimportant, but the results today speak for themselves: Soitec now boasts a ~$5.8B market capitalization (~29x higher than the start of his tenure) and has 4x+ annual revenue. This is a legendary turnaround that will surely be sung of in Semiconductor Valhalla.
At the age of 63, Paul is approaching retirement. Given such a track record, one imagines a thoughtful process involving existing management, not the overnight skullduggery that actually occurred. Further, the new CEO candidate has no experience in the semiconductor field. Why did the board choose newcomer Pierre Barnabé over other qualified internal candidates?
To answer this, we need to learn about the Chairman of the Board.
Enter Eric Meurice (and Pierre)
Eric Meurice is best known as the former CEO of ASML. In 2013, Eric was appointed Chairman of the board of ASML after 8 years as CEO. Note the exact language of the Chairman announcement.
Eric Meurice will be Chairman of ASML Holding and act as adviser to the new leadership and the Supervisory Board until the end of his contract on 31 March 2014, ensuring a smooth and comprehensive transition of critical tasks and processes, customer contacts and relations with strategic suppliers.
The current CEO gets the job during Eric Meurice’s contract and Mr. Meurice gets bumped to Chairman. It’s pretty customary for the outgoing CEO to spend time as Chairman; it’s less customary for the contract not to be renewed. Even accounting for European CEOs’ shorter tenures, this is atypical.
As the former CEO of ASML, Eric Meurice is now an ideal candidate for board memberships. Here’s a list of the few boards he’s been a part of.
Let’s home in on Eric’s stint at Soitec. Eric Meurice joined the board of Soitec in 2018 as the chair of the Nomination Committee. The committee was tasked with nominating a new Chairman, so who else does Eric Meurice nominate but himself?
Eric, overqualified as he is, gets the job. But that’s not enough. In 2019, he picks up two more important roles as the Chair of the Strategic Committee and Chair of the Compensation Committee. He now holds both the keys to the kingdom and to its treasury.
Joining a company, becoming the Chairman of the Board, and then taking on additional roles as the head of the Compensation and Strategic Committees fit into the typical model of a high-powered executive. That’s standard.
How does the French Government fit into Eric’s rise to power?
Eric Meurice’s Extraordinary Actions
Let’s look at the specific actions Eric Meurice took that prompted significant backlash from the executive committee. In the executive committee letter, they listed out a few specific complaints that were (importantly) falsifiable. Their (translated) list of grievances is as follows:
Takeover of the interim compensation committee that has become definitive, creating an omnipresence in all committees and at the head of several committees
Interference in social dialogue without consultation with management.
Double-talk regarding management’s opposition to the implementation of the PAT (Action Plan for All) in 2021.
Establishment of internal regulations granting exceptionally extensive powers to the Chairman of the Board of Directors and establishing the keys to his takeover.
Alteration of evidence in the context of the investigation of a governance drift.
Intimidation, vexatious practices towards members of the Executive Committee.
The board granted itself extensive power through a list of resolutions added to the typical bylaws of the company at the so-called “extraordinary shareholders general meeting”. It’s rare to see an extraordinary resolution, so it’s outright mind-boggling to see 35 resolutions. This is one of the broadest power grabs I’ve ever seen. Let’s consider a few of the resolutions.
There are a total of 35 resolutions, each giving the board more power than a typical board would have.
The resolutions are technical, but the gist is that the board now has a whole set of new powers that are usually reserved for the CFO. They can issue shares, buy back shares, and decide who gets shares, and all of these powers are granted directly to the board. This concentrates extraordinary power in the board and the people on it.
This explains why they have had 3 CFOs since the extraordinary resolutions began. Remy Pierre was replaced in September 2019 by Sébastien Rouge, who was replaced one year later by Lea Alzingre. That’s high turnover for the job.
Maybe these resolutions could be kosher, but the reason they are not is that all the extraordinary resolutions are new for Soitec. Take 2017, for example, when there were only 6 plain resolutions. Something has changed.
6 routine resolutions in 2017.
But besides the mundane extraordinary powers of resolutions given to the board, there’s more at play. I now want to focus on the executive committee complaint around the compensation board because this is where the other players (France) start to enter the scene.
Board Power Politics
Let’s talk about the composition of the board. The board power politics make sense when you see who sits on which board committees. There are 5 board committees at Soitec, and the Restricted Strategic meeting is an ad-hoc group for acquisitions or other events. So there are really 4 standing committees and 1 ad-hoc committee: the Strategic Committee, the Audit Committee, the Nomination Committee, and the Compensation Committee.
These committees are filled by 14 members, many of whom are supposed to be independent. This breaks down on further examination, as many of the “independents” are clearly affiliated with the French Government. Now let’s start with the chairs of the five committees.
Eric Meurice is the Chairman of the Board, Chair of the Compensation Committee (most powerful committee), and the Chair of the Strategic Committee.
Laurence Delpy is Chair of the Nomination Committee, the chair that Eric Meurice previously held before he became Chairman of the Board. She’s independent, yet she sits on all 5 committees.
Lastly, Christophe Gegout is chair of the Audit and Risk Committee. He’s supposed to be an independent member, but he used to work for CEA, aka the large French consortium with a meaningful stake in Soitec. He doesn’t work there anymore, but they’re clearly playing fast and loose with the definition of the word “independent”.
Let’s look at the actual composition of the committees and identify 1) who is in what committee and 2) which committees matter. I made a simple graphic based on the filing with a legend that explains where everyone’s allegiances lie. In order of importance, it’s the Compensation, the Nomination, the Strategic & Restricted Strategic, and lastly the Audit committee. I broadly categorized the board members into the 4 “teams”, aka Team France, Team China, Independent, and Employee Directors. Note the legend in the picture below.
Team Blue / France is the one to watch.
Let’s start backward. The green team is employee directors and is part of a movement to get labor union leaders on the board so employees have more say in the company writ large. For the sake of this analysis, I consider them non-players, as this is their first year on the board, and they don’t sit on committees.
Next is the team that is actually independent. Note that Satoshi Onishi works for Shin-Etsu, so he’s independent in the sense that he represents his company’s JV with Soitec. He is not a big player. In Shuo Zhang’s case, I cannot make any meaningful connections to anyone else. Paul Boudre of course is the outgoing CEO. Notice that he does not sit on any important committees.
That brings me to Team China. Team China is the NSIG (National Silicon Industry Group) block, which bought a 14.5% stake in Soitec in May 2016, since diluted to 10.34%. NSIG, like many large Chinese firms, is an extension of the CCP. They hold the two seats that I show in red. Kai Seikku actually sits on the powerful committees, aka the Nomination and Compensation Committees. But importantly, Jeffrey Wang has been pushed out to the unimportant Audit Committee.
Last is Team France. With the exception of Francoise Chombar, they are all French Nationals. I put Francoise Chombar on Team France because she shares a board with Eric Meurice at Umicore, so I assume she’s on his team.
Everyone else either works for CEA (French Alternative Energies and Atomic Energy Commission) or Bpifrance (the French public investment bank), formerly worked there, or looks suspiciously connected (Laurence Delpy is clearly important, but I have no links other than that she worked at Alcatel-Lucent, where Pierre is from). Thierry Sommelet, for example, works at Bpifrance.
Importantly, Team France accounts for 6 of the 8 members of both of the most powerful committees, Nomination and Compensation. And everyone not affiliated with Team France conveniently sits outside these committees, with the exception of Kai Seikku, who represents the powerful ~10.34% share block from NSIG. Team France is clearly in control here, and the BPI and CEA seats are permanent, with rotating members but consistent committees.
What I’m trying to say is that Soitec’s board is controlled by a very small number of players, all of whom can be linked to France. Obviously, these members want to protect the interests of France, and what’s more, most of the moves taken by the board pre-date Eric’s arrival. So it’s clearly not Eric in charge, but rather the representatives of France who are driving this bus!
What’s France Got to Do With It?
When I first started down the rabbit hole of the Eric Meurice takeover I thought the motivation was pretty simple. Ousted CEO Eric Meurice was looking for another kingdom to rule and found it in the form of Soitec. A clear power play, as highlighted by the management letter. After all, we knew he already had ambitions to sit in the CEO seat again, per this ST Micro rumor. In a series of successive moves, he rose from Director to Chairman, and he pushed his way to the top.
There are a few problems with that theory, and they come in two bold flavors. First, the year Eric Meurice was appointed to the board as a director (July 2018) was the first year of expanded extraordinary resolutions. The year before, the total jumped from 8 resolutions to a whopping 23. So the expanded board powers actually pre-date Eric’s arrival on the board. Eric Meurice was just the conduit for the control of the board.
The second thing was an extremely crucial disclosure about a standstill agreement with NSIG that clued me in that something else was happening. When NSIG (National Silicon Industry Group, aka China) bought the 14.5% stake in Soitec, they agreed to a standstill agreement on the shares.
The standstill agreement prevented NSIG (China) from continuing to raise its stake in Soitec and effectively taking over the company. It’s an anti-takeover provision that kept their already-large influence from increasing. The French board members were clearly aware of this.
But that standstill agreement ended on June 7, 2019, roughly a year after Eric’s rise to power. This explains why he entered when he did. And this tidbit made everything become clear(er) regarding the resolutions.
Should NSIG Sunrise S.à.r.l acquire shares in the Company before the expiration of the Shareholders’ Agreement at the close of the Shareholders’ General Meeting called to approve the financial statements for the fiscal year ended March 31, 2021, it would lose its rights relating to the Company’s governance
Team France knew they needed to lock down the company from a governance standpoint before March 31, 2021, or risk further influence from NSIG aka China. And Eric Meurice was the perfect man for the job. Win-win.
Besides, Paul Boudre was ready to go. He terminated a paused employment contract so that the company wouldn’t have to pay him a termination fee, and they rewarded him with an incentive (I find this weird). He clearly signaled he was on the way out. Readers of the filings could have spotted his retirement as early as 2020; it was just the executive team that was blindsided.
But that brings us to what really happened. France nationalized Soitec through a series of board moves, just in time before the Chinese government could push for more control. The new CEO is just a placeholder for the board.
National Champions Need Nationalization
How are we surprised that France wanted to nationalize Soitec?! This is France we are talking about here! It’s clear that there’s a vested interest in keeping Soitec French-controlled. The expanded board powers also expanded government influence over Soitec. Soitec has effectively become a state-controlled enterprise.
Look at the shareholder base. The ~17.67% French-controlled block was large, but not completely dominant given the large NSIG (aka China) block. A Chinese takeover of France’s semiconductor star would be devastating (not to mention embarrassing). Team France could not let this happen.
Taken together, the board actions suggest this was likely premeditated by the controlling shareholder – the state of France – to further its control of Soitec. Paul’s retirement was just the catalyst to push forward changes that had already been set in motion. So what about our underqualified CEO candidate, Pierre Barnabe?
The thing I found really curious is that Pierre is on the board of INRIA, the Institute for Research in Computer Science and Automation. It’s a government-led entity whose core purpose is to further France’s technology interests. Through the French national control lens, Pierre is a perfect CEO.
What Now?
So now France effectively owns Soitec. What are they going to do with it? Paul left the company on a strong financial note and Soitec is a springboard for other national pursuits, like manufacturing Silicon Carbide. Manufacturing Silicon Carbide would be terrible for Soitec the business, but great for France’s national ambitions.
Another likely option is that the board can use its expanded powers to purchase a fab in Europe, which would be perfect given Soitec produces wafers. Now France produces wafers and chips! There is a small design team within Soitec, and expanding that could also further French national interests. That’s a full stack semiconductor company with just 2 additional acquisitions.
All of this makes sense in the now geopolitically driven world of semiconductors. As of late, there have been multiple announcements for new mega-fabs, like the Intel Ohio fab or the new TSMC Japan fab. Every country is doing what it can to shore up its semiconductor businesses, and France couldn’t let China steal their national champion.
Conclusions and Questions
Soitec has effectively been nationalized through board control. It makes sense: China had a window to push for control, so instead France just took the whole thing. France, Soitec’s largest shareholder, put its people in positions of power to achieve this. What looked like a power-hungry move by a single actor, Eric Meurice, was really a coordinated win-win to control the company for France.
While I’m sure Soitec’s nationalization will be disappointing for free-market capitalists (where they at?), it’s not surprising at all given the current semiconductor climate. It’s a de facto state-controlled company now. The takeaway: national-level politics continue to matter in the semiconductor industry. This theme is not going to go away any time soon. Soitec is just the latest and greatest in the series. Goodbye Soitec, hello French National Semiconductor Company.
There are some loose ends. How does NSIG feel about this? Did Paul Boudre know about any of this? There are a lot of other interesting threads in this entire story, but it’s clear what happened. France nationalized their largest semiconductor company!
If you enjoyed this piece, please consider subscribing. Even the free tier gets occasional posts. I try to write about semiconductor companies broadly from an investment perspective, so this investigative journalism is a bit different.
Oh by the way if you are either a past ASML employee during Eric’s time at ASML or a current Soitec employee who would love to talk, you can reach me at Doug@fabricatedknowledge.com.
Some Unfinished Business
I wanted to discuss parts of the story that didn’t flow well with the rest of the power politics at Soitec. The question I presume most people would be wondering is about Pierre Barnabe. Why is he not qualified again?
I really had a problem with this statement from Soitec management.
He draws upon a remarkable track record that includes a threefold revenue increase at the Atos Big Data and Cybersecurity division in the space of a few years, in a highly competitive market requiring deep cooperation with the ecosystem.
Part of that threefold growth was 38 acquisitions along the way. Is that execution, or is that just buying three times more revenue? And the stock and business are horrid: Atos is down 63% over the last 5 years, and revenue growth is tepid at best.
Oh, and STMicroelectronics has a deep bench of French national semiconductor executives who would have made a perfect fit. Given Pierre’s in with the government, he definitely did not get the job on the basis of merit.
What about Paul Boudre?
One of the weirdest side stories is Paul Boudre’s exit. By voluntarily terminating his contract and then getting paid to do so, I think Paul had wind of the changes but didn’t care, as he knew he was on the way out. And even if Paul did want to change things, he was locked out of the committees that matter (compensation and nomination). He wouldn’t have been privy to what happened anyway.
I think the biggest blindside is to the executive committee. I get why their reaction is so strong, but their outrage over Paul’s replacement and the lack of internal promotion misses the bigger national-interest story in what they thought would be a routine succession. The COO likely expected promotion, and other executives in turn would have moved up to COO. I feel bad for them, as it’s frustrating to be shut out of a company they clearly helped build.
Appendix: A bit more on SOI
Silicon on Insulator (SOI) is a technology that embeds an insulator below the surface of the silicon. Usually this is done via ion implantation, a “smart cut,” and a flip of the substrate so that the buried insulator sits below a device layer. This meaningfully improves overall performance.
Soitec is the only volume manufacturer in the world, and their competitors license their technology. Soitec believes that their market share in SOI wafers is ~77% globally.
For Tesla, 2021 was an amazing year. A blind spot looms in 2022.
Critics cheered the National Highway Traffic Safety Administration for opening multiple investigations into fatal and near fatal Tesla crashes. Legislators decried the de facto beta testing of Tesla’s Full Self-Driving beta on public roads. And in December, click-bait headlines shined a spotlight on a distracting in-dash video game function – which Tesla had enabled and later disabled – ending the controversy.
While the critics howled, fans flocked. Tesla closed out the year by reporting 936,000 units sold globally and garnering a top safety pick from the Insurance Institute for Highway Safety for the Model Y, following up a similar assessment for the Model 3. For now, for Tesla, it looks like nothing but green lights – but there is trouble ahead.
Tesla’s CEO, Elon Musk, is famous for pooh-poohing technologies that he holds in ill regard – among them hydrogen fuel, lidar sensors, and wireless V2X communications. His dim assessment of lidar stands in stark contrast to an industry-wide embrace of the technology, which might have helped Tesla vehicles avoid crashing into police cars and emergency vehicles parked on the shoulders of highways.
Musk claims that lidar is unnecessary. What he is really saying is that lidar is expensive and he is trying to control the cost of his vehicles. For Musk, cameras and perhaps a little bit of radar are good enough.
In the same way, Musk has routinely dismissed vehicle-to-everything wireless communication technology enabled by cellular V2X (C-V2X) or dedicated short range communication (DSRC). Musk’s position is that autonomous or semi-autonomous vehicles must work without a network.
Musk’s opposition to wireless-enhanced autonomous operation puts him and his company outside of a vast industry collaboration working toward the leveraging of wireless technology to enhance vehicle safety. The onset of 5G, which brings with it V2X functionality, promises to enable a wide range of collision avoidance applications including the protection of vulnerable road users and the communication of the signal phase and timing of traffic lights.
These capabilities are arriving today in cars in China and, soon, in the U.S. as C-V2X technology sees swift adoption from car makers including Ford Motor Company and Audi, among others. These car companies understand that C-V2X will allow their cars to communicate their location and avoid collisions.
Meanwhile, manufacturers of infrastructure equipment are increasingly shifting to C-V2X tech to enable infrastructure-to-vehicle communications. Here, too, the objective is safety and collision avoidance.
Musk is resisting and Tesla is steadily diverging from the rest of the industry. While Tesla has led the way toward the widespread adoption of wireless-based over-the-air software updates, the company has neglected using wireless technology for safety purposes.
Tesla still lacks an automatic crash notification function (outside of Europe) – equivalent to General Motors’ OnStar. And Tesla’s traffic light recognition application, part of the Full Self-Driving beta, is entirely dependent upon vehicle-mounted cameras and the vigilance of the driver for safe operation.
Musk’s camera-centric approach (enhanced with ultrasonic and radar sensors), which has helped propel the company to the forefront of semi-autonomous vehicle development, has clearly reached the limits of its efficacy. We’ve all seen the videos of Teslas mistaking Burger King signs or even the moon for a stop sign. The multiple crashes involving fixed objects in the roadway speak volumes – to consumers and regulators.
The market leader in camera technology, Intel’s Mobileye, has dropped the camera-only pretense and is developing its own lidar while cooperating, in the short term, with lidar supplier Luminar. That’s a multimillion-dollar shift by Mobileye. Tesla has made amazing strides with camera technology but has now clearly reached the end of the road.
To keep up with developments coming fast throughout the rest of the automotive industry, Tesla needs to embrace the integration of wireless technology into safety applications for collision avoidance and emergency response. What Tesla is missing by failing to leverage wireless is the ability to extend the safety sensing horizon of the car beyond the line of sight of its cameras – including over hills and around corners.
Without wireless, Tesla will also remain blind to emerging wireless alerting solutions for everything from the movement of emergency vehicles (Haas Alert Safety Cloud) to vulnerable road users, wrong-way drivers, and, most notable of all, the presence and signal phase and timing of traffic lights. Tesla has proven that it cannot solve these challenges with cameras alone.
There isn’t much that Tesla is getting wrong – from fast charging to direct sales to software updates to battery gigafactories. Wireless connectivity, for Tesla, remains a weakness.
Too much demand - A “good” problem - Managing supply & capacity - Intel & High NA
- ASML great Q4 results - Demand off the charts - Supply constrained
- Dealing with supply chain issues, putting out fires, expediting
- Looking forward to next-gen High NA in 2024/2025
- Intel’s order doesn’t give an advantage, just joining a long line
Great Q4 results
ASML announced Q4 results of Euro5B in revenues and Euro4.39 per share in earnings. This was $5.656B of revenue and EPS of $4.98 (in dollars) versus street estimates of $5.85B and $4.25 per share, so a strong beat on EPS and a slight miss on revenues.
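As a quick sanity check (my own arithmetic, not ASML's disclosure), the implied euro-to-dollar rate can be backed out of the revenue figures and applied to EPS:

```python
# Back out the EUR/USD rate implied by the reported revenue figures, then
# convert euro EPS at the same rate (illustrative sanity check only).
rev_eur, rev_usd = 5.0, 5.656   # billions, as reported above
eps_eur = 4.39

fx = rev_usd / rev_eur          # implied EUR/USD rate
eps_usd = eps_eur * fx          # euro EPS converted at that rate

print(round(fx, 3), round(eps_usd, 2))  # ~1.131 and ~4.97, within rounding of the reported $4.98
```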
Most importantly, orders for systems came in at Euro7.1B for the quarter, including one High NA tool. Demand and business were strong across both EUV and DUV tools. 73% was for logic and 27% for memory. Taiwan was far and away number one at 51% of ASML’s business, followed by South Korea at 27% and China at 22%, which leaves the US at zero/doughnut hole/bupkis%.
For the year Taiwan was 44%, South Korea 35%, China 16% and USA at 5%. This is obviously very reflective of our most recent article about the dominance of Taiwan/TSMC and how far behind the US is. And 2021’s numbers are prior to the huge jump in TSMC spend recently announced.
Can’t keep up with demand
ASML is the one and only game in town in defining the leading edge of semiconductor technology. There simply is no alternative, they have a monopoly.
Demand is off the charts and every semiconductor company that cares is ordering as quickly as possible. The geographic mix of business is reflective of the leadership in the semiconductor industry.
Despite supply chain issues in general and a fire at their stage manufacturing facility in Berlin, they have still managed to do a great job on shipments. The fire in Berlin seems to have had little impact, as ASML was likely able to move inventory and spares around to make up for the loss.
It’s also clear that demand remains strong for DUV systems for second-tier applications and memory applications. While ASML has done a great job at maintaining the pace, it is much harder to increase the pace, as many subsystems are just very constrained.
Perhaps the biggest constraint is in lens manufacture as Zeiss in Germany doesn’t want to go as fast as demand would otherwise take them, likely for fear of the cyclical nature of the industry.
All of this is quite good for ASML’s gross margins as price is a secondary concern as compared to just getting a tool.
In our view, this limitation of capacity is not at all bad and may help not only pricing over the longer run but also reduce cyclicality as production simply can’t be ramped up as fast as the industry demands thus limiting the inherent volatility.
Expediting may add to confusion
ASML is being asked by customers who are so desperate for tools to short circuit the normal final assembly and test in the Netherlands and instead ship the systems directly to customers for final assembly and test.
This is akin to “don’t bother test driving my new car just deliver the pieces to my driveway and we’ll take it from there”. This adds to confusion on the financials as revenues that are usually counted on shipment now have to wait until final test at the customer site.
Investors will have to learn to focus on shipments rather than revenue and we may see numbers swing back and forth between quarters. At least Euro2B is expected to be delayed in recognition initially.
High NA starting to come into focus
Next-generation High NA tools are coming into view. ASML said they have orders for four model 5000 High NA R&D tools, and Intel just placed an order for a model 5200 “production” tool. Hopefully we start to see the first of these in 2024 and 2025.
I think we can safely assume that of the four tools on order, TSMC gets one, Samsung gets one and Intel gets one. Maybe someone else gets the fourth tool or someone gets two. If the pattern follows history maybe TSMC ordered two.
Intel claiming High NA EUV “advantage” is simply nonsense
Right now it’s unclear who will get the first of the four High NA EUV R&D tools on order. It could be TSMC, Samsung, or even Intel. The only thing we know is that Intel placed an order for the first “production” tool, the 5200. So who is really first to High NA is a question of semantics. More importantly, the first “production” tool doesn’t arrive until at least 2025.
As we saw with the original rollout of EUV, it took more than two iterations to get to real production. More importantly, right now, TSMC has roughly 10 times the number of “real,” “productive” EUV tools. This also means that TSMC likely has 10 times the staff trained on complex EUV and ten times the experience (likely even more).
Most importantly, TSMC has likely more than 10X the capacity and experience in building EUV masks which are the “hyper critical” negatives from which the chips are printed by ASML tools.
Given the combination of TSMC’s current EUV tool count lead, the recently announced hugely expanded TSMC capex budget, and the constrained capacity of ASML, it is physically/mathematically not possible for Intel to even come close to catching TSMC.
Even though Intel was the first to order a “production” version, it’s totally unclear if they will get it first, let alone get it any meaningful time ahead of TSMC. This isn’t even until 2025 at best! This means that Intel ordering a 5200 is more of a PR stunt, one which also helps ASML create competitive tension between already desperate customers.
The Stocks
ASML already trades at a very large premium as it is one of the very few successful European large cap tech stocks. It trades at a premium to US semiconductor equipment companies. Stocks have been faltering already and ASML’s announcement was into a soft trading day with the group down.
ASML is down less than the group, so the results were taken positively, as they should be, but they were not so overwhelmingly positive that the stock could break through the general weakness.
Semiconductor stocks in general have been facing more resistance and good news is not driving them which is a reflection of the overall market sentiment. At this point we are less inclined to buy into weakness even though we still like the name. Momentum is a bit negative and we are in a critical earnings period. We think other semiconductor stocks will be focused on execution and supply chain issues with less focus on the positive aspect of huge unprecedented demand and record results.
Daniel Nenni is joined by popular podcast guest Wally Rhines. Dan and Wally explore 2021, some of the expected results and some of the surprises. COVID, supply chains, chip shortages and international trade are just a few of the topics.
Wally and Dan then turn their attention to 2022. What kind of year will it be? What will be the drivers, the successes and the surprises? This is a far-reaching discussion covering many relevant topics. We exceed our 30-minute length on this one by a bit to cover it all.
The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.
There has been a lot of attention in the news recently about AT&T and Verizon rolling out their first implementations of sub-6GHz 5G radio access networks (RAN). Notably, the FAA and airline industries have voiced serious concerns about potential safety issues for aircraft autopilot and landing systems. As a result of these concerns, some international and domestic flights were canceled, and the communications service providers (CSPs) did not turn on these new 5G systems near select airports around the U.S.
Here are the details: the wireless spectrum auctioned off by the FCC back in Dec. 2020 in the 3700-3980 MHz C-band frequencies is deemed to be too close to the 4200-4400 MHz band reserved for aircraft radar altimeter systems. These critical, sensitive radio frequency (RF) systems bounce RF signals off the ground to determine the plane’s relative altitude for use by the aircraft’s autopilot and automated landing systems. Obviously, erroneous altitude information coming from this system could cause significant problems, especially during takeoffs and landings.
On the surface, the problem seems innocuous because the two systems are not operating in the same frequency bands. But, when discussing RF systems composed of active components, this is not so clear-cut. Active devices inside RF systems can generate spurious out-of-band signals. Although these signals may be of a much lower amplitude than the primary, intended, frequency, they can still be large enough to interfere with nearby systems operating in other bands. In addition, the radar altimeter systems are very sensitive by design, since the reflected signal from the terrain can vary considerably in amplitude depending on the altitude as well as features and shapes of the terrain.
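To make the adjacency concrete, here is a minimal sketch of how thin the guard band actually is, using the 4200-4400 MHz radar altimeter allocation. The spurious roll-off width is a hypothetical figure for illustration, not a measured emission mask:

```python
# Guard-band check between the auctioned 5G C-band and the radar altimeter
# band. `assumed_rolloff` is a made-up illustrative number, not real data.
def guard_band_mhz(tx_band, rx_band):
    """Spectral separation between the top of the transmit band and the
    bottom of the receive band (assumes tx sits below rx in frequency)."""
    return rx_band[0] - tx_band[1]

c_band = (3700, 3980)          # MHz, auctioned 5G C-band
altimeter_band = (4200, 4400)  # MHz, radar altimeter allocation
assumed_rolloff = 300          # MHz over which spurious energy might still matter

guard = guard_band_mhz(c_band, altimeter_band)
print(guard, guard < assumed_rolloff)  # 220 MHz of separation - not much margin
```

The point of the toy check is simply that 220 MHz of separation is small relative to the bandwidth over which active devices can emit spurious energy, which is exactly why interference analysis tools matter here.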
A somewhat surprising aspect of this controversy is that it was not anticipated ahead of the 5G rollout. Given the significant amount of time between the auction of the frequency band in Dec. 2020 and the rollout in Jan. 2022, it would seem these concerns could have been raised earlier to avoid flight cancellations and other problems. The crisis is especially surprising considering that there are simulation tools, such as Ansys EMIT, which can predict these interference effects and provide guidance for mitigation.
The image below shows an analysis of a typical RF system, with multiple antenna and radio devices analyzed in one system. The system can be the plane itself or, in this case, a combination of the plane and the terrestrial 5G system near the airport. EMIT uses models from a variety of sources. In this case, it can use rigorous field coupling models generated by field solvers such as Ansys HFSS and its related asymptotic electromagnetic solver, SBR+.
For difficult interference problems, the Ansys EMIT toolkit, an integral component of the Ansys Electronics Desktop and part of the Ansys HFSS portfolio, is designed to consider wideband transmitter emissions and assess their impact on wideband receiver characteristics. In addition, any issues found are visually identified by a trace-back method, allowing the designer to easily understand the source of the interference.
EMIT considers both in-channel and out-of-band effects. Beyond transmitters and receivers, antenna systems must also be taken into account. Ansys HFSS is the industry standard for modeling the physics of antenna systems, their installation effects, and their couplings, even over long distances with ground and terrain reflections.
Hello! The most important semiconductor company in the world reported earnings last night. It’s been something of a tradition to post Taiwan Semiconductor Manufacturing Company (TSMC) earnings posts outside my paywall, and I think I’m going to continue that to kick off each earnings season.
There are so many threads in the TSMC call that I want to talk about, but the big one, the thing I think we will look back on 2022 for, is that this is the year HPC becomes the largest part of TSMC’s business. Let’s expand.
TSMC’s current business mix
Smartphones and HPC are neck and neck as TSMC’s largest buckets, but HPC is growing faster. The reason for sluggish smartphone growth was laid out pretty well by TSMC: volume growth in smartphones has topped out.
Yes, I think — let me add. The global smartphone unit growth last year is about 6%. So some of the — you see some of the company smartphone revenue may grow, it could be due to the pricing. But we — our pricing strategy, as you understand, is strategic, not opportunistic. So we’ll grow with the smartphone units in our business.
Well, TSMC just guided for “high 20s” growth and long-term growth of a 15-20% CAGR. They additionally guided to an accelerating 2022 and strong sequential growth into Q1. Given that they think their smartphone business will grow in line with smartphone units, the only logical growth driver is HPC. I did some pretty simple math to back out what the HPC segment should look like given their assumptions on growth.
Let me answer the platform question. In 2022, we expect the HPC and automotive to grow faster than the corporate average. IoT, similar. Smartphones close to the corporate average. That’s the platform growth.
I’m a bit more bearish than 20%+ smartphone growth, so let’s say smartphones grow at 15% next year and DCE/Other grows at 10%. I grew automotive at 50% and IoT at ~28%, using HPC revenue as the plug to hit the ~28% total revenue number – and the result is that HPC revenue crosses over smartphones in 2022. It looks like this is the year HPC finally becomes larger than smartphones at TSMC.
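That plug calculation can be sketched as follows. The base revenue mix is illustrative (roughly in line with TSMC's reported platform split, but treat it as an assumption), while the segment growth rates are the ones discussed above:

```python
# Solve for the HPC growth rate implied by ~28% corporate growth, treating
# HPC revenue as the plug. Base mix is illustrative, not TSMC's exact split.
base = {"smartphone": 44, "hpc": 37, "iot": 9, "auto": 4, "dce_other": 6}  # % of prior-year revenue
growth = {"smartphone": 0.15, "iot": 0.28, "auto": 0.50, "dce_other": 0.10}

target_total = sum(base.values()) * 1.28                        # ~28% corporate growth
known_next = sum(base[k] * (1 + g) for k, g in growth.items())  # all non-HPC segments
hpc_next = target_total - known_next                            # HPC is the plug
hpc_growth = hpc_next / base["hpc"] - 1

# HPC (~53.3) overtakes smartphone (~50.6), implying roughly 44% HPC growth
print(round(hpc_next, 1), round(base["smartphone"] * 1.15, 1), round(hpc_growth, 2))
```

Under these assumptions the HPC segment has to grow in the mid-40s percent to make the corporate guide work, which is why the crossover falls out of the math almost automatically.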
For the longest time, I have believed that the largest incremental dollar pool of revenue growth would be HPC. It’s nice to see it come true, and TSMC confirms that this is their belief as well. I first wrote about my suspicion of meaningful data center growth in 2022 after Facebook’s earnings, and it was confirmed shortly thereafter by Alchip’s results. I had a suspicion that data center would be strong, but hearing the largest fab in the world expect something akin to ~40%+ growth in this segment is pretty mind-boggling, even to a huge bull like me.
Another point toward data center leadership going forward is that HPC is starting to adopt smaller nodes faster than smartphones, which used to be the premier first adopters of TSMC’s newest nodes. In the past, HPC would adopt the newest node a year after smartphone; now HPC is in the driver’s seat and will be adopting N3 at the same time smartphone does. Beyond node adoption, I’m pretty bullish on data-center-exposed stocks like Marvell and Nvidia.
Speaking of Marvell and Nvidia, one of the questions on the call was “how can you grow your revenue faster than your fabless customers’ expected revenue growth?” TSMC answered that they believe it’s pricing and share gains.
This is C.C. Wei. Actually, the growth in 2022 is all of the above you just mentioned. It’s a share gain, it’s the pricing, and also it’s a unit growth. Did I answer your question?
Part of this is that Intel is starting to outsource to TSMC, and that foundry will likely grow faster than memory this year. But another obvious answer is that the fabless estimates are simply too low.
Given that TSMC just guided to accelerating revenue (25% growth in 2021 to 28%+ in 2022) and has over 50% of global market share, I have a hard time believing that the industry will meaningfully decelerate while TSMC’s revenue explodes. The numbers don’t reconcile. That is why I believe the fabless companies’ revenue estimates are likely a bit too low, and that the 9% industry growth number is likely too low as well. Getting the theme here?
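A stylized reconciliation shows why those numbers can't all be right: if a company with ~50% share grows 28% while the total grows only 9%, everyone else must shrink. (The share and growth figures come from the text above; this is a toy weighted-average calculation mixing a foundry share with an industry-wide growth estimate, so it's directional only.)

```python
# Weighted-average check: the rest-of-market growth implied by TSMC's guide
# combined with a 9% total-industry growth estimate.
tsmc_share, tsmc_growth, industry_growth = 0.50, 0.28, 0.09

rest_growth = (industry_growth - tsmc_share * tsmc_growth) / (1 - tsmc_share)
print(round(rest_growth, 2))  # -0.1: everyone else would have to shrink ~10%
```

A ~10% contraction for the rest of the market during a historic shortage is hard to square, which is the sense in which the numbers "don't reconcile."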
I believe that 2022 is going to be another strong year, and that almost every fabless company’s numbers will be revised higher. Let’s turn next to the capex side of the equation.
TSMC Expects to Spend $40-44B on Capex
Not only was growing faster for longer a surprise, but the $40-44B capex was a real shocker. For context, the most bullish estimate on the street was ~$40 billion. The old upside case is now the new downside case. Given that WFE grew by ~40% last year and TSMC grew capex by ~77% in 2021 over 2020, this is pretty meaningful growth. In absolute terms, they are adding more spending in 2022 than they did in 2021, though of course this is a deceleration on a larger base.
I think the preliminary read-through is that WFE is going to have yet another good year. I believe WFE growth is likely more to the tune of 20% than 10%. Speaking of 10% growth – this estimate by SEMI, which came out on January 11th, called for 10% growth, and after TSMC’s spending guidance it already seems like it will prove too low. Two days and it’s already out of date! The true number is going to be higher.
As we discussed on the VLSI semicap comparison of numbers bottom-up to top-down, it seems like estimates need to move higher. I think this is great for semicap broadly (surprise!). If you’re a long-time reader of the substack, one of the core beliefs is that the rising capital intensity of making a semiconductor accretes to fabs and even more so to semicap companies (ASML, LRCX, AMAT, KLAC, TOELY, etc).
This is just another indication that the thesis is correct given that Capex is growing faster than revenue. Which brings me to an interesting question – how could TSMC ever support this kind of spending indefinitely? The answer is that they are either utterly wrong about their growth and are going to throw the entire market into overcapacity, or that demand is still being underestimated. I believe that it’s the latter, as I wrote in my cyclical to the secular thought experiment. I believe that TSMC believes this as well, and given how they are investing and guiding, I want to call this TSMC’s bold bet.
Growing for Longer – TSMC’s Bold Bet
A recurring theme of TSMC’s analyst calls is that every quarter analysts pepper management with “how can you maintain the margin with this investment?” and “you’re spending a lot on capex, will this ever normalize?” questions. The answer TSMC gives each quarter is somewhere along the lines of “we are going to grow, trust us.” This first long-term guidance in a while is an indication of that.
We expect our long-term revenue to be between 15% and 20% CAGR over the next several years in U.S. dollar terms, of course, fueled by all 4 growth platform which are smartphone, HPC, IoT and automotive.
The staggering thing I want to point out is that their 10-year revenue growth CAGR is 14%. That’s the kind of growth that made them the largest fab in the industry, yet their long-term revenue guide is actually a call that revenue will accelerate on a larger base. It’s impossible for them to gain share at the rate they used to, so the only answer is that the entire industry must accelerate as well.
TSMC is probably one of the best management teams in the entire industry, with the most credibility you can ask for. They are prudent, ROIC-focused, conservative in their node shrinks yet aggressive in their capital spending. Simply put, they do not miss. If they are investing larger amounts for accelerating growth they believe will come, I am going to believe them.
This is the definition of long-term thinking and bold bets. They are pushing forward at an even faster pace at the peak of their dominance in order to ensure they continue to hold share. And everything is pointing to the diversity and strength of the entire semiconductor ecosystem, and I think that the answer is clear. The 2020s are going to be a better decade than the one before it for the entire semiconductor ecosystem.
Passing Price
I want to briefly mention the gross margin part of the equation. Every quarter there is a lot of hand-wringing about the sustainability of TSMC's gross margin. Last quarter analysts got really hung up on "51% or greater" long-term margins and asked in as many ways as possible whether that margin was sustainable.
This quarter, of course, they put up 53% gross margin and are now guiding to "53% or greater" margin longer term. The bar has shifted higher. The right framing for the gross margin sustainability debate is that TSMC really is one of the only games in town, and the demand for their capacity is intense. I mentioned briefly in the Rising Tide of Semiconductor Costs that TSMC can pass price as much as they want, and I think that will continue.
No matter how much capex spend is required and how much depreciation and amortization grow as a part of TSMC's costs, TSMC is simply not a price taker. They will raise prices and pass their costs on to their customers, and in this case, it seems like they are able to pass on more than just the costs they take. If they can maintain 53%+ margins against rising COGS, customers will be taking price raises on the chin. Because what other choice do they really have? Intel's foundry business is still more of an idea than a meaningful business, and Samsung is growing but relatively small. TSMC will get the money that they are due.
There's a lot more in the transcript itself, which I recommend reading if you have some time. I think the continued prepayments by TSMC's customers are another indication that the fabless companies get it as well. They want more capacity because their businesses are doing well, but they are capacity constrained.
An interesting idea I had is that the capacity precommitments customers make to secure supply feel a bit like the ASML investment by INTC / TSMC. It's clearly for a greater good, there is really only one company that can achieve it, and it's going to cost a lot of money. For the economics to work, TSMC will need a lot of money.
Anyway, that's it for today. I just wanted to cover these points for now, and I'll be posting a lot more content like this for the ~100s of other semiconductor companies that will be reporting in the next month. I just always love to start with the biggest and baddest first.