
Design Considerations for 3DICs
by Tom Dillinger on 12-14-2020 at 6:00 am


The introduction of heterogeneous 3DIC packaging technology offers the opportunity for significant increases in circuit density and performance, with corresponding reductions in package footprint.  Yet, the implementation of a complex 3DIC product requires a considerable investment in methodology development for all facets of the design:

  • system architecture partitioning (among die)
  • I/O assignments for all die, both for signals and the power distribution network (PDN)
  • die floorplanning, driven by the I/O assignments
  • probe card design (with potential reuse between individual die and 3DIC assembly)
  • critical timing path analysis, assessing the tradeoffs between timing paths on-die versus the implementation of vertical paths between stacked die
  • IR drop analysis, a key facet of 3DIC planning due to the power delivery to stacked die using through-silicon or through-dielectric vias
  • a DFT architecture, suitable for 3DIC testing using individual known good die (KGD)
  • reliability analysis of the composite multi-die thermal package model
  • LVS physical verification of the multi-die connectivity model

Whereas 2.5D IC packaging technology has pursued “chiplet-based” die functionality (and potential electrical interface connectivity standards), the complexity of 3DIC implementations requires early and extensive investment in the design and analysis flows listed above – a higher risk than 2.5D IC implementations, for sure, but with a potentially greater reward.

At the recent IEDM 2020 conference, TSMC presented an enlightening paper describing their recent efforts to tackle these 3DIC implementation tradeoffs, using a very interesting testchip implementation.  This article summarizes the highlights of their presentation. [1]

SoIC Packaging Technology

Prior to IEDM, TSMC presented their 3DIC package offering in detail at their Technology Symposium – known as “System on Integrated Chip”, or SoIC.

A (low-temperature) die-to-die bonding technology provides the electrical connectivity and physical attach between die.  The figure below depicts available die attach options – i.e., face-to-face, face-to-back, and a complex combination including side-to-side assembly potentially integrating other die stacks.

For the face-to-face orientation, the backside of the top die receives the signal and PDN redistribution layers.  Alternatively, a third die on the top of the SoIC assembly may be used to implement the signal and PDN redistribution layers to package bumps – a design testcase from TSMC using the triple-stack will be described shortly.

A through-silicon via (TSV) in die #2 provides electrical connectivity for signals and power to die #1.  A through-dielectric via (TDV) is used for connectivity between the package and die #1 in the volumetric region outside of the smaller die #2.

Planning of the power delivery to the SoIC die requires consideration of several factors:

  • estimated power of each die (especially where die #1 is a high-performance, high-power processing unit)
  • TSV/TDV current density limits
  • distinct power domains associated with each die

The figure below highlights the design option of “number of TSVs per power/ground bump”.  To reduce IR drop and observe current density limits through a TSV, an array of TSVs may be appropriate – as an example, up to 8 TSVs are shown in the figure.  (Examples from both FF and SS corners are shown.)

The tradeoff of using multiple, arrayed TSVs is the impact on interconnect density.
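To make that tradeoff concrete, here is a back-of-the-envelope sketch of sizing the TSV array under a single power/ground bump. The bump current, per-TSV current limit, and per-TSV resistance below are illustrative assumptions, not TSMC data.

import math

# Illustrative sketch (assumed values, not TSMC data): sizing a TSV array per P/G bump.
bump_current_a  = 0.200   # assumed current delivered through one power/ground bump (A)
tsv_max_current = 0.030   # assumed per-TSV limit from the current density spec (A)
tsv_resistance  = 0.050   # assumed resistance of a single TSV (ohms)

# Minimum TSV count set by the per-TSV current limit.
n_tsv = math.ceil(bump_current_a / tsv_max_current)

# Parallel TSVs divide the resistance, which sets the IR drop across the array.
r_array = tsv_resistance / n_tsv
ir_drop_mv = bump_current_a * r_array * 1e3

print(f"TSVs per bump: {n_tsv}")
print(f"array resistance: {r_array * 1e3:.1f} mohm, IR drop: {ir_drop_mv:.2f} mV")
# More TSVs per bump lower the IR drop and per-TSV current density, but they consume
# routing and bond-pad area -- the interconnect density tradeoff noted above.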

As an illustration, TSMC pursued a unique SoIC implementation – a quad-core ARM A72 processor (die #1) where the L2$ cache arrays commonly integrated with each core have been re-allocated to die #2.  The CPU die in process node N5 maintains an L3$ array, while the SRAM die in process node N7 contains the full set of L2$ arrays.  A third die on top of die #2 provides the redistribution layers.  A total of 2700 connections are present between CPU die #1 and the L2$ arrays in die #2.

This is an example of how SoIC technology could have a major impact on system architectures, where a (large) cache memory is connected vertically to a core, rather than integrated laterally on a monolithic die.

PDN Planning

A key effort in the development of an SoIC is the concurrent engineering related to the assignment of bump, pad, and TSV/TDV locations throughout, for both signals and the PDN.

The figures above highlight the series of planning steps to develop the TSV configuration for the PDN – a face-to-face die attach configuration is used as an example.  The original “dummy” bond pads between die (for mechanical stability) are replaced with the signal and PDN TDV and TSV arrays.  (TSMC also pursued the goal of re-using the probe card, between die #1 testing and the final SoIC testing – that goal influenced the assignment of pad and TSV locations.)

The TSV implementations for the CPU die and SRAM die also need to be carefully chosen so as to meet IR goals, without adversely impacting overall die interconnect density.

LVS

Briefly, TSMC also highlighted the multi-phase LVS connectivity verification methodology and the unique DFT architecture selected for this SoIC test vehicle, as depicted below.

DFT

Another major consideration is the DFT architecture for the SoIC, and how connectivity testing will be accomplished using cross-die scan, as illustrated below.

 

TSMC demonstrated that the resulting (N5 + N7) SoIC design achieved a 15% performance gain (with suitable L2$ and L3$ hit rate and latency assumptions), leveraging a significant reduction in point-to-point distance afforded by the vertical connectivity between die.  The package areal footprint for the SoIC is reduced by ~50% from a monolithic 2D implementation.
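To see how a shorter vertical path to the L2$ can translate into that kind of gain, here is a simple average memory access time (AMAT) sketch. The hit rates and cycle counts are illustrative assumptions only, not the values used in the TSMC paper.

# Illustrative AMAT sketch (assumed hit rates and latencies, not TSMC's numbers).
def amat(l1_hit, l1_cyc, l2_hit, l2_cyc, l3_hit, l3_cyc, mem_cyc):
    """Average memory access time (cycles) for a three-level cache hierarchy."""
    beyond_l2 = l3_cyc + (1 - l3_hit) * mem_cyc     # cost when L2 misses
    beyond_l1 = l2_cyc + (1 - l2_hit) * beyond_l2   # cost when L1 misses
    return l1_cyc + (1 - l1_hit) * beyond_l1

# Baseline: L2$ reached over long lateral wires (assumed 14-cycle L2 latency).
base = amat(l1_hit=0.95, l1_cyc=4, l2_hit=0.80, l2_cyc=14, l3_hit=0.70, l3_cyc=40, mem_cyc=200)

# SoIC: L2$ stacked directly above the core (assumed 10-cycle L2 latency, all else equal).
soic = amat(l1_hit=0.95, l1_cyc=4, l2_hit=0.80, l2_cyc=10, l3_hit=0.70, l3_cyc=40, mem_cyc=200)

print(f"baseline AMAT: {base:.2f} cycles, SoIC AMAT: {soic:.2f} cycles")
print(f"AMAT reduction: {(base - soic) / base:.1%}")
# A few cycles saved on every L2$ access compounds into a visible core-level gain.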

3D SoIC packaging technology will offer system architects unique opportunities to pursue design partitioning across vertical die configurations.  The density and electrical characteristics of the vertical bond connections may offer improved performance over lateral (monolithic or 2.5D chiplet-based) interconnects.  (The additional power dissipation of “lite I/O” driver and receiver cells between die versus on-chip signal buffering is typically small.)

The tradeoff is the investment required to develop the SoIC die floorplans, with TSVs and TDVs providing the requisite signal count and a low IR drop PDN.  Although 2.5D chiplet-based package offerings have been aggressively adopted, the performance and footprint advantages of a 3DIC are rather compelling.  The TSMC test vehicle demonstrated at IEDM will no doubt generate considerable interest.

-chipguy

References

[1]  Cheng, Y.-K., et al., “Next-Generation Design and Technology Co-optimization (DTCO) of System on Integrated Chip (SoIC) for Mobile and HPC Applications”, IEDM 2020.

 



How Intel Stumbled: A Perspective from the Trenches
by Daniel Nenni on 12-07-2020 at 6:00 am


Bloomberg did an interview with my favorite semiconductor analyst Stacy Rasgon on “How the Number One U.S. Semiconductor Company Stumbled” that I found interesting. Coupled with the Q&A Bob Swan did at the Credit Suisse Annual Technology Conference I thought it would be good content for a viral blog.

Stacy Rasgon and Bob Swan

Stacy Rasgon is an interesting guy and a lot like me when it comes to offering blunt questions, observations, and opinions that sometimes throw people off. As a result, Stacy is not always the first to ask questions during investor calls, and sometimes he is not called on at all, which was the case for the most recent Intel call.

Stacy is the Managing Director and Senior Analyst, US Semiconductors, for AB Bernstein here in California. Interestingly, Stacy has a PhD in Chemical Engineering from MIT, not the usual degree for a sell-side analyst. Why semiconductors? Stacy did a co-op at IBM TJ Watson Research Center during his postgraduate studies and that hooked him.

I thought it was funny back when Brian Krzanich (BK) was CEO of Intel: BK has a Bachelor’s Degree in Chemistry from San Jose State University, and he was answering questions from an analyst with a PhD from MIT. The current Intel CEO Bob Swan is a career CFO with an MBA, so maybe that explains the communication issues.

In the Bloomberg interview the focus was on the delays in the Intel processes starting with 14nm, 10nm, and now 7nm. Unfortunately they missed the point. In the history of the semiconductor industry, leading edge processes were more like wine where, in the words of the great Orson Welles, “We will sell no wine before its time”. Guided by Moore’s Law, Intel successfully drove down the bumpy process road until FinFETs came along.

The first FinFET process was Intel 22nm, which was the best kept secret in semiconductor history. We don’t know if it was early or late since it was not discussed before it arrived. 14nm followed, and it was late due to defect density/yield problems. We talked about that on SemiWiki quite a bit, and I had a squabble with BK at a developer conference: I knew 14nm was not yielding well, he said it was, only to retract that comment at the next investor call. Intel 10nm is probably the tardiest process in the history of Intel, and now 7nm is in question as well.

The foundries historically have been 1-2 nodes behind Intel, so they got a relative pass on being late with new processes up until 10nm, when TSMC technically caught up with Intel’s 14nm.

Bottom line: Leading edge processes use new technologies and materials which challenge yield from many different directions. This is a very complex business, so it’s extremely difficult to predict schedules because “you never know until you know”. So, try as one might, abiding by Moore’s Law in the FinFET era is a fool’s errand, absolutely.

The other major Intel disruption is the TSMC / Apple partnership. Apple requires a new process each year, starting at 20nm (iPhone 6). As a result, TSMC now does half steps with new technologies: at 20nm TSMC introduced double patterning, then added FinFETs at 16nm. At 7nm TSMC later introduced limited EUV in a half step called 7nm+, and at 5nm TSMC implemented full EUV.

This is a serious semiconductor manufacturing paradigm shift that I call “The Apple Effect”: TSMC must have a new process ready for the iProduct launch every year without fail, which means the process must be frozen at the end of Q4 for production starting in the following Q2. The net result is a serious amount of yield learning, which results in shorter process ramps and superior yield.

The other interesting point is that during Bob Swan’s Credit Suisse interview he mentioned the word IDM 33 times, emphasizing the IDM advantage over being fabless. Unfortunately this position is a bit outdated. Long gone are the days when fabless companies tossed designs over the foundry wall to be manufactured.

TSMC, for example, has a massive ecosystem of partners and customers who together spend trillions of dollars on research and development for the greater good of the fabless semiconductor ecosystem. There is also an inner circle of partners and customers that TSMC intimately collaborates with on new process development and deployment. This includes Apple of course, AMD, Arm, Applied Materials, ASML, Cadence, and Synopsys just to name a few.

Bottom line: The IDM underground silo approach to semiconductor design and manufacture is outdated. It’s all about the ecosystem and Intel will learn this first hand as they increasingly outsource to TSMC in the coming process nodes.

 

 



No Intel and Samsung are not passing TSMC
by Scotten Jones on 12-02-2020 at 6:00 am


Seeking Alpha just published an article about Intel and Samsung passing TSMC for process leadership. The Intel part seems to be a theme with them; they have talked in the past about how Intel does bigger density improvements with each generation than the foundries, but they forget that the foundries are doing 5 nodes in the time it takes Intel to do 3. They also make a big deal about Horizontal Nanosheets (HNS) versus FinFETs, and yes that is impressive, but at the end of the day what you deliver for power, performance and area (PPA) is what really matters.

I have written about this before here.

In this article I will briefly review where each company is currently and where I expect them to be over the next five years. I do not want to go into too much detail in this article because I will be presenting on leading edge logic at the ISS conference in January and covering this in more depth then.

Intel

Figure 1 illustrates Intel’s node introductions starting at 45nm. After many nodes on a 2-year cadence, Intel slipped to 3 years at 14nm and 5 years at 10nm. 10nm has been particularly bad with yield and performance issues; even today it is hard to get 10nm parts. Intel has recently announced 10+, now known as 10SF (Super Fin). The Super Fin provides a 17-18% performance improvement, similar to a full node. There is also a rumor that Intel is using EUV for M0 and M1, although I have not confirmed this. M0 and M1 on the original 10nm process are the most complex metal patterning scheme I have ever seen, so this might make sense for yield reasons.

Figure 1. Intel Node Introductions.

Intel’s 7nm was scheduled for 2021 and was supposed to get Intel back on track. At 7nm they are doing a smaller 2x density improvement and the implementation of EUV was supposed to solve their yield issues, but the process is now delayed until 2022.

Seeking Alpha makes an argument that Intel will be back on a 2-year cadence for their 5nm process. I am not sure I believe this given their 14nm, 10nm and 7nm history, but even if they are, I don’t think this puts them in the lead, as I will describe below.

14nm/16nm

14nm was Intel’s second generation FinFET and they took a big jump in density. Intel’s 14nm process came out in 2014, Samsung’s 14nm process also came out in 2014 and TSMC’s 16nm process came out in 2015. Intel’s 14nm process was significantly denser than Samsung or TSMC’s 14nm/16nm processes.

Foundry 10nm

In 2016 both foundries came out with 10nm processes and they both passed Intel for the process density lead.

Foundry 7nm/Intel 10nm

In 2017 TSMC released their 7nm process, moving further ahead of Intel, and in 2018 Samsung released their 7nm process, also moving further ahead of Intel. In 2019 Intel finally started shipping 10nm, and the Intel 10nm process was slightly denser than TSMC or Samsung, but TSMC’s 7+ process (a half node, 2018) and Samsung’s 6nm (a half node, 2019) passed Intel 10nm density. Samsung’s 7nm is also notable as the industry’s first process with EUV, although TSMC soon had EUV running on their 7+ process and is in my opinion the EUV leader today; in fact TSMC claims to have half of all EUV systems in the world currently.

Foundry 5nm

In 2019 the foundries began risk starts on 5nm, pulling further ahead of Intel. TSMC 5nm took a much bigger density jump than Samsung’s 5nm, and they opened a lead over Samsung and Intel. TSMC 5nm also introduced a high mobility channel. 5nm has ramped throughout 2020 and utilizes EUV for more layers than 7nm.

Foundry 3nm/Intel 7nm

Risk starts for foundry 3nm are due in 2021 and TSMC will pull further ahead of both Intel and Samsung. Samsung will introduce the industry’s first HNS and that is a great accomplishment and positions them well for the future, but we expect TSMC’s 3nm process to be much denser with better power and performance.

Intel’s 7nm process is currently expected around 2022 and is slated to be their first EUV-based process (although there may be some EUV use on 10nm as discussed above). Based on the density improvements announced by Intel, TSMC, and Samsung, we expect Intel 7nm and Samsung 3nm to have similar densities, but TSMC will be much denser than either company.

Foundry 2nm/Intel 5nm

If Intel gets back onto a two-year node interval, then Intel 5nm using HNS will be due in 2024. I am not sure I believe that, but for the sake of argument I will go with it. There is also a question as to whether Intel even does 5nm; they are looking at outsourcing, and depending on how that goes they may not go beyond 7nm and may use foundries instead.

TSMC’s 2nm node is now expected to be available for risk starts in 2023 and production in 2024. TSMC has said it will be a full node, and even with modest density improvements it will be denser than Intel’s 5nm process based on announced density improvements; Intel will likely pass Samsung but not TSMC. This would be both Intel’s and TSMC’s first HNS. Because 2nm would be Samsung’s second-generation HNS, they may take a bigger density jump, but I don’t see them catching TSMC, who is taking bigger jumps at both 5nm and 3nm.

Conclusion

The bottom line is that Intel may be doing bigger density jumps at each node than the foundries, but from the 14nm nodes in 2014 through the Intel 7nm node expected in 2022, the foundries have done 5 full nodes while Intel has done 3 full nodes, and TSMC in particular has opened up a big process lead.
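To illustrate the compounding arithmetic behind that statement, here is a quick sketch. The per-node scaling factors are round illustrative assumptions, not measured density numbers.

# Hypothetical compounding of density jumps (illustrative factors, not measured data).
intel_jump_per_node   = 2.4    # assumed average Intel density jump per full node
foundry_jump_per_node = 1.85   # assumed average foundry density jump per full node

intel_nodes   = 3   # e.g., 14nm -> 10nm -> 7nm
foundry_nodes = 5   # e.g., 14/16nm -> 10nm -> 7nm -> 5nm -> 3nm

intel_total   = intel_jump_per_node ** intel_nodes
foundry_total = foundry_jump_per_node ** foundry_nodes

print(f"Intel cumulative density gain:   {intel_total:.1f}x over {intel_nodes} nodes")
print(f"foundry cumulative density gain: {foundry_total:.1f}x over {foundry_nodes} nodes")
# Smaller jumps taken more often can compound into a larger total gain.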

Also Read:

Leading Edge Foundry Wafer Prices

VLSI Symposium 2020 – Imec Monolithic CFET

SEMICON West – Applied Materials Selective Gap Fill Announcement



Webinar: 5 Reasons Why Others are Adopting Hybrid Cloud and EDA Should Too!
by Daniel Nenni on 11-27-2020 at 6:00 am


With the complexity of transistors at an all-time high and foundry rule decks growing, fabless companies consistently find themselves in a game of catch up. Semiconductor designs require additional compute resources to maintain speed and quality of development, but deploying new infrastructure at this speed is a tall order for IT professionals tasked with supporting development and verification teams. When these resources can’t keep up, engineers become compute constrained rather than compute empowered.

The semiconductor industry is not alone in the struggle to adopt new technologies that can accelerate the pace of science and engineering breakthroughs. For that reason, cloud solutions are increasingly being implemented to empower R&D in a way never before seen. Breakthroughs in aerospace design, new drugs and vaccines, alternative energy solutions and much more are now being realized on cloud or hybrid cloud infrastructures. Because of security and IP concerns, EDA companies have primarily maintained on-premise data centers for their compute needs. However, that preference is changing due to manufacturers such as TSMC endorsing cloud. The industry has also seen a rise in startups that do not have infrastructure of their own and are turning to the cloud to compete.

So let’s look at the main benefits of expanding EDA to a hybrid cloud environment. Join Rescale’s webinar to further explore how hybrid cloud will drive new levels of performance and efficiency in semiconductor. Register here.

Security

As companies look to move workloads to the cloud, the primary area of focus is how to protect sensitive information and IP. Recent research by Cloud Vision states that two-thirds of companies consider this the main roadblock in adopting cloud. In light of this, major cloud providers have put substantial focus and investment into reducing risks and safeguarding datacenters from any breach. As you can imagine, with companies like AWS, Microsoft and Google, no expense is spared to ensure they deliver a secure environment. As proof of these security measures, public cloud will experience 60% fewer security incidents compared to typical data centers this year. For organizations that require full-stack compliance and security, platforms such as Rescale cover end-to-end workflows across the hardware and software layers with the highest of industry standards, even going as far as obtaining industry-leading certifications to meet the strictest compliance requirements.

Agility

Never in our history has technological agility been more important than in 2020. Facing a pandemic was the ultimate test of our systems, and most companies found themselves unprepared. Being cut off from typical on-premise infrastructure caused delays across the industry. VPNs became overwhelmed as engineers struggled to access the data and resources needed to continue development and run verification. The need to enable remote teams is not the only consideration; systems need the flexibility to scale with project phases and production deadlines. For these reasons, hybrid cloud far outperforms traditional infrastructure. It’s accessible anywhere you can find a Wi-Fi connection, and compute resources scale as needed. The Rescale platform also offers remote desktop solutions and a wide variety of admin controls over budgets and permissions to keep operations running smoothly. With the stability and options of a multi-cloud infrastructure and a variety of core types available on the platform, users can match the ideal core type to their workload and be confident in the stability of the infrastructure, with a service level agreement that your job will run.

Impact and Productivity

Enabling engineers to focus on design means better products at a quicker pace. IT leaders need to look at the ways in which engineers are distracted or slowed from their core responsibilities. Companies spend top dollar to secure engineering expertise and talent, and those engineers should be working on the portion of the business where they will make the biggest impact. Distractions can come in the form of queues, slow workflows, license issues and more. Rescale looks to solve these issues with an intelligent control plane and a full stack approach. Having an intelligent control plane for both local and cloud hardware gives R&D the ability to divert workloads to the best infrastructure based on performance and cost. A simple user interface with robust automation allows them to easily set up runs without relying on IT. And if they do come across a challenge, the Rescale support team is stacked with HPC and simulation experts that average a 15 min response time. All of this combines to allow engineers to be hyper focused on what they do best.

Speed to Market

A major component of gaining competitive advantage is to be first to market with a new product. This allows you to gain brand recognition, build customer loyalty and secure market share before competitors are even in play. A hybrid cloud approach enables semiconductor companies to dial up the number of iterations and accelerate speed to answer. Additionally, verification is expedited with the virtually unlimited resources available. When coupled with automated workflows, templates and continuous optimization from the Rescale platform, companies can make substantial improvements.

pSemi used Rescale to substantially speed up their development process, “We were able to use Rescale’s cloud platform to highly parallelize our simulations and bring the simulation time down from 7 days to 15 hours. We’ve demonstrated a 10x speed improvement on numerous occasions in our EM simulations using Rescale…”

The next wave of semiconductor advancements will be powered by hybrid cloud. The foundries have already started to adopt the technology. It is poised to revolutionize the industry by empowering engineers like never before and reaching new levels of performance and efficiency. Join Semiwiki and Rescale as we take a deeper look into the benefits of hybrid cloud and Rescale’s intelligent control plane approach. Register now!

About Rescale
Rescale is the leader in enterprise big compute in the cloud. Rescale empowers the world’s transformative executives, IT leaders, engineers, and scientists to securely manage product innovation to be first to market. Rescale’s hybrid and multi-cloud platform, built on the most powerful high-performance computing infrastructure, seamlessly matches software applications with the best cloud or on-premise architecture to run complex data processing and simulations.



China Semiconductor Bond Bust!
by Robert Maire on 11-25-2020 at 10:00 am


– Tsinghua $198M Bond Bust
– Good for memory: Samsung, Micron, LG, Toshiba
– Not good for chip equipment
– Could China Credit Crunch hit more than foundry embargo?
– Damage to China memory positive for other memory makers
– Not good for chip equip if customers can’t get money

Tsinghua Unigroup, China’s most prestigious leader of the effort to become dominant in semiconductors, suffered the embarrassment of defaulting on $198M in bonds that were due Nov 17th. While the amount is seemingly a drop in the bucket of its overall debt, and the company was in the midst of negotiating its way out, the default still sent shivers through China’s debt market and sent the bonds plummeting.

Tsinghua is not the only state backed Chinese firm with bond troubles which makes the concerns all the more worrisome.

Chinese tech group joins list of companies to default on bond issue

NAND in Wuhan and DRAM in Chongqing
Tsinghua already has a NAND factory in otherwise-famous Wuhan and is planning a DRAM fab in Chongqing. They are a spinoff subsidiary of the prestigious Tsinghua University in Beijing. They are perhaps the shining star of China’s semiconductor aspirations. Though SMIC has been around a long time, it seemed Tsinghua had more potential.

Tsinghua Unigroup default tests China’s chipmaking ambitions

Good for non Chinese memory makers like Samsung & Micron, LG & Toshiba

Being in the memory market and having the specter of China entering your market after watching China annihilate the LED & solar cell markets was likely quite chilling. China obviously doesn’t care about profitability (at least not in the beginning) and could easily trash pricing and destroy the commodity memory market just like the commodity LED and solar markets before it.

If I were in Boise I might have a little schadenfreude about the Chinese bond market right now, not unlike how TSMC might feel about SMIC.

Anything that slows down China’s aspirations in the memory market is likely positive for other competitors.

Equipment vendors likely between a rock and a hard place in China
Checking accounts receivable.

Semiconductor equipment makers may not be as happy about the bond default and subsequent credit downgrades.

We would bet a lot of money that the equipment makers are likely owed a whole lot more than $198M in equipment purchases and are looking at many times that in future orders and business. So their exposure far exceeds the bondholders.

Unlike the bondholders, equipment makers don’t want to stop shipping to their biggest, best and fastest growing market, which is China.

If equipment makers stop shipping due to credit risk/downgrades or fear of not getting paid, then Tsinghua will avoid doing business with them at all costs (it’s not like they aren’t trying to avoid American equipment already, given what happened to their cousins at SMIC).

Equipment vendors have to keep shipping with the hope that the Chinese government will be the backstop, or the company figures it out.

We can only imagine that some CFOs have to be checking their Tsinghua-related accounts receivable exposure.

Credit is all about faith
Too big/important to fail?

Lest anyone forget, the credit market is all about faith. Faith in getting paid back on the loan. The 2008/2009 market collapse was a collapse in the credit market. Faith in ever getting repaid went to zero.

The semiconductor industry is highly capital intensive and very fickle in cyclical profitability. In addition, the would-be Chinese chip makers are likely finding out that the semiconductor market is much, much harder than the LED and solar markets, which were relative pushovers.

The cost of an LED “fab” and complexity of process is not even a rounding error as compared to making a 128 level NAND chip.

It is likely that getting to yield, meaning getting to revenue, let alone profitability, will take a lot longer and be a lot harder than many in China anticipated after the cakewalk in LED and solar.

This means that many Chinese firms could have miscalculated when they would have been able to pay back debt and could find themselves in a cash crunch needing to extend credit terms out some more years/months.

We don’t know what caused Tsinghua’s issue but breaking the faith was not good as their bonds fell all the way down to 68 cents on the dollar at one point. (we don’t think equipment vendors would like to take 68 cents on the dollar owed them).

In the end, Tsinghua, like some US financial firms in 2008/9, is too big/important to fail, and the Chinese government will step in at some point. The question is when, how, and who will get hurt in the collateral damage.

Could the US administer a “Coup de Grace”?
Part of the outgoing, “Scorched earth” policy

It is abundantly clear that the outgoing administration has embarked on a scorched earth policy for various reasons. Much of the scorched earth has been directed at international relations such as potentially attacking Iran, recalling troops and trying to make good on other campaign promises. Trade with China has been talked about as one such target.

The SMIC embargo, announced shortly before the election, certainly was effective at hurting China’s chip ambitions. Could the embargo be extended to memory, which certainly qualifies as potential military “dual use” technology, as a parting shot on the way out the door? Or maybe a blanket embargo? If there were a time to hurt China, the lame duck session is it.

The stocks

Almost all semi stocks have been super hot as demand continues to be strong. The Tsinghua news is mildly positive for other memory makers as it will likely weaken and/or slow China’s memory ambitions and its ability to crush memory pricing.

It is likely not all that negative for equipment companies as they have even survived the SMIC embargo without so much as a scratch.

If anything, it may be a hidden positive as it will likely moderate memory spending which drives the notorious boom bust cycles in memory.

TSMC continues to be a huge winner. Micron seems in fine shape as well and would be happy to see Tsinghua go the way of Jinhua, even though we don’t think that will happen.

Equipment companies may see a hiccup or two in revenue recognition but not likely more than that unless things really go off the tracks, like the US upping the ante. While a possibility, we think the administration seems too pre-occupied with other fights with too little time left on the clock.

Also Read:

Is Apple the Most Valuable Semiconductor Company in the World?

2021 will be the year of DRAM!

Post Election Fallout-Let the Chips Fall / Rise Where They May



Is Apple the Most Valuable Semiconductor Company in the World?
by Robert Maire on 11-25-2020 at 6:00 am


– The new M1 chip unveils previously hidden asset
– Could/should Apple sell semiconductors?
– Are servers next?
– The M1 chip appears to be a rousing success and the beginning of a new era

Essentially 100% of all early reports on the performance of the M1 chip have come back with stellar reviews: great performance across the board and across a multitude of applications while barely sipping power. It checks all the boxes of speed, power, memory, graphics, neural engine, etc. Of course this is all enabled by TSMC’s 5nm process, which squeezes 16 billion transistors into the M1.

The question from an investor’s point of view is what this does to the competition as well as what it does for Apple. The impact should not be underestimated and could easily extend well beyond what investors currently think and expect.

What would a stand alone “Apple Semiconductor” be worth?

“Apple Semiconductor”, Intel, AMD, Nvidia, etc. all seem a lot alike, as they are essentially all “fabless” design houses (or will be soon in the case of Intel) that outsource manufacturing to TSMC. Nvidia leads with a market cap of $325B, Intel is at $190B and AMD is $103B. Qualcomm is $163B and Broadcom is $157B. Nvidia could get even larger with an ARM acquisition.

Apple didn’t just get into the semiconductor business recently. It has been in the business for many years, over 13 years, going back to the first iPhone introduced in 2007. Apple’s history and line of semiconductors would easily rival any current chip maker out there.

Apple’s Chip History

Apple’s breadth and depth in semiconductor design and manufacture put it firmly in the big leagues next to any of the top chip makers today. We could easily argue that “Apple Semiconductor” would be worth more than either Intel or AMD. Both Intel’s and AMD’s primary line of business is making X86-compatible processors for PCs and servers.

The X86 architecture goes way, way back to the original 8086, which was released an astounding 40+ years ago, back in 1978. Backward X86 compatibility of today’s processors made by Intel and AMD is a blessing and a curse at the same time. It brings a wealth of software that will run on anything X86-compatible, but it can also act as a drag on overall performance, given the need to maintain compatibility with a 40+ year old architecture.

Chip design engineers at both AMD and Intel have never been able to erase the entire blackboard and truly “start with a clean slate”. Apple obviously does not have as much history to lug around, and in fact started with a relatively clean slate for the M1 design even while maintaining iOS compatibility.

Back to the days of “Big Hair”

X86 was built in the days of desktop PCs that morphed into laptops and servers even though saddled with power hungry CPUs that were always plugged into a wall socket.

The M1 was designed in the era of smartphones and cloud computing, of AI & ML, stunning graphics, and purpose-built parallel architectures. In summary, we think that Apple Semiconductor would be worth more than either Intel or AMD.

When we compare Apple Semiconductor to Nvidia, Qualcomm, and Broadcom, Apple clearly has much of the capability of Qualcomm and Broadcom in communications and other support semiconductors, but perhaps more importantly it has one foot, perhaps both feet, firmly planted in the future of computers and semiconductors, as Nvidia does with its AI & graphics capabilities.

Today we can say without exaggeration that Apple makes both the best smart phone chip as well as the best laptop/desktop chip versus anyone.

All this implies that “Apple Semiconductor” as a standalone company would likely surpass the market cap of any and all chip companies currently out there.

This is all well and fine, you may say, but it’s merely an academic exercise as “Apple Semiconductor” is inside Apple, never to be let out of its “gilded cage”. But what if it were free?

Could/Should Apple Attack the Server Market?

Apple’s recent M1 roll out never mentioned the word server. However, we think the M1 raises the question of whether Apple could and would be dominant in the highly sought after, cash cow market that is the server/cloud market.

It’s Intel’s sacred cow and obviously already in AMD’s crosshairs, but could Apple swoop in and clean up?

We don’t think that Apple really wants to crank out servers, but it could do very well selling CPUs to all the server makers, as Intel does. Heck, Apple could start converting their own huge server farm. Maybe sell processors to Google, Amazon, Facebook, etc., or to all the huge Chinese server farms.

Power & cooling are perhaps the biggest deals in the server world, and so far one of the biggest selling points of the M1 is its low-power design, with fans that never turn on.

The power savings alone could be the reason to switch.

We think the idea is not so far fetched as the server / cloud business is an attractive target that Apple has yet to tap and now they clearly have the ammunition to do so.

Of course both Intel and AMD will improve once they start producing parts on TSMC’s 5nm or 3nm and beyond, but right now Apple has a pretty big lead over both and is TSMC’s biggest customer, which gets them an advantage.

As a strategic play, this could even foil Nvidia’s plan for ARM and data center conquest, placing Apple Semiconductor well above even Nvidia.

Why stop at server chips?

Of course we can follow the logic of entering the server market with moving into the AI or other markets such as automotive etc; The list and opportunities are long. We do doubt that Apple would ever sell its crown jewel chip technology to competitors but you never know.

Does “Apple Semiconductor” add to a $2T market cap?

It’s hard to move the needle on a company that’s already pushing a $2T market cap, even by a few hundred billion or so.

While it’s hard to do the additive math here, we think, more importantly, that it further underscores Apple’s value, and perhaps previously hidden value, while also exposing some potential vulnerabilities of existing competitors in the semiconductor business.

Apple is still somewhat limited to being a customer of TSMC, but it’s a very symbiotic relationship, much like the “Wintel” Microsoft/Intel relationship which dominated tech for so long. “Apple/TSM Semiconductor Inc” is obviously very formidable, and much more so than Wintel ever was.

The stocks

Even though Intel has been badly beaten up, we still remain concerned about how they get out of the current predicament and differentiate from AMD. AMD stock has done well at Intel’s expense but is stuck in a similar technology / market trap that has good short-term dynamics but less so longer term.

We think Apple’s move to its own processors will be much faster than expected. Why in the world would I buy a dead-end architecture? This could help margins even more. A faster move obviously benefits them.

Leveraging semiconductors further is currently just a dream, but a pretty good one that could easily be executed and makes sense, especially in Apple’s quest for growth, as it’s a market that could move the needle even for them.

Sometimes it’s better to be lucky than smart. Apple’s timing couldn’t be much better given COVID-19, remote work and school, and Intel falling on its own sword. The M1 chip introduction is likely to be a huge success. I’ve been waiting to buy one and now I’m convinced.

Also Read:

2021 will be the year of DRAM!

Post Election Fallout-Let the Chips Fall / Rise Where They May

Downplaying SMIC – Uplaying TSMC



Can Samsung Foundry Really Compete with TSMC?
by Daniel Nenni on 11-20-2020 at 6:00 am


The semiconductor foundry business has been front page news of late and for good reason: it’s an exciting time in the semiconductor industry and the foundries are where it all begins. Unfortunately, most of the “exciting” news has been overblown, but this topic is of great interest, to me at least. Having been intimately involved with the foundries for the last 30 years and covering them for SemiWiki over the last ten, I may have a different view on things, so you may want to read on.

Bloomberg recently published an article “Samsung Intensifies Chip Wars With Bet It Can Catch TSMC by 2022”. This is a follow-on article to “Samsung Takes Another Step in $116 Billion Plan to Take on TSMC”. The author of both articles is Sohee Kim who works for Bloomberg out of Korea. She has zero semiconductor education or experience but certainly knows Samsung and has a direct line at the executive levels. So, you can expect these articles are straight from the horse’s mouth so to speak.

According to Sohee Kim, Samsung and TSMC will compete for business at 3nm, which means high volume manufacturing (HVM) in 2022. A very important point here is that TSMC 3nm and Samsung 3nm will be very different technologies: TSMC is extending their 5nm FinFET-based process, while Samsung is launching a new process technology (GAA) at 3nm.

Three Challenges for Samsung Foundry at 3nm:

The first challenge is ecosystem! TSMC is using a tried and true technology that is supported by a very large ecosystem of EDA, IP, and services companies. Hundreds of silicon-proven IP blocks will be immediately available for TSMC 3nm customers, while Samsung must build a new ecosystem for GAA. Not as easy as it sounds, believe me.

The second challenge is trust! Foundry trust comes in different forms: Trust that your IP is safe and sound. Trust that the foundry will not compete unfairly with you. Trust that the foundry will deliver the PPA (power/performance/area) technology that was first described in the early releases of the process design kit (PDK).

The third challenge is yield! GAA is a new process technology and Samsung is well known for brute force yield problems. Being the first company to a new technology is certainly a badge of honor and I have great respect for Samsung’s technological prowess. I do however have direct experience with Samsung’s struggles to get a new process into HVM. Customers must trust that a foundry can deliver on the promise of good die/wafer capacity to meet the agreed upon chip delivery schedule.

In closing Sohee Kim suggests: “If Samsung succeeds, that will be a breakthrough for its ambition to become the chipmaker of choice for the likes of Apple Inc. and Advanced Micro Devices Inc. that now rely on foundries like TSMC.”

To be clear Apple and AMD today are exclusive to TSMC. This exclusivity gets Apple and AMD into the inner TSMC circle where collaboration is at the highest levels. Samsung’s big customers are Nvidia, Qualcomm, and IBM, none of which are in the TSMC inner circle.

From an insider’s point of view, QCOM and Nvidia used to be TSMC besties, but QCOM competes with Apple and Nvidia competes with AMD, so there was dissension in the ranks. IBM used GF 14nm, which was licensed from Samsung, so they continued on with Samsung 7nm.

Bottom line: Can Samsung Foundry really compete with TSMC? Sorry, not today, not at 3nm. The TSMC 3nm PDK is already in use at the top semiconductor companies around the world and has the full support of the ecosystem. The Samsung 3nm PDK, on the other hand, is still evolving, as are the tools and IP that will support it. Just my observation, experience, and opinion of course.

It really is all about trust, absolutely.



TSMC to Build first US Fab in Arizona!
by Daniel Nenni on 11-15-2020 at 10:00 am


Well, it’s official: the TSMC Board of Directors approved an investment to establish a wholly-owned subsidiary in Arizona with a paid-in capital of $3.5 billion. As history shows, the investment may end up being more than that, but $3.5B is a great starting point. This is being discussed in the SemiWiki Forum and I have been gathering inside intelligence from the ecosystem, so let me offer my experience, observation, and opinion.

This is a GREAT political move by TSMC that will help ensure the independence of Taiwan, absolutely. It’s only 20,000 wafers per month to start, but it can be expanded quite rapidly, as TSMC expertly does. Consider this first fab a “toe in the water” test to see how the US Government responds.

In my opinion the target customers would be the US Government and its suppliers. Xilinx, for example, does quite a bit of government business with their FPGAs. Xilinx is shipping 16nm products today, so a US-based 5nm fab in 2024 would be perfect timing for Xilinx “made in the USA” customers.

And yes I know that TSMC built a fab (WaferTech) in the United States in 1996 but that was a joint partnership with three other companies. TSMC bought out the partners and now runs it as a wholly owned subsidiary.

Unfortunately, this “toe in water” move is certainly not a guarantee of political success. TSMC did a similar toe in water test in China with Fab 16 in Nanjing (2016) which did not go as planned. Rumor has it the China Government took this olive branch and used it to advance the China semiconductor initiative by “monitoring” construction and recruiting TSMC employees:

China hires over 100 TSMC engineers in push for chip leadership, Emerging chipmakers offer lavish pay packages to snap up talent.

TSMC also has an older 200mm fab in Shanghai but competing against the China Government backed SMIC is now rather challenging for foreign owned manufacturing companies inside of China.

The ultimate goal of course is for TSMC to be an active part of the H.R.7178 – CHIPS for America Act introduced in Congress on June 11th, 2020. Given the importance of semiconductors to modern life let’s hope this bill passes and ushers in a new era of global semiconductor collaboration, absolutely.

Creating Helpful Incentives to Produce Semiconductors for America Act or the CHIPS for America Act

This bill establishes investments and incentives to support U.S. semiconductor manufacturing, research and development, and supply chain security.

Specifically, the bill provides an income tax credit for semiconductor equipment or manufacturing facility investment through 2026. The bill also establishes a trust fund to be allocated upon reaching an agreement with foreign government partners to promote (1) consistency in policies related to microelectronics, (2) transparency in microelectronic supply chains, and (3) alignment in policies towards nonmarket economies.

The Department of Commerce shall, through the National Institute of Standards and Technology (NIST), carry out a program of research and development investment to accelerate the design, development, and manufacturability of next generation microelectronics, including through the creation of a Manufacturing USA institute for semiconductor manufacturing. Commerce shall also establish a program to match state and local government incentives offered to private entities for the purposes of building fabrication facilities relating to semiconductor manufacturing. Further, Commerce shall assess the capabilities of the U.S. industrial base to support the national defense in light of the global nature of supply chains and interdependencies between the industrial bases of the U.S. and foreign countries with respect to the manufacture and design of semiconductors.

The Department of Defense shall prioritize the use of specified available amounts for programs, projects, and activities in connection with semiconductor and related technologies.

The President shall establish within NIST a subcommittee on matters relating to U.S. leadership in semiconductor technology and innovation, which shall develop a national strategy on semiconductor research.

“Semiconductors were invented in America and U.S. companies still lead the world in chip technology today, but as a result of substantial government investments from global competitors, the U.S today accounts for only 12 percent of global semiconductor manufacturing capacity,” said Keith Jackson, President, CEO, and Director of ON Semiconductor and 2020 SIA chair. “The CHIPS for America Act would help our country rise to this challenge, invest in semiconductor manufacturing and research, and remain the world leader in chip technology, which is strategically important to our economy and national security. We applaud the bipartisan group of leaders in Congress for introducing this bill and urge Congress to pass bipartisan legislation that strengthens U.S. semiconductor manufacturing and research.”



AMD and Intel Update with Xilinx
by Daniel Nenni on 11-06-2020 at 10:00 am


The AMD acquisition of Xilinx is certainly big news but as an insider looking at the media coverage I think there are a few more points to consider. While most of the coverage has been positive there will always be negatives and we can look at that as well.

Intel acquired Altera in 2015 for $16.7B at a 50% premium, which was a major disruption for the FPGA industry. Altera and Xilinx were in a heated battle for manufacturing supremacy when Xilinx joined Altera at TSMC for 28nm and beat Altera to first silicon. Altera responded by moving manufacturing to Intel at 14nm, which resulted in Intel acquiring Altera. Looking back, it was a great move which provided Intel with a larger cloud footprint. Rumors of a Xilinx acquisition swirled afterwards, but a 50%+ price premium was expected and the motivation on either side was not strong enough.

AMD and Intel are also in a heated battle for manufacturing supremacy. With AMD’s move to TSMC at 7nm the battle has shifted in AMD’s favor. Based on the latest investor calls AMD is in a very strong position against Intel for 7nm and 5nm products. Xilinx also reported a great quarter with beats at the top and bottom line with the Data Center Group hitting record revenue, up 23% Q/Q and logging 30% annual growth. This is another one of those 1+1=3 acquisitions.

And for those naysayers who think AMD will abandon the mainstream FPGA market, there really is a simple solution: keep Xilinx as a separate business unit, FPGA business as usual, but also as leverage for the AMD chip business and vice versa.

The other negative I heard is that AMD and Xilinx will be fighting for leading edge wafers, which is not true. Xilinx designs leading edge products, but it takes time for Xilinx customers to get systems developed, qualified and shipped in volume. Xilinx stayed on 28nm for the longest time, and the new Xilinx Virtex UltraScale+ products utilize 14/16nm process technology.

From the CEOs:

“Our acquisition of Xilinx marks the next leg in our journey to establish AMD as the industry’s high performance computing leader and partner of choice for the largest and most important technology companies in the world,” says AMD President and CEO Dr. Lisa Su in a press release.

“This is truly a compelling combination that will create significant value for all stakeholders, including AMD and Xilinx shareholders who will benefit from the future growth and upside potential of the combined company. The Xilinx team is one of the strongest in the industry and we are thrilled to welcome them to the AMD family.”

“We are excited to join the AMD family. Our shared cultures of innovation, excellence and collaboration make this an ideal combination. Together, we will lead the new era of high performance and adaptive computing,” adds Victor Peng, Xilinx president and CEO.

“Our leading FPGAs, Adaptive SoCs, accelerator and SmartNIC solutions enable innovation from the cloud, to the edge and end devices. We empower our customers to deploy differentiated platforms to market faster, and with optimal efficiency and performance. Joining together with AMD will help accelerate growth in our data center business and enable us to pursue a broader customer base across more markets.”

Sounds good to me. Now let’s talk about the other insider synergies. First and foremost is the Xilinx – TSMC relationship. The Xilinx foundry group is one of the best I have seen. I’m not saying AMD has a bad foundry group, but Xilinx has been with TSMC since 28nm and has been first to silicon on each and every node since. There is only upside for AMD here. And this includes packaging. Remember, Xilinx is a close packaging partner with TSMC (CoWoS).

Another interesting synergy is company culture. Since the beginning of AMD their marketing has outpaced engineering. Blame Jerry Sanders (AMD’s founding CEO and showman extraordinaire).  Thankfully, Lisa Su embraced that culture and brought products to market that now more evenly pace marketing.

With Xilinx, on the other hand, engineering always outpaced marketing. We included a chapter on the history of Xilinx in our first book “Fabless: The Transformation of the Semiconductor Industry” as they were one of the first fabless companies. This engineering-centric culture is a byproduct of highly technical CEOs of course.

If Lisa Su is able to combine the two cultures it will be a big part of the 1+1=3 acquisition equation for sure.

Another interesting question: what is next for the FPGA industry? Programmability has never been a more critical part of the semiconductor industry as a whole. In my opinion another acquisition is looming. No, not Lattice Semiconductor or Microchip. I see Achronix as being the next hot FPGA property, and hopefully Nvidia has enough money left after acquiring Arm. Achronix is a $200M or so, 150+ person company that is located conveniently close to Nvidia. If you combine their speedy, high-capacity FPGAs with the Nvidia AI/HPC software ecosystem it will be a 1+1=300 acquisition, absolutely.



Leading Edge Foundry Wafer Prices
by Scotten Jones on 11-06-2020 at 6:00 am


I have seen several articles recently discussing foundry selling prices for leading edge wafers. These articles all quote estimates from a paper by the Center for Security and Emerging Technology (CSET). The paper is available here.

My company, IC Knowledge LLC, is the world leader in cost and price modeling of semiconductors and MEMS. We have been selling commercial cost and price models for over twenty years, and our customer base is a who’s who of system companies, fabless companies, foundries and IDMs, OEMs, materials companies and analysts. I thought it would be interesting to examine how the estimates in the paper were produced and how realistic they are.

Capital Costs

CSET begins their analysis by looking at TSMC’s financial releases and finds that from 2004 to 2018 revenue can be broken down into 24.93% depreciation, 36.16% other costs and 35.91% operating profit. They also come up with a 25.29% capital depreciation rate. They then go on to calculate capital consumed per wafer and use these percentages to infer the other costs. I see a couple of problems with this approach: one, it assumes these ratios are the same for all nodes, and they aren’t; and two, the depreciation rate makes no sense, as I will explore further below.
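To make that approach concrete, here is a minimal sketch of the ratio-based inference as I read it; the capital-consumed-per-wafer figure is a placeholder, not a value from the paper.

# Sketch of the CSET-style ratio inference (placeholder dollar figure, not a CSET value).
DEPR_SHARE   = 0.2493   # depreciation share of TSMC revenue, 2004-2018, per the paper
OTHER_SHARE  = 0.3616   # other-cost share of revenue
PROFIT_SHARE = 0.3591   # operating-profit share of revenue

capital_consumed_per_wafer = 3000.0   # placeholder estimate of depreciation per wafer ($)

# If depreciation is assumed to be the same fixed fraction of revenue at every node,
# the implied wafer price and the other cost elements follow directly.
implied_price = capital_consumed_per_wafer / DEPR_SHARE
other_costs   = implied_price * OTHER_SHARE
profit        = implied_price * PROFIT_SHARE

print(f"implied wafer price: ${implied_price:,.0f}")
print(f"implied other costs: ${other_costs:,.0f}, operating profit: ${profit:,.0f}")
# The weakness: these shares are company-wide averages, so the method forces the same
# cost structure onto every node.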

The capital consumed calculation is as follows:

“To obtain capital consumed per wafer, we first calculate capital investment per wafer processed per year. TSMC currently operates three GigaFabs (Fabs 12, 14, and 15) with a fourth (Fab 18) scheduled to come online in 2020 with expansion thereafter.”

This ignores TSMC’s Fab 16 with two phases in China.

“These four fabs include a total of 23 fab locations each with a known initial capital investment in 2020 USD— representing investments in facilities, clean rooms, and purchase of SME—and annual 300 mm wafer processing capacity.”

Fabs 12, 14 and 15 are each 7 phases, and Fab 18 is planned to be 6 phases. Apparently they are considering the 21 phases from Fabs 12, 14, and 15 plus the 2 phases of Fab 18 that have recently come on-line, and ignoring Fab 16 (although Fab 16 is relatively small and therefore less significant than the GigaFabs).

They plot capital investment per 300mm wafer processed per year and fit an exponential trend line to the plot.

I do not know what their specific data source is. TSMC sometimes announces fab capacity and initial investment, but not always, and these are often more aspirational numbers than actual costs. These fabs also often have an initial cleanroom build and are then equipped over time as they are ramped up, with ramps covering more than one year. The ultimate fab capacity is often the result of additional investments. It is not clear to me how this becomes a cost per wafer per year with this approach. These values eventually get converted to capital investment per wafer per year by node, based on the year and quarter each node was introduced, and then assuming the capital investment per wafer by year represents that node. The problem is TSMC is not always ramping only one node in any given year, plus the other issues discussed above.

The way we address capital cost in our models is fundamentally different and more detailed.

  1. For each node we build a representative process flow, based on our own experience, consultation with industry experts, conference papers, patents, and actual construction analysis from our strategic partner TechInsights.
  2. We maintain a database of every 300mm wafer fab in the world tracking the initial and all upgrade states. This database is a combination of public and private sources.
  3. We maintain a database of equipment throughput, cost and footprint by node and wafer size. Once again this is based on public and private sources. Our Strategic Cost and Price Model is in use at all the major equipment OEMs and we have an extensive network of sources for this information.
  4. For each 300mm fab we calculate a fab size and cost and equipment set based on the specifics of the process, and the fab states. We calculate this for the initial fab state and up to twelve upgrades or expansions per fab.

With the amount of information going into these calculations and the complex methods used, we need to validate our methods. Around 2000, 300mm fabs began to come on-line and quickly accounted for most of the capital spending at all the major semiconductor companies. For TSMC as an example, we have taken their publicly disclosed capital spending each year since 2000 and plotted it versus year as a cumulative number. We have then modeled all their 300mm fabs and spending by fab by year and added that up to create a cumulative plot. After accounting for some residual 200mm spending in the early years and any spending not yet on-line (our spending calculations are based on on-line dates), we get the following plot.

Figure 1. TSMC calculated versus actual cumulative capital spending.

The resulting plot shows an excellent match. We have done this same analysis for Samsung, Intel, Micron Technology and many others with equally good correlation.

TSMC typically focuses a fab on a single node, so we now have capital cost per wafer-out estimates by node. Comparing our estimates by node to the estimates in row 2 of table 9 in the CSET paper, we find that at the 90nm node the values are similar, but they steadily diverge as the nodes get smaller.

In the CSET paper, rows 3 and 4 provide the depreciated and undepreciated net capital at the start of 2020, which are then used with a 25.29% depreciation rate to get the capital consumed per wafer value presented in row 5. This whole calculation makes no sense to me. TSMC has disclosed that they use 5-year straight-line depreciation for equipment and 10-year straight-line for facilities. What this means is that if you put a piece of equipment on-line, you write off 20% of the equipment investment each year for the first 5 years and then the depreciation goes to zero in year 6. For facilities you write off 10% of the value each year for 10 years and then the depreciation goes to zero. 90nm in 2020 is fully depreciated, and even brand new 5nm investment is only depreciating at something less than 20% per year after blending equipment and facility depreciation.
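Here is a minimal sketch of that straight-line schedule, using placeholder capital amounts, to show why a flat 25.29% rate does not line up with how depreciation actually rolls off.

# Straight-line depreciation sketch (placeholder capital amounts).
equipment_capex = 9_000.0   # placeholder $M of equipment put on-line in year 0
facility_capex  = 1_000.0   # placeholder $M of facilities put on-line in year 0

EQUIP_LIFE, FAC_LIFE = 5, 10   # years, per TSMC's disclosed straight-line schedule

for year in range(1, 11):
    equip_depr = equipment_capex / EQUIP_LIFE if year <= EQUIP_LIFE else 0.0
    fac_depr   = facility_capex / FAC_LIFE if year <= FAC_LIFE else 0.0
    total      = equip_depr + fac_depr
    blended    = total / (equipment_capex + facility_capex)
    print(f"year {year:2d}: depreciation ${total:,.0f}M ({blended:.1%} of the initial capital)")
# With this split the blended rate is 19% in years 1-5, drops to 1% in years 6-10,
# and never approaches a flat 25.29% per year.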

Applying five-year straight-line depreciation to the equipment, equipment installation and automation values, and ten-year straight-line depreciation to the facilities values from our calculation, we get the following depreciation by node plot. Also on the plot is TSMC’s reported depreciation, and as in the previous figure you can see the match is excellent.

Figure 2. TSMC calculated depreciation by node and quarter versus TSMC reported depreciation.

Based on these plots and other comparisons we have made it is clear our capital calculations are highly accurate.

Other Costs and Markup

This brings us to the other elements that add up to revenue.

First, to complete the wafer cost calculation:

  1. Starting Wafer – starting wafers are purchased from wafer suppliers and we have contacts at wafer brokers and wafer suppliers who provide us with the open market pricing.
  2. Labor – we have an extensive database of direct and indirect labor rates by country and year built up from a network of private and public sources.
  3. Equipment maintenance – we use a percentage of the capital investment in equipment to estimate the equipment maintenance cost. The percentage varies depending on wafer size and product type being made in the fab, for example memory is different than logic.
  4. Facilities – we do detailed facilities operating cost calculation accounting for electric and natural gas rates by country and year and equipment requirements, ultrapure water cost, waste disposal, facilities maintenance, insurance costs, and more. Once again, we have public and private data sources.
  5. Consumables – based on the process flow we calculate the usage of hundreds of individual consumables and, combining that with a database of cost by consumable and year, calculate the total consumable costs. We get consumable usage and cost data from our strategic partner Linx Consulting as well as an extensive network of materials suppliers.

The summation of these values and the depreciation results in manufacturing costs per node.

To get to selling price, a gross margin must be applied, where the gross margin covers Selling, General and Administrative costs (SG&A), Research and Development costs (R&D) and operating profit. TSMC discloses average gross margin in their filings; however, gross margin is not flat across their product line (it also varies with fab utilization). When a new process comes on-line, depreciation costs are high, but then as the equipment becomes fully depreciated the wafer manufacturing cost drops by more than half. TSMC and other foundries typically do not pass all of the cost reduction that occurs when equipment becomes fully depreciated on to the customer; the net result is that gross margins are lower for newer processes and higher for older, fully depreciated processes. We account for this in our calculation, but once again the calculation disclosed in the CSET paper assumes the other wafer costs and gross margin are consistent from node to node.
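As a simple illustration of the markup step, the sketch below converts a manufacturing cost into a selling price for a target gross margin; the costs and margins are placeholders, not outputs of our models.

# Placeholder illustration of cost -> price with a node-dependent gross margin.
def wafer_price(manufacturing_cost, gross_margin):
    """Price such that (price - cost) / price equals the target gross margin."""
    return manufacturing_cost / (1.0 - gross_margin)

# New node: heavy depreciation in the cost, lower gross margin (placeholder values).
new_node_price = wafer_price(manufacturing_cost=8_000, gross_margin=0.45)

# Same node years later: equipment fully depreciated, cost roughly halved, but the
# foundry keeps part of the saving, so the margin is higher (placeholder values).
mature_node_price = wafer_price(manufacturing_cost=3_500, gross_margin=0.55)

print(f"new-node price:    ${new_node_price:,.0f} per wafer")
print(f"mature-node price: ${mature_node_price:,.0f} per wafer")
# Assuming one company-wide gross margin for every node, as the paper does, misses
# this node-to-node variation.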

In our case we have a variety of ways to check our wafer prices including customers who buy wafers and compare them to our calculations, and our ability to use proprietary methods to compare our results to company filings. For example, we have compared our calculated results to TSMC’s filings every quarter from Q1-2000 to Q2-2020 with excellent match every quarter.

This brings us to the key question: how accurate are the row 7 “Foundry sale price per wafer” values in the paper? The answer is not very. There is basically an error slope to the results, with the 90nm prices being too low and the 5nm prices too high.

Conclusion

Although the values in the CSET paper are not off by an order of magnitude, they are off. I have customers frequently ask me for rules of thumb, and I tell them my rule of thumb is that all rules of thumb are wrong.  Accurate estimates of wafer manufacturing costs and selling prices require detailed calculations such as are embodied in our commercial cost and price models. We currently offer five cost and price models targeting different segments of the semiconductor and MEMS industries.

For more information on our models please go to www.icknowledge.com

Also Read:

VLSI Symposium 2020 – Imec Monolithic CFET

SEMICON West – Applied Materials Selective Gap Fill Announcement

Imec Technology Forum and ASML