TSMC Foundry 2.0 and Intel IDM 2.0
by Daniel Nenni on 07-22-2024 at 10:00 am

TSMC 2Q2024 Investor Call

When Intel entered the foundry business with IDM 2.0 I was impressed. Yes, Intel had tried the foundry business before, but this time they changed the face of the company with IDM 2.0 and went “all-in” so to speak. The progress has been impressive and today I think Intel is well positioned to capture the “not TSMC” business by providing a trusted alternative to TSMC at the leading edge. The one trillion dollar question is: Will Intel take business away from TSMC on a competitive basis? I certainly hope so, for the greater good of the semiconductor industry.

On the most recent TSMC investor call, which is the first call with C.C. Wei as Chairman and CEO, TSMC branded their foundry strategy as Foundry 2.0. It is not a change of strategy; it is a new branding of what TSMC has been successfully doing for years now: adding additional products and services to keep customers engaged. 3D IC packaging is a clear example but certainly not the only one. The Foundry 2.0 brand is well earned and is clearly targeted at Intel IDM 2.0, which I think is funny and a great example of C.C. Wei’s sharp wit.

I thought for sure that Intel 18A would be the breakout foundry node for Intel but according to the TSMC investor call, that is not the case. TSMC N3 was a runaway hit with 100% of the major design wins. Even Intel used TSMC N3. I hadn’t seen anything like this since TSMC 28nm which was on allocation as a result of being the only viable 28nm HKMG node out of the gate. History repeated itself with N3 due to the delay of 3nm alternatives. This made the TSMC ecosystem the strongest I have ever witnessed with both the domination of N3 and TSMC’s rapidly expanding packaging success. I had originally thought that some customers would stick with N3 until the second generation of N2 appeared but I was wrong. On yesterday’s investor call:

CC Wei: We expect the number of the new tape-outs for 2-nanometer technologies in its first two years to be higher than both 3-nanometer and 5-nanometer in their first two years. N2 will deliver full-node performance and power benefits, with 10% to 15% speed improvement at the same power, or 25% to 30% power improvement at the same speed, and more than 15% chip density increase as compared with the N3E.

CC had mentioned this before but I can now confirm this based on my hallway discussions inside the ecosystem at recent conferences: N2 designs are in progress and will start taping out towards the end of this year.

I really don’t think the TSMC ecosystem gets enough credit, especially after the overwhelming success of N3, but the N2 node is a force in itself:

CC Wei: N2 technology development is progressing well, with device performance and yield on track or ahead of plan. N2 is on track for volume production in 2025 with a ramp profile similar to N3. With our strategy of continuous enhancement, we also introduce N2P as an extension of our N2 family. N2P features a further 5% performance at the same power or 5% to 10% power benefit at the same speed on top of N2. N2P will support both smartphone and HPC applications, and volume production is scheduled for the second half of 2026. We also introduce A16 as our next nanosheet-based technology, featuring Super Power Rail, or SPR, as a separate offering.

And, of course, the TSMC freight train continues:

CC Wei: TSMC’s SPR is an innovative, best-in-class backside power delivery solution that is the first in the industry to incorporate a novel backside contact scheme to preserve gate density and device width flexibility. Compared with N2P, A16 provides a further 8% to 10% speed improvement at the same power, or 15% to 20% power improvement at the same speed, and additional 7% to 10% chip density gain. A16 is best suited for specific HPC products with complex signal routes and dense power delivery network. Volume production is scheduled for the second half of 2026. We believe N2, N2P, A16, and its derivatives will further extend our technology leadership position and enable TSMC to capture the growth opportunities way into the future.

Congratulations to TSMC on their continued success, it is well deserved. I also congratulate the Intel Foundry team for making a difference and I hope the 14A foundry node will give the industry a trusted alternative to TSMC out of the starting gate.  In my opinion, had it not been for Intel and of course CC Wei’s leadership and response to Intel’s challenge, we as an industry would not be quickly approaching the one trillion dollar revenue mark. Say what you want about Nvidia, but as Jensen Huang openly admits, TSMC and the foundry business is the real hero of the semiconductor industry, absolutely.

Also Read:

Has ASML Reached the Great Wall of China

The China Syndrome- The Meltdown Starts- Trump Trounces Taiwan- Chips Clipped

SEMICON West- Jubilant huge crowds- HBM & AI everywhere – CHIPS Act & IMEC


VLSI Technology Symposium – Intel describes i3 process, how does it measure up?
by Scotten Jones on 06-28-2024 at 6:00 am


At the VLSI Technology Symposium this week Intel released details on their i3 process. Over the last four nodes Intel has had an interesting process progression. In 2019, 10nm finally entered production with both high performance and high-density standard cells. 10nm went through several iterations eventually resulting in i7, a high-performance cell only process. When we characterize process density, we always talk about the highest density logic standard cell: 10nm achieved just over 100 million transistors per millimeter squared (MTx/mm2), while i7 in 2022 only achieved approximately 64 MTx/mm2 because it only had high performance cells. i4 entered production in 2023 and is once again a high-performance cell only process and achieves approximately 130 MTx/mm2. Finally, i3 will enter production in 2024 on multiple Intel products providing both high performance and high-density cells. The high-density cells achieve approximately 148 MTx/mm2 transistor density.
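
As a quick back-of-the-envelope check, the short sketch below computes the node-to-node density ratios from the approximate MTx/mm2 figures quoted above (these are the article's rounded figures, not exact TechInsights measurements):

```python
# Approximate high-density logic transistor densities quoted above (MTx/mm^2).
# These are rounded, illustrative figures, not exact measurements.
density = {
    "10nm (2019, HD cells)": 100,
    "i7 (2022, HP cells only)": 64,
    "i4 (2023, HP cells only)": 130,
    "i3 (2024, HD cells)": 148,
}

# Node-to-node scaling factor: how much denser each process is than its predecessor.
# The apparent drop at i7 simply reflects the lack of high-density cells.
nodes = list(density)
for prev, curr in zip(nodes, nodes[1:]):
    print(f"{prev} -> {curr}: {density[curr] / density[prev]:.2f}x")
```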

The key dimensions for the processes are compared in figure 1.

Figure 1. Process Key Dimensions Comparison.

In figure 1 the values for 10nm and i7 are actual values measured by TechInsights on production parts; the i4 and i3 values are from the VLSI Technology papers on i4 [1] and i3 [2]. The cell height for i3 of 210nm is for high density cells, and there is also a 240nm height high performance cell with the same density as the i4 process. The 240nm height high performance cells are 3-fin devices, the same as i4, and the 210nm high density cells are 2-fin devices with a wide metal zero.

Figure 2 presents the density changes between the processes in graphics form.

From 32nm through 10nm Intel accelerated from 2.0x to 2.4x and then to 2.7x density improvements, but as is the case with other companies pushing the leading edge, i3 is a less than 2x density jump.

Figure 2. Intel Process Density Comparison.

Figure 3 is from the Intel presentation and presents more details on the i4 to i3 process shrink.

Figure 3. i4 to i3 Process Shrink.

The i3 process will offer multiple variants targeted at different applications.

  • i3 base process and i3-T with TSVs targeted at client, server and base die for chiplet applications.
  • i3-E offers native 1.2 volt I/O devices, deep N-wells, and long channel analog devices, and is targeted at chipsets and storage applications.
  • i3-PT targets high performance computing and AI with 9μm pitch TSVs and hybrid bonding.

Figure 4 summarizes the process variants.

Figure 4. i3 Process Variants.

i3 features:

  • Smaller M2 pitch than i4.
  • Better fin profile.
  • Utilizes dipoles to set threshold voltages, whereas i4 does not use dipoles. Dipoles improve gate oxide reliability.
  • Offers 14, 18, and 21 metal layer options (counts include metal 0).
  • 4 threshold voltages: ULVT, LVT, SVT, and HVT.
  • Contact optimization to provide less overlap capacitance.
  • More effective EUV usage; i4 was Intel’s first EUV process, and the i3 EUV steps are less complex.
  • Lower line resistance and capacitance than i4.
  • 5x lower leakage at the same drive current as i4.
  • Increased frequency and drive current with no hot carrier increase.
  • Interconnect delay is now approximately half of overall delay; the base process has better RC delay, and the PT process is even better.
  • At the same power i3 HD cells provide 18% better performance than i4 HP cells.

Figure 5 presents the interconnect pitches for the 14, 18, and 21 metal options.

Figure 5. Interconnect Pitches.

Figure 6 illustrates the improvement in interconnect RC delay.

Figure 6. Interconnect RC Delay.

And finally, figure 7 illustrates the 18% performance improvement over i4.

Figure 7. Interconnect Delay Improvement.

During a question and answer session at the analyst briefing, Intel disclosed that the channels are all silicon, with no silicon germanium channels. Also, i4 designs have been ported to i3 and they are seeing PPA improvements on the same designs.

i3 is currently in high volume manufacturing with multiple Intel products.

i3 clearly represents a significant improvement over i4.

Comparisons to competitors

i3 is a significant improvement over i4 but how does it compare to competitors?

TechInsights has analyzed density, performance, and cost of i3 versus Samsung and TSMC processes. That analysis is available in the TechInsights platform here (free registration required).

Conclusion

Intel’s i3 process is a significant step forward from Intel’s i4 process with better density and performance. Intel’s i3 process is a more competitive foundry process than previous generations. Cost is more in line with other foundry processes, and density is slightly lower than Samsung 3nm and much lower than TSMC 3nm, but it has the best performance of the “3nm” processes.

Also Read:

What’s all the Noise in the AI Basement?

The Case for U.S. CHIPS Act 2

Intel is Bringing AI Everywhere


TSMC Advanced Packaging Overcomes the Complexities of Multi-Die Design
by Mike Gianfagna on 06-10-2024 at 6:00 am


The TSMC Technology Symposium provides a worldwide stage for TSMC to showcase its advanced technology impact and the extensive ecosystem that is part of the company’s vast reach. These events occur around the world and the schedule is winding down. TSMC covers many topics at its Technology Symposium, including industry-leading HPC, smartphone, IoT, and automotive platform solutions, 5nm, 4nm, 3nm, 2nm processes, ultra-low power, RF, embedded memory, power management, sensor technologies, and AI enablement. Capacity expansion and green manufacturing achievements were also discussed, along with TSMC’s Open Innovation Platform® ecosystem. These represent significant achievements for sure. For this post, I’d like to focus on another set of significant achievements in advanced packaging. This work has substantial implications for the future of the semiconductor industry. Let’s examine how TSMC advanced packaging overcomes the complexities of multi-die design.

Why Advanced Packaging is Important

Advanced packaging is a relatively new addition to the pure-play foundry model. It wasn’t all that long ago that packaging was a not-so-glamorous finishing requirement for a chip design that was outsourced to third parties. The design work was done by package engineers who got the final design thrown over the wall to fit into one of the standard package configurations. Today, package engineers are the rock stars of the design team. These folks are involved at the very beginning of the design and apply exotic materials and analysis tools to the project. The project isn’t real until the package engineer signs off that the design can indeed be assembled.

With this part of the design process becoming so critically important (and difficult) it’s no surprise that TSMC and other foundries stepped up to the challenge and made it part of the overall set of services provided. The driver for all this change can be traced back to three words: exponential complexity increase. For many years, exponential complexity increase was delivered by Moore’s Law in the form of larger and larger monolithic chips. Today, it takes more effort and cost to get to the next process node, and when you finally get there the improvement isn’t as dramatic as it once was. On top of that, new designs are so huge that they can no longer fit on a single chip.

These trends have catalyzed a new era of exponential complexity increase, one that relies on heterogeneous integration of multiple dies (or chiplets) in a single package, and that has driven the intense focus on advanced packaging as a critical enabling technology. TSMC summarizes these trends nicely in the diagram below.

TSMC’s Advanced Packaging Technologies

TSMC presented many parts of its strategy to support advanced packaging and open the new era of heterogeneous integration. These are the technology building blocks for TSMC’s 3DFabric™ Technology Portfolio:

  • CoWoS®: Chip-on-Wafer-on-Substrate is a 2.5D wafer-level multi-chip packaging technology that incorporates multiple dies side-by-side on a silicon interposer to achieve better interconnect density and performance. Individual chips are bonded through micro-bumps on a silicon interposer forming a chip-on-wafer (CoW).
  • InFO: Integrated Fan-Out wafer level packaging is a wafer level system integration technology platform, featuring high density RDL (Re-Distribution Layer) and TIV (Through InFO Via) for high-density interconnect and performance. The InFO platform offers various package schemes in 2D and 3D that are optimized for specific applications.
  • TSMC-SoIC®: A service platform that provides front-end, 3D inter-chip (3D IC) stacking technologies for re-integration of chiplets partitioned from a system on chip (SoC). The resulting integrated chip outperforms the original SoC in system performance. It also affords the flexibility to integrate additional system functionalities. The platform is fully compatible with CoWoS and InFO, offering a powerful “3Dx3D” system-level solution.

The figure below summarizes how the pieces fit together.

Getting all this to work across the ecosystem requires collaboration. To that end, TSMC has established the 3DFabric Alliance, working with 21 industry partners across memory, substrate, testing, and OSAT collaborations to lower 3DIC design barriers, improve STCO, and accelerate 3DIC adoption. The group also drives 3DIC development in tools, flows, IP, and interoperability for the entire 3DFabric stack. The figure below summarizes the group of organizations that are involved in this work.

There is so much effort going on to support advanced packaging at TSMC. I will conclude with one more example of this work. 3Dblox™ is a new standard language that will help make designing 3D ICs much easier. TSMC created 3Dblox alongside its EDA partners such as Ansys, Cadence, Intel, Siemens, and Synopsys to unify the design ecosystem with qualified EDA tools and flows for TSMC 3DFabric technology. The figure below shows the progress that has been achieved with this effort.

3Dblox Roadmap

To Learn More

I have touched on only some of the great work going on at TSMC to create advanced packaging solutions to pave the way for the next era of multi-die, heterogeneous design. You can get more information about this important effort at TSMC here. And that’s how TSMC advanced packaging overcomes the complexities of multi-die design.

Also Read:

What’s all the Noise in the AI Basement?

The Case for U.S. CHIPS Act 2

Real men have fabs!


Ncredible Nvidia
by Claus Aasholm on 05-24-2024 at 8:00 am


This article previews Nvidia’s earnings release and will be updated during and after the earnings release. As usual, we will compare and contrast the Nvidia earnings with our supply chain glasses to identify changes and derive insights. Please return to this article, as it will be updated over the next week as we progress with our analysis.

After three insane quarters, Nvidia’s guidance suggests that this quarter will be calmer. I am not sure the stock market can handle growth rates like Nvidia is enjoying once more without going insane. The analysts’ consensus is slightly above Nvidia’s at $24.6B. Our survey shows that the industry expectations are somewhat more optimistic.


From humble beginnings as a graphics company adored by hardcore gamers only, Nvidia is now the undisputed industry champion and has made the industry famous far beyond us wafer nerds.

When people hear you are in the semiconductor industry, they want to know what you think of Nvidia’s stock price (which is insane but could be even more insane). Obviously, this is driven by the irresistible hunger for AI in the data centre, but this is not our expertise (we recommend Michael Spencer’s AI Supremacy for that). We will also refrain from commenting on stock prices and concentrate on the business and supply chain side of the story.

The supply chain has already created a frenzy amongst analysts as TSMC reported April Revenue up almost 60%. Our analysis of the TSMC revenue numbers aligned to an April quarter end shows that the TSMC trend is relatively flat and does not reveal much about Nvidia’s numbers. However, TSMC’s revenue numbers do not have to change much for Nvidia’s numbers to skyrocket. The value is not in the silicon right now as we will be diving into later.

The most important market for Nvidia is the data center and its sky-high demand for AI servers. Over the last couple of years, Nvidia and AMD have been chipping away at Intel’s market share until three quarters ago, when Nvidia’s Datacenter business skyrocketed and sucked all value out of the market for the other players. Last quarter Nvidia captured over 87% of all operating profit in the processing market.

This has left Intel, in particular, with a nasty dilemma:

Nvidia has eaten Intel’s lunch.

Intel has recently unveiled a bold and promising strategy, a testament to its resilience and determination. However, this strategy comes with significant financial challenges. As illustrated in the comparison below, Intel has historically been able to fund its approximately $4B/qtr CapEx as its operating profits hovered around $11B. But as the market changed, Intel’s operating profit is now approaching zero while its CapEx spending has increased, as a result of the new strategy of also becoming a foundry, to the area of $6B/qtr. The increased spending is not a temporary situation but a reality that will persist.

 

Intel can no longer finance its strategy through retained earnings and must engage with the investor community to obtain financing. Intel is no longer the master of its destiny.

Intel is hit by two trends in the Datacenter market:

  1. The transition from CPU to GPU
  2. The transition from Components to Systems.

Not only did Intel miss the GPU transition business, but it also lost the CPU business because of the system transition. Nvidia GPU systems will use Nvidia’s own CPUs; Intel is not invited.

The revolution of the semiconductor supply chain

There are two main reasons the AI revolution is changing the data center part of the supply chain.

One is related to the change from standard packaged DRAM to High-Bandwidth Memory (HBM), and the other is related to new packaging technologies (CoWoS by TSMC). Both are related and caused by the need for bandwidth. As the GPU’s computational power increases, it must have faster memory access to deliver the computational advantage needed. The memory needs to be closer to the GPU, and there needs to be more of it, a lot more.

A simplified view of the relevant packaging technologies can be seen below:

The more traditional packaging method (2D) involves mounting the die on a substrate and connecting the pads with bond wires. 2.3D technology can bring the chips closer by flipping them around and mounting them on an interposer (often Silicon).

The current NVIDIA GPUs are made with 2.5D technology. The GPUs are flanked by stacks of DRAM die controlled by a base Memory Logic Die.

3D technology will bring memory to the top of the GPU and introduce many new problems for intelligent semiconductor people to solve.

This new technology is dramatically changing the supply chain. In the traditional model, standard for the rest of the industry, the server card manufacturer procured all components individually from the suppliers.

The competition between the Processing, Memory and the Server companies kept pricing in check for the cloud companies.

Much has become more complex in the new AI server supply chain, as seen below.

This change again makes the Semiconductor supply chain more complex, but complexity is our friend.

What did Nvidia report, and what does it mean?

Nvidia posted $26B in revenue, significantly above guidance and below the $27.5B we believed was the current capacity limit. It looked like Nvidia could squeeze the suppliers to perform at the maximum.

The result was a new record in the Semiconductor industry. Back in ancient history (last year), only Intel and Samsung could break quarterly records, as can be seen below.

Nvidia also disclosed their networking revenue for the first time; through the earlier calls we had a good idea of the size, but now it is confirmed.

As we believe almost all of the networking revenue is in the data center category, we expected it to grow like the processing business, but networking revenue was down just under 5% quarterly, suggesting the bill of materials is shifting in the AI server products.

Even though the networking revenue was down sequentially, it was still up from the same quarter last year, making Nvidia the fastest growing networking company in the industry. More about that later.

The most important market for Nvidia is the data center processing market and its rapid uncontrolled disassembly of the old market structure. From being a wholly owned subsidiary of Intel back in 2019, the painful story unfolds below.

In Q1-2024, Nvidia generated more additional processing revenue in the data center than Intel’s total revenue. From an operating profit perspective, Nvidia had an 87% market share and delivered a new record higher than the combined operating profit in Q1-24.

Although Nvidia reported flat networking revenue, the company’s dominance is spreading to Data center networking. Packaging networking into server systems ensures that the networking components are not up for individual negotiation and hurts Nvidia’s networking competitors. It also provides an extraordinary margin.

We have not yet found out what is behind the drop in networking, but it is likely a configuration change in the server systems or a change in categorization inside Nvidia.

Nvidia declared a 10-1 stock split.

Claus Aasholm @siliconomy Spending my time dissecting the state of the semiconductor industry and the semiconductor supply chain. “Your future might be somebody else’s past.”

Also Read:

Tools for Chips and Dips an Overview of the Semiconductor Tools Market

Oops, we did it again! Memory Companies Investment Strategy

Nvidia Sells while Intel Tells

Real men have fabs!


Nvidia Sells while Intel Tells
by Claus Aasholm on 05-01-2024 at 8:00 am

AMD Transformation 2024

AMD’s Q1-2024 financial results are out, prompting us to delve into the Data Center Processing market. This analysis, usually reserved for us Semiconductor aficionados, has taken on a new dimension. The rise of AI products, now the gold standard for semiconductor companies, has sparked a revolution in the industry, making this analysis relevant to all.

Jensen Huang of Nvidia is called the “Taylor Swift of Semiconductors” and just appeared on CBS 60 Minutes. He found time for this between autographing Nvidia AI Systems and suppliers’ memory products.

Lisa Su of AMD, who has turned the company’s fate around, is now one of only 26 self-made female billionaires in the US. She was also named CEO of the Year by Chief Executive magazine and has been on the cover of Forbes magazine. Lisa Su still needs to become famous in Formula 1, though.

Hock Tan of Broadcom, desperately trying to avoid critical questions about the change of VMware licensing, would rather discuss the company’s strides in AI accelerator products for the Data Center, which have been significant.

An honorable mention goes to Pat Gelsinger of Intel, the former owner of the Data Center processing market. He has relentlessly been in the media and on stage, explaining the new Intel strategy and his faith in the new course. He has been brutally honest about Intel’s problems and the monumental challenges ahead. We deeply respect this refreshing approach but also deal with the facts. The facts do not look good for Intel.

AMD’s reporting

While the AMD result was challenging from a corporate perspective, the Data Center business, the topic of this article, did better than the other divisions.

The gaming division took a significant decline, leaving the Data Center business as the sole division likely to deliver robust growth in the future. As can be seen, the Data Center business delivered a solid operating profit. Still, it was insufficient to take a larger share of the overall profit in the Data Center Processing market. The 500-pound gorilla in the AI jungle is not challenged yet.

The Data Center Processing Market

Nvidia’s Q1 numbers have been known for a while (our method is to allocate all of the quarterly revenue in the quarter of the last fiscal month), together with Broadcom’s, the newest entry into the AI processing market. With Intel and AMD’s results, the Q1 overview of the market can be made:

Despite a lower growth rate in Q1-24, Nvidia kept gaining market share, keeping the other players away from the table. Nvidia’s Data Center Processing market share increased from 66.5% to 73.0% of revenue. In comparison, the share of operating profit declined from 88.4% to 87.8% as Intel managed to get better operating profits from their declining revenue in Q1-24.

Intel has decided to stop hunting low-margin businesses while AMD and Broadcom maintain reasonable margins.

As good consultants, we are never surprised by any development in our area once presented with numbers. That will not stop us from diving deeper into the Data Center Processing supply chain. This is where all energy in the Semiconductor market is concentrated right now.

The Supply Chain view of the Data Center Processing

A CEO I used to work for used to remind me: “When we discuss facts, we are all equal, but when we start talking about opinions, mine is a hell of a lot bigger than yours.”

Our consultancy is built on a foundation of not just knowing what is happening but also being able to demonstrate it. We believe in fostering discussions around facts rather than imposing our views on customers. Once the facts are established, the strategic starting point becomes apparent, leading to more informed decisions.

“There is nothing more deceptive than an obvious fact.” Sherlock Holmes

Our preferred tool of analysis is our Semiconductor Market model, seen below:

The model has several different categories that have proven helpful for our analysis and are described in more detail here.

We use a submodel to investigate the Data Center supply chain. This is also an effective way of presenting our data and insights (the “Rainbow” supply and demand indicators) and adding our interpretations as text. Our interpretations can undoubtedly be challenged, but we are okay with that.

Our current finding, that the supply chain is struggling to get sufficient CoWoS packaging capacity and High Bandwidth Memory, is not a controversial view and is shared by most who follow the semiconductor industry.

This will not stop us from taking a deeper dive to be able to demonstrate what is going on.

The Rainbow bars between the different elements in the supply chain represent the current status.

The interface between Materials & Foundry shows that the supply is high, while the demand from TSMC and other foundries is relatively low.

Materials situation

This supply/demand situation should create a higher inventory position until the two bars align again in a new equilibrium. The materials inventory index does show elevated inventory, and the materials markets are likely some distance away from recovery.

Semiconductor Tools

The recent results of the semiconductor tools companies show that revenues are going down, and the appetite of IDMs and foundries alike indicates that investment is saturated. The combined result can be seen below, along with essential semiconductor events:

The tools market has flatlined since the Chips Act was signed, and there can certainly be a causal effect (something we will investigate in a future post). Even though many new factories are under construction, these activities have not yet affected the tools market.

A similar view of the subcategory of logic tools that TSMC uses shows an even more depressed revenue situation. Tools revenue is back to late-2021 levels, at a time of unprecedented expansion of the semiconductor manufacturing footprint:

This situation is confirmed on the demand side as seen in the TSMC Capital Investments chart below.

Right after the Chips Act was signed, TSMC lowered the capex spend to close to half, making life difficult for the tools manufacturers.

The tools/foundry interface has high supply and low demand, as can be seen in the supply chain model. The tools vendors are not the limiting factor for GPU AI systems.

The Foundry/Fabless interface

To investigate the supply and demand situation between TSMC and its main customers, we selected AMD and Nvidia, as they have the simplest relationship with TSMC: the bulk of their business is processors made by TSMC.

The inventory situation of the 3 companies can be seen below.

TSMC’s inventory is building up slightly, which does not indicate a supply problem; however, this is TSMC’s total inventory, so there could be other moving parts. The Nvidia peak aligns with the introduction of the H100.

TSMC’s HPC revenue aligns with the Cost of Goods sold of AMD and Nvidia.

As should be expected, there are no surprises in this view. As TSMC’s HPC revenue is growing faster than the COGS of Nvidia and AMD, we can infer that a larger part of revenue is with customers other than Nvidia and AMD. This is a good indication that TSMC is not supply limited from an HPC silicon perspective. Still, demand is outstripping supply at the gates of the data centers.

The Memory, IDM interface

That the sky-high demand for AI systems is supply limited can be seen in the wild operating profit Nvidia is enjoying right now. The supply chain of AI processors looks smooth, as we saw before. This is confirmed by TSMC’s passivity in buying new tools. If there were a production bottleneck, TSMC would have taken action from a tools perspective.

An analysis of memory production tools hints at the current supply problem.

The memory companies put the brakes on investments right after the last downcycle began. The last two quarters the demand has increased in anticipation of the High Bandwidth Memory needed for AI.

Hynix, in their recent investor call, confirmed that they had been underinvesting and will have to limit standard DRAM manufacturing in order to supply HBM. This is very visible in our Hynix analysis below.

Apart from the limited supply of HBM, there is also a limitation of advanced packaging capacity for AI systems. As this market is still embryonic and developing, we have not yet developed a good data method to be able to analyze it but are working on it.

While our methods do not prove everything, we can bring a lot of color to your strategy discussions should you decide to engage with our data, insights, and models.


Also Read:

Real men have fabs!

Intel High NA Adoption

Intel is Bringing AI Everywhere

TSMC and Synopsys Bring Breakthrough NVIDIA Computational Lithography Platform to Production


TSMC and Synopsys Bring Breakthrough NVIDIA Computational Lithography Platform to Production
by Daniel Nenni on 04-02-2024 at 6:00 am


NVIDIA cuLitho Accelerates Semiconductor Manufacturing’s Most Compute-Intensive Workload by 40-60x, Opens Industry to New Generative AI Algorithms.

An incredible example of semiconductor industry partnerships was revealed during the Synopsys User Group (SNUG) last month. It started with a press release, but there is much more to learn here regarding semiconductor industry dynamics.

I saw a very energized Jensen Huang, co-founder and CEO of Nvidia, at GTC which was amazing. It was more like a rock concert than a technology conference. Jensen appeared at SNUG in a much more relaxed mode chatting about the relationship between Nvidia and Synopsys. Jensen mentioned that in exchange for Synopsys software, Nvidia gave them 250,000 shares of pre IPO stock which would now be worth billions of dollars. I was around back then at the beginning of EDA, Foundries, fabless, and it was quite a common practice for start-ups to swap stock for tools.

Jensen said quite clearly that without the support of Synopsys, Nvidia would not have gotten off the ground. He has said the same about TSMC. In fact, Jensen and TSMC founder Morris Chang are very close friends as a result of that early partnership.

The new cuLitho product has enabled a 45x speedup of curvilinear flows and a nearly 60x improvement on more traditional Manhattan-style flows. These are incredible cost savings for TSMC and TSMC’s customers and there will be more to come.

“Computational lithography is a cornerstone of chip manufacturing,” said Jensen Huang, founder and CEO of NVIDIA. “Our work on cuLitho, in partnership with TSMC and Synopsys, applies accelerated computing and generative AI to open new frontiers for semiconductor scaling.”

“Our work with NVIDIA to integrate GPU-accelerated computing in the TSMC workflow has resulted in great leaps in performance, dramatic throughput improvement, shortened cycle time and reduced power requirements,” said Dr. C.C. Wei, CEO of TSMC. “We are moving NVIDIA cuLitho into production at TSMC, leveraging this computational lithography technology to drive a critical component of semiconductor scaling.”

“For more than two decades Synopsys Proteus mask synthesis software products have been the production-proven choice for accelerating computational lithography — the most demanding workload in semiconductor manufacturing,” said Sassine Ghazi, president and CEO of Synopsys. “With the move to advanced nodes, computational lithography has dramatically increased in complexity and compute cost. Our collaboration with TSMC and NVIDIA is critical to enabling angstrom-level scaling as we pioneer advanced technologies to reduce turnaround time by orders of magnitude through the power of accelerated computing.”

“There are great innovations happening in computational lithography at the OPC software layer from Synopsys, at the CPU-GPU hardware layer from NVIDIA with the cuLitho library, and of course, we’re working closely with our common partner TSMC to optimize their OPC recipes. Collectively, we have been able to show some dramatic breakthroughs in terms of performance for one of the most compute-intensive semiconductor manufacturing workloads.” — Shankar Krishnamoorthy, GM of the Synopsys EDA Group

Collaboration and partnerships are still critical for the semiconductor industry, in fact collaborative partnerships have been a big part of my 40 year semiconductor career. TSMC is an easy example with the massive ecosystem they have built. Synopsys is in a similar position as the #1 EDA company, the #1 IP company, and the #1 TCAD company. All of the foundries closely collaborate with Synopsys, absolutely.

Also Read:

Synopsys SNUG Silicon Valley Conference 2024: Powering Innovation in the Era of Pervasive Intelligence

2024 DVCon US Panel: Overcoming the challenges of multi-die systems verification

Synopsys Enhances PPA with Backside Routing


No! TSMC does not Make 90% of Advanced Silicon
by Scotten Jones on 03-11-2024 at 2:00 pm


Throughout the debate on fab incentives and the Chips Act I keep seeing comments like: TSMC makes >90% of all advanced silicon, or sometimes Taiwan makes >90% of all advanced silicon. This kind of ill-defined and grossly inaccurate statement drives me crazy. I just saw someone make that same claim in the SemiWiki forums and I decided it was time to comment on this.

Let’s start with defining what an advanced semiconductor is. Since the specific comment is about TSMC, let’s start with the TSMC definition: TSMC breaks out 7nm and below as advanced. This is a good break point in logic because Samsung and TSMC 7nm both have densities of approximately 100 million transistors per millimeter squared (MTx/mm2). Intel 10nm also has approximately 100 MTx/mm2; therefore, we can count Samsung and TSMC 7nm and below and Intel 10nm and below.

That all works for logic, but this whole discussion ignores other advanced semiconductors. I would argue that there are three truly leading edge advanced semiconductors in the world today where state-of-the-art equipment is being pushed to the limits of what is achievable: 3DNAND, DRAM, and Logic. In each case there are three or more of the world's largest semiconductor companies pushing the technology as far and as fast as humanly possible. Yes, the challenges are different: 3DNAND has relatively easy lithography requirements, but deposition and etching requirements are absolutely at the edge of what is achievable. DRAM has a mixture of lithography, materials, and high aspect ratio challenges. Logic has the most EUV layers and process steps, but they are all equally difficult to successfully produce with good yield.

Including 3DNAND and DRAM means we need “advanced semiconductor” limits for these two processes. When 7nm was first being introduced for logic, 3DNAND was at the 96/92 layer generation and DRAM was at 1y. We will use those as the limits for advanced semiconductors.

In order to complete this analysis without spending man-days that I don’t have to spare, I simply added up the worldwide installed capacity for 3DNAND 96/92L layers and greater, DRAM 1y and smaller and Logic 7nm (i10nm) and smaller. Furthermore I broke out logic into TSMC and other.

Figure 1 illustrates the worldwide installed capacity in percentage broken out by those categories.

Figure 1. Worldwide Advanced Silicon Installed Capacity by Category.

From figure 1 it can be seen that TSMC only represents 12% of worldwide “advanced silicon”, way off the 90% number being thrown around. Now, utilization could change these numbers some and I haven’t included that due to time constraints, but I don’t think it would change this that much, and as the memory sector recovers it will become a non-issue.

I also looked at this a second way which is just worldwide advanced logic, see figure 2.

Figure 2. Worldwide Advanced Logic Installed Capacity by Category.

From figure 2 we can see that even when we look at Advanced Logic TSMC is only 64% versus “90%”.

The only way we would get to 90% is if we defined “advanced silicon” as 3nm logic. This would require a good definition of what 3nm logic is. On a density basis, TSMC’s is the only true 3nm logic process in the world; Samsung’s and Intel’s are really 5nm processes on a density basis, although Intel i3 is, in my estimation, the highest performing process available.

In conclusion, TSMC actually only makes up 12% of worldwide Advanced Silicon and only 64% of Advanced Logic. This is not to minimize the importance of TSMC to the global electronics supply chain, but when debating things as important as the worldwide semiconductor supply chain we should at least get the numbers right.

Also Read:

ISS 2024 – Logic 2034 – Technology, Economics, and Sustainability

How Disruptive will Chiplets be for Intel and TSMC?

2024 Big Race is TSMC N2 and Intel 18A


Intel and TSMC IDM 2024 Discussions
by admin on 03-10-2024 at 8:00 am


In December 2023, we published the Intel revenue forecast for external wafer sales and gave a breakdown of how customers plan to ramp the foundry. The forecast is still valid (it assumes Intel executes on all plans) but since then we have a better understanding of Intel’s strategy and scenarios that could unfold.

The scenarios are based on Intel’s strengths and weaknesses, which are quite different from TSMC’s and quite different from what we expected 2-3 years ago.

Background:

In 2019-2021, it became clear that Intel was a distant follower to TSMC in technology and that they needed to catch up or just outsource everything to TSMC/Samsung/others. Intel BUs complained about technology delay and cost and wanted to work with TSMC.

• It seemed like Intel would move to outsource, but Pat changed the plans based on discussions in 2021. Intel would allow BUs to choose Internal or TSMC. They would (and still do) come up with dual sourcing options and plans until later in the product development lifecycle.

• Intel cannot lead in tech with the small scale of current Intel (Times change, Intel is the third priority for equipment companies). Equipment vendors do much of the process and all of the tool development. You need scale to get their support. So Intel needs to offer foundry services to roughly double the scale of Intel wafer output. Intel needed to go “all in” on being a leading foundry.

• Pat [hypothetically] said: “…Business units say manufacturing is the problem. Manufacturing say BU is the problem. Fine …. Each of you can do what you want…. BUT we will make major decisions based on your execution.”

Hence where we are today: Intel is ramping TSMC on chips for processors of all types. Some leading products are 100% TSMC. And Intel is promoting foundry for others at the same time. 5 nodes in 4 years (not really, but that is a different report).

The BUs are extremely happy with this. So far multiple products have been moved to TSMC and the flexibility in using N5, N3, and N2 is something they love. TSMC price is about the same as Intel’s cost, so BU margins will increase.

But how does Intel compete cost effectively with TSMC and ramp foundry and pay for all these fabs?

We overlooked a couple things until our IEDM discussions with various people in December 2023.

• Intel still wants to win and be better than TSMC. It seems unlikely… but it might not matter.

• The US government buys chips for internal products and DoD items. No strategic DoD product has TSMC parts in it. TSMC does not meet the criteria. As a result, those products have technologies that are not close to leading edge. IBM (past), GF and other defense-approved companies make chips for those products but they are nowhere near leading edge. They would love to use leading edge but they need a DoD-approved US company. While DoD parts are relatively low volume, the government could expand this to any government supply chain (they track detailed supply chain and factories for all parts): IRS, Social Security, etc. TSMC cannot fill this today and it would require massive regulation to even have Samsung US or TSMC US support it. Trust me, I have done the audits with government products before; it can be extremely painful.

Also, while Intel is not set up from a scale or cultural perspective to be a leader in cost, the US government pays cost-plus and incredibly high prices for products. Intel could have half-filled fabs and still have great margins. You can see this at some government suppliers today.

• The third one also could have been predicted but was missed. Leading edge is too expensive and complex. So many foundries…. GF, UMC, SMIC, Grace, and Tower have no ability to provide leading edge or even technology two generations behind it. Intel can partner with them, provide “more modern” technologies, provide scale, etc. All companies not named TSMC or Samsung could GREATLY benefit from partnering with Intel, and this allows them to compete with Samsung and TSMC.

Based on the above strategies, Intel could outsource most of its silicon to TSMC to keep the BUs happy and STILL be a leader in foundry just based on being the “US Fab company” and providing “advanced fabs to other foundries”. These customers are much more compatible with Intel than selling to Apple, AMD, Nvidia, and Broadcom.

This is a different foundry model but one where Intel has a strength and can potentially dominate. This all may or may not work. We have quantitative milestones you can track to see if Intel is successful.

The Three Potential Foundry Scenarios are:

*Intel Foundry Success*: Intel has competitive processes at competitive prices and ramps up to be another dominant leading edge foundry. Intel is a leader and Intel BUs use Intel processes. Revenue and profits grow.

*Intel fills TSMC gaps*: Intel supplies all other foundries, Intel supplies government. Both have few other options so they pay the price needed. Revenue grows steadily over the next 10-15 years.

*Intel is IDM2.0 = IBM2.0*: Intel struggles to ramp government work and factories. Intel’s foundry partners decide it’s not worth it to work with them and the processes are unsuccessful. The fabs are given away, or cancelled, or underloaded. Eventually Intel foundry is absorbed.

We have more details on each and in the next few years, the probability of each scenario will change. We have updates on the probability and what tactics, models, and strategies Intel is using. More importantly we provide milestones so others can track progress…. and we track the impact to P&L and Capex.

Foundry Day Update (BREAKING NEWS): All of the presentations and commitments support the background we show, the strategies, and the scenarios.

Mark Webb
www.mkwventures.com

Also Read:

Intel Direct Connect Event

ISS 2024 – Logic 2034 – Technology, Economics, and Sustainability

Intel should be the Free World’s Plan A Not Plan B, and we need the US Government to step in

How Disruptive will Chiplets be for Intel and TSMC?


ISS 2024 – Logic 2034 – Technology, Economics, and Sustainability
by Scotten Jones on 02-19-2024 at 8:00 am


For the 2024 SEMI International Strategy Symposium I was challenged by members of the organizing committee to look at where logic will be in ten years from a technology, economics, and sustainability perspective. The following is a discussion of my presentation.

To understand logic, I believe it is useful to understand what makes up leading edge logic devices. TechInsights produces detailed footprint analysis reports, and I took reports for ten 7nm and 5nm class devices including Intel and AMD microprocessors, Apple A series and M series processors, an NVIDIA GPU, and other devices. Figure 1 illustrates what makes up the die area.

Figure 1. Logic Layouts

From figure 1, logic makes up slightly less than one half of the die area, memory slightly less than one third of the die, and I/O, analog, and other the balance. I find it interesting that the SRAM memory areas actually measured are a lot smaller than the percentage I typically hear people talk about for System On a Chip (SOC) products. The plot on the bottom right shows that there is one outlier but otherwise the values are tightly clustered.

Since logic makes up almost half the die area, it makes sense to start with the logic part of the design. Logic designs are done with standard cells and figure 2 is a plan view of a standard cell.

Figure 2. Standard Cells

The height of a standard cell is typically characterized as the Metal 2 Pitch (M2P) multiplied by the number of tracks, but looking at the right side of the figure there is a cross sectional view of the device structure that must also match the cell height and is constrained by device physics. The same is the case for the cell width that depends on the Contacted Poly Pitch (CPP) and looking at the bottom of the figure there is a cross sectional view of the device structure that once again is constrained by physics.

Figure 3 presents the result of an analysis to determine the practical limits of cell width and cell height scaling. I have a presentation that details the scaling constraints and in that presentation there are dozens of slides between figure 2 and figure 3, but with limited time I could only show the conclusion.

Figure 3. Logic Cell Scaling

Cell width scaling depends on CPP, and the left side of the figure illustrates how CPP is made up of Gate Length (Lg), Contact Width (Wc) and two Contact to Gate Spacer Thicknesses (Tsp). Lg is constrained by leakage, and the minimum Lg with acceptable leakage depends on the device type. Planar devices, with a single gate controlling the surface of a channel of unconstrained thickness, are limited to approximately 30nm. FinFETs and horizontal Nanosheets (HNS) constrain the channel thickness (~5nm) and have 3 and 4 gates respectively. Finally, 2D materials introduce <1nm channel thicknesses using non-silicon materials and can produce Lg down to ~5nm. Both Wc and Tsp have limited ability to scale due to parasitics. The bottom line is a 2D device can likely produce a ~30nm CPP versus today's CPPs that are ~50nm.

Cell height scaling is illustrated on the right side of the figure. HNS offers single nanosheet stacks in place of multiple fins. Then the evolution to stacked devices with a CFET eliminates the horizontal n-p spacing and stacks the nFET and pFET. Cell heights that are currently 150nm to 200nm can be reduced to ~50nm.

The combination of CPP and Cell Height scaling can produce transistor densities of ~1,500 million transistors per millimeter squared (MTx/mm2) versus today's <300 MTx/mm2. It should be noted that 2D materials are likely a mid to late 2030s technology, so 1,500 MTx/mm2 is outside of the timing discussed here.
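
As a rough sanity check on these numbers, transistor density scales approximately with the inverse of CPP times cell height. The sketch below applies that simple proportional model to the approximate dimensions discussed above; the absolute values are only illustrative:

```python
# Simple proportional model: logic transistor density ~ k / (CPP * cell_height).
# Calibrate k on today's approximate values (~50nm CPP, 150-200nm cell heights,
# <300 MTx/mm^2), then apply the ~30nm CPP and ~50nm cell height limits above.
today_cpp, today_height, today_density = 50.0, 175.0, 300.0   # nm, nm, MTx/mm^2 (illustrative)
k = today_density * today_cpp * today_height

future_cpp, future_height = 30.0, 50.0                        # nm, the 2D-material and CFET limits
future_density = k / (future_cpp * future_height)

scale = (today_cpp * today_height) / (future_cpp * future_height)
print(f"Area scaling factor: {scale:.1f}x")                   # ~5.8x
print(f"Implied density: ~{future_density:.0f} MTx/mm^2")     # ~1,750, in line with the ~1,500 above
```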

Figure 4 presents a summary of announced processes from Intel, Samsung, and TSMC.

Figure 4. Announced Processes

For each company and year, the device type, whether or not backside power is used, density, power and performance are displayed if available. Power and performance are relative metrics and power is not available for Intel.

In figure 4, leading performance and technology innovations are highlighted in bold. Samsung is the first to put HNS in production in 2023, whereas Intel won't introduce HNS until 2024 and TSMC until 2025. Intel is the first to introduce backside power into production in 2024, and Samsung and TSMC won't introduce it until 2026.

My analysis concludes Intel is the performance leader with i3 and maintains that status for the period illustrated, TSMC has the power lead (Intel data not available) and density leadership.

Figure 5 presents our logic roadmaps and includes projected SRAM cell sizes (more on this later).

Figure 5. Logic Roadmap

From figure 5 we expect CFETs to be introduced around 2029 providing a boost in logic density and also cutting SRAM cell sizes nearly in half (SRAM cell size scaling has virtually stopped at the leading edge). We expect logic density to reach ~757MTx/mm2 by 2034.
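
To put the ~757 MTx/mm2 projection in perspective, the arithmetic below works out the implied annual density growth, starting from roughly 300 MTx/mm2 today (the approximate leading edge figure used earlier in this discussion):

```python
import math

# Approximate figures from the text: ~300 MTx/mm^2 today, ~757 MTx/mm^2 in 2034.
start_density, end_density = 300.0, 757.0
years = 2034 - 2024

cagr = (end_density / start_density) ** (1 / years) - 1
doubling_years = years * math.log(2) / math.log(end_density / start_density)

print(f"Implied density growth: ~{cagr * 100:.0f}% per year")   # ~10%/yr
print(f"Implied doubling time: ~{doubling_years:.1f} years")    # ~7.5 years, far slower than the classic 2-year cadence
```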

Both the logic transistor density projections and SRAM transistor density projections are illustrated in figure 6.

Figure 6. Transistor Density Projections

Both logic and SRAM transistor density scaling are slowing, but SRAM to a greater extent, and logic now has a transistor density similar to SRAM.

Figure 7 summarizes TSMC data on analog scaling in comparison to logic and SRAM. Analog and I/O scaling are both slower than logic scaling as well.

Figure 7. Analog and I/O Scaling

A possible solution to slower SRAM, analog, and I/O scaling is chiplets. Chiplets can enable less expensive, more optimized processes to be utilized to make SRAM and I/O.

Figure 8. Chiplets

The figure on the right side of figure 8 comes from a 2021 paper I coauthored with Synopsys. Our conclusion was breaking apart a large SOC into chiplets could cut the cost in half even after accounting for increased packaging/assembly costs.
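
The cost benefit of chiplets is driven largely by yield: several small dies yield much better than one large die. The sketch below is a generic, illustrative die-cost model (a simple Poisson yield assumption with made-up wafer cost, defect density, and die sizes, not the model from the Synopsys paper), but it shows the mechanism by which splitting a large SOC into chiplets can dramatically reduce silicon cost before packaging and assembly costs are added back:

```python
import math

def die_cost(die_area_mm2, wafer_cost_usd, defect_density_per_mm2, wafer_area_mm2=70685.0):
    """Illustrative die cost: wafer cost spread over good dies, Poisson yield model.

    A 300mm wafer has ~70,685 mm^2 gross area; edge losses and scribe lines are
    ignored for simplicity. All inputs are hypothetical.
    """
    dies_per_wafer = wafer_area_mm2 / die_area_mm2
    die_yield = math.exp(-defect_density_per_mm2 * die_area_mm2)   # Poisson yield model
    return wafer_cost_usd / (dies_per_wafer * die_yield)

wafer_cost = 17000.0      # hypothetical leading edge wafer price, USD
d0 = 0.002                # hypothetical defect density, 0.2 defects/cm^2

monolithic = die_cost(600.0, wafer_cost, d0)        # one large 600 mm^2 SOC
chiplets = 4 * die_cost(150.0, wafer_cost, d0)      # the same silicon split into four 150 mm^2 chiplets

print(f"Monolithic silicon cost: ${monolithic:,.0f} per system")
print(f"4-chiplet silicon cost:  ${chiplets:,.0f} per system (before added packaging/assembly cost)")
```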

Figure 9 presents normalized wafer and transistor costs for logic, SRAM and I/O (please note the figure has been updated from the original presentation).

Figure 9. Cost Projections

In the figure on the left, the normalized wafer cost is shown. The logic wafer costs are for a full metal stack that is increasing in the number of metal layers. The SRAM wafers are the same nodes but limited to 4 metal layers due to the more regular layout of SRAM. The I/O wafer cost is based on a 16nm, 11-metal process. I selected 16nm to get a minimum cost FinFET node to ensure adequate I/O performance.

The figure on the right is the wafer cost converted to transistor cost. Interestingly, the I/O transistors are so large that even on a low cost 16nm wafer they have the highest cost (the I/O transistor size is based on TechInsights measurements of actual I/O transistors). Logic transistor costs go up at 2nm, the first TSMC HNS node, where the shrink is modest. We expect the shrink at 14A to be larger as a second-generation HNS node (this is similar to what TSMC did with their first FinFET node). Once again, the cost of the first CFET node also increases transistor cost for one node. SRAM transistor cost trends upward due to limited shrink, except for a one-time CFET shrink. The bottom line of this analysis is that transistor cost reduction will be modest, although chiplets can provide a one-time benefit.
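
The conversion from wafer cost to transistor cost is straightforward arithmetic: cost per transistor is the wafer cost divided by the number of transistors on the wafer. The sketch below shows the calculation with purely hypothetical placeholder inputs (the wafer costs, densities, and usable wafer area are not TechInsights model outputs), illustrating why huge I/O transistors remain expensive even on a cheap 16nm wafer:

```python
def cost_per_million_transistors(wafer_cost_usd, usable_area_mm2, density_mtx_per_mm2):
    """Cost per million transistors = wafer cost / (usable area * transistor density)."""
    total_mtx = usable_area_mm2 * density_mtx_per_mm2   # millions of transistors per wafer
    return wafer_cost_usd / total_mtx

usable_area = 65000.0   # mm^2 of a 300mm wafer after edge exclusion (assumed)

# Hypothetical inputs: an expensive leading edge wafer with dense logic, versus
# a much cheaper 16nm wafer whose I/O transistors are physically very large.
logic = cost_per_million_transistors(18000.0, usable_area, 290.0)
io = cost_per_million_transistors(6000.0, usable_area, 30.0)

print(f"Logic: ${logic:.4f} per million transistors")
print(f"I/O:   ${io:.4f} per million transistors")   # higher, despite the far cheaper wafer
```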

Moving on to sustainability, figure 10 explains the different “scopes” that make up carbon footprint.

Figure 10. Carbon Footprint Scopes

Scope 1 is the direct site emissions due to process chemicals and combustion (electricity can also be Scope 1 if generated on-site). Scope 2 is due to the carbon footprint of purchased electricity. Scope 3 is not included in this analysis but is due to the carbon footprint of purchased materials, the use of the manufactured product, and things like vehicles driven by employees of a company.

A lot of companies in the semiconductor industry are claiming that they have no carbon emissions from electricity because the electricity is all renewable. Figure 11 compares renewable to carbon free.

Figure 11. Carbon Intensity of Electricity

The key problem is that 84% of renewable energy in the semiconductor industry in 2021 was found by Greenpeace to be renewable energy certificates, where a company purchases the rights to claim reductions someone else already made. This is not the same as installing low carbon electric sources or paying others to provide low carbon electricity and does not in fact lower the global carbon footprint.

Figure 12 illustrates how process chemical emissions take place and are characterized.

Figure 12. Process Chemical Emissions

Process chemicals enter a process chamber where a percentage of the chemicals are utilized in an etching or deposition reaction that breaks down the chemicals or incorporates them into a deposited film. 1 - utilization is the amount of chemical that escapes out the exhaust of the tool. The tool exhaust then may go into an abatement system that further breaks down a percentage of the chemicals, and the emissions to the atmosphere from abatement are 1 - abatement. Finally, a Global Warming Potential (GWP) is applied to calculate the carbon equivalency of the emission. GWP takes into account how long the chemical persists in the atmosphere and how much heat the chemical reflects back in comparison to carbon dioxide. Carbon dioxide has a GWP of 1, while semiconductor process chemicals such as SF6 and NF3 have GWP values of 24,300 and 17,400 respectively (per IPCC AR6).
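
The mechanism above reduces to a simple chain of factors: the gas that escapes the chamber is the amount used times (1 - utilization), the gas that escapes abatement is that amount times (1 - abatement), and the CO2 equivalent is the emitted mass times the GWP. A minimal sketch, using the GWP values quoted in this section and a hypothetical 40% utilization, is shown below:

```python
def process_gas_co2e_kg(gas_used_kg, utilization, abatement, gwp):
    """CO2-equivalent emissions for a process gas (utilization and abatement as fractions)."""
    emitted_kg = gas_used_kg * (1.0 - utilization) * (1.0 - abatement)
    return emitted_kg * gwp

# GWP values from the text: SF6 and NF3 per IPCC AR6; F2 and COF2 are the
# low-GWP chamber-clean alternatives discussed below.
GWP = {"SF6": 24300, "NF3": 17400, "F2": 0, "COF2": 1}

# Hypothetical example: 1 kg of NF3 used for chamber cleaning at 40% utilization.
for abatement in (0.0, 0.70, 0.99):
    co2e = process_gas_co2e_kg(1.0, utilization=0.40, abatement=abatement, gwp=GWP["NF3"])
    print(f"NF3 with {abatement:.0%} abatement: {co2e:,.0f} kg CO2e")
```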

Figure 13 presents some options for reducing emissions.

Figure 13. Reducing Emissions

Electricity sources such as coal produce 820 grams of CO2 equivalent emissions per kilowatt hour (gCO2e/kWh), whereas solar, hydroelectric, wind, and nuclear power produce 48, 24, 12, and 12 gCO2e/kWh respectively.
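
For a sense of scale, the arithmetic below converts those per-kWh intensities into annual Scope 2 emissions for a hypothetical fab drawing 1 TWh of electricity per year (the consumption figure is a made-up round number, used only for illustration):

```python
# Carbon intensity of electricity sources, gCO2e per kWh (figures quoted above).
intensity_g_per_kwh = {"coal": 820, "solar": 48, "hydro": 24, "wind": 12, "nuclear": 12}

fab_kwh_per_year = 1.0e9   # hypothetical fab drawing 1 TWh per year, for illustration only

for source, grams in intensity_g_per_kwh.items():
    tonnes = fab_kwh_per_year * grams / 1.0e6   # grams -> metric tonnes
    print(f"{source:>7}: {tonnes:,.0f} tonnes CO2e per year")
```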

More efficient abatement systems can break down process gases more effectively. Fab abatement systems range in efficiency from 0% at some reported US sites (no abatement) to ~90%. We estimate the worldwide average for 300mm fabs is ~70%, and that most fabs running 200mm and smaller wafer sizes have no abatement. Systems with up to 99% efficiency are available.

Lower-emission chemistry can also be used. Tokyo Electron has announced a new etch tool for 3D NAND that uses gases with zero GWP. Gases such as SF6 and NF3 are primarily used to deliver fluorine (F) into chambers for cleaning; substituting F2 (GWP 0) or COF2 (GWP 1) can essentially eliminate this source of emissions.
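Using the same accounting as above, the effect of the substitution is straightforward to see; the feed, utilization, and abatement values in this sketch are again hypothetical.

```python
# Sketch: the same chamber-clean step with SF6 versus F2. Because F2 has a GWP
# of zero, this emission source essentially disappears. Quantities are hypothetical.

GWP = {"SF6": 24_300, "F2": 0, "COF2": 1}

def clean_step_co2e(gas, kg_fed, utilization=0.40, abatement=0.70):
    return kg_fed * (1 - utilization) * (1 - abatement) * GWP[gas]

print(clean_step_co2e("SF6", 1.0))  # ≈ 4374 kg CO2e
print(clean_step_co2e("F2", 1.0))   # 0.0 kg CO2e
```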

Figure 14 illustrates a Carbon Footprint Forecast for logic.

Figure 14. Carbon Footprint Forecast

In the figure, the first bar on the left is a 3nm process run in Taiwan in 2023, assuming Taiwan’s electricity carbon footprint and 70% abatement. The second bar shows the emissions that would result if a 5A process were run with the same 2023 Taiwan electricity carbon intensity and 70% abatement; the increase in process complexity would drive up the overall footprint by 1.26x. Looking forward to 2034, Taiwan’s electricity is expected to decarbonize significantly and 90% abatement should be common; the third bar shows what a 5A process would look like under those conditions. While this represents cutting emissions by more than half, the growth in the number of wafers run by the industry by 2034 would likely overwhelm this improvement. The final bar on the right shows what is possible with sufficient investment: it is based on low-carbon electricity, 99% abatement, and F2 for chamber cleaning.

Figure 15 presents our conclusions:

Figure 15. Conclusion.

Transistor density, wafer cost, and die cost estimates were generated using the TechInsights Strategic Cost and Price Model, an industry-roadmap-based model that produces cost and price estimates as well as detailed equipment and materials requirements. The GHG emission estimates were produced using the TechInsights Semiconductor Manufacturing Carbon Model. For more information, please contact sales@techinsights.com.

I would like to acknowledge my colleagues in the Reverse Engineering Business Unit at TechInsights; their digital floorplan and process reports were very helpful in creating this presentation. I would also like to thank Alexandra Noguera at TechInsights for extracting the I/O transistor sizing data used in this work.

Also Read:

IEDM 2023 – Imec CFET

IEDM 2023 – Modeling 300mm Wafer Fab Carbon Emissions

SMIC N+2 in Huawei Mate Pro 60

ASML Update SEMICON West 2023


How Disruptive will Chiplets be for Intel and TSMC?

How Disruptive will Chiplets be for Intel and TSMC?
by Daniel Nenni on 01-15-2024 at 10:00 am

UCIe Consortium

Chiplets (die stacking) are not new. The concept is deeply rooted in the semiconductor industry and represents a modular approach to designing and manufacturing integrated circuits. Interest in chiplets has been re-energized as a response to the challenges posed by the increasing complexity of semiconductor design. Here are some well-documented points about the demand for chiplets:

Complexity of Integrated Circuits (ICs): As semiconductor technology advanced, the complexity of designing and manufacturing large monolithic ICs increased. This led to challenges in terms of yield, cost, skilled resources, and time-to-market.

Moore’s Law: The semiconductor industry has been following Moore’s Law, which suggests that the number of transistors on a microchip doubles approximately every two years. This relentless scaling of transistor density poses challenges for traditional monolithic designs.

Diverse Applications: Different applications require specialized components and features. Instead of creating a monolithic chip that tries to cater to all needs, chiplets allow for the creation of specialized components that can be combined in a mix-and-match fashion.

Cost and Time-to-Market Considerations: Developing a new semiconductor process technology is an expensive and time-consuming endeavor. Chiplets provide a way to leverage existing mature processes for certain components while focusing innovation on specific functionalities. Chiplets also aid in ramping new process technologies since the die sizes and complexity are a fraction of those of a monolithic chip, easing manufacturing and improving yield.

Interconnect Challenges: Traditional monolithic designs faced challenges in terms of interconnectivity as the distance between components increased. Chiplets allow for improved modularity and ease of interconnectivity.

Heterogeneous Integration: Chiplets enable the integration of different technologies, materials, and functionalities on a single package. This approach, known as heterogeneous integration, facilitates the combination of diverse components to achieve better overall performance.

Industry Collaboration: The development of chiplets often involves collaboration between different semiconductor companies and industry players, including standardization efforts for chiplet integration such as those led by the Universal Chiplet Interconnect Express (UCIe) Consortium.

Bottom line: Chiplets emerged as a solution to the challenges posed by increasing complexity, cost, time-to-market, and staffing pressures in the semiconductor industry. The modular and flexible nature of chiplet-based designs allows for more efficient and customizable integration of chips, contributing to advancements in semiconductor technology, not to mention the ability to multi-source die.

Intel

Intel really has capitalized on chiplets, which are key to its IDM 2.0 strategy.

There are two major points:

Intel will use chiplets to deliver 5 process nodes in 4 years (Intel 7, 4, 3, 20A, 18A), which is a critical milestone in the IDM 2.0 strategy.

Intel developed the Intel 4 process for internal products using chiplets. Intel developed CPU chiplets, which are much easier to do than the historically monolithic CPU chips. Chiplets can be used to ramp a process much more quickly, and Intel can claim success without having to prove out a full process on complex monolithic CPUs or GPUs. Intel can then release a new process node (Intel 3) to foundry customers, who can design monolithic or chiplet-based chips. Intel is also doing this for 20A and 18A, hence the 5-process-nodes-in-4-years milestone. This accomplishment is debatable of course, but I see no reason to debate it.

Intel will use chiplets to outsource manufacturing (to TSMC) when business dictates.

Intel signed a historic outsourcing agreement with TSMC for chiplets. This is a clear proof of concept to get us back to the multi-source foundry business model that we enjoyed up until the FinFET era. I do not know if Intel will continue to use TSMC beyond the N3 node, but the point has been made: we are no longer bound by a single source for chip manufacturing.

Intel can use this proof of concept (using chiplets from multiple foundries and packaging them up) for foundry business opportunities where customers want the freedom of multiple foundries. Intel is the first company to do this.

TSMC

There are two major points:

Using chiplets, customers can theoretically multi-source where their die come from. Last I heard, TSMC would not package die from other foundries, but if a whale like Nvidia asked them to, I’m sure they would.

Chiplets will challenge TSMC and TSMC is always up for a challenge because with challenge comes innovation.

TSMC quickly responded to chiplets with 3DFabric, their comprehensive family of 3D silicon stacking and advanced packaging technologies. The greatest challenge for chiplets today is the supporting ecosystem, and that is what TSMC is all about: ecosystem.

Back to the original question, “How Disruptive will Chiplets be for Intel and TSMC?” Very disruptive. We are at the beginning of a semiconductor manufacturing disruption the likes of which we have not seen since FinFETs. All pure-play and IDM foundries now have the opportunity to get a piece of the chips that the world depends on, absolutely.

Also Read:

2024 Big Race is TSMC N2 and Intel 18A

IEDM: What Comes After Silicon?

IEDM: TSMC Ongoing Research on a CFET Process

IEDM Buzz – Intel Previews New Vertical Transistor Scaling Innovation