
TSMC Reportedly to Halt 7nm and Below Chip Shipments to China’s AI Firms Next Week

Maximus

Active member
[News] TSMC Reportedly to Halt 7nm and Below Chip Shipments to China’s AI Firms Next Week
2024-11-08

Following earlier controversies over 7nm chips reaching Huawei through proxies, TSMC has reportedly notified all of its AI chip customers in China by formal email that, starting next week (November 11), it will halt shipments of 7nm and more advanced chips to its AI/GPU clients there, according to Chinese media outlet ijiwei.

While this decision may temporarily reduce TSMC’s business in China, in the long run, TSMC could gain more opportunities in the U.S. market by complying with American regulations, the report says.

According to ijiwei's analysis, TSMC's move, which highlights the foundry giant's delicate position in the global semiconductor supply chain amid the intensifying chip war between the world's two superpowers, could become a watershed moment for future technology development, with long-lasting impacts.

According to the ijiwei report, with the newly elected Trump having claimed that TSMC should pay a "protection fee," the company's latest move appears to be an effort to align itself with the U.S. Department of Commerce. Together, the two parties have created a stringent review system to completely block China's access to advanced processes, the report notes.

According to another media outlet, SEMICONVoice, the U.S. Department of Commerce has reportedly instructed TSMC to make the move: production can only proceed after review and approval by the Department's BIS (Bureau of Industry and Security) and the issuance of a license. This would effectively tighten the availability of advanced 7nm-and-below processes for all Chinese AI chips, GPUs, and autonomous-driving ADAS systems, the report notes.

According to the latest reports by Bloomberg and Reuters, TSMC has nearly finalized binding agreements for multi-billion-dollar grants and loans to back its U.S. factories, which may allow it to receive the funding from the Biden administration soon.

TSMC’s package, announced in April, includes USD 6.6 billion in grants and up to USD 5 billion in loans to aid the construction of three semiconductor factories in Arizona.

On the other hand, TSMC's decision would be a major blow to China's AI ambitions: AI and GPU companies in China will no longer have access to TSMC's advanced processes, which could lead to higher costs and longer time-to-market and significantly impact their product performance and market competitiveness, the ijiwei report states.

A supply chain reshuffle is likely to follow, as Chinese chip design companies may need to seek alternative foundries, according to the report.

China's SMIC, currently the world's third-largest foundry, is said to have successfully produced 5nm chips using DUV lithography instead of EUV. However, as previously reported by the Financial Times, industry sources have indicated that SMIC's prices for 5nm and 7nm processes are 40% to 50% higher than TSMC's, while its yields are less than one-third of TSMC's.

According to TrendForce, as of the second quarter of 2024, SMIC maintains a solid 5.7% market share, securing its position in third place, after TSMC (62.3%) and Samsung (11.5%).

 
China finds itself in a difficult position, and TSMC does not appear to have an easier situation either. Without sufficient leverage, skilled corporations are merely pawns in larger political games. 🥂
 
Do (at least some) smartphone chips count as "AI chips"? Snapdragon, for example, advertises "on-chip AI". Maybe it is a design necessity now. In that case, the revenue loss to TSMC could be larger than what server-targeted chips alone would suggest.
 
TSMC has reportedly notified all of its AI chip customers in China by formal email that, starting next week (November 11), it will halt shipments of 7nm and more advanced chips to its AI/GPU clients there, according to Chinese media outlet ijiwei.
More customers for SMIC, I guess. It seems the US will force companies like Horizon Robotics to use Chinese foundries instead of Taiwanese ones like TSMC. China consumes something like 30 million cars a year, so it is a tremendous market opportunity.

While this decision may temporarily reduce TSMC’s business in China, in the long run, TSMC could gain more opportunities in the U.S. market by complying with American regulations, the report says.
I am fairly certain TSMC will find some way to keep its fabs utilized. But 7nm seems to be falling into a pit where no one wants it: neither high-performance enough, nor cheap enough.

China's SMIC, currently the world's third-largest foundry, is said to have successfully produced 5nm chips using DUV lithography instead of EUV.
SMIC has no 5nm process. They are still at 7nm. At least, there is no evidence of anything they have produced on a 5nm process.

However, as previously reported by the Financial Times, industry sources have indicated that SMIC's prices for 5nm and 7nm processes are 40% to 50% higher than TSMC's, while its yields are less than one-third of TSMC's.
I do not know SMIC's prices. But there is no way in hell that SMIC's yield at 7nm is less than one-third that of TSMC. That claim falls flat on its face for anyone doing the most basic math.

Even if TSMC's yield was 100%, which it is not, one third of that is 33%. If SMIC was making 14.5 x 14.3 mm Kirin 9000S chips that would be 90 good dies per wafer. SMIC had a 35,000 wpm FinFET fab. So that would be 3,150,000 chips per month. Huawei is selling 11.7 million smartphones per quarter. Huawei fabs other FinFET chips there. And SMIC has other FinFET customers like Loongson and Phytium.
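A quick back-of-the-envelope version of that math in Python; the die size, wafer size, yield fraction and fab capacity below are just the rough figures from this post, not confirmed numbers:

```python
# Back-of-the-envelope check of the claim above. Die size, wafer size,
# yield fraction and fab capacity are rough assumptions, not confirmed figures.
import math

die_w_mm, die_h_mm = 14.5, 14.3          # assumed Kirin 9000S die dimensions
wafer_d_mm = 300.0                        # standard 300 mm wafer
die_area = die_w_mm * die_h_mm            # ~207 mm^2

# Simple gross-die estimate: wafer area / die area minus an edge-loss term.
wafer_r = wafer_d_mm / 2
gross_dies = math.floor(math.pi * wafer_r ** 2 / die_area
                        - math.pi * wafer_d_mm / math.sqrt(2 * die_area))

yield_fraction = 1 / 3                    # "one third of TSMC", with TSMC taken as 100%
good_dies = gross_dies * yield_fraction   # same ballpark as the ~90/wafer figure above

wafers_per_month = 35_000                 # assumed SMIC FinFET capacity
chips_per_quarter = 3 * good_dies * wafers_per_month

print(f"gross dies per wafer ~ {gross_dies}")
print(f"good dies per wafer  ~ {good_dies:.0f}")
print(f"chips per quarter    ~ {chips_per_quarter / 1e6:.1f} M "
      f"vs ~11.7 M Huawei phones per quarter")
```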

Here you have a Phytium S5000C server processor with 64 ARM cores, launched in 2023.
Phytium was put on the Entity List in 2021. Guess where they make these chips?

According to TrendForce, as of the second quarter of 2024, SMIC maintains a solid 5.7% market share, securing its position in third place, after TSMC (62.3%) and Samsung (11.5%).
SMIC has been working on doubling their wafer capacity.
 
Even if TSMC's yield was 100%, which it is not, one third of that is 33%. If SMIC was making 14.5 x 14.3 mm Kirin 9000S chips that would be 90 good dies per wafer. SMIC had a 35,000 wpm FinFET fab. So that would be 3,150,000 chips per month. Huawei is selling 11.7 million smartphones per quarter. Huawei fabs other FinFET chips there. And SMIC has other FinFET customers like Loongson and Phytium.
Yields are probably a little better since the FT article was written. But this is exactly the story the business news is telling. A 7nm chip supply-constrained Huawei selling far fewer phones than they would like to, and having to deal with contention for the same limited capacity with their far larger and more poorly yielding Ascend 910B die (20% yield this summer). That potentially helps explain the phantom TSMC-fabbed 910Bs and subsequent cutoff of all 7nm to Chinese entities. I’m going to be watching the launch of the Ascend 910C carefully. Lots of fanfare, but volumes for the 910B were minuscule compared to NVIDIA H20s. Perhaps the same for 910C.
 
Yields are probably a little better since the FT article was written. But this is exactly the story the business news is telling. A 7nm chip supply-constrained Huawei selling far fewer phones than they would like to, and having to deal with contention for the same limited capacity with their far larger and more poorly yielding Ascend 910B die (20% yield this summer). That potentially helps explain the phantom TSMC-fabbed 910Bs and subsequent cutoff of all 7nm to Chinese entities. I’m going to be watching the launch of the Ascend 910C carefully. Lots of fanfare, but volumes for the 910B were minuscule compared to NVIDIA H20s. Perhaps the same for 910C.
I cannot read the TechInsights article myself. I would love to look at it and figure out whether this claim of Huawei using TSMC-fabbed chips via a 3rd party holds water. I personally kind of doubt it.

The Kirin 9000S they use in the smartphones fabbed at SMIC is 207 mm². They can easily make an NPU competitive with the original Ascend 910 with that kind of die area using chiplets; AI processing is embarrassingly parallel. And that is assuming yields have not improved since last year, which I kind of doubt.
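A rough sketch of the yield side of that chiplet argument, under a simple Poisson defect model; the defect density below is purely an assumed illustrative figure, not a measured SMIC number, and the die areas are the ones mentioned in this thread:

```python
# Toy sketch of why chiplets can help on an immature process: with a Poisson
# yield model Y = exp(-A * D0), a defect kills only one small die, and
# known-good dies are paired at packaging. D0 is an illustrative assumption.
import math

def gross_dies(die_area_mm2, wafer_d_mm=300.0):
    """Rough gross-die-per-wafer estimate with an edge-loss correction."""
    return math.floor(math.pi * (wafer_d_mm / 2) ** 2 / die_area_mm2
                      - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))

def die_yield(die_area_mm2, d0_per_cm2):
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

d0 = 0.5                        # assumed defects/cm^2 for an immature 7nm-class node
mono, chiplet = 410.0, 207.0    # one big NPU die vs. two Kirin-sized chiplets

good_mono_pkgs = gross_dies(mono) * die_yield(mono, d0)
good_chiplet_pkgs = gross_dies(chiplet) * die_yield(chiplet, d0) / 2  # 2 chiplets per package

print(f"monolithic packages per wafer : {good_mono_pkgs:.0f}")
print(f"2-chiplet packages per wafer  : {good_chiplet_pkgs:.0f}")
```

This ignores packaging yield and the die-to-die interconnect overhead entirely.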

Ascend does not need to be competitive with the latest NVIDIA chip, because under the US sanctions the AI chips NVIDIA is allowed to sell to China are nerfed in performance.

I expect FinFET output at SMIC to vastly increase starting in the middle of next year. Like I said, they have doubled the production area at the fab site.
 
They can easily make a NPU competitive with the original Ascend 910 with that kind of die area using chiplets.
Sorry, but there's a reason NVIDIA, AMD, and even Huawei (OK, only 410 mm², not 800, due to yield issues) develop AI processor dies and interposers pretty much at their respective reticle limits (and why Blackwell moved to CoWoS-L): greater interconnect bandwidth and lower latency. No matter how fast the off-chip interconnect is, it's a bottleneck. AI is a case where economics and performance favor as high a density as possible.
 
If it scales that badly and interconnect is such a bottleneck, why are the AI hyperscalers using clusters and connecting machines together? It is just matrix multiplication with short number formats.
 
If it scales that badly and interconnect is such a bottleneck, why are the AI hyperscalers using clusters and connecting machines together?
Because it's the next best alternative, albeit far less efficient, especially when one can't fit a GenAI model into a single chip/package. This paper is probably self-serving and focused on training rather than inference, but it gives a pretty good picture of what happens when models get split between chips/packages.
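A toy model of that trade-off; every number below (FLOP rate, tensor shape, link bandwidth) is an illustrative assumption, not a vendor spec:

```python
# Toy latency model for splitting one large matmul across N accelerators:
# compute shrinks with N, but each chip still has to exchange activations,
# so a slow off-package link quickly dominates. All figures are illustrative.

def layer_time_us(n_chips, flops, act_bytes, chip_tflops, link_GBps):
    compute = flops / (n_chips * chip_tflops * 1e12)
    comm = 0.0 if n_chips == 1 else act_bytes / (link_GBps * 1e9)
    return (compute + comm) * 1e6

flops = 2 * 8192 * 8192 * 4096      # one large fp16 matmul (illustrative shape)
act_bytes = 2 * 8192 * 4096         # activations exchanged per layer (fp16)

for link, bw_GBps in (("interposer-class link", 900), ("off-package link", 50)):
    times = [layer_time_us(n, flops, act_bytes, chip_tflops=300, link_GBps=bw_GBps)
             for n in (1, 2, 4, 8)]
    print(link, [f"{t:.0f} us" for t in times])
```

In this toy model, going from one chip to two over the slow link actually gets slower, which is roughly the bandwidth/latency point being made above.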

 