With Shrink Ending, What's Next?

Arthur Hanson

With shrink coming to an end, what will be the next major advances in semis? What will be the future impact of silicon on fabric, layering, and advanced architectures, among other things? Will dropping costs of leading-edge technologies become the next driver? Will chips that use light for computing be in the future? Any thoughts or views on the future pathways of the semi sector are welcome and appreciated.
 
In some ways, shrinking is already over. Foundries barely get a 10-20% poly-pitch and metal-pitch shrink per node, which is not enough to achieve 2x density improvements. So they're already relying on alternative approaches like fin count reduction, COAG (contact over active gate), etc., alongside transistor changes (planar => FinFET => nanosheet => forksheet).
This drives up cost at leading-edge nodes. Chiplets (tiles), advanced interconnects, etc. are here to mitigate the situation.
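To make the pitch-scaling point concrete, here is a rough sketch of my own (the 10-20% figures are from the post; the formula is the usual first-order approximation, not an exact foundry model):

```python
# Back-of-envelope: transistor density scales roughly with
# 1 / (poly pitch * metal pitch), so a 10-20% shrink on each
# pitch falls well short of the historical 2x per node.

def density_gain(pp_shrink: float, mp_shrink: float) -> float:
    """Approximate density gain from fractional pitch shrinks."""
    return 1.0 / ((1.0 - pp_shrink) * (1.0 - mp_shrink))

# A 15% shrink on both pitches gives only ~1.38x, which is why
# fin depopulation, COAG, and new device structures are needed
# to get anywhere near 2x.
print(f"{density_gain(0.15, 0.15):.2f}x")  # 1.38x
```

Hitting 2x from pitch alone would need ~30% shrink on both pitches per node, which no foundry delivers anymore.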

So how do we handle this in the long run? In my personal opinion, software companies' influence will become stronger. There is a lot of room to improve if the chips built are not general-purpose (i.e., ASICs). Google's VCU is extremely efficient for YouTube workloads compared to CPUs. This needs new infrastructure and learning, but big software companies can handle it since they have tons of clever people and internal demand. Combined with IP-rich fabless ecosystems (thanks to TSMC), they can now design their own ASICs.
 
Mono-3D will resume node shrinking

Planar will likely hit 1000 per mm² with new devices and then stall
 
If you want to talk about semiconductor scaling, talk to Apple. They put their money where their scaling is. Graphs and charts from foundries or analysts are fine, but this is where the rubber meets the road, absolutely. From 1B transistors in 2013 to 15B in 2021 is simply amazing!

[Image: Apple SoC History 2022.jpg]
 
But now look at the density improvements each year, starting from 16nm A10 in 2016 (first one with an accurate gate count) -- x1.86, x1.69, x1.05, x1.54, x1.04.

Put another way, the density improvement from the 7nm A12 in 2018 to the N5P A15 in 2021 was only 1.67x over 3 years, similar to what happened previously in 1 year (or maybe 1.5 years).

The density increase per year has slowed down drastically since the introduction of 7nm, and the wafer costs have also been rising rapidly. And there's no sign of this trend changing in the near future... :-(
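To make the compounding concrete, here is a quick sketch using the per-generation ratios quoted above (pure arithmetic on the post's own figures, not a claim about exact transistor counts):

```python
# Per-generation density ratios from the post, A10 -> A15.
ratios = [1.86, 1.69, 1.05, 1.54, 1.04]

total = 1.0
for r in ratios:
    total *= r
print(f"A10 -> A15: {total:.2f}x")       # 5.29x over five generations

# A12 (2018) -> A15 (2021): just the last three ratios, three years.
a12_to_a15 = 1.05 * 1.54 * 1.04
print(f"A12 -> A15: {a12_to_a15:.2f}x")  # 1.68x, matching the ~1.67x above
```

The big jumps (1.86x, 1.69x) all sit at the front of the list; the tail is where the slowdown shows.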
 
It appears that DTCO will grow in importance. Things like Backside Power Delivery will allow cell sizes to keep scaling even as traditional scaling levers like lithography improvements slow down.

The net effect is a way to scale transistor density that doesn't rely on EUV, and what's more, this shrink will be the equivalent of two generations of EUV shrink. That's huge, and it's now pretty much the roadmap for future semiconductor scaling.
 
True, the FinFET era is coming to an end as N3 is another half node from N5/4. My guess is that Apple will see 20B+ transistors with N3x before moving to GAA which hopefully will bring a new level of innovation, like FinFETs did. And remember, we had this same discussion before moving to FinFET and we will have it again when GAA moves to CFET, CNT, etc... There is too much money at risk if we do not scale which is the great motivator.

But again, 1B transistors to 20B+ in 10 years is an impressive achievement.
 
The problem with reduced payback from scaling (lower cost per transistor), coupled with exponentially increasing development cost, is that fewer and fewer companies/projects can afford to jump to the newest technology because it doesn't make business sense. This risks excluding large parts of the market from using these nodes and ending up with a monopolistic marketplace where one (or maybe two) vendors get most of it and everyone else gets the crumbs and loses money.

Even a device revenue of hundreds of millions of dollars doesn't justify moving to the next node on device cost alone; the only remaining reason is lower power consumption, if that is absolutely critical. And even the power savings per node are dropping year on year, so this gets harder to justify too -- often you need to skip a node to get enough reduction to be worth it.
 

TSMC tape-outs prove that theory wrong. At one time people said that about N7 (only 7 companies could afford 7nm), yet TSMC has a record number of tape-outs and N3 will set a whole new record.

Example: system companies now dominate EDA, IP, and the foundry business as customers, and system companies don't have the budget issues that fabless companies have, since the end products carry huge margins versus just the chip. Look at the cloud companies: with trillions of dollars in market cap, spending millions on chip design to be more competitive is an easy value proposition to justify. Systems companies were also hit hard by the chip shortage, so they will want to control their silicon at all costs. Systems companies are 50-60% of the chip business today, and that number will increase every year, pushing the fabless chip companies into a corner -- they will have to innovate or die, in my opinion.

Chiplets and other clever chip design and packaging hacks will help keep the cost down, so expect more of that in the coming years, absolutely.
 
That's funny, I've worked on leading-edge custom ASIC design for many years and I see exactly this trend -- even though I now work for a systems company where (as you say) the total business is much larger than just selling a chip.

Fabless chip companies are being hit harder; as you say, the business is moving more towards vertically-integrated companies, just like in "the old days". But it's also moving more towards a smaller number of large systems companies, like the hyperscalers that you mention or the big systems companies who supply them -- the little guys are getting squeezed out or acquired.

Especially given the supply chain problems at the moment, smaller companies are also finding it difficult to access advanced packaging techniques because the substrate/back-end suppliers don't want to talk to them unless they want huge volumes. For a custom BGA substrate that we used to pay about $5 for we were recently quoted more than $100 -- take it or leave it... :-(
 
I agree, and the trend has been forming for several years now. When you have in-house demand for millions of chips per year and the business case stretches all the way to end-user applications, like Amazon S3, it becomes difficult for merchant vendors at any layer to compete, hardware or software. The big cloud vendors can even extend their superior business case to merchant chips. An interesting example that has been known to insiders for a few years is Microsoft's use of FPGAs to improve performance of network virtualization in servers.


Think about the scale here. The Azure cloud is currently estimated to have over four million servers in its datacenters. I'm not sure how many of those servers have these NICs in them, but even at about 50% of the population (I'm sure after 4-5 years of deployment it's higher than that, but I'm being conservative) and assuming two NICs per server for redundancy, Microsoft's demand for these FPGAs has been at least two million chips, and probably some multiple of that. I suspect their server population grows by at least 30% per year, since their revenues have been growing by 40%+ per year. I haven't been able to easily find FPGA unit sales for the industry, but I'm guessing Microsoft is the largest single FPGA buyer on the planet. I expect the pricing they get is very favorable, much like the pricing I know the cloud vendors get from merchant CPU vendors. Smaller buyers can't compete.
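Multiplying out the assumptions above (all of them the poster's estimates, not confirmed figures) shows where the "at least two million, probably some multiple of that" range comes from:

```python
# Back-of-envelope reconstruction of the estimate above. Every
# number is an assumption from the post, not confirmed data.
servers = 4_000_000        # estimated Azure server population
nic_share = 0.5            # conservative fraction with FPGA NICs
nics_per_server = 2        # two NICs per server for redundancy

equipped_servers = int(servers * nic_share)
fpgas = equipped_servers * nics_per_server
print(f"{equipped_servers:,} equipped servers -> {fpgas:,} FPGAs")
# 2,000,000 equipped servers -> 4,000,000 FPGAs
```

So the two-million figure is the equipped-server floor; counting redundant NICs doubles it, and a higher deployment share raises it further.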
 
Also, system companies 'know what they want' quite well. Look at the Google VCU: they can replace millions of YouTube CPUs with chips built around large encoders and decoders. They control the workloads and the end users. Who really cares what's behind Amazon RDS (an x86 server? an Arm server? some devil? an angel?)? As long as they can hide the details under the carpet, users don't care.

Semiconductor companies ruled the world because their evolution was fast: the next-generation Intel CPU was twice as fast as the previous one while keeping backward compatibility. Back then, jumping into the chip business made little sense for software (or system) companies.

Now they're too big. Enough budget, enough manpower, enough customer demand, and slowed-down semiconductor advancement. No wonder they try to 'repurpose' their transistor budget for their own uses.
 
New tape-out (NTO) activity on N7/N6/N5/N4 is very high at TSMC. IMO, adoption of advanced node technologies is spreading widely and is not limited to system companies or hyperscalers.

According to Tom Dillinger's report: "TSMC 2022 Technology Symposium Review – Process Technology Development", Source: https://semiwiki.com/semiconductor-...posium-review-process-technology-development/

  • N7/N6: over 400 NTOs by year-end 2022, primarily in the smartphone and CPU markets
  • N5/N4: in the 3rd year of production, with over 2M wafers shipped and 150 NTOs by year-end 2022
 
So most NTOs are in the very high-volume, high-revenue CPU markets -- which is what I said: other markets with smaller volumes and lower TAM are being squeezed out. Even if they want to take advantage of the technology (e.g. for low power), it's too expensive (huge NRE).

Nobody is disputing that there are still a large number of NTOs (and that 3nm will be *very* successful and huge volume), but these are increasingly becoming concentrated into a few high-TAM segments.
 
Although it's a business secret that only TSMC and TSMC's customers know, I believe TSMC intentionally reduced the MOQ and other requirements for certain small players. For example, TSMC claims 85% of semiconductor startups worldwide work with them. For a typical startup, the quantity may not be huge.

In a way, Apple, Qualcomm, MediaTek, and AMD take the lead (and pay the high cost) to use TSMC's advanced processes, and at the same time they pave the way for the rest of the smaller players. This is the beauty of the foundry business model.
 
You really need to talk to actual chip developers...

MOQ and process availability (paid for by the big guys you listed) are not the problem; NRE and design costs (masks, tools, hardware, and software) are. Speaking from experience, an absolute minimum development NRE for a chip in advanced nodes is maybe $50M, $100M+ is more common, and it can be up to $500M for complex devices.

When you consider NRE vs. die cost together with typical GM and ROI, a useful rule of thumb is that lifetime sales (chip or product) need to be at least 5x (preferably 10x) the NRE -- meaning an absolute minimum of a few hundred million dollars, and possibly up to the billions. And unless you get the majority of the market, the TAM has to be even bigger. There are simply not that many markets this big, especially given the short production lifetime of many devices, which are superseded within a couple of years.
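The rule of thumb above is easy to tabulate; a minimal sketch, using the post's illustrative NRE figures and its 5x-10x multipliers (nothing here is a formal financial model):

```python
# Lifetime-revenue floor implied by the 5x-10x rule of thumb.
def required_lifetime_sales(nre_musd: float, multiple: float) -> float:
    """Lifetime revenue ($M) needed to justify a given NRE ($M)."""
    return nre_musd * multiple

for nre in (50, 100, 500):  # NRE figures quoted in the post, $M
    lo = required_lifetime_sales(nre, 5)
    hi = required_lifetime_sales(nre, 10)
    print(f"NRE ${nre}M -> ${lo:.0f}M to ${hi:.0f}M in lifetime sales")
# And if you only win, say, 30% of the market, the addressable
# TAM must be roughly 3x larger still.
```

Even the cheapest case needs a quarter-billion-dollar market; the $500M case needs billions, which is the squeeze described above.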

In recent years in leading-edge technologies, I have many times seen chip developments that would have made perfect sense technically and for a great product get canned because the ROI just didn't justify the development.
 
I think startups like TSMC because of the best shuttle service in the industry (though I've been told GF is trying hard to copy it) and the best ecosystem in the industry. Startups don't typically have the resources for any extra complexity.
 
I laugh out loud when Gartner or IBIS tell us what wafers cost or how much it costs to tape out a leading-edge design. The estimates are always high, to get the Chicken Little "sky is falling" thing going so they can sell reports.

You can put fabless companies in three buckets: fabless chip companies (QCOM, BRCM, MRVL, etc.), fabless ASIC companies (Alchip, Sondrel, OpenFive, etc.), and fabless systems companies (Apple, Google, Tesla, etc.).

ASIC companies do design on the cheap since they are very margin constrained.
Fabless companies also do design on the cheap but not as cheap as the ASIC companies.
System companies blow out the design cost curve and spend huge amounts of money in comparison.

For an SoC, let's say it's $100M for ASIC, $200M for fabless, and $300M for systems. You could take the average and say it costs $200M for a 7nm design, but would that sell reports? No -- but if you add them together and say $600M, that will sell reports!
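The averaging-versus-summing point is simple arithmetic; a trivial sketch using the hypothetical cost buckets above:

```python
# Headline numbers from the same three illustrative cost figures
# ($M, hypothetical values from the post, not measured data).
costs = {"ASIC": 100, "Fabless": 200, "Systems": 300}

average = sum(costs.values()) / len(costs)  # the boring headline
total = sum(costs.values())                 # the report-selling headline
print(f"average: ${average:.0f}M, sum: ${total}M")
# average: $200M, sum: $600M
```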

The question I have is: how many chances does a Chicken Little or the Boy Who Cried Wolf get before they are tarred and feathered?
 
Having worked for both ASIC and small/large system companies, I think you're a bit off the mark -- the biggest cost driver (apart from process) is design complexity, especially software.

The analyst "scaremongers" always pick the most complex devices with the most complex software, which are mobile phone SoCs -- and regardless of who develops them (MediaTek, Samsung, Apple, Qualcomm), the overall development cost (especially design verification and software) is astronomical: certainly north of $500M and rising, because complexity keeps rising as more and more functions (AI, ML...) are crammed onto the same chip -- after all, you need to do *something* with all those extra gates. For these developments, mask costs (rising rapidly) are a small part of the overall project budget, which is why these are nowadays always the first devices in a new process -- and the most expensive to develop, hence the Gartner figures.

At the other end you have devices which are functionally much simpler but which need the leading-edge processes because they still consume a lot of gates and power consumption is crucial. If these are functionally simple so the design and verification is easy then it might be possible to get away with $50M, though very few devices are this simple -- I would expect most to cost at least $100M nowadays. These devices usually access the latest node a year or so after the likes of Apple, after costs have dropped somewhat and the process/libraries/DKs are more mature.

And yes, all this is speaking from actual experience of developing such devices, seeing what the overall cost really is, and being involved in decisions about whether to do device variants for different applications or indeed develop such a device at all. Nowadays the answer is quite often "it doesn't make business sense", and this is happening more and more often, especially when a customer comes in and says "I need this variant/custom chip, can you develop it?". The trend is to make a single device more complex (multiple modes/features) to serve more applications, which of course drives the cost up and delays TTM... :-(
 
Yes, the top SoC makers are the ones who survived the 15-year SoC marathon. That ship has sailed for Facebook, Amazon, Google, etc.
 