
TSMC's Risky Bet Could Be Great News for Intel Investors

The 26mm x 33mm field would be filled out with 3 x 3 Meteor Lake dies to maximize productivity, but the High-NA tool would only be able to scan a single 1 x 3 row; it would need three of these scans to cover the 3 x 3 layout. I only use Meteor Lake as a representative die-size example.
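The field-packing arithmetic above can be sketched as below. The die size is a rough placeholder chosen so a 3 x 3 grid fills the standard field; it is not an actual Meteor Lake tile measurement, and scribe lines are ignored.

```python
# Standard EUV (0.33 NA) field vs. the High-NA (0.55 NA) half field.
FULL_FIELD = (26.0, 33.0)      # mm
HIGH_NA_FIELD = (26.0, 16.5)   # mm, scan direction is halved
die = (8.6, 10.9)              # mm, placeholder "Meteor Lake-ish" die

def dies_per_field(field, die):
    """Whole dies that fit per field; no scribe lines considered."""
    return int(field[0] // die[0]) * int(field[1] // die[1])

print(dies_per_field(FULL_FIELD, die))     # the 3 x 3 layout: 9 dies
print(dies_per_field(HIGH_NA_FIELD, die))  # a single 1 x 3 row: 3 dies
```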

Some inputs:
1. It becomes a 3x1 field, which means three scan steps, not three reticles, and no stitching is required.
2. Even now, a pellicle is not the default in EUV lithography, unlike DUV. Pellicle issues could be critical, but they are not a roadblock.
3. eBeam direct-write lithography has been dormant for a while. Rekindling it would take a long time and a lot of money, and it is questionable who will re-initiate it with billions in investment. Will David Lam at MultiBeam make it happen? Let's see.
 
Per IBM at IEDM, >28 nm pitch due to stochastic effects.
 
Min feature is a bit too generic a question. For example, the width of a single fin on N5 is single-digit nm, and Gox thickness was something like 5 atoms thick pre-high-K. It wouldn't surprise me if it is back to being sub-nanometer for advanced high-K nodes like N3B.

But assuming you mean minimum metal pitch or fin/nanosheet pitch: i3, 20A, and N2 are not publicly known. From memory, the N3B M0 pitch from white papers is 23nm. I can't recall whether N3E is presumed or known to have a 28nm M0 pitch. Since TechInsights is a paid service and I don't know whether any of their N3 coverage is out there for free (I think they have some of it available for folks without a subscription), I don't want to quote their numbers here. I can freely speculate on N2, though. My gut says N2 has the same minimum CDs as N3E, with the density shrink coming from libraries similar in size to, or a bit shorter than, N3B's. This would be due to the higher drive per unit area of GAAFET vs finFET. My bet is that the HP lib is similar in size to a 2-fin lib, and the HD lib is smaller than the 2-fin lib and similar in size to the 1-fin lib. I'm guessing they will also use a 48nm poly pitch like N3E.

Fred has said ~36nm for 1.5D direct print, and I think the current minimum was Samsung doing 30nm unidirectional direct print back in roughly 2021. As for when double patterning becomes not enough and you have to go to triple and quad, that depends on the method chosen. If it is LE^n, then you have to go to triple and quad pretty fast. If it is pitch multiplication, then on paper 15-7.5nm needs SAQP. But if you were doing that, you would want to back off on the backbone pitch to reduce LER/stochastics. Also, the cuts for a 7.5nm SAQP would be hell. SALE^n is, I think, somewhere in the middle.
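The pitch-multiplication arithmetic above can be sketched as follows: each self-aligned patterning pass halves the pitch printed by the scanner ("backbone"). The 30nm backbone is taken from the Samsung direct-print figure in the post; the rest is just division.

```python
# Self-aligned pitch multiplication: SADP = 1 halving pass, SAQP = 2.
def final_pitch(backbone_nm, passes):
    """Final pitch after `passes` rounds of self-aligned pitch halving."""
    return backbone_nm / (2 ** passes)

backbone = 30.0  # nm, roughly the tightest single-exposure EUV pitch cited
print(final_pitch(backbone, 1))  # SADP -> 15.0 nm
print(final_pitch(backbone, 2))  # SAQP -> 7.5 nm
```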
Let's not make this too political and confusing. Feature is the smallest line or pitch in the final product. Sounds like it is 10nm or greater, even on "3" technology?
 
If the pitch is 28nm, the half-pitch (feature) is 14nm, correct?
Not necessarily. If you have a 28nm pitch, you can have an 8nm feature with a 20nm space. There are some features where you might do the opposite.

||||||||-----------------||||||||
||||||||-----------------||||||||
||||||||-----------------||||||||
||||||||-----------------||||||||
[______________]
28nm pitch
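The point of the diagram above is that pitch = linewidth + space, so the "feature" equals the half-pitch only when lines and spaces are equal. A minimal sketch of the arithmetic:

```python
# Pitch vs. feature size, as in the 28 nm example above.
pitch = 28  # nm

# Equal line/space gives the half-pitch; skewing the duty cycle does not
# change the pitch, only the feature and space widths.
for line in (14, 8, 20):  # nm
    space = pitch - line
    print(f"line {line} nm + space {space} nm = pitch {line + space} nm")
```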
Let's not make this too political and confusing. Feature is the smallest line or pitch in the final product. Sounds like it is 10nm or greater, even on "3" technology?
If you want the smallest CD of any feature that makes up a transistor, then it's sub-nanometer. But those CDs aren't defined lithographically. If we are talking lithographic features, then I guess fins are your answer, which are well below 10nm. 10FF had 7nm-wide fins, N7 was 6nm wide, and I think N5 was around 4nm wide.
 
Not necessarily. If you have a 28nm pitch, you can have an 8nm feature with a 20nm space. There are some features where you might do the opposite.

||||||||-----------------||||||||
||||||||-----------------||||||||
||||||||-----------------||||||||
||||||||-----------------||||||||
[________________]
28nm pitch

If you want the smallest CD of any feature that makes up a transistor, then it's sub-nanometer. But those CDs aren't defined lithographically. If we are talking lithographic features, then I guess fins are your answer, which are well below 10nm. 10FF had 7nm-wide fins, N7 was 6nm wide, and I think N5 was around 4nm wide.
100% agree with you, and thanks for the info on fins. I didn't realize the fins were that small from the reports I saw.

 
Yes, it is a little analogous to Tennant's Law: https://lithoguru.com/life/?p=234
Tennant's Law was not claimed to be a law by Tennant, and Chris Mack's rationale, which you refer to, is quite murky. For example, the R^2 part does not directly control time, since you can write a large number of pixels in parallel (the reticle; from 0.33 to 0.55 NA you just halve the amount of parallelism). And it is not clear at all why smaller cubes (the R^3 part) should be slower instead of faster. At first glance there should be no effect at all, or even a speed-up, if all the energy is absorbed in a thinner resist. As you point out, the High-NA resists will need to be thinner, and their chemistry is likely to be the same as 0.33 NA EUV, so they will absorb less, and stochastic goals mean they are slower. But nothing like R^3 slower.

Tennant was originally writing about year-2000 ebeam technology, but the reasons those scaling observations applied were mostly due to the ebeam design (including the true explanation of why R^3 affected those designs) and are not necessarily true of other ebeam designs. But we digress.

Leave TL out of it. The heart of the High NA throughput problem is:
- more stop/start in the writing wastes more time between fields. This is a small effect, since those intervals will still be a minor fraction of all exposure time.
- thinner resists with equivalent chemistry will need higher mJ/cm2 totals to reach even stricter stochastic goals. Stochastic requirements scale as the square of the desired precision, if we ignore the additional problem of electron blur (which forces stochastics into a smaller budget).

The anamorphic projection helps because it doubles the dose intensity on target for constant intensity at the reticle. But doubling is not going to be enough.
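The "square of the desired precision" scaling above follows from photon shot noise: photon counts are Poisson-distributed, so the relative dose fluctuation in a pixel goes as 1/sqrt(N). Halving the allowed fluctuation therefore takes 4x the photons, i.e. 4x the dose. A minimal sketch with illustrative photon counts:

```python
import math

def relative_noise(photons_per_pixel):
    """Poisson shot noise: sigma/N = sqrt(N)/N = 1/sqrt(N)."""
    return 1.0 / math.sqrt(photons_per_pixel)

n = 400  # illustrative photon count per pixel, not a measured value
print(relative_noise(n))      # 0.05 -> 5% dose fluctuation
print(relative_noise(4 * n))  # 0.025 -> halving the noise needs 4x the dose
```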
 
Min feature is a bit too generic a question. For example, the width of a single fin on N5 is single-digit nm, and Gox thickness was something like 5 atoms thick pre-high-K. It wouldn't surprise me if it is back to being sub-nanometer for advanced high-K nodes like N3B.
Gox thickness is often cited as "SiO2 equivalent," which is how you get to sub-nm, maybe. That can never actually be the physical thickness because of quantum tunneling, which is why HfO2 or other higher-dielectric-constant materials are used. A 6nm-thick HfO2 film is roughly as effective as 1nm of SiO2, but the 6nm thickness is enough to keep tunneling minimal. Citing the equivalent thickness lets companies make performance claims without completely revealing their material secrets.
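The "6nm HfO2 is roughly 1nm of SiO2" comparison comes from the equivalent-oxide-thickness relation, EOT = t_film * (k_SiO2 / k_film). The dielectric constants below are rough textbook values, not any foundry's numbers:

```python
K_SIO2 = 3.9
K_HFO2 = 23.4  # HfO2 is usually quoted anywhere from ~20 to ~25

def eot_nm(thickness_nm, k_film):
    """Equivalent oxide thickness: same gate capacitance as this much SiO2."""
    return thickness_nm * K_SIO2 / k_film

# 6 nm of HfO2 gives the capacitance of ~1 nm of SiO2, with far less tunneling.
print(round(eot_nm(6.0, K_HFO2), 2))
```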
 
If you want the smallest CD of any feature that makes up a transistor, then it's sub-nanometer. But those CDs aren't defined lithographically. If we are talking lithographic features, then I guess fins are your answer, which are well below 10nm. 10FF had 7nm-wide fins, N7 was 6nm wide, and I think N5 was around 4nm wide.
Strictly speaking, that is not true. The fin pitch is defined by lithography, including pitch splitting, but the fin thickness is defined by a deposition step, via the etch-stopping power of what was deposited on the sidewall of the litho feature.
 
Tennant's Law was not claimed to be a law by Tennant, and Chris Mack's rationale, which you refer to, is quite murky. For example, the R^2 part does not directly control time, since you can write a large number of pixels in parallel (the reticle; from 0.33 to 0.55 NA you just halve the amount of parallelism). And it is not clear at all why smaller cubes (the R^3 part) should be slower instead of faster. At first glance there should be no effect at all, or even a speed-up, if all the energy is absorbed in a thinner resist. As you point out, the High-NA resists will need to be thinner, and their chemistry is likely to be the same as 0.33 NA EUV, so they will absorb less, and stochastic goals mean they are slower. But nothing like R^3 slower.

Tennant was originally writing about year-2000 ebeam technology, but the reasons those scaling observations applied were mostly due to the ebeam design and are not necessarily true of other ebeam designs. But we digress.

Leave TL out of it. The heart of the High NA throughput problem is:
- more stop/start in the writing wastes more time between fields. This is a small effect, since those intervals will still be a minor fraction of all exposure time.
- thinner resists with equivalent chemistry will need higher mJ/cm2 totals to reach even stricter stochastic goals. Stochastic requirements scale as the square of the desired precision, if we ignore the additional problem of electron blur (which forces stochastics into a smaller budget).

The anamorphic projection helps because it doubles the dose intensity on target for constant intensity at the reticle. But doubling is not going to be enough.
Good comments. I'll add some inputs below:
"The heart of the High NA throughput problem is:
- more stop/start on the writing wastes more time between fields. A small effect since those intervals will still be a minor fraction of all exposure time."

HS> To make up for the overhead and justify the throughput against the ~2x tool cost, the stage speed should be increased. Running at the same dose then means higher EUV light intensity is needed, which brings challenges to be solved.

"- thinner resists with equivalent chemistry will need higher mJ/cm2 totals to reach even stricter stochastic goals. Stochastic requirements scale as square of the desired precision, if we ignore the additional problem of electron blur (which forces stochastics into a smaller budget)."
HS> Due to the DOF limit and PR collapse concerns, the PR thickness needs to be reduced. Problems:
a. Going to EUV, photon-shot-noise-induced stochastic effects also get worse. We might need to increase CAR PAG density, quantum efficiency, or image sharpness. If we reduce thickness, which means less volume to react, the shot noise will be even worse. Another option is increasing dose, which means more cost. MOR, dry resist or dry develop, DSA for edge-roughness optimization, etc. would be the candidates.
b. Typically, stochastic effects do not scale with dimension. So as dimensions shrink, CD/CDU requirements shrink proportionally and stochastic effects become very critical.
c. The currently viable CD metrology tools are CDSEM and OCD. CDSEM can provide more local (nm-scale) CD information, but OCD provides a um-range average. For thinner PR (thickness <20nm), CD and edge-roughness metrology becomes unreliable. If we cannot measure precisely, it will be hard to quantify and control the processes. Some innovations are needed in this field to solve these problems. You might think I missed AFM for CD metrology. AFM is slow, especially for very small features (<40nm pitch), and it will be very challenging to use CD mode with the tip down in a pattern space that will be <20nm. A small AFM tip is very expensive, and the repeatability would not be so good either.
 
Strictly speaking, that is not true. The fin pitch is defined by lithography, including pitch splitting, but the fin thickness is defined by a deposition step, via the etch-stopping power of what was deposited on the sidewall of the litho feature.
Good point. The question asked about the smallest feature or line, so I just spit out the smallest lines I could think of, since the smallest features are purely etch/dep defined. My mind totally glossed over the fact that while pitch multiplication is partially lithographically defined, it is also etch/dep defined. :LOL:
Gox thickness is often cited as "SiO2 equivalent," which is how you get to sub-nm, maybe. That can never actually be the physical thickness because of quantum tunneling, which is why HfO2 or other higher-dielectric-constant materials are used. A 6nm-thick HfO2 film is roughly as effective as 1nm of SiO2, but the 6nm thickness is enough to keep tunneling minimal. Citing the equivalent thickness lets companies make performance claims without completely revealing their material secrets.
My statement was based on Intel's 45nm paper, where they said their 65nm node had something like 8 atoms of Gox thickness (or some comical number like that). They even had an image where you could literally count them. Considering that HKMG is a one-time benefit, Gox needs to be scaled if you want better channel control. Of course, eventually even this started to hit its limits (hence finFET and GAAFET). My assumption was that at this point HfO2 must be getting down to the thicknesses of the old SiO2 Gox layers, especially with the smaller space available for WFM on GAA nodes. When you look at the manufacturers' images of the tops of those fins and see how thin the ALD dep is around the surface of the fin, I thought "okay, looks about like I'd expect" and left it at that. But I dunno, I could totally be wrong; I have never really checked what sort of Gox thickness can be found on modern processes.



If the HfO2 is the light grey between the metal and the white space, then it is ~4.5nm based on my ruler and the Mk I eyeball. If it is the white strip that is actually conformal to the fin, then that is way too small for me to measure and probably sub-nm. I don't know which is which, though. Intel 10nm image courtesy of WikiChip (unfortunately, no image for N5 or 4LPE was available on WikiChip to use :().
 
HS> To make up for the overhead and justify the throughput against the ~2x tool cost, the stage speed should be increased. Running at the same dose then means higher EUV light intensity is needed, which brings challenges to be solved.
The stage does not need to scan faster (if we just assume the magic is the faster acceleration, the image scan on the wafer stays the same). The mask, however, is going to scan at 2x the linear speed, corresponding to the 8x shrink in that direction.
HS> Due to the DOF limit and PR collapse concerns, the PR thickness needs to be reduced. Problems:
a. Going to EUV, photon-shot-noise-induced stochastic effects also get worse. We might need to increase CAR PAG density, quantum efficiency, or image sharpness. If we reduce thickness, which means less volume to react, the shot noise will be even worse. Another option is increasing dose, which means more cost. MOR, dry resist or dry develop, DSA for edge-roughness optimization, etc. would be the candidates.
Yup. And remember, whatever you do for MOR works the same for both 0.33 and 0.55 NA, so in terms of relative advantage, the thinner resist for 0.55 NA is always going to need the higher exposure, due to absorbing less, while also needing better stochastics for the goal dimensions.

Amplification in the resist does not help with stochastics. It can even make them worse by amplifying weaker electrons farther from the EUV event, and if you increase the electron blur you strangle the stochastic budget.
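The "thinner resist absorbs less" point above is just Beer-Lambert attenuation: the absorbed fraction is 1 - exp(-alpha * t). The absorption coefficient below is an illustrative placeholder, not a measured CAR or MOR value:

```python
import math

alpha = 5.0  # 1/um, illustrative EUV resist absorption coefficient

def absorbed_fraction(thickness_um):
    """Beer-Lambert: fraction of incident EUV photons absorbed in the film."""
    return 1.0 - math.exp(-alpha * thickness_um)

# In the thin-film limit absorption is ~linear in thickness, so halving
# the resist thickness roughly halves the photons captured per exposure.
print(round(absorbed_fraction(0.030), 3))  # 30 nm resist
print(round(absorbed_fraction(0.015), 3))  # 15 nm resist
```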
b. Typically, stochastic effects do not scale with dimension. So as dimensions shrink, CD/CDU requirements shrink proportionally and stochastic effects become very critical.
👍 - and squared dose for linear stochastic improvement.
c. The currently viable CD metrology tools are CDSEM and OCD. CDSEM can provide more local (nm-scale) CD information, but OCD provides a um-range average. For thinner PR (thickness <20nm), CD and edge-roughness metrology becomes unreliable. If we cannot measure precisely, it will be hard to quantify and control the processes. Some innovations are needed in this field to solve these problems. You might think I missed AFM for CD metrology. AFM is slow, especially for very small features (<40nm pitch), and it will be very challenging to use CD mode with the tip down in a pattern space that will be <20nm. A small AFM tip is very expensive, and the repeatability would not be so good either.
I would assume AFM can only be used for statistical sampling and maybe detailed forensics of specific defect issues.
 
If the HfO2 is the light grey between the metal and the white space, then it is ~4.5nm based on my ruler and the Mk I eyeball. If it is the white strip that is actually conformal to the fin, then that is way too small for me to measure and probably sub-nm. I don't know which is which, though. Intel 10nm image courtesy of WikiChip (unfortunately, no image for N5 or 4LPE was available on WikiChip to use :().
Where is @Scotten Jones when we need him :)
 
Interesting. I was under the assumption that everyone and their mother would be chomping at the bit for Inpria's and Lam's stuff (I was under the impression their dry PR was MOR, not CAR, but maybe it is a dry CAR). Also, I had never heard of Dongjin Semichem; cool to see more players in that space, and nice to see Korean companies who can keep up with their American and Japanese counterparts.
 