Tennant's Law was not claimed to be a law by Tennant, and Chris Mack's rationale which you refer to is quite murky. For example, the R^2 part does not directly control time, since you can write a large number of pixels in parallel (the reticle exposes them all at once, so going from 0.33 to 0.55 NA you just halve the amount of parallelism). And it is not clear at all why the smaller-cubes (R^3) part should be slower rather than faster; at first glance it should have no effect, or even be a speed-up, if all the energy is absorbed in a thinner resist. As you point out, the High NA resists will need to be thinner, and their chemistry is likely to be the same as 0.33 NA EUV, so they will absorb less, and the stochastic goals mean they expose more slowly. But nothing like R^3 slower.
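For readers who have not seen the accounting: the Mack-style reading is, roughly, areal write time ~ (pixels per area) x (time per pixel) ~ (1/R^2) x (1/R^3). A minimal sketch of that decomposition, and of the point above that parallel (projection) exposure removes the R^2 term; the numbers are purely illustrative, not a model of any real tool:

```python
# Illustrative only: relative areal write time under the Tennant/Mack
# accounting for a serial writer vs. a projection tool that exposes a
# whole field in parallel.

def serial_write_time(R):
    """Serial (ebeam-like) writer: time/area ~ (pixels/area) * (time/pixel)."""
    pixels_per_area = 1.0 / R**2   # the "R^2 part"
    time_per_pixel = 1.0 / R**3    # the "R^3 part" (dose-per-voxel argument)
    return pixels_per_area * time_per_pixel   # ~ 1/R^5

def parallel_write_time(R):
    """Projection exposure: all pixels written at once, so only the
    per-pixel dose term could matter -- and, as argued above, even that
    is doubtful if a thinner resist absorbs the same energy."""
    return 1.0 / R**3

for R in (1.0, 0.6):   # ~0.6x resolution going from 0.33 NA to 0.55 NA
    print(f"R={R}: serial ~{serial_write_time(R):.1f}x, "
          f"parallel ~{parallel_write_time(R):.1f}x")
```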
Tennant was originally writing about year-2000 ebeam technology, but the reasons those scaling observations held were mostly specific to that ebeam design, and are not necessarily true of other ebeam designs. But we digress.
Leave TL out of it. The heart of the High NA throughput problem is:
- more stop/start on the writing wastes more time between fields. A small effect since those intervals will still be a minor fraction of all exposure time.
- thinner resists with equivalent chemistry will need higher mJ/cm2 totals to reach even stricter stochastic goals. Stochastic requirements scale as the square of the desired precision, if we ignore the additional problem of electron blur (which forces stochastics into a smaller budget).
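The square law is just Poisson counting: relative noise ~ 1/sqrt(absorbed photons), so tightening the spec by a factor k needs ~k^2 times the photons, i.e. ~k^2 times the dose at fixed resist thickness and absorption. A minimal sketch:

```python
def relative_dose(precision_gain):
    """Dose multiplier needed to tighten the relative Poisson (shot) noise
    by 'precision_gain', at fixed resist thickness and absorption:
    sigma/N ~ 1/sqrt(N), so N (and hence dose) must grow as gain**2."""
    return precision_gain ** 2

for gain in (1.0, 1.5, 2.0):
    print(f"{gain:.1f}x tighter stochastic spec -> ~{relative_dose(gain):.2f}x dose")
```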
The anamorphic projection helps because it doubles the dose intensity at the wafer for a constant intensity at the reticle (mask). But doubling is not going to be enough.
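The factor of two is just field-area bookkeeping: 4x/8x demagnification maps the same reticle area onto half the wafer area that a 4x/4x system would, so for the same reticle-plane intensity the wafer-plane intensity doubles. A back-of-envelope sketch, assuming ideal optics and ignoring losses:

```python
# Field-area bookkeeping for the same reticle illumination (ideal optics assumed).
def wafer_intensity(demag_x, demag_y, reticle_intensity=1.0):
    # The same reticle area lands on a wafer area smaller by demag_x*demag_y,
    # so by energy conservation the intensity goes up by that factor.
    return reticle_intensity * demag_x * demag_y

iso = wafer_intensity(4, 4)   # 0.33 NA: 4x/4x
ana = wafer_intensity(4, 8)   # 0.55 NA: anamorphic 4x/8x
print(ana / iso)              # -> 2.0
```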
Good comments. I'll add some inputs below:
"The heart of the High NA throughput problem is:
- more stop/start on the writing wastes more time between fields. A small effect since those intervals will still be a minor fraction of all exposure time."
HS> To catch up on that overhead and justify the throughput against the tool cost (2x), the stage speed has to be increased. If the tool runs at the same dose, that means higher EUV light intensity is needed, which brings its own challenges to be solved.
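To make that concrete: in a scanned-slit exposure the delivered dose is roughly (wafer-plane intensity) x (slit width along the scan) / (scan speed), so at a fixed dose any stage-speed increase has to be matched one-for-one by intensity. A rough sketch with made-up numbers, not real tool specs:

```python
def scan_speed_mm_s(intensity_mw_cm2, slit_mm, dose_mj_cm2):
    """Scan speed that delivers 'dose_mj_cm2' through a slit of width
    'slit_mm' along the scan direction: dose = intensity * (slit / speed)."""
    return intensity_mw_cm2 * slit_mm / dose_mj_cm2

dose = 60.0   # mJ/cm^2, illustrative
slit = 2.0    # mm of exposure slit along the scan, illustrative
for intensity in (5000.0, 10000.0):   # mW/cm^2 at the wafer, illustrative
    print(f"{intensity:.0f} mW/cm^2 -> {scan_speed_mm_s(intensity, slit, dose):.0f} mm/s")
```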
"- thinner resists with equivalent chemistry will need higher mJ/cm2 totals to reach even stricter stochastic goals. Stochastic requirements scale as square of the desired precision, if we ignore the additional problem of electron blur (which forces stochastics into a smaller budget)."
HS> Due to the DOF limit and PR collapse concerns, the PR thickness needs to be reduced. Problems:
a. When we go to EUV, the photon-shot-noise-induced stochastic effects also get worse. We might need to increase CAR PAG density, quantum efficiency, or image sharpness. If we reduce thickness, which means less volume to react, the shot noise will be even worse (see the sketch after this list). The other way would be increasing dose, which means more cost. MOR, dry resist or dry develop, DSA for edge-roughness optimization, etc. would be the candidates.
b. Typically, stochastic effects do not scale down with dimension. Since the CD/CDU requirements shrink proportionally as dimensions shrink, stochastic effects become very critical.
c. The current viable CD metrology tools are CDSEM and OCD. CDSEM can provide local (nm-scale) CD information, while OCD provides a um-range average. For thinner PR (thickness <20 nm), CD and edge-roughness metrology becomes unreliable, and if we cannot measure precisely it will be hard to quantify and control the processes. Some innovations are needed in this field to solve these problems. You might think I missed AFM for CD metrology, but AFM is slow, especially for very small features (<40 nm pitch), and it will be very challenging to use CD mode and get the tip down into a pattern space that will be <20 nm. A small AFM tip is also very expensive, and its repeatability is not so good either.
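To put item (a) in rough numbers: the photons available to a feature scale with dose x absorbed fraction x footprint, and the relative shot noise goes as 1/sqrt(photons), so whatever is lost to a thinner (or less absorbing) film has to be bought back with dose. A minimal sketch; the absorption coefficient, voxel size, and dose are assumed ballpark values for illustration only:

```python
import math

EUV_PHOTON_ENERGY_J = 91.8 * 1.602e-19   # ~91.8 eV per 13.5 nm photon

def absorbed_photons(dose_mj_cm2, thickness_nm, voxel_nm=10.0, alpha_per_um=5.0):
    """Photons absorbed under a voxel_nm x voxel_nm footprint of resist.
    alpha_per_um (absorption coefficient), voxel size, and dose are assumed
    ballpark values, for illustration only."""
    photons_per_cm2 = (dose_mj_cm2 * 1e-3) / EUV_PHOTON_ENERGY_J
    footprint_cm2 = (voxel_nm * 1e-7) ** 2
    absorbed_fraction = 1.0 - math.exp(-alpha_per_um * thickness_nm * 1e-3)
    return photons_per_cm2 * footprint_cm2 * absorbed_fraction

for thk in (30.0, 20.0, 10.0):
    n = absorbed_photons(60.0, thk)
    print(f"{thk:>4.0f} nm resist: ~{n:4.0f} photons absorbed, "
          f"relative shot noise ~{100.0 / math.sqrt(n):.1f}%")
```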