Intel 10nm process problems -- my thoughts on this subject

On EUV at 5nm - I'd expect 7nm to be a long life node, and many companies will likely end up skipping 5nm altogether. So I'd expect volumes at 5nm to be maybe a bit lower than expected. Then at 3nm, which may well be the last node shrink that makes any economic sense, there will be tool reuse from 5nm. I think there may be tightness in the EUV tools market like many are suggesting, but I think demand for 5nm will be low and between that, a little more time and tool reuse for 3nm, the foundries should be able to get the tools they need.

Yes, very possible. Intel 10nm and the 3 foundries' 7nm are all around 56 nm CGP and 40 nm MMP. A TRUE Moore's law full node shrink would entail 0.7x to both of those values, leading to 40 nm CGP and 28 nm MMP. That could be too big a change. 40 nm CGP is believed to be a FinFET cliff, and 28 nm MMP is probably a consideration for beyond copper. 28 nm pitch also poses severe imaging problems with EUV, which the anamorphic setup is supposed to fix. So SAQP metallization is needed sooner or later.
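
A quick sketch of the arithmetic behind that (pitch values as cited above; this is just the classic 0.7x-per-node rule, nothing process-specific):

```python
# Back-of-envelope check of the 0.7x "full node" shrink described above,
# starting from the ~56nm CGP / ~40nm MMP cited for Intel 10nm and
# foundry 7nm.
cgp, mmp = 56.0, 40.0   # contacted gate pitch, minimum metal pitch (nm)
shrink = 0.7            # classic Moore's-law linear shrink per full node

print(f"CGP: {cgp:.0f} nm -> {cgp * shrink:.0f} nm")  # ~40 nm
print(f"MMP: {mmp:.0f} nm -> {mmp * shrink:.0f} nm")  # ~28 nm
print(f"Area factor: {shrink ** 2:.2f}")              # ~0.49, i.e. ~2x density
```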
 
Are you confused? The SADP limit is 40nm, and that's exactly the minimum metal pitch (MMP) chosen by TSMC and GF. Samsung went with EUV for metal and has a 36nm MMP. Intel is the only foundry that went below 40nm for metal and was forced to use SAQP immersion litho for metal.

No confusion on my part. Please understand that 36nm vs 40nm is not the same as SAQP vs SADP.

Using Fred's, Scotten's and your logic, which relies on well-known litho resolution limits, can easily lead one to wrong conclusions, as exemplified by Intel's 45nm process. Needless to say, it can be just as misleading if applied to any other process as well.
 
You're misunderstanding what I said; it's not that cobalt has got slower since Intel did their early pathfinding work (when they decided to use cobalt), it's that copper has got faster as people have found ways of reducing the effective resistivity of copper at very fine pitches (or reducing the resistivity increase which amounts to the same thing).

If you look back four years IMEC were saying that cobalt had a speed advantage at these geometries, now they're saying it doesn't any more and in fact there are no cases where it's the best choice -- copper is still best down to some point, below that ruthenium is better, and cobalt is never the best choice.

Ruthenium does have safety issues, but then so do many of the other materials used for IC fabrication, and those have been dealt with -- it's not as if living, breathing humans get anywhere near the wafers nowadays; if they did, the yield would be zero for everybody. Supply is by far the biggest problem if Ru becomes widely used for ICs: world reserves are estimated at 5k tonnes (12t mined per year) compared to 7Mt for Co (100kt mined per year) and ~1Gt for Cu (20Mt/year). All of which just increases the pressure to stay with Cu and try to improve it further instead of switching interconnect material...
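
For a sense of scale, here is the same supply data side by side (a trivial sketch using only the tonnage estimates quoted above):

```python
# Supply figures quoted above, side by side. The striking number is the
# absolute scale: Ru output is roughly six orders of magnitude below Cu's,
# so broad use of Ru interconnect would require an enormous mining ramp.
supply_t = {  # metal: (estimated world reserves, mined per year), in tonnes
    "Ru": (5_000, 12),
    "Co": (7_000_000, 100_000),
    "Cu": (1_000_000_000, 20_000_000),
}
cu_mined = supply_t["Cu"][1]
for metal, (reserves, mined) in supply_t.items():
    print(f"{metal}: reserves {reserves:>13,} t, mined {mined:>12,} t/yr "
          f"({mined / cu_mined:.0e} x Cu output)")
```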

One penalty of being very early into a market is that the goalposts may have moved by the time everyone else joins in, especially if there are big delays in getting to market; this is what has happened with Intel vs. the foundries and Cu/Co.

You do realize, I presume, that the rate of improvement changes from very high in the beginning, when there's a lot of low-hanging fruit to pick, to eventually levelling off on a plateau?

Copper has been used in the BEOL for about 20 years now. That's how long both industry and the research community have been working on Cu resistivity, among other related things. The resistivity of 3-4 µOhm·cm I mentioned in the writeup is not some breakthrough achievement of the last few years -- it was achieved many years ago. If you're unaware of this, it simply casts some doubt on your claims wrt expertise.

So many years ago Intel had, on the one hand, copper resistivity of 3-4 µOhm·cm (that's published research, and one is, of course, free to draw any conclusions as to how much Intel looks at that), and on the other, the bulk resistivity of cobalt of 6 µOhm·cm.

See what I'm getting at? Even if we imagine that Intel found a way to deposit cobalt so the resistivity is very close to bulk, it still loses to copper. Is there anything to miss here?

That's why I say the viewpoint on Intel's cobalt decision that suggests there were good reasons to believe cobalt is a better choice misses a lot. Or at least it heavily misses the knowledge of state of the art in copper resistivity -- whether by Intel or by anyone who says their decision is justified.
 
Having been involved in high-end ASIC design for far too many years, I've done extensive thermal simulations of real devices using Cu BEOL, including all thermal resistances on the die, in the package, TIM1, heatspreader, TIM2, heatsink, PCB and so on. In every case (for flip-chips with heatsinks, which is what we're talking about) the amount of heat dissipated up through the Cu BEOL, through the package substrate/balls/socket (if used) and into the PCB (remember the die is upside down) is negligible compared to the amount of heat dissipated down through the Si substrate/package lid and into the heatsink -- and you only have to do a quick estimate of the relative thermal resistances and cross-sectional areas to realise this.

So adding cobalt on the bottom couple of via/interconnect layers will reduce the heat flux through the BEOL from negligible to even more negligible -- do you see the point? Measurements are not needed to "prove" this because the numbers make it obvious.

You can call this "thermal gasket effect" or anything else you want, but in real life it has no significant impact in high-power devices.

If the numbers make it obvious, where are they then? Show us, if not peer-reviewed research, then at least something credible -- both wrt dissipation into the motherboard and the amount of heat dissipated in the interconnect, which the cobalt gasket separates from the cooler (or are you [dis]missing the latter factor entirely?). Tell us, after all, something more specific about your own research than very general words, as that would be interesting indeed.

What really makes it obvious, though, and indeed doesn't take measurements to realize -- as I have already pointed out at RWT -- is that the motherboard is in fact a second cooler attached to the CPU from the other side. Pretty lousy, but still a cooler (the lower the power dissipated by the CPU, the higher the effect). It's enough to get hold of an IR cam and point it at the backside of the motherboard to see the picture quite clearly.

And what's probably more important here is the combination of good thermal conductivity and thermal capacitance formed by the Cu interconnect in the chip, its package, and the motherboard wires coming to the socket. It's this combo (with some contribution from silicon) which allows absorbing short bursts of Icc exceeding the max average Icc (sustainable long term). Use of a poor thermal conductor in the IC degrades both mechanisms.
 
No confusion on my part. Please understand that 36nm vs 40nm is not the same as SAQP vs SADP.

Using Fred's, Scotten's and your logic, which relies on well-known litho resolution limits, can easily lead one to wrong conclusions, as exemplified by Intel's 45nm process. Needless to say, it can be just as misleading if applied to any other process as well.

At the 65nm node Intel used Argon Fluoride (ArF) exposure tools for gate. At the 45nm node they had two choices, one, buy a bunch of Argon Fluoride Immersion (ArFi) tools, or two, use multi-patterning and stay with ArF. What Intel did was use LE2 with ArF to print the gate and hold off on buying ArFi tools until the 32nm node.

ArFi has a BEOL pitch limit of approximately 80nm. I have seen that pushed a bit with restrictive design rules, but practically speaking that is the limit. SADP is pitch doubling, i.e. it exactly cuts the pitch in half: an ArFi exposure creates an 80nm pitch mandrel, and depositing spacers on the sidewalls gives you a 40nm pitch. For a SADP process 40nm is literally a cliff. I have discussed this with some of the leading lithography experts in the world and this is generally agreed upon; in fact, one expert said 40nm is actually considered pushing SADP.

GF has a 40nm minimum metal pitch (MMP) and uses SADP. TSMC has a 40nm MMP and I believe they are actually using a litho-etch variant with spacers process. Intel couldn't do 36nm with SADP so they went to SAQP. There are other alternatives but standard SADP can't do 36nm, 80/2 = 40. Samsung is using EUV and we will see what their actual MMP is, I expect 36nm but it could be bigger depending on how aggressive they are. I can assure you Intel would have used SADP for 36nm if they could because SAQP is more complex, more expensive and has more restrictive design rules. I suggest you go look at some of the imec lithography presentations where they discuss the lithography cliffs and how people design to not go over them. Intel for example limited their MMP at the 22nm node to 80nm specifically to avoid multi-patterning in the BEOL.
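
To make the cliffs concrete, here is a minimal sketch, assuming the ~80nm ArFi single-exposure BEOL limit quoted above (the thresholds follow directly from pitch halving; nothing else is implied about any specific process):

```python
# Multi-patterning "cliffs" implied by an ~80nm ArFi single-exposure limit.
ARFI_LIMIT = 80.0  # nm, practical ArFi single-exposure BEOL pitch limit

def technique_needed(target_pitch_nm: float) -> str:
    """Pick the simplest spacer technique that reaches a target pitch."""
    if target_pitch_nm >= ARFI_LIMIT:
        return "single exposure"
    if target_pitch_nm >= ARFI_LIMIT / 2:  # SADP halves the mandrel pitch
        return "SADP"
    if target_pitch_nm >= ARFI_LIMIT / 4:  # SAQP halves it again
        return "SAQP"
    return "EUV or beyond"

for pitch in (80, 40, 36, 20):
    print(f"{pitch} nm MMP -> {technique_needed(pitch)}")
# 40 nm lands exactly on the SADP cliff (80/2); 36 nm falls just below it,
# which is why a 36nm MMP forces SAQP (or EUV) rather than SADP.
```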
 
You do realize, I presume, that the rate of improvement changes from very high in the beginning, when there's a lot of low-hanging fruit to pick, to eventually levelling off on a plateau?

Copper has been used in the BEOL for about 20 years now. That's how long both industry and the research community have been working on Cu resistivity, among other related things. The resistivity of 3-4 µOhm·cm I mentioned in the writeup is not some breakthrough achievement of the last few years -- it was achieved many years ago. If you're unaware of this, it simply casts some doubt on your claims wrt expertise.

So many years ago Intel had, on the one hand, copper resistivity of 3-4 µOhm·cm (that's published research, and one is, of course, free to draw any conclusions as to how much Intel looks at that), and on the other, the bulk resistivity of cobalt of 6 µOhm·cm.

See what I'm getting at? Even if we imagine that Intel found a way to deposit cobalt so the resistivity is very close to bulk, it still loses to copper. Is there anything to miss here?

That's why I say the viewpoint on Intel's cobalt decision that suggests there were good reasons to believe cobalt is a better choice misses a lot. Or at least it heavily misses the knowledge of state of the art in copper resistivity -- whether by Intel or by anyone who says their decision is justified.

Bulk resistivity of copper is lower than bulk resistivity of cobalt or ruthenium, no argument there, but as dimensions scale down the long electron mean free path in copper causes copper resistivity to rise due to scattering. Both cobalt and ruthenium have shorter electron mean free paths and therefore show less increase in resistivity versus cross-sectional area than copper, i.e. the resistivity gap narrows, although on resistivity alone (ignoring barriers) they never get better than copper.

In practice copper requires high-resistivity barriers, with around 2nm as the limit on barrier thickness. As the cross-sectional area decreases, the barrier, which doesn't scale, becomes a bigger and bigger part of the line cross-section. Cobalt needs only a thin adhesion layer and no barrier, and ruthenium needs neither. At a small enough line cross-section, cobalt and ruthenium both beat copper, although this is at a smaller linewidth than Intel is using.
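
As a rough illustration of that crossover, here is a toy model: a crude mean-free-path scattering penalty plus a non-scaling 2nm barrier for Cu. The bulk resistivities and mean free paths are textbook-ish values; the geometry, the simple 1 + MFP/width penalty, and the exact numbers are assumptions for illustration only, not anyone's measured process data:

```python
# Toy comparison of Cu (with a 2nm barrier) vs barrier-less Co line
# resistance as linewidth shrinks. Very simplified; illustrative only.
def line_resistance(width_nm, height_nm, rho_bulk, mfp_nm, barrier_nm=0.0):
    """Resistance per um of line, in ohms (crude size-effect model)."""
    w = width_nm - 2 * barrier_nm      # barrier eats both sidewalls
    h = height_nm - barrier_nm         # ...and the trench bottom
    rho = rho_bulk * (1 + mfp_nm / w)  # toy surface-scattering penalty
    return rho * 1e4 / (w * h)         # uOhm*cm over nm^2 -> Ohm/um

for w in (40, 20, 10):  # linewidths in nm, aspect ratio 2 assumed
    cu = line_resistance(w, 2 * w, rho_bulk=1.7, mfp_nm=39, barrier_nm=2)
    co = line_resistance(w, 2 * w, rho_bulk=6.2, mfp_nm=10)
    print(f"{w:2d} nm line: Cu ~{cu:7.1f} Ohm/um, Co ~{co:7.1f} Ohm/um")
# In this toy model Cu wins at 40nm and 20nm, and Co only wins around
# ~10nm linewidths -- i.e. the crossover sits below current linewidths.
```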

You also have to consider vias and electromigration.

Via resistance is a big problem in the lower levels of interconnect and barrier-less cobalt and ruthenium are much better than copper!

Cobalt and ruthenium have much better electromigration resistance than copper as well!

imec just did a comparison of copper, cobalt and ruthenium and published it at IITC. I read the paper and interviewed one of the authors. imec's position is that, considering line resistance, via resistance and electromigration, cobalt can outperform copper below 40nm pitch, depending on the design. Intel has the smallest known MMP for "7nm class" processes (Intel 10nm is similar to foundry 7nm) and the highest currents, so likely the most stringent electromigration requirements, and therefore they chose cobalt.

I have been told that TSMC has recently bought cobalt plating equipment. I am not aware of any disclosures yet on whether TSMC has cobalt filled contacts or any cobalt interconnect at 7nm, but if they don't I would expect cobalt at 5nm (due next year) for TSMC based on the equipment purchases.

You can read my write up on the imec article and interview here: https://www.semiwiki.com/forum/cont...er-cobalt-ruthenium-interconnect-results.html
 
Then at 3nm, which may well be the last node shrink that makes any economic sense...

Hmmm. Nah, sort of, but not entirely. IMEC seems to expect a series of one-off enhancements to be made beyond 3nm. Although we'd hope to see the industry use something other than nanometers round about then, because the node sizes have less and less real-world relevance. I'd expect the foundries to keep an almost-yearly cadence going for the next 10 years at least, with smaller gains and easier portability between generations. Until we hit something like 5% improvements and packaging becomes the dominant factor in scaling. Embedded SRAM anyone?
 
Actually, I think it would be the end of Intel as we know it.

Good point. But who gets to define what a "real" Intel should be or can be? Should it be Intel's managers, or the market/customers, or both?

I believe "Intel Inside" doesn't mean every Intel's product has to be "Made by Intel's own factories". There are many semiconductor companies who made decent profit from their excellent products that are "manufactured" by other companies. Why Intel needs to limit its own options?
 
Good point. But who gets to define what a "real" Intel should be or can be? Should it be Intel's managers, or the market/customers, or both?

I believe "Intel Inside" doesn't mean every Intel's product has to be "Made by Intel's own factories". There are many semiconductor companies who made decent profit from their excellent products that are "manufactured" by other companies. Why Intel needs to limit its own options?
Why? I would say because of decades of marketing.

Intel indeed doesn't need to make all their products, and they already don't. The 4G modems they sell to Apple are made by TSMC, as is, I think, anything else they have with radio (WiFi chipsets, etc). It's still an embarrassment to them when this is mentioned in the press (and it rarely is).

But for decades, they based all their marketing (to the press and investors) on their purported technology leadership (2 year lead, etc). Using a foundry for their "core" product would send a terrible message, especially after such delays on 10nm and so far unfulfilled promises. It would be interpreted as a capitulation of sorts, and markets would probably punish that quite harshly, even if it was the "rational" choice, I would argue (and think of the message it sends to their own "foundry" effort, however that is going).

I also think that even if it was their best path forward to use foundry as a stopgap for their Core lineup, it may not be that easy for them to do so. Their CPU architecture has significant requirements in terms of frequency which I'm not sure the foundries could accommodate easily, and there might be some significant internal learning needed for them to do so (AFAIK they never used bleeding edge processes from other foundries).

Someone else probably knows a lot more about this than I do but, as far as I've heard, there's so much tight integration between design/process at Intel (which has caused them a lot of issues for their "foundry" business) that it might be an internal "cultural" issue too for engineers to work on that.

I would not be surprised to see the new CEO spin a new strategy forward for them; an (even superficially) modified 10nm renamed as 7nm (to look more competitive) looks quite plausible to me as a marketing spin (and, according to Scotten's charts, not that incorrect). Whether they can even deliver on that, though, is probably up for debate.
 
At the 65nm node Intel used Argon Fluoride (ArF) exposure tools for gate. At the 45nm node they had two choices, one, buy a bunch of Argon Fluoride Immersion (ArFi) tools, or two, use multi-patterning and stay with ArF. What Intel did was use LE2 with ArF to print the gate and hold off on buying ArFi tools until the 32nm node.

ArFi has a BEOL pitch limit of approximately 80nm. I have seen that pushed a bit with restrictive design rules, but practically speaking that is the limit. SADP is pitch doubling, i.e. it exactly cuts the pitch in half: an ArFi exposure creates an 80nm pitch mandrel, and depositing spacers on the sidewalls gives you a 40nm pitch. For a SADP process 40nm is literally a cliff. I have discussed this with some of the leading lithography experts in the world and this is generally agreed upon; in fact, one expert said 40nm is actually considered pushing SADP.

GF has a 40nm minimum metal pitch (MMP) and uses SADP. TSMC has a 40nm MMP and I believe they are actually using a litho-etch variant with spacers process. Intel couldn't do 36nm with SADP so they went to SAQP. There are other alternatives but standard SADP can't do 36nm, 80/2 = 40. Samsung is using EUV and we will see what their actual MMP is, I expect 36nm but it could be bigger depending on how aggressive they are. I can assure you Intel would have used SADP for 36nm if they could because SAQP is more complex, more expensive and has more restrictive design rules. I suggest you go look at some of the imec lithography presentations where they discuss the lithography cliffs and how people design to not go over them. Intel for example limited their MMP at the 22nm node to 80nm specifically to avoid multi-patterning in the BEOL.

Right, and thank you for expanding in much detail, but it seems you keep missing my point: while a 36nm pitch certainly rules out single patterning, as it's just below the resolution limit even with dipole illumination, you can't be as certain wrt 40nm. Making assumptions such as yours can easily lead one to wrong conclusions. See? Facing a product whose measured pitch is close to the litho equipment's resolution limit, you just can't say for certain what patterning technique was used -- and a good case in point is Intel's 45nm process. While certainly doable with single patterning, they chose double nonetheless -- that's the illustration of my point.


An assumption like yours can simply fail any time it's applied in practice to make judgements about a process/product with known pitches but unknown technology.
 
Bulk resistivity of copper is lower than bulk resistivity of cobalt or ruthenium, no argument there, but as dimensions scale down the long electron mean free path in copper causes copper resistivity to rise due to scattering. Both cobalt and ruthenium have shorter electron mean free paths and therefore show less increase in resistivity versus cross-sectional area than copper, i.e. the resistivity gap narrows, although on resistivity alone (ignoring barriers) they never get better than copper.

In practice copper requires high-resistivity barriers, with around 2nm as the limit on barrier thickness. As the cross-sectional area decreases, the barrier, which doesn't scale, becomes a bigger and bigger part of the line cross-section. Cobalt needs only a thin adhesion layer and no barrier, and ruthenium needs neither. At a small enough line cross-section, cobalt and ruthenium both beat copper, although this is at a smaller linewidth than Intel is using.

You also have to consider vias and electromigration.

Via resistance is a big problem in the lower levels of interconnect and barrier-less cobalt and ruthenium are much better than copper!

Cobalt and ruthenium have much better electromigration resistance than copper as well!

imec just did a comparison of copper, cobalt and ruthenium and published it at IITC. I read the paper and interviewed one of the authors. imec's position is that, considering line resistance, via resistance and electromigration, cobalt can outperform copper below 40nm pitch, depending on the design. Intel has the smallest known MMP for "7nm class" processes (Intel 10nm is similar to foundry 7nm) and the highest currents, so likely the most stringent electromigration requirements, and therefore they chose cobalt.

I have been told that TSMC has recently bought cobalt plating equipment. I am not aware of any disclosures yet on whether TSMC has cobalt filled contacts or any cobalt interconnect at 7nm, but if they don't I would expect cobalt at 5nm (due next year) for TSMC based on the equipment purchases.

You can read my write up on the imec article and interview here: https://www.semiwiki.com/forum/cont...er-cobalt-ruthenium-interconnect-results.html

Scotten, some of this is obvious fact, while some is questionable, e.g. "highest currents". Nvidia and AMD have been shipping parts rated at 250W TDP for many years now, so that comment might well be incorrect.

Anyway, what really matters here is whether Intel's cobalt decision is justified or not. My contention is it is not. As for EM, liners and caps would have been more than enough (in fact, more than designers could have taken full advantage of). As for resistivity, my conclusion is Intel ended up with cobalt wires slower than copper, and I have high confidence in my findings.

Is your analysis (or just opinion) different from mine? What are your expectations then? Is cobalt-filled interconnect (as opposed to using Co liners and caps) really required for EM-related considerations? And did Intel end up with faster cobalt wires, about the same speed, or slower than copper? And if faster/slower, then by what factor approx?

Because that's what really matters.
 
Is cobalt-filled interconnect (as opposed to using Co liners and caps) really required for EM-related considerations?

Well, I'm in no way in the semiconductor business, merely lurking / reading around and trying to parse your logic while you're questioning other people's expertise, but ask yourself this:

-In your FiPo, you provided dozens of papers showing why copper has less resistance than cobalt,
-Intel - fully aware of this (unless we'd consider they're no experts?) - chose cobalt anyway.

What reason _other_ than EM could there have been?

You _assume_ Intel made the choice driven by resistivity, but I see no sources for that anywhere.
From what I once read elsewhere (must have been semiengineering.com), Intel's choice of cobalt was never driven by resistance. That is also more or less indicated by Scotten Jones, and by the first article I found when looking for tCoSFB (again, I'm not in the field, so I had to look it up). Cobalt was about solving some other problem while at the same time keeping resistance within boundaries -- that's what I read, or am I terribly wrong?

If an engineer tells you "it's not possible", then respond with: "I'll pay you $1 billion if you do it" -- and if the answer stays the same, only then do you truly know the real answer.

OK -- if I give you a billion dollars to raise 10 kids who should be able to graduate with PhDs in AI topics at the age of 10, what will your answer be?
Probably: "Yes, with a billion dollars I can do it, but you have to wait for 7 years".

Throwing money at a problem to solve it immediately only works if resources are readily available, i.e. if it doesn't involve "craftsmanship which can only be acquired over the years".

ASML has this problem; after all, how many experts on EUV topics are available for hire? Or people who can engineer nanometer-precise handling? And how many people can not only "engineer / design in CAD", but actually _build_ a plane the size of Germany (it's quite a long drive from north to south, mind you) with at most one sand grain of height difference?
 
Well, I'm in no way in the semiconductor business, merely lurking / reading around and trying to parse your logic while you're questioning other people's expertise, but ask yourself this:
Well, I published my views, and the people who disagreed published nothing on this topic, at least not in such depth, to begin with. And since I do have high confidence in my findings, I certainly feel in a position to stand by them.

Does that not sit well with you?
-In your FiPo, you provided dozens of papers showing why copper has less resistance than cobalt,
-Intel - fully aware of this (unless we'd consider they're no experts?) - chose cobalt anyway.

What reason _other_ than EM could there have been?

You _assume_ Intel made the choice driven by resistivity, but I see no sources for that anywhere.
From what I once read elsewhere (must have been semiengineering.com), Intel's choice of cobalt was never driven by resistance. That is also more or less indicated by Scotten Jones, and by the first article I found when looking for tCoSFB (again, I'm not in the field, so I had to look it up). Cobalt was about solving some other problem while at the same time keeping resistance within boundaries -- that's what I read, or am I terribly wrong?
My analysis is not based on any assumptions at all; that would be very poor practice for this sort of work. There are certainly speculations on my part, and they are numerous, but they are strictly fact-based and clearly marked as such. One of the points of this work was tying what we know into a picture that makes sense.


As for resistance, Intel cited it as one of the reasons as recently as this year's IITC (if you read my writeup, you'll find it there):

"Cobalt metallization is introduced in the pitch quartered interconnect layers in order to meet electromigration and gapfill-resistance requirements."

However, one really needs to read the paper in order to find clues, and I was very lucky to get hold of it thanks to a forum member who simply sent it to me.


PS thanks for the thumbs-up, appreciated.
 
Scotten, some of this is obvious fact, while some is questionable, e.g. "highest currents". Nvidia and AMD have been shipping parts rated at 250W TDP for many years now, so that comment might well be incorrect.

Anyway, what really matters here is whether Intel's cobalt decision is justified or not. My contention is it is not. As for EM, liners and caps would have been more than enough (in fact, more than designers could have taken full advantage of). As for resistivity, my conclusion is Intel ended up with cobalt wires slower than copper, and I have high confidence in my findings.

Is your analysis (or just opinion) different from mine? What are your expectations then? Is cobalt-filled interconnect (as opposed to using Co liners and caps) really required for EM-related considerations? And did Intel end up with faster cobalt wires, about the same speed, or slower than copper? And if faster/slower, then by what factor approx?

Because that's what really matters.

As far as I know, GPUs are massively parallel devices which run at much lower frequencies than CPUs. AMD Vega runs at 1.5-1.7 GHz, while Nvidia Pascal runs at 1.8-2.1 GHz. But none of these GPUs can compare to a CPU like Intel's Coffee Lake, which runs at 4.7-5.2 GHz. Intel's processes are specifically designed for their high-performance CPUs to enable such high clock speeds. I think IBM/GF are probably second only to Intel in terms of drive current, because IBM requires a high-performance process capable of 5+ GHz for their Power CPUs. So do not confuse TDP with chips which require very high clock frequencies and thus high drive current.
 
Right, and thank you for expanding in much detail, but it seems you keep missing my point: while a 36nm pitch certainly rules out single patterning, as it's just below the resolution limit even with dipole illumination, you can't be as certain wrt 40nm. Making assumptions such as yours can easily lead one to wrong conclusions. See? Facing a product whose measured pitch is close to the litho equipment's resolution limit, you just can't say for certain what patterning technique was used -- and a good case in point is Intel's 45nm process. While certainly doable with single patterning, they chose double nonetheless -- that's the illustration of my point.


An assumption like yours can simply fail any time it's applied in practice to make judgements about a process/product with known pitches but unknown technology.

The 45nm gate double patterning was simply the "poly cut", which is not limited by the gate pitch. Intel was OK with spending more on the gate layer, as it was their most critical one.
 
If the numbers make it obvious, where are they then? Show us, if not peer-reviewed research, then at least something credible -- both wrt dissipation into the motherboard and the amount of heat dissipated in the interconnect, which the cobalt gasket separates from the cooler (or are you [dis]missing the latter factor entirely?). Tell us, after all, something more specific about your own research than very general words, as that would be interesting indeed.

What really makes it obvious, though, and indeed doesn't take measurements to realize -- as I have already pointed out at RWT -- is that the motherboard is in fact a second cooler attached to the CPU from the other side. Pretty lousy, but still a cooler (the lower the power dissipated by the CPU, the higher the effect). It's enough to get hold of an IR cam and point it at the backside of the motherboard to see the picture quite clearly.

And what's probably more important here is the combination of good thermal conductivity and thermal capacitance formed by the Cu interconnect in the chip, its package, and the motherboard wires coming to the socket. It's this combo (with some contribution from silicon) which allows absorbing short bursts of Icc exceeding the max average Icc (sustainable long term). Use of a poor thermal conductor in the IC degrades both mechanisms.

Since -- as I said -- these thermal analyses were done on ASICs, which are by definition custom devices for a particular customer, it should be obvious that I'm not going to publish any detailed results. But the overall numbers are pretty obvious, so let me give you a few clues...

Silicon substrate thermal conductivity is 200W/mK and area is 100% of chip area, so thermal resistance from circuits down through substrate to back of die is extremely low. Copper thermal conductivity is higher (400W/mK) but effective area is a tiny fraction, especially for vias, so thermal resistance to top metal is much higher than to die backside, especially when lateral flow through metal is also considered. Then there are solder bumps (60W/mK) which only cover ~20% of the chip. Then the heat has to get through the package substrate (small copper area of unfilled vias) to the package surface, then down through the solder bumps to the PCB, then get into the PCB planes and spread out through them. If a socket is used (most CPUs) then the thermal resistance of this is much higher than solder balls since the contact area to the balls is tiny.

Take all this into account together with thermal resistances of TIM1 (die to lid), lid/heat spreader, TIM2 (lid to heatsink) and compare the heat dissipated through the two paths, and invariably >90% goes out through the back of the die into the heatsink, as any thermal simulation (Icepak or similar) will show -- and the figure is typically >95% with a low thermal resistance package design like a CPU, especially if socketed. So the cobalt "thermal gasket" can only affect the ~5% that flows out of the top of the chip, and since all the thick metal layers are still Cu (and there are lots of other resistances in series) the difference is likely to be 1%-2%.

You're correct that an IR thermometer will show the back of the PCB under the chip is hot, but this shows temperature, not heat flux -- it can be as hot as you want, but this doesn't show any significant heat is being dissipated there; it's like the difference between voltage and current.

So if you still don't believe me, go and build yourself a thermal model using realistic figures for all the above factors and see how ineffective heat dissipation up through the metal stack is -- and then you'll also see how Co makes almost no difference to overall thermal resistance compared to Cu, maybe 1 or 2 percent. I'm not going to do it for you; you're the one claiming this "Co thermal gasket effect" exists, so go and do the work to prove it.

[hint: it won't...]
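
For anyone who wants to sanity-check this without a full simulation, here is a minimal lumped sketch of the two heat paths. Every resistance below is an illustrative assumption, not a figure from any of the simulations mentioned above, but the conclusion is insensitive to the exact values:

```python
# Two parallel heat paths out of a flip-chip die, as lumped thermal
# resistances in K/W. All values are rough illustrative assumptions.
up_path = {"TIM1": 0.05, "lid": 0.02, "TIM2": 0.05, "heatsink": 0.15}
down_path = {"BEOL metal": 2.0, "bumps": 0.5, "package substrate": 1.0,
             "socket": 3.0, "PCB spreading": 2.0}

r_up = sum(up_path.values())      # die backside -> heatsink
r_down = sum(down_path.values())  # BEOL -> package -> PCB

# Heat splits between parallel paths inversely to thermal resistance.
frac_up = (1 / r_up) / (1 / r_up + 1 / r_down)
print(f"R_up = {r_up:.2f} K/W, R_down = {r_down:.2f} K/W")
print(f"Fraction of heat out through the heatsink: {frac_up:.1%}")  # ~97%
# Doubling or halving the BEOL term barely moves frac_up -- which is the
# point: a Co-vs-Cu change in the bottom metal layers is lost in the noise.
```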

As far as short bursts of high current are concerned the thermal mass of the Cu is negligible (a few percent) compared to the thermal mass of the silicon substrate, which in turn is much smaller than the thermal mass of the package lid, which in turn is much smaller than the thermal mass of the heatsink. In response to a high power pulse you get a series of cascaded time constants, but on a timescale of a second or so it's the heatsink that dominates, and it's the thermal mass of this that largely sets short-term overload capability.
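
The cascaded time constants can also be illustrated with a toy lumped RC chain (die, lid, heatsink); the capacities and resistances below are made-up, order-of-magnitude assumptions, not data from any real device:

```python
# Toy transient: a 200W burst into a die -> lid -> heatsink RC chain,
# integrated with simple forward Euler. All values are illustrative.
C = [0.1, 5.0, 400.0]  # heat capacities, J/K: die (Si), lid, heatsink
R = [0.1, 0.1, 0.3]    # resistances, K/W: die->lid, lid->heatsink, sink->air
P, dt = 200.0, 1e-3    # burst power (W), timestep (s)
T = [0.0, 0.0, 0.0]    # temperatures above ambient (K)

for _ in range(int(5.0 / dt)):  # simulate a 5-second burst
    q01 = (T[0] - T[1]) / R[0]  # heat flow die -> lid
    q12 = (T[1] - T[2]) / R[1]  # lid -> heatsink
    q2a = T[2] / R[2]           # heatsink -> ambient
    T[0] += (P - q01) * dt / C[0]
    T[1] += (q01 - q12) * dt / C[1]
    T[2] += (q12 - q2a) * dt / C[2]

print(f"After 5 s: die +{T[0]:.1f} K, lid +{T[1]:.1f} K, sink +{T[2]:.1f} K")
# The die settles in milliseconds and the lid in seconds, while most of the
# burst energy ends up in the heatsink, whose temperature barely moves.
```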

Again, you can simulate all this if you don't believe me. I have on multiple occasions, which is why I can say that your ideas don't reflect reality.

By the way, you do realise that all advanced high-power chips are "flip-chips" with the circuits (and BEOL interconnect) facing down towards the PCB, and the back of the die facing up towards the package lid and heatsink, so your "thermal gasket" is under the chip (in the high thermal resistance path) not between the chip and the heatsink (the low thermal resistance path)?
 
Right, and thank you for expanding in much detail, but it seems you keep missing my point: while a 36nm pitch certainly rules out single patterning, as it's just below the resolution limit even with dipole illumination, you can't be as certain wrt 40nm. Making assumptions such as yours can easily lead one to wrong conclusions. See? Facing a product whose measured pitch is close to the litho equipment's resolution limit, you just can't say for certain what patterning technique was used -- and a good case in point is Intel's 45nm process. While certainly doable with single patterning, they chose double nonetheless -- that's the illustration of my point.


An assumption like yours can simply fail any time it's applied in practice to make judgements about a process/product with known pitches but unknown technology.

I honestly have no idea what you are saying here: "Making assumptions such as yours can easily lead one to wrong conclusions. See?"

What assumptions and what wrong conclusion?

You had said previously that Intel could use SADP at 10nm for the 36nm pitch because it is only 10% less than 40nm. I said you can't, because 40nm is a cliff limit for SADP. You now appear to agree you can't use SADP for a 36nm pitch.

Intel uses SAQP for 36nm, that is known from their IEDM paper. GF uses SADP for their 40nm pitch, that is known from their IEDM paper. Samsung is going to use EUV for whatever their MMP ends up being per their VLSIT paper. The only "assumption" would be how TSMC is going to do their 40nm pitch (40nm per their IEDM paper and many other sources). I have strong reason to believe TSMC will use a litho-etch spacer based process based on inputs I have gotten but they haven't publicly announced it.

My point all along has been that Intel is the only company known to be using SAQP in the back-end and that SAQP in the back-end is known to be difficult and may explain the lithography problems they discussed on their earnings call. The only assumption in that statement is that SAQP is the source of their lithography yield issues and that is why I said may. Based on everything known about Intel's lithography strategy versus what the foundries are doing, SAQP in the back-end is the only known significant difference in Intel's lithography approach.

You keep bringing up Intel's 45nm node as if that is somehow relevant to this discussion. You do realize that Intel's 45nm node gate pitch was 160nm, not 45nm? At the 65nm node Intel used ArF for gate and at 45nm something about the pattern meant they couldn't get the fidelity they wanted with ArF (160nm is actually above the ArF limit). Their choice was either buy immersion tools for one layer or multi-pattern it with ArF. Intel chose to multi-pattern it. As I have been saying all along when you drop below a lithography limit you have to change your lithography technique.
 
Well, I published my views, and the people who disagreed published nothing on this topic, at least not in such depth, to begin with. And since I do have high confidence in my findings, I certainly feel in a position to stand by them.

Does that not sit well with you?

My analysis is not based on any assumptions at all; that would be very poor practice for this sort of work. There are certainly speculations on my part, and they are numerous, but they are strictly fact-based and clearly marked as such. One of the points of this work was tying what we know into a picture that makes sense.


As for resistance, Intel cited it as one of the reasons as recently as this year's IITC (if you read my writeup, you'll find it there):

"Cobalt metallization is introduced in the pitch quartered interconnect layers in order to meet electromigration and gapfill-resistance requirements."

However, one really needs to read the paper in order to find clues, and I was very lucky to get hold of it thanks to a forum member who simply sent it to me.


PS thanks for the thumbs-up, appreciated.

"Cobalt metallization is introduced in the pitch quartered interconnect layers in order to meet electromigration and gapfill-resistance requirements."

I don't think "gapfill-resistance" is referring to electrical resistance, as you appear to be assuming; I think it is referring to the difficulty of filling the narrow trenches and vias that the interconnect is fabricated in.

Here is a quote from Intel's IITC paper:

"Cobalt metallization is used in M0 and M1. Cobalt's properties provide the required excellent electromigration resistance for high performance designs. At the short-range routing distances typical of M0 and M1, the intrinsic resistance penalty of cobalt (vs. copper) is negligible, especially when the true copper volume at sub-40nm pitches is considered. Additionally, mobility of cobalt in low K dielectric is low that permits a simple titanium-based liner, thereby minimizing interlayer via resistance at these high via count layers."

This clearly indicates Intel is using cobalt for electromigration resistance, and that while the line resistance is higher, the via resistance is lower and offsets that at least to some degree -- consistent with the imec paper and my write-up of it.
 
There are so many things that could go wrong with this 10nm process: opens/shorts margin, gap fill, electromigration, planarization, you name it. If it were really as simple as a Co thermal budget thing, Intel would have sorted it out already. :)
 
I honestly have no idea what you are saying here: "Making assumptions such as yours can easily lead one to wrong conclusions. See?"

What assumptions and what wrong conclusion?

You had said previously that Intel could use SADP at 10nm for the 36nm pitch because it is only 10% less than 40nm. I said you can't, because 40nm is a cliff limit for SADP. You now appear to agree you can't use SADP for a 36nm pitch.
No, that's not what I said at all. I said that 40nm vs 36nm is not the same as SADP vs SAQP.

Intel uses SAQP for 36nm, that is known from their IEDM paper. GF uses SADP for their 40nm pitch, that is known from their IEDM paper. Samsung is going to use EUV for whatever their MMP ends up being per their VLSIT paper. The only "assumption" would be how TSMC is going to do their 40nm pitch (40nm per their IEDM paper and many other sources). I have strong reason to believe TSMC will use a litho-etch spacer based process based on inputs I have gotten but they haven't publicly announced it.

My point all along has been that Intel is the only company known to be using SAQP in the back-end and that SAQP in the back-end is known to be difficult and may explain the lithography problems they discussed on their earnings call. The only assumption in that statement is that SAQP is the source of their lithography yield issues and that is why I said may. Based on everything known about Intel's lithography strategy versus what the foundries are doing, SAQP in the back-end is the only known significant difference in Intel's lithography approach.

You keep bringing up Intel's 45nm node as if that is somehow relevant to this discussion. You do realize that Intel's 45nm node gate pitch was 160nm, not 45nm? At the 65nm node Intel used ArF for gate and at 45nm something about the pattern meant they couldn't get the fidelity they wanted with ArF (160nm is actually above the ArF limit). Their choice was either buy immersion tools for one layer or multi-pattern it with ArF. Intel chose to multi-pattern it. As I have been saying all along when you drop below a lithography limit you have to change your lithography technique.
I keep bringing up Intel's 45nm process because it illustrates the trap of your assumption: you can't simply look at the pitch and draw a correct conclusion on that basis as to what patterning technique was used.

Besides, we now know from Krzanich's last conference call that Intel had to resort to quintuple and sextuple patterning for 10nm; why do you prefer not to notice that? You fall into the trap of your assumption right there.

You can't rely on it. Relying on a known pitch to draw conclusions about patterning technique is as misleading as Bohr's metric with regard to density -- it simply doesn't tell you what you really need to know, since in Intel's case it doesn't correspond to the transistor density of their real products.
 