
Intel 10nm process problems -- my thoughts on this subject

-AM-

Hi folks. I have already posted this writeup at a forum in several instalments and now submit it here since I figured you might find it interesting. Some parts are verbatim, some are edited, but only for readability or to better articulate some thoughts. If you want to read the original, you can start with the most recent post and follow the links.

-----------
What started as another writeup continuing my previous posts with thoughts on Intel's 10nm process affair -- which I was sharing here mainly out of frustration with the total lack of analysis on this topic, whether due to pro-Intel agendas or a lack of the required background among tech writers -- somehow evolved into something different. It was originally prompted by Charlie's article, to which I wanted to add only a few comments in a short post; then I unexpectedly found some free time on my hands and thought I'd finally pull together some of my previous posts at RWT so they are in one place. Time still permitting, I added some structure and style to the text, giving it a more article-like look.

Have a nice read, folks.

Part 1: a nice catch from Charlie

A nice catch from Charlie who looked at ark pages for i3-8121U. After taking a look at them, a few more interesting things caught my eye, so here is some more food for thought, in addition to Charlie's article and what I posted already.

1. No datasheet (and not just on the ark pages; it appears to be missing from Intel's site entirely -- try searching and let me know if you manage to find it, because I didn't). Releasing a chip without a datasheet is a trait shared mainly by dirty little chip cloners from SE Asia, not by a reputable semiconductor manufacturer like Intel. Or at least like the Intel of old.

2. A number of missing features -- not just the GPU, but also configurable TDP-down, among others -- both of which even 3000-series Celerons (Intel's cheapest mobile offering) have.

3. Junction temperature is 105 °C. The problem is not the absolute value, but the fact that it's 5 °C up from the similar 14nm i3-8130U -- with the GPU fused off. I take it as corroboration of the thermal gasket effect that the use of cobalt in the metal stack has on temperatures, essentially making hot spots even hotter due to its 4x poorer thermal conductivity vs Cu.

Add to this the absence, to this day, of any big public announcement of 10nm products or of the process going into HVM (high-volume manufacturing) -- the off-hand comment made at CES hardly counts as such. Consider this: a company which was always full of pride in its ability to follow Moore's law for decades starts long-overdue 10nm deliveries without even talking about them -- no press release, nothing. That CES remark, and the i3-8121U scores which appeared on Geekbench with no products in sight until just recently, generated only questions and theories as to what's really going on, rather than belief that 10nm is finally ready and chips are shipping.

So why did Intel start supplying their debut 10nm chips in such a quiet fashion at all? Just because the chip is defeatured to sub-Celeron level and only for the bragging rights and a historical record that Intel's 10nm shipments started in Q2 2018?

I don't think there's much reason for Intel to do that; after all, they made the entry for i3-8121U only following the flood of news reports of the Lenovo notebook spotted in China, and we don't know if that CES comment was planned or sanctioned by the top management at all, as BK himself made no mention of it.

While I'm at it, kudos to either someone extremely eagle-eyed or an insider (I'm going with the latter) who disclosed it first -- and it was not one of the news sites often quoted as the original source for this news. The credit goes to forum member not_someone over at the Anandtech forums, who broke the news ahead of multiple sites. If you happen to know an even earlier source, please do mention it in the comments.

Getting back to the subject, I think Intel's main goals are different with this launch.

Subtle problem

I have already said that I don't buy the "low yield" stories Brian Krzanich tells, likely simply to avoid any questioning, and have speculated as to the possible reasons behind Intel's neverending 10nm problems.

To begin with, Intel introduced a whole bunch of innovations in their 10nm process, one of them being the copper-cobalt stack. It's beyond question that if M0 wire cross-sections continue shrinking, sooner or later alternatives with a shorter electron mean free path will offer better conductivity than copper; the question is whether the time for the switch from copper to some alternative, even in the lower levels of the stack, has come.


TSMC, Samsung and GF are all staying with a Cu stack at 7nm, and their minimum metal pitch (mmp) is the same as Intel's at 10nm -- 36-40nm. GF are only replacing W with Co for contacts (I haven't seen the original paper and wonder what the purpose is -- perhaps to reduce the Schottky barrier height and improve drive?) and making Co liners (probably replacing Ta in order to shrink liner thickness) and caps in several lower levels of the metal stack, and TSMC aren't doing even that, I think.

Regardless of the choice of replacement, Intel's switch from copper seems premature at best. Advances in copper deposition techniques make it possible to achieve resistivity as low as 3-4 µOhm·cm for <30nm CD -- lower than the bulk resistivity of cobalt (6-6.5 µOhm·cm) -- and Intel's competitors are probably well aware of that.
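To put those resistivity figures in perspective, here is a toy back-of-envelope comparison. The 20nm x 40nm line geometry is my own illustrative assumption, not taken from any paper; only the resistivity values come from the figures above.

```python
def resistance_per_um(resistivity_uohm_cm, width_nm, height_nm):
    """Wire resistance in ohms per micrometre of length."""
    area_cm2 = (width_nm * 1e-7) * (height_nm * 1e-7)   # nm -> cm
    ohm_per_cm = resistivity_uohm_cm * 1e-6 / area_cm2  # uOhm*cm -> Ohm*cm
    return ohm_per_cm * 1e-4                            # per cm -> per um

# assumed 20nm-wide, 40nm-tall line (aspect ratio 2) at ~40nm pitch
cu = resistance_per_um(3.5, 20, 40)    # advanced Cu fill, ~3-4 uOhm*cm
co = resistance_per_um(6.25, 20, 40)   # bulk Co, ~6-6.5 uOhm*cm
print(f"Cu: {cu:.1f} Ohm/um, Co: {co:.1f} Ohm/um, ratio: {co/cu:.2f}x")
```

With these assumptions the cobalt line comes out roughly 1.8x more resistive than the copper one for identical geometry.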

As for the choice of cobalt, one serious thing to consider is that unlike copper, it's brittle. Non-ferrous metals don't have an endurance limit, so one can only design around mechanical failure from thermal cycling with a properly chosen safety margin. That fully applies to copper just as well, but copper has been used for so long without major problems attributed to fatigue failure that I'd hazard a guess it responds with micro-yielding along grain boundaries once its fatigue strength falls below the stress resulting from thermal cycling. I wouldn't expect a graceful fatigue failure from cobalt by default (nor from other brittle materials in general).

Besides, cobalt's thermal conductivity is 4x less than that of copper. Using cobalt in the lower levels of the stack is like installing a thermal gasket between the transistors and the rest of the stack, effectively making hot spots even hotter.
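The gasket analogy can be put in numbers with a one-dimensional conduction sketch. The heat flux and layer thickness below are purely illustrative assumptions of mine; only the 4x conductivity ratio comes from the material properties discussed above.

```python
K_CU = 400.0   # W/(m*K), copper (approx.)
K_CO = 100.0   # W/(m*K), cobalt, ~4x lower

def delta_t(heat_flux_w_m2, thickness_m, k_w_mk):
    """Temperature drop across a slab in 1-D steady-state conduction."""
    return heat_flux_w_m2 * thickness_m / k_w_mk

q = 1e8       # W/m^2, assumed local hotspot flux (illustrative)
t = 100e-9    # 100 nm of lower-level metal (illustrative)

print(f"dT across Cu: {delta_t(q, t, K_CU) * 1e3:.0f} mK")
print(f"dT across Co: {delta_t(q, t, K_CO) * 1e3:.0f} mK")
```

Whatever the actual local flux is, the temperature drop across the cobalt portion of the stack is 4x that of copper for the same thickness, which is the whole point of the gasket analogy.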

It might be the case that Intel's resulting problems are such that one of the things they are facing is mind-boggling variation in the reliability and life of their 10nm samples -- some chips work just fine for months, while others fail or become glitchy after weeks under test, and still others crumble in days. While that's just a hypothesis, it is consistent with two things we know:

First, the repeated promises that 10nm HVM is just around the corner, as execs including BK probably sincerely believe that something as minor as the remaining reliability issues will be fixed Real Soon Now.

Second, the undercover release of their first 10nm chip: no public announcement, no datasheet, and shipping through a single OEM in China. If my hunches are correct, then in a situation like this any manufacturer would be scared like hell of releasing such a nightmare in high volume and with a big announcement, for reasons which need no explanation.

So what can help when one stepping after another fails to fix those "minor" remaining reliability issues? With a process as complex as Intel's 10nm, fab time is probably around 2.5 months (I don't have exact figures, of course -- that's assuming they work fast, averaging a layer per day), and after that you need time for those lengthy reliability tests to see if you can finally ship chips in volume -- and every time the answer from the QA team is "no".
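For what it's worth, the 2.5-month figure follows from simple arithmetic, assuming roughly a layer per day as above and a layer count on the order of 75 -- my guess, not a published number:

```python
LAYERS = 75          # assumed mask/metal layer count (my guess)
DAYS_PER_LAYER = 1   # "averaging a layer per day"

fab_days = LAYERS * DAYS_PER_LAYER
print(f"{fab_days} days ~ {fab_days / 30:.1f} months in the fab")
```

And that is before the weeks of reliability testing needed to learn whether the stepping worked, which is what makes every failed iteration so expensive in calendar time.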

Solution

One thing that comes to mind is a magic wand. No, stop laughing, I'm serious!

If my theories and speculations turn out in the end to be correct, then Intel's brainpower apparently doesn't realize what kind of wall they are up against as a result of their decision to go with a heterogeneous copper-cobalt metal stack.

If, in what must have looked to some people at Intel like a touch of genius, you build a stack
- using metals with significantly different thermal expansion coefficients (16.5 ppm/K for Cu vs 12-13 for Co),
- with one of them brittle, and with 4x worse thermal conductivity on top of that, making hot spots even hotter,
how in the world are you going to fix that?!
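To give a feel for the scale of the mismatch, here's a crude constrained-interface stress estimate. The modulus and temperature swing are textbook-order assumptions of mine; real stack mechanics are far more complicated.

```python
E_CO = 209e9         # Pa, Young's modulus of cobalt (approx. textbook value)
ALPHA_CU = 16.5e-6   # 1/K, CTE of copper
ALPHA_CO = 12.5e-6   # 1/K, CTE of cobalt (middle of the 12-13 range)
DELTA_T = 80.0       # K, assumed junction temperature swing per cycle

# fully constrained interface: sigma ~ E * delta_alpha * delta_T
sigma_pa = E_CO * (ALPHA_CU - ALPHA_CO) * DELTA_T
print(f"~{sigma_pa / 1e6:.0f} MPa of cyclic stress at the Cu/Co interface")
```

Tens of MPa per thermal cycle, applied millions of times over a chip's life, is exactly the regime where fatigue of a brittle metal becomes a concern.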

Even if your eventual indicator of chip health is its performance on a test rig under conditions closely matching those likely in the real world, the rig is one thing: every single setpoint is your choice, and even then the best you can manage is a stepping which begins to look, if not yet sellable, then at least reliable as long as you don't hit some nasty corner cases. The real world is a different thing: where thermal cycling is concerned, every app has unique "fingerprints" -- it exercises the various blocks of a chip in a specific manner, activating them with a certain probability, and even that probability is usually a variable, not a constant.

See what I'm getting at? Different people use different apps; there are millions of apps and billions of people; some never power down their PC, while those working on the run can flip the switch a dozen times a day. Wanna simulate that, or build a math model for the distribution of service life and probability of failure? Good luck with that.
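If you want to see why such modeling is hopeless, here's a toy Monte Carlo with a made-up fatigue budget and a made-up spread of user behaviour -- every number in it is invented for illustration:

```python
import random

random.seed(0)
CYCLES_TO_FAILURE = 20_000   # invented fatigue budget of a part

lifetimes_years = []
for _ in range(10_000):
    # users' power-cycling habits vary wildly: lognormal spread (invented)
    cycles_per_day = random.lognormvariate(1.0, 1.0)
    lifetimes_years.append(CYCLES_TO_FAILURE / cycles_per_day / 365)

lifetimes_years.sort()
print(f"median life: {lifetimes_years[5000]:.1f} years")
print(f"worst 1%:    {lifetimes_years[100]:.1f} years")
```

The point is not the absolute numbers but the spread: even under this trivially simple life model, user behaviour alone puts an order of magnitude between the median unit and the unlucky tail -- and the real physics is far messier than a single fatigue budget.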

Besides, if you know for a fact you have corner cases where your chip crumbles, you simply can't guarantee they won't happen with real applications, no matter how unique and improbable those cases look (let alone when you know there do exist killer apps, pardon the pun). To add insult to injury, in the real world fans tend to slow down or stop, TIM to dry out, VRs to fail in different manners, dirt-cheap or faulty electrolytic caps to age quickly, and the list goes on. All of this starts to matter a lot more for a chip with a razor-thin reliability margin.

One thing you can still try, in the situation I think Intel found themselves in, is field-testing this flaky nightmare. You take your best part which begins to look alive on the rig (never mind that it happens only with the GPU fused off), ship it, and wait and see how high the death rate turns out to be. Who knows, maybe you get lucky.

Needless to say, it's better to do it without any announcements and in some remote place with a language other than English, so that the risk of possible reputational damage is kept to a minimum.

One interesting question

An interesting question here is the price at which this chip is sold to Lenovo. Since a problem with components tends to have a seriously detrimental effect on the brand name regardless of what common sense might suggest, what do you think happens when an OEM is offered an unknown CPU
- which is not on the supplier's price list even after "launch",
- for which there's no datasheet even after "launch",
- which is fabbed on a new process which is long overdue and was not even officially announced as going into HVM,
- and to top things off, whose GPU is fused off for some reason, yet it's still 15W like its otherwise similar 14nm counterpart, and Tj is bumped up to 105 °C?

Right, questions arise, and many of them. Starting with "WTF is all of this supposed to mean?" and "Why should we bother at all and bet our reputation on this strange piece of something, among other things crippled to sub-Celeron level, yet still branded as Core i3?".

That being said, what do you think would be a reasonable agreed-on price for the i3-8121U when dealing with someone of Lenovo's caliber? Your guess is as good as mine, and quite frankly, mine is that the "sales" price of the i3-8121U to Lenovo is deeply sub-zero.

What Intel is likely busy doing these days with their first 10nm product is not selling for profit, or even shipping for zero-profit revenue as one might think, but
- supplying what really counts as rejects which belong in a scrapyard by any quality standard (certainly by the Intel of former years' standards), but which -- I think Charlie is spot-on there -- are the best (or perhaps the only?) thing they can offer at the moment,
- paying up dearly for these supplies to make their way into Lenovo notebooks,
- and expecting to use Chinese computer users as field-testers of this flaky nightmare.

P.S. There are still many interesting questions: will Intel give up on the CuCo stack and join the rest of the industry, or will they persist and try to take that wall, erected by their own hands, by storm? If so, will they finally succeed or not? What do you think?

Note that regardless of the exact nature and source of the problems Intel faces, this has been dragging on for about 2.5 years now, since 10nm was originally scheduled for 2015 (I've lost count of how many times the 10nm launch has been pushed back already), and with Intel's current 2019 launch plans we're already talking about a 3-4 year delay. And that's not all: process development programs at Intel run for about 4 years, and those 4 years do not include pathfinding and component research. That's the scale of investment that's not even starting to pay off.

Part 2: What about others?

As mentioned above, for their 7nm processes GF are using cobalt for liners, caps and contacts, and TSMC aren't doing even that. At the recently held IITC, IBM's Dan Edelstein presented a keynote on the 20-year history of Cu BEOL development at IBM and sent a clear message that copper interconnect can still be extended.

imec submitted several papers to IITC, and one particular point of their research was the relative performance of Cu, Co and Ru interconnect. Their results, which they also shared via a PR, show that copper outperforms cobalt for wire cross-sections down to 300 nm², or 12nm linewidth, which corresponds to about the 3nm node. An interesting bit that follows from their graph is that there's no cross-section range where cobalt wins at all -- their lowest-resistance samples are copper and ruthenium only, across the whole range of cross-sections they considered.
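As a quick sanity check on those imec numbers (my arithmetic, not theirs), the quoted cross-section and linewidth imply a line geometry that looks entirely typical for damascene wiring:

```python
cross_section_nm2 = 300   # imec's quoted crossover cross-section
linewidth_nm = 12         # imec's quoted crossover linewidth

height_nm = cross_section_nm2 / linewidth_nm
print(f"implied line height: {height_nm:.0f} nm, "
      f"aspect ratio: {height_nm / linewidth_nm:.1f}")
```

An aspect ratio around 2 is the usual regime for damascene lines, so the quoted crossover point isn't an artifact of some exotic geometry.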

Looking at the papers submitted to IITC, one thing that struck me personally is the relatively small number of papers dedicated exclusively to cobalt-based solutions, the really huge number of papers dealing with the extension of copper interconnect, and the very high interest in ruthenium among presenting researchers. But for the record, I don't have much belief in pure ruthenium replacing copper for high-volume products. Liners, caps, contacts -- maybe, but even that is in question (see 2-5).

Here is a selection of IITC papers I made for my own use, with emphasis on some abstracts.

2-2 High-Aspect-Ratio Ruthenium Lines for Buried Power Rail
2-4 Subtractive Etch of Ruthenium for Sub-5nm Interconnect
2-5 Impact of liner metals on copper resistivity at beyond 7nm dimensions
"The impacts of ruthenium and cobalt liners on copper resistivity have been investigated at beyond 7nm dimensions. Liner metal conduction was carefully evaluated in a Cu resistivity derivation using the temperature coefficient of resistivity (TCR) approach. Cu resistivity with Ru liner is higher than with a Co liner by 10-15%, which is verified by RC plot. The resistivity difference is attributed to interface scattering and possibly grain boundary scattering. Interface ab initio calculations show 3-7% increase of Cu resistivity from Co liner to Ru liner."
5 CMOS/Cu BEOL Technology in Manufacturing: 20 years and Counting / IBM
"Now in its 10th generation of CMOS manufacturing, and 12th generation in the research phase, we are finally starting to see changes beyond evolutionary in the materials and processes, with the end in sight for the Cu finest wires in perhaps 1-2 more generations. However, our current data for recent innovations still suggests this Cu/alternate metal crossover point may be pushed off at least once more beyond some predictions."
6-1 Ru liner scaling with ALD TaN barrier process for low resistance 7 nm Cu interconnects and beyond
"Low resistance Cu interconnects with CVD Ru liner have been demonstrated for 7 nm node."
6-2 Modified ALD TaN Barrier with Ru Liner and Dynamic Cu Reflow for 36nm Pitch Interconnect Integration
7-1 Microstructure Evolution and Implications for Cu Nanointerconnects and Beyond
7-3 Pathfinding of Ru-Liner/Cu-Reflow Interconnect Reliability Solution
10.3 Modulation of Within-wafer and Within-die Topography for Damascene Copper in Advanced Technology
10.4 The impact of solute segregation on grain boundaries in dilute Cu alloys
10.11 Effective Methods Controlling Cu Overburdens for Cu RDL Process
10.13 Oxidation Structure Change of Copper Surface Depending on Accelerated Humidity
10.16 Integration of Metallization Processes in Robust Interconnects Formation for 14 nm Nodes and beyond
11-3 PVD-Treated ALD TaN for Cu Interconnect Extension to 5nm Node and Beyond
12-1 Alternative Metals: from ab initio Screening to Calibrated Narrow Line Models (Invited)
"We discuss the selection and assessment of alternative metals by a combination of ab initio computation of electronic properties, experimental resistivity assessments, and calibrated line resistance models. Pt-group metals as well as Nb are identified as the most promising elements, with Ru showing the best combination of material properties and process maturity. An experimental assessment of the resistivity of Ru, Ir, and Co lines down to ~30 nm² is then used to devise compact models for line and via resistance that can be compared to Cu predictions. The main advantage of alternative metals originates from the possibility for barrierless metallization."
13-1 Resistance Scaling of Cu Interconnect and Alternate Metal (Co, Ru) Benchmark toward sub 10nm Dimension
13-2 Embedded metal voids detection to improve Copper metallization for advanced interconnect
13-3 Damascene benchmark of Ru, Co and Cu in scaled dimensions
"The alternative metals Ru and Co are benchmarked to Cu in a damascene vehicle at scaled dimensions. Ru and Co are found to be superior in line resistance for trenches smaller than 250nm2. The work is complemented with a via R modelling and EM performance comparison. Here, the barrierless Ru is superior at both levels."

As for cobalt, there were only a few papers, none of which even mentions in its abstract any advantage vs. copper or alternatives to copper (with the exception of Intel's paper).

2-3 Electroless Cobalt Via Pre-Fill Process for Advanced BEOL Metallization and Via Resistance Variation Reduction
4-2 Extreme Contact Scaling with Advanced Metallization of Cobalt
7-2 Electromigration and Thermal Storage study of Barrierless Co vias
10.17 Electrolytic Cobalt Fill of Sub-5 nm Node Interconnect Features
11-1 Interconnect Stack using Self-Aligned Quad and Double Patterning for 10nm High Volume Manufacturing (Invited) / Intel
"Cobalt metallization is introduced in the pitch quartered interconnect layers in order to meet electromigration and gapfill-resistance requirements."

Part 3: Intel 10nm IITC paper and AMAT's pubs provide some clues

Thanks to a forum member at RWT, I managed to take a look at Intel's IITC paper, and it turns out it contains a few interesting clues. As a side note, there is not a single word about the ruthenium that Techinsights found in the i3-8121U.

The paper talks about increased EM resistance, all right, but according to their own graph, Co caps -- what GF are doing and, btw, what Intel says they're doing too -- deliver 1000x higher 50% TTF, and pure Co is 50x on top of that, which renders it unnecessary. Sure, 50,000x is better than 1000x, but 1000x is already honking huge overkill -- there's simply no way to take full advantage of it, so why seek more?

Another important bit is "At the short range routing distances typical of M0 and M1, the intrinsic resistance penalty of cobalt (vs. copper) is negligible, especially when the true copper volume at sub-40nm pitches is considered. Additionally, mobility of cobalt in low K dielectric is low that permits a simple titanium-based liner, thereby minimizing interlayer via resistance at these high via count layers."

Quite frankly, it looks like they are not even trying too hard to conceal that they found Co indeed loses to Cu in lines, but at least they succeeded with vias. This is actually consistent with recent results from
a) imec -- Co loses to Cu for lines all the way to 3nm (300 nm² cross-section, 12nm linewidth), but wins for vias, and
b) AMAT -- in their recent launch of the cobalt suite they gave specific resistance figures for the replacement of W contacts (also a vertical structure, where the cross-section gap between Cu(W) and Co due to the thinner liner is apparently significant enough to justify the switch), but were very shy of claiming any win for lines, despite their ongoing advocacy of the CuCo stack (coincidentally, exactly like what Intel is doing).

One thing I don't really appreciate about AMAT is not their advocacy of cobalt per se (the reasons are obvious), but the fact that they are not entirely straightforward in this regard. Not downright dishonest, only somewhat misleading -- a quick glance at their pictures easily creates a wrong impression; you have to closely read the "fine print", and even that will not necessarily clear everything up.

For example, take a look at Jonathan Bakke's recent blog post, namely Fig. 3: what's your impression from a quick glance concerning the impact of cobalt on performance? It's a win, right?

But diving into the fine print below, we discover that "While copper as a bulk metal has a lower resistance than cobalt, there is a crossover point in the 10–15nm range where cobalt interconnects have lower resistance than copper."

But 10-15nm CD (or ca. 20-30nm metal pitch) is probably several nodes away (the min metal pitch for 7nm (foundry) and 10nm (Intel) processes is 36-40nm -- take half of that for CD -- and how mmp will actually scale past 7nm is anyone's guess). Still, AMAT's estimate of the CD at which the resistivity crossover occurs agrees with the above-mentioned research results published by imec, who estimate it will happen at 12nm CD, which they project for the 3nm node.
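The pitch-to-CD arithmetic I keep using can be spelled out explicitly; it assumes equal lines and spaces, which is the usual simplification:

```python
def cd_from_pitch(mmp_nm, line_fraction=0.5):
    """Critical dimension (linewidth) from minimum metal pitch,
    assuming lines and spaces split the pitch equally."""
    return mmp_nm * line_fraction

# today's 7nm-foundry / 10nm-Intel pitches down to the crossover range
for mmp in (40, 36, 30, 24):
    print(f"mmp {mmp} nm -> CD ~{cd_from_pitch(mmp):.0f} nm")
```

On this arithmetic, the 10-15nm crossover CD corresponds to pitches of roughly 20-30nm, well below today's 36-40nm, which is exactly why the crossover is several nodes away.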

Going further, we read "Also, as mentioned earlier, cobalt works with thinner barriers than copper and as a result, the vertical resistance in the via is lower for cobalt interconnects. For these reasons, cobalt helps unlock the full potential of transistors at the 7nm foundry node and below."

Again consistent with imec's research, but unless you're in the loop on this sort of stuff, you simply won't be able to read the hidden message here, which is that "cobalt wins for vias" in fact means "cobalt loses for lines".

Right next after this paragraph we read "Finally, we have demonstrated the value of cobalt using EDA simulations of a 5-stage ring oscillator circuit. We showed that for a range of CDs simulated, the performance of the circuit with cobalt was better than for tungsten. In fact, this benefit for cobalt increases as CDs shrink, with a highly significant improvement of up to 15 percent in chip performance."

Okay, so we finally have something that looks like cobalt driving a nail into the non-cobalt interconnect's coffin? Not the copper coffin, but at least the tungsten used for contacts, right?

Well, let's take a closer look at the picture (Fig. 3). First, the 15% advantage corresponds to 6nm CD, or a min metal pitch of about 12nm. Which node is that, and how many years from now? Anyway, 7nm (Intel 10nm) processes have a min metal pitch of 36-40nm, or about 20nm CD, and the win at that CD is about 2% according to their chart.

But if you think the surprises are over at this point, and cobalt finally eked out a 2% win, you're wrong. To realize that, we need to go to the fine print on the picture: "compares cobalt with tungsten transistor contacts. Excludes via and M1 effects".

Why does that matter? Because Co line resistance is in fact higher than Cu line resistance down to about 12nm CD (imec) or 10-15nm (AMAT)! I must say I'm surprised they decided to exclude vias from the comparison, but that's probably because their impact on total delay is simply too small to bother including. So what happens to the 2% win if we factor in the slower interconnect? Make a wild guess.

Now let's take a look at Jonathan Bakke's recent interview: "As far as pure material cost is concerned, cobalt is three times more expensive than tungsten, but it remains inconvenient for us to comment on the actual cost that also involves the cost of collaborative R&D with customers."

Well, that's quite fair, Jonathan, but in my opinion it would be fairer if you also pointed out that cobalt is 12-13x more expensive than copper.

So as far as resistivity alone is concerned, by all accounts now -- including Intel's and AMAT's own (as long as you meticulously read their fine print) -- Co interconnect (vias + lines) at 36-40nm mmp is a one-step-forward, two-steps-back kind of thing. And not even two but in fact many steps back, if you consider the ratio of line length to via height.
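A toy model shows why the line/via trade goes the wrong way. The relative resistance numbers below are invented for illustration; only their direction (Co worse in lines, better in vias) reflects the published results discussed above.

```python
# relative (dimensionless) resistances; magnitudes are invented
R_LINE_CU, R_LINE_CO = 1.0, 1.8   # per um of line: Co worse
R_VIA_CU, R_VIA_CO = 1.0, 0.7     # per via: Co better (thinner liner)

def path_resistance(line_um, n_vias, r_line, r_via):
    """Total relative resistance of a route: lines plus vias in series."""
    return line_um * r_line + n_vias * r_via

# even a short local route has far more line than via in it
cu = path_resistance(5.0, 2, R_LINE_CU, R_VIA_CU)
co = path_resistance(5.0, 2, R_LINE_CO, R_VIA_CO)
print(f"relative path resistance: Cu {cu:.1f}, Co {co:.1f}")
```

Because routed line length dwarfs via height, the line term dominates the sum, and the via win can't buy back the line loss -- the "many steps back" point in number form.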

But perhaps more importantly, the authors of Intel's IITC paper don't even touch the issues which, in my not so humble opinion, are the core reasons for their neverending problems: mechanicals (fatigue failure of Co interconnect due to its brittleness and CTE mismatch) and the thermal gasket effect (more severe hotspotting due to the 4x worse thermal conductivity of Co).

Contributing to these problems is the intrinsic difficulty of
a) modeling, as we're talking about fatigue failure, which is still a grand challenge except in trivial cases, and
b) merely detecting this bug, let alone finding a cure for it.

Some people have mentioned COAG and SAQP as the reason for Intel's problems. They can be blamed, but in a somewhat special sense: printability is what any process technology development program begins with (not ends with!), and these issues are among the simplest to catch. More importantly, Brian Krzanich has already disclosed that Intel resorted to quintuple and sextuple patterning -- something not found in any of Intel's papers or presentations, afaik. So yes, it certainly appears that SAQP was identified as a source of problems very late in process development, but I think any remaining printability issues must have been rectified by moving to 5x and 6x patterning.

As unbelievable as it would have seemed many years ago, the current generation of Intel's process engineers (whether too old and uninterested or, vice versa, too inexperienced) appears simply unprepared for the challenges posed by the process they're developing. OTOH, they are but victims of someone's decision to employ the heterogeneous CuCo stack -- be it due to arrogance, lack of skill, or something else.

The point of "something else" here is that it's not clear where exactly the idea of the heterogeneous CuCo stack comes from. Was it born inside Intel? Not necessarily, in my opinion. AMAT's cobalt-related developments have a long history. Could it be that they thought using cobalt in several lower levels of the metallization stack was a brilliant idea and started pitching it to fabs and chipmakers, finally succeeding with Intel? Before dismissing this as yet another conspiracy theory, keep in mind Intel has a long history of buying into crap pitched to them (Itanium comes to mind), so in my opinion anything is possible.

Again, cobalt is 12-13x more expensive than copper, and I guess both suppliers and AMAT would be very happy if the metal were employed not only for liners, caps and contacts, but adopted by the industry as a replacement for copper in several levels of the interconnect stack, as they're advocating. Lots of $$$ in there -- and a lot at stake for some people at Intel now.

But given Intel's progress with the 10nm process, the current availability of analysis reports from Techinsights, and GloFo's initial bet on the CuCo stack (as follows from their 2016 IEDM 7nm paper) together with the apparent revision of those initial plans, I think there's simply no way the industry is going to repeat Intel's mistake. Not for the nearest nodes, and likely never.

The jury is still out on whether Samsung revised their plans to employ a heterogeneous CuCo metal stack for their 7nm process; given the lack of news, I take it they decided not to disclose it during their VLSI Symposium talk. I have a hunch that they did revise their plans, but if they didn't, I'm afraid they're heading for the same troubles as Intel.

There's no doubt in my mind there are enough bright engineers at Intel who understand all of this very well and simply wonder why they should bother at all and waste their lives teaching this pig to fly. Team morale suffers heavily in such situations -- it happened at Intel with Merced 20-25 years ago, and 10nm is Intel's process-technology Merced all over again.

Again, my big thanks to RWT forum member who kindly shared Intel's IITC paper with me.

And in conclusion I'd like to say: please don't take this writeup as some kind of ultimate, undisputable truth from an expert -- I started sharing my thoughts on Intel's 10nm tech on a forum mostly out of frustration at the total lack of analysis on this interesting topic. In posting it here I hope to spark some interest and hear from folks, first of all from three groups: those who, like me, have put some thought into this matter and have something interesting to say; those who can speak from experience (Daniel, Scotten, maybe others); and those with inside knowledge to share. But of course any other comments are welcome as well.
 
The inclusion of 36 nm metal pitch, if addressed by SAQP or EUV, brings in a whole new set of issues. So Samsung and Intel experience their current delays.
 
Well, unless TSMC's mmp for the 7nm process is different from their IEDM paper, it's 40nm, and they're already fabbing 7nm products -- at least the A12, maybe more. I don't see any good reason why a 10% difference in mmp would send things flying off a cliff.

Second, Samsung isn't experiencing any delay with the intro of 7nm at all -- they recently confirmed they're on track to launch 7nm this year, probably meaning closer to year end, which should be in time for the S10 launch -- be it with the SDM855, their own Exynos 9820, or maybe both, as was the case with the S9. Of course, the world's first intro of EUV for HVM is a big deal, and I wouldn't be surprised if their 8LPP process (completely ready) was designed mainly as a fallback option in case their EUV line is not ready for HVM in time.

And on a final note, 36nm mmp was Samsung's early 7nm plan (2016 IEDM paper) -- we don't know whether they revised it or not. For the record, I have a hunch they will relax it to 40nm; I expected their VLSI talk to shed some light on this issue, but haven't seen it addressed in articles.
 
It's true that TSMC 7nm is currently believed to be 40nm, and uses neither EUV nor SAQP. 36nm is often expected to be too tight for SADP, so Intel switched to SAQP, which has an extra spacer process. There are fewer degrees of freedom to define metal and dielectric linewidths, because the second spacer process depends on the first.

Second, Samsung is not experiencing any delay with the intro of 7nm at all -- they recently confirmed they're on track to launch 7nm this year, probably meaning closer to year end, which should be in time for the S10 launch -- be it with the SDM855, their own Exynos 9820, or maybe both, as was the case with the S9. Of course, the world's first intro of EUV for HVM is a big deal, and I wouldn't be surprised if their 8LPP process (completely ready) was designed mainly as a fallback option in case their EUV line is not ready for HVM in time.

And on a final note, the 36nm mmp was from Samsung's early 7nm plans (2016 IEDM paper) -- we don't know whether they have revised them or not. For the record, I have a hunch they will relax it to 40nm; I expected their VLSI talk to shed some light on this issue, but I haven't seen it addressed in articles.

Samsung's marketing and technical reports are not in sync. One source indicates EUV yield is "very low": https://www.fool.com/investing/2018/06/25/qualcomm-inc-to-tap-tsmc-for-7-nano-chip-productio.aspx This could be from various issues, like stochastics, 3D mask effects, resists, no pellicle, etc., which won't be resolved for a while.

For example, we also heard last month that they were still working on resist: https://www.eetimes.com/document.asp?doc_id=1333318

Despite the use of EUV, it appears their current 7nm demo does not show competitive transistor density compared to the others.

The timing is hardly coincidental: the report of Qualcomm switching back to TSMC for 7nm came right after this conference.
 
Subtle problem

I have already said that I don't buy the "low yield" stories Brian Krzanich tells, likely just to deflect questioning, and I have speculated about possible reasons behind Intel's never-ending 10nm problems.

To begin with, Intel introduced a whole bunch of innovations in their 10nm process, one of them being a copper-cobalt stack. It's beyond question that if the M0 wire cross-section continues shrinking, sooner or later alternatives with a shorter electron mean free path will offer better conductivity than copper; the question is whether the time for the switch from copper to some alternative, even in the lower levels of the stack, has come.

So I get your gist, that the cobalt metallization is the serious downfall for Intel. It's very possible.
 
Lacking any of the qualifications you listed above, I still wonder what happens next. Can INTC change the process back to a better metal mix, say, in a year? BK was probably fired for something tangible beyond the stated reasons; something the board knows but we have not been told. I bet the news gets out by their Q3 results; even their Q2 call should be really weird. Does the market just move to AMD? ARM? NVDA? How many AMD chips could GF and TSM make in a year, anyway? This is horrible news for the hyperscalers, as their capex is going to skyrocket without a Plan B. But then, "trends that are unsustainable will end"; Moore's Law could not go on forever. At any rate, we need to call shrinks something else now that the mathematical relations highlighted in Gordon Moore's 1965 note certainly no longer hold.

Can any of this impact 3D X-point? That seems to be another never-ending delay.
 
It's true TSMC's 7nm is currently believed to be 40 nm, and it uses neither EUV nor SAQP. 36 nm is generally expected to be too tight for SADP, so Intel switched to SAQP, which adds an extra spacer process. There are fewer degrees of freedom to define the metal and dielectric linewidths because the second spacer process depends on the first spacer process.
If your concern is resolution, then you clearly have something more to worry about -- their fin pitch is 34 nm.

Samsung's marketing and technical reports are not in sync. One source indicates EUV yield is "very low": https://www.fool.com/investing/2018/06/25/qualcomm-inc-to-tap-tsmc-for-7-nano-chip-productio.aspx This could be from various issues, like stochastics, 3D mask effects, resists, no pellicle, etc., which won't be resolved for a while.
That source must have been David tweeting from the VLSI Symposium. :) Their self-reported >50% yield for the SRAM test vehicle they wanted to present there can hardly be a good basis for judgment calls with respect to the health of the process.

For example, we also heard last month that they were still working on resist: https://www.eetimes.com/document.asp?doc_id=1333318
I seriously doubt they would have announced a month ago the start of EUV 7nm this year if they knew full well they couldn't do it. Such misleading public statements could cost them dearly -- they would lose credibility completely, so I doubt they flat out lied.

Despite the use of EUV, it appears their current 7nm demo does not show competitive transistor density compared to the others.
What density figures are you referring to?

The timing is hardly coincidental: the report of Qualcomm switching back to TSMC for 7nm came right after this conference.
I agree the flight of Qualcomm to TSMC is a troubling sign, but we don't know the reason -- it's not necessarily unacceptable results with preproduction wafers or a delayed start of HVM; it could simply be about money.
 
If your concern is resolution, then you clearly have something more to worry about -- their fin pitch is 34 nm.

That's a good point. The patterning of active silicon (portions of the substrate to be isolated) has been using SAQP for a while, in 1X DRAM and TSMC 10FF. There, the fins are the same size and spacing, and the cuts are more simply laid out; the fins are patterned by the final spacer width. For metal, it's tricky because the metal isn't directly etched, so the final spacer must actually pattern the dielectric. The metal is more correlated with the first spacer. So neither the metal width nor the dielectric width is directly related to the lithography anymore. In SADP, at least one of those two widths still preserves a direct connection.

That source must have been David tweeting from the VLSI Symposium. :) Their self-reported >50% yield for the SRAM test vehicle they wanted to present there can hardly be a good basis for judgment calls with respect to the health of the process.


I seriously doubt they would have announced a month ago the start of EUV 7nm this year if they knew full well they couldn't do it. Such misleading public statements could cost them dearly -- they would lose credibility completely, so I doubt they flat out lied.

It will be clearer after we get to read the paper. But in the conference preview, the key points, such as the MMP, were not highlighted. 36 nm was last published in a Common Platform paper as early as IEDM 2016. I got the impression their 7LPP is mainly a shrink against their own 10LPP, but not necessarily denser than the rival foundries' processes.

Some other details were announced at their Samsung Foundry Forum: https://www.semiwiki.com/forum/content/7491-top-10-highlights-samsung-foundry-forum.html

I thought the throughput was still low, not at HVM level: 1300 WPD (effectively 50-60 WPH). For single exposure, the fastest immersion tools are over 6000 WPD (~250 WPH or more).
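For readers keeping score, the WPD/WPH figures above are just a unit conversion; here's a quick sanity check (assuming round-the-clock operation, which flatters the WPD numbers):

```python
# WPD (wafers per day) to WPH (wafers per hour), assuming 24-hour
# operation; real tools lose time to maintenance, so the actual peak
# WPH is somewhat higher for a given observed WPD.
def wpd_to_wph(wpd: float, hours_per_day: float = 24.0) -> float:
    return wpd / hours_per_day

assert 50 <= wpd_to_wph(1300) <= 60   # the quoted EUV figure: ~54 WPH
assert wpd_to_wph(6000) == 250.0      # fastest immersion, single exposure
```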


What density figures are you referring to?
The transistor density figures are compiled here, current as of two days ago: https://www.semiwiki.com/forum/content/7544-7nm-5nm-3nm-logic-current-projected-processes.html


I agree the flight of Qualcomm to TSMC is a troubling sign, but we don't know the reason -- it's not necessarily unacceptable results with preproduction wafers or a delayed start of HVM; it could simply be about money.
We know they worked with both Samsung and TSMC, so they are in a position to compare PDKs, etc. The risks of EUV are quite public, though mostly in SPIE papers.
 
That's a good point. The patterning of active silicon (portions of the substrate to be isolated) has been using SAQP for a while, in 1X DRAM and TSMC 10FF. There, the fins are the same size and spacing, and the cuts are more simply laid out; the fins are patterned by the final spacer width. For metal, it's tricky because the metal isn't directly etched, so the final spacer must actually pattern the dielectric. The metal is more correlated with the first spacer. So neither the metal width nor the dielectric width is directly related to the lithography anymore. In SADP, at least one of those two widths still preserves a direct connection.
As a matter of fact, we already know that Intel doesn't use SAQP -- they had to resort to quintuple and sextuple patterning instead, so you're really looking at issues that are secondary compared to much more serious problems: the higher resistance of their cobalt metals (which I would actually consider obvious based on what we know by now), reliability problems, and more severe hotspotting. None of this is fixable by proper patterning or by relaxing the mmp, but of course the patterning difficulties they clearly had only make things worse.

It will be clearer after we get to read the paper. But in the conference preview, the key points, such as the MMP, were not highlighted. 36 nm was last published in a Common Platform paper as early as IEDM 2016. I got the impression their 7LPP is mainly a shrink against their own 10LPP, but not necessarily denser than the rival foundries' processes.

Some other details were announced at their Samsung Foundry Forum: https://www.semiwiki.com/forum/content/7491-top-10-highlights-samsung-foundry-forum.html

I thought the throughput was still low, not at HVM level: 1300 WPD (effectively 50-60 WPH). For single exposure, the fastest immersion tools are over 6000 WPD (~250 WPH or more).
Quite frankly, these figures look like underpromising on Samsung's part (or are obsolete by now) -- perhaps the same story as their decision to report >50% yield for the 7nm SRAM test vehicle at the VLSI Symposia. ASML's EUV steppers already run at 125 wph. Are you aware of that? And that's for installed systems; ASML themselves claimed achieving 140 wph throughput (or maybe higher by now).

The transistor density figures are compiled here, current as of two days ago: https://www.semiwiki.com/forum/content/7544-7nm-5nm-3nm-logic-current-projected-processes.html
Well, this is clearly not some Samsung demo as you said, but Scotten's article -- and how come you failed to notice they have the densest SRAM cell, which is quoted right next to it?

We know they worked with both Samsung and TSMC, so they are in a position to compare PDKs, etc. The risks of EUV are quite public, though mostly in SPIE papers.
The proper questions to ask here are: a) how are those risks handled in the contract, and b) what's the price of 7nm for Qualcomm, TSMC vs Samsung? There are simply too many unknowns here to take your message for granted.


Anyway, our talk has drifted completely away from the topic, so I suggest that the conversation be transferred to a more appropriate thread instead of continuing it here (either an existing one if something fits or, if not, one created anew). Agree?
 
Yes, I acknowledged that cobalt is likely a serious problem for Intel 10nm. It just so happens the cobalt layers used SAQP as published, which could have aggravated it, as you concur. Regarding the other answers:
- The ASML throughput announcement is under the condition of a nominal dose of 20 mJ/cm2, which is probably no longer used because of stochastic issues; that is one of the more severe EUV problems. EUV throughput is dose-dependent.
- The density metric is based on a weighted sum of logic cells (60% NAND and 40% flip-flop), to allow easier comparison among companies, as Scotten mentioned. MMP, CGP, and the number of tracks are directly involved. The SRAM cell size doesn't seem to track this metric: it's the denser component in the layout (memory instead of logic) and also very design-dependent (there are high-density and low-density versions), so it's more difficult to standardize for comparison. In any case, among the three foundries it is a few % difference. The logic density is also within a few % among the foundries, so the point is that Samsung's use of EUV did not give it a clear density advantage over the other companies.
- I brought up Samsung 7nm with EUV since it also seems to be on shaky ground, like Intel's published 10nm. The commonality is MMP below 40 nm. It can be discussed elsewhere as you wish.
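On the first bullet, the dose/throughput trade-off can be sketched crudely (all constants below are my own illustrative assumptions, not ASML's numbers): with a fixed per-wafer overhead, exposure time scales with dose, so the headline WPH erodes quickly once dose is raised to fight stochastics.

```python
# Crude EUV throughput-vs-dose model (illustrative assumptions only):
# per-wafer time = fixed overhead + exposure time proportional to dose.
def relative_throughput(dose_mj_cm2: float, ref_dose: float = 20.0,
                        ref_wph: float = 125.0,
                        overhead_fraction: float = 0.3) -> float:
    """WPH at a given dose, normalized to ref_wph at the reference dose."""
    ref_exposure = 1.0 - overhead_fraction            # exposure share at ref dose
    total = overhead_fraction + ref_exposure * (dose_mj_cm2 / ref_dose)
    return ref_wph / total

# Doubling the dose from 20 to 40 mJ/cm2 cuts the headline 125 WPH
# to well under 80 WPH in this toy model.
assert relative_throughput(20) == 125.0
assert relative_throughput(40) < 80.0
```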
 
Can any of this impact 3D X-point? That seems to be another never-ending delay.

3D XPoint uses 40 nm pitch. If they used SAQP there, I would be quite surprised, but you never know. In their 10nm publication they did use SAQP and cobalt for 40 nm pitch M0. I don't know if the bit lines and word lines of the XPoint array used cobalt. There is a report by TechInsights, which would go into more detail but did not mention cobalt in its brief: Intel 3D XPoint Memory Die Removed from Intel Optane™ PCM (Phase Change Memory)
 
Can Intel skip 10nm or outsource 10nm production to foundries? By doing so, Intel might buy some extra time to solve all those 7nm issues. Otherwise Intel may reach 10nm high-volume production too late for customers or get bogged down by those unrealistic goals. Or is it too difficult, or virtually impossible, because the chips were designed differently to begin with?
 
Intel admitting that their 10nm process is defunct and outsourcing their CPUs to TSMC 7nm will simply never happen -- it would be a complete admission of failure of their "we have the best process" strategy and would probably bring the company down, as well as being politically unacceptable to top management.

Also, the CPU design process inside Intel is different from most companies' (even AMD's); it's much more reliant on full-custom hand-crafted circuits intimately tied to the process than normal ASICs are, which would make porting far more difficult and expensive -- remember that pretty much all their IP (including complex analogue IP like high-speed interfaces) is developed in-house and is often CPU-specific. They'd have to do a total rip-up-and-rebuild of the entire design into a very different (foundry-style, not Intel-CPU-style) process, which would take at least a couple of years to get to product even using as much 3rd-party IP as possible -- again, this would kill the company.

So they're caught between a rock and a hard place -- the only thing which offers any hope of a way out is to carry on banging away with their 10nm process and try to fix it -- however difficult this may be, it's not as bad as the alternatives. But they're betting the company on this succeeding...

It's most likely that Intel's problem is something they're doing that the foundries aren't (cobalt interconnect, COAG) rather than them just pushing the pitches a bit harder, and so they've come up against an intractable yield/reliability problem that nobody else has -- certainly TSMC seem to be having no problem getting 7nm into mass production, and they came from well behind Intel 10nm.

By the way, the "cobalt thermal gasket" issue which the OP keeps bringing up is a red herring -- in all high-power chips including CPUs >95% of the heat exits the back of the die (through the bulk silicon) to the heatsink, not up through the metal/dielectric interconnect stack. A higher thermal resistance cobalt metal stack has negligible effect on die temperature -- trust me, I've tried using this path to reduce Tj with copper interconnect and it simply doesn't work.
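The back-of-die argument above is easy to check with textbook-ish numbers. A rough sketch (conductivities and thicknesses are illustrative assumptions, not measurements of any specific die):

```python
# Compare thermal resistance-area products of the two candidate heat
# paths: through the bulk silicon to the heatsink, vs up through the
# oxide-dominated BEOL interconnect stack.
def r_th_per_area(thickness_um: float, k_w_per_mk: float) -> float:
    """Thermal resistance-area product in K*mm^2/W."""
    return (thickness_um * 1e-6) / k_w_per_mk * 1e6  # m^2*K/W -> mm^2*K/W

bulk_si = r_th_per_area(thickness_um=700, k_w_per_mk=150)  # die backside path
beol    = r_th_per_area(thickness_um=10, k_w_per_mk=1.0)   # dielectric stack

# Even though the silicon path is ~70x thicker, it conducts heat better
# than the thin dielectric stack -- which also has no heatsink on top.
assert beol > bulk_si
```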
 
It will not be the end of the world if Intel moves the cpu production to foundries. It will probably benefit financially if they do decide to outsource. They should continue working on their own 10nm or 7nm fix, while leveraging the foundries so their product roadmap will not depend entirely on the success or failure of their manufacturing group. If AMD can work with various foundries, so can Intel. The only question is whether the management can let go of their ego.
 
It's most likely that Intel's problem is something they're doing that the foundries aren't (cobalt interconnect, COAG) rather than them just pushing the pitches a bit harder.

I guess we'll probably never know what the problem is, and all this discussion is just speculation. The interesting question is how long it will take Intel to fix the problem. There are suggestions that 10nm will be limited to small dies (i.e. laptop SKUs) in 2019, and the larger server dies may not even make it in 2020!
 
I agree the flight of Qualcomm to TSMC is a troubling sign, but we don't know the reason -- it's not necessarily unacceptable results with preproduction wafers or a delayed start of HVM; it could simply be about money.

QCOM left Samsung because TSMC 7nm was ready before Samsung 7nm. QCOM is back at Samsung 7nm designing EUV-enabled chips, and from what I have recently learned they are very happy. EUV is much less complicated for designers, especially layout people. I have also learned that Samsung Foundry is much more aggressively building an ecosystem (EDA/IP/services). It will take time to get near what TSMC has, but it is a very good sign that Samsung is making significant ecosystem progress. Case in point: Faraday is now supporting Samsung along with long-time partner UMC.
 
It will not be the end of the world if Intel moves the cpu production to foundries. It will probably benefit financially if they do decide to outsource. They should continue working on their own 10nm or 7nm fix, while leveraging the foundries so their product roadmap will not depend entirely on the success or failure of their manufacturing group. If AMD can work with various foundries, so can Intel. The only question is whether the management can let go of their ego.

Actually, I think it would be the end of Intel as we know it. I talked to quite a few of the Intel people who attended DAC this week. They were all horrified at how the CEO resignation was handled. I was also told that the 10nm problems are overblown (lots of fake news) and will be resolved with HVM by year end.

It will be interesting to see who the new CEO is. Hopefully someone from outside Intel who can bring a fresh perspective and more transparent leadership.
 
It will not be the end of the world if Intel moves the cpu production to foundries. It will probably benefit financially if they do decide to outsource. They should continue working on their own 10nm or 7nm fix, while leveraging the foundries so their product roadmap will not depend entirely on the success or failure of their manufacturing group. If AMD can work with various foundries, so can Intel. The only question is whether the management can let go of their ego.

AMD can work with different foundries because AMD's design flow is foundry-like (more standard-cell and compiled-cell, fewer full-custom hand-built circuits) -- it had to be, as that was the only way they could get silicon. The downside is some loss of speed and increase in power; the upside is relatively easy portability, especially if the foundry processes are pretty similar (e.g. TSMC and GF at 7nm).

Intel can't do this because, as the last true IDM, that's not how they do designs; they're tightly tied into their own process with a much more custom design flow (I know some ex-Intel guys) -- design effort is higher and layout is a pain in the butt, with no nice well-documented foundry DKs for them, but you can also do things that would be "illegal" (unsupported) at a foundry. The upside is some gain in speed and decrease in power; the downside is that portability is *way* more difficult, especially since Intel's 10nm process has significant differences from foundry 7nm.

So the problem with moving to a foundry isn't just Intel egos, it's that (unlike AMD) their CPU designs and design flow are not "foundry-friendly".
 
I conducted an interview with imec yesterday about their IITC paper and will be writing it up shortly. From my interview, cobalt doesn't beat copper for line resistance until around 12nm lines, and that is equivalent to something like a 16nm or 18nm pitch (for low-level interconnect the line is wider than the space); however, you get better electromigration and lower via resistance. At the lower levels, via resistance is very important, and the imec position is that cobalt becomes attractive around a 40nm pitch, depending on your design. Intel's 36nm pitch and their need for high power may just mean their design goals are different from those of the foundries, who are more focused on low-power mobile, and therefore Intel concluded cobalt made sense for them.
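The mean-free-path argument can be sketched numerically. This is a deliberately crude first-order thin-wire model with illustrative constants; it omits grain-boundary scattering and barrier/liner volume, which are what actually produce imec's ~12nm crossover, but it shows why copper's advantage erodes as lines narrow:

```python
# First-order size-effect model (illustrative): effective resistivity
# rises as linewidth w approaches the electron mean free path,
# rho_eff ~ rho_bulk * (1 + c * mfp / w). The constant c is a placeholder.
def rho_eff(rho_bulk_uohm_cm: float, mfp_nm: float, w_nm: float,
            c: float = 0.45) -> float:
    return rho_bulk_uohm_cm * (1.0 + c * mfp_nm / w_nm)

def cu_advantage(w_nm: float) -> float:
    """Ratio of Co to Cu effective resistivity at linewidth w (illustrative)."""
    cu = rho_eff(1.7, 39.0, w_nm)   # Cu: low bulk rho, long mfp (~39 nm)
    co = rho_eff(6.2, 9.5, w_nm)    # Co: high bulk rho, short mfp (~10 nm)
    return co / cu

# Copper's edge shrinks steadily as lines narrow, which is the direction
# of the crossover even though this simple model doesn't reach it.
assert cu_advantage(40) > cu_advantage(20) > cu_advantage(10)
```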

What everyone seems to forget is that SAQP in the FEOL and in the BEOL are completely different. FEOL SAQP uses cut masks, and SAQP with cut masks for fins is well established. In the BEOL you need SAQP with block masks, and that is completely different and much harder! The first block mask etches off the layers you need for subsequent block masks, so you have to put all the block masks on reverse-toned and then reverse the whole thing at the end. SAQP in the BEOL likely needs 3 or 4 block masks, so it is really complex, and I believe this is most likely Intel's yield problem, in line with their comments about lithography issues.

There was a comment about how going 10% below 40nm shouldn't be a cliff, but that is exactly what it is. SADP can do 40nm; at 39nm you are looking at SAQP (optical) or EUV. 40nm is literally a lithography cliff in cost and difficulty.
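The cliff is easy to see as arithmetic (using the commonly quoted ~80nm single-exposure floor for 193i; the exact floor varies a bit by source):

```python
# 193i single exposure bottoms out near an 80 nm pitch, and each
# self-aligned doubling halves it, so the achievable floors are
# discrete steps rather than a smooth slope.
IMMERSION_SE_MIN_PITCH = 80.0  # nm, commonly quoted single-exposure limit

def cheapest_scheme(target_pitch_nm: float) -> str:
    """Least complex 193i multi-patterning scheme reaching the target pitch."""
    for name, divisor in (("SE", 1), ("SADP", 2), ("SAQP", 4)):
        if target_pitch_nm >= IMMERSION_SE_MIN_PITCH / divisor:
            return name
    return "EUV or beyond"

# 40 nm lands exactly on the SADP floor; 39 nm falls off it.
assert cheapest_scheme(40) == "SADP"
assert cheapest_scheme(39) == "SAQP"
assert cheapest_scheme(36) == "SAQP"
```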

In terms of 10nm parts, what I think Intel is doing is shipping small quantities of 10nm parts coming out of their D1X development fab in Oregon. 10nm is scheduled for high-volume manufacturing in Fab 28 in Israel, and that isn't online yet.
 