
How important is cobalt?

lefty

Active member
At IEDM Intel revealed that they will use cobalt on the bottom two layers of its 10-nm interconnect to get a five- to ten-fold improvement in electromigration and a two-fold reduction in via resistance: https://www.eetimes.com/document.asp?doc_id=1332696

What sort of advantages will Intel get from using cobalt compared to TSMC or Globalfoundries at 7nm, which are using old-fashioned copper?
 
First guess: slower electromigration because of the much higher melting point, suggesting stronger bonds in the lattice.
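As a rough sanity check on that guess, Black's equation for electromigration lifetime can be used to see what sort of activation-energy increase a 5-10x improvement implies. The sketch below is purely illustrative (the temperature and the activation-energy deltas are my own assumptions, not numbers from Intel's paper):

Code:
# Black's equation: MTTF = A * J**-n * exp(Ea / (k*T)).  At the same current
# density and temperature, the MTTF ratio between two metals reduces to
# exp(delta_Ea / (k*T)).
import math

k_B = 8.617e-5      # Boltzmann constant, eV/K
T = 378.0           # assumed junction temperature, K (about 105 C)

def mttf_ratio(delta_ea_ev):
    # Lifetime improvement from raising the EM activation energy by delta_ea_ev
    return math.exp(delta_ea_ev / (k_B * T))

for d_ea in (0.05, 0.06, 0.075):    # illustrative activation-energy gains, eV
    print("dEa = %.3f eV -> %.1fx longer MTTF" % (d_ea, mttf_ratio(d_ea)))

So an extra 0.05-0.08 eV of activation energy at roughly 105 C already covers the five- to ten-fold range Intel quoted, which is consistent with the stronger-bonds intuition.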
 
Some benefits to using cobalt in the lower two layers of the metal interconnect:

  • Lower-resistance wires mean higher clock speeds because of faster switching times on the interconnect
  • Better electromigration resistance means the interconnect lasts longer before wearing out and causing high resistance or possible open circuits

Of course, there has to be a downside, which is probably higher cost and longer times in the fab, with more steps added for depositing the cobalt. Maybe even more time to qualify the new 10nm process with cobalt as a new part of the interconnect.
 

I will be blogging about these two processes shortly.

Cobalt has a higher bulk resistivity than copper, but at very narrow lines cobalt can have a lower line resistance because copper suffers from scattering and requires relatively thick barrier layers. Intel has a 36nm minimum metal pitch while GLOBALFOUNDRIES has a 40nm minimum metal pitch, and therefore Intel may need cobalt where GLOBALFOUNDRIES doesn't. Cobalt is more expensive.
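To make the bulk-resistivity versus line-resistance point concrete, here is a minimal back-of-the-envelope sketch. All of the numbers (liner thicknesses, aspect ratio, effective resistivities) are illustrative assumptions, not disclosed process values; only the trend matters:

Code:
# Copper needs a diffusion barrier that does not scale with the line, and its
# effective resistivity rises at narrow widths due to surface and grain-boundary
# scattering; cobalt needs only a very thin liner and is less size-dependent.
def ohms_per_um(rho_uohm_cm, width_nm, height_nm, liner_nm):
    # Resistance per micron of a rectangular wire whose liner (both sidewalls
    # and the bottom) does not conduct: R = rho * L / A
    area_nm2 = (width_nm - 2 * liner_nm) * (height_nm - liner_nm)
    return (rho_uohm_cm * 1e-8) * 1e-6 / (area_nm2 * 1e-18)

# (width, height, assumed effective resistivity in uOhm-cm for Cu and for Co)
cases = [(20, 40, 4.0, 8.0), (12, 24, 6.0, 9.0), (8, 16, 9.0, 10.0)]
for w, h, rho_cu, rho_co in cases:
    r_cu = ohms_per_um(rho_cu, w, h, liner_nm=2.5)   # Cu: ~2.5 nm barrier, assumed
    r_co = ohms_per_um(rho_co, w, h, liner_nm=0.5)   # Co: ~0.5 nm liner, assumed
    print("w=%2d nm: Cu ~%4.0f Ohm/um, Co ~%4.0f Ohm/um" % (w, r_cu, r_co))

With these assumptions copper still wins at a 20nm linewidth but loses by the time the line is down around 12nm, which is the crossover argument in a nutshell; where exactly the crossover lands depends on the real barrier thicknesses and resistivities of a given process.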
 

I think the thing most people are interested in is to what extent this is a unique (and important) Intel advantage.

My understanding is that IBM pioneered this, meaning that presumably such patent barriers as exist are somewhat porous... So the real issue is what are the costs (to the foundry, and on a per chip basis), and what are the benefits?

(a) Would it make sense for Apple (or QC or ...) to call up TSMC and say "we want cobalt ASAP", or would that make no sense yet because it's only helpful for chips running at 5GHz+ and generating 150W+?

(b) Assuming it would be useful for some segment (mobile? GPUs? AMD, IBM, or Oracle large server chips?), how long would it realistically take TSMC/Samsung/GloFo to offer it as an option? My guess is that they're all researching it as part of the usual ongoing development, and with customer demand they could deliver it as an option within a year; but of course I've no idea if there is some subtle problem with using cobalt that makes it a much trickier prospect than it appears.
 

GLOBALFOUNDRIES knows how to use cobalt and in fact makes use of it in their 7nm technology, just not as widely as Intel does. There are trade-offs to cobalt, and whether it benefits a process depends on the details of the process. GLOBALFOUNDRIES uses cobalt where it makes sense for their process and Intel uses cobalt where it makes sense for theirs; the two processes are different and have different requirements. At the end of the day you have to compare the processes on their density and performance, and cobalt is just one element building up to that.

I will discuss this in detail in my blog, but just having cobalt in a process doesn't really mean anything; it is what the process achieves overall that is important.
 

Totally agree. I am looking forward eagerly to reading your analysis of both Intel 10nm and GF 7nm.
 

Thanks for the answer. That pretty much matches my conclusions.
I phrased the questions the way I did in response to the level of hype that's building up in some quarters regarding Intel's use of cobalt, a level that seems to me detached from precisely the sort of analysis you provided.
 
From what is published I share Scotten's conclusion. The two solutions are not so radically different anyhow, and cobalt is there to have more conductive lines in the end.
Cobalt is not a new beast in the fab; some with a good memory may remember that the first chips with silicide contacts were with Co, and that process lasted many nodes.
If I remember correctly, some SEUs were related to its presence in the process. It would be nice to see if anybody looked into it, or whether at these nodes it is not an issue.
 

Yes, cobalt has been used in fabs for silicide for a long time; there are also cobalt liners and caps for copper interconnect that have been used for at least a few years.

It is really using cobalt fill for contacts, vias and interconnect lines that is new, and that therefore presents new fill and planarization requirements.
 
In Intel's case, they targeted those 44-52 nm pitches for EM resistance - at least that's what the reader gets from their IEDM paper. But those layers (M2-M5) also have longer lines, so it seems the resistance impact could be more significant.
 

I still don't understand what point you are trying to make. Resistance impact of what? And everyone else also uses cobalt liners and caps for those lines.
 

It's probably no big deal either way, but does TSMC 7nm use a cobalt cap? I had assumed not, or at least that it was not highlighted to the same extent as it was for IBM/Globalfoundries.

In their IITC 2018 paper, Intel implied they could back out of the cobalt, but it was introduced for the EM reason. In other words, to them it was a process option. The shorter lines of M0 and M1 made less of a resistance impact.
 

TSMC has been using cobalt caps since 16nm. I haven't seen a 7nm analysis yet, but they had cobalt caps at 16nm, 12nm and 10nm, so I would expect it at 7nm.

Intel uses cobalt because of EM and also lower via resistance, and there are a lot of vias at M0/M1, so overall it was better than copper.
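As a back-of-the-envelope illustration of that last point (all resistances below are made-up round numbers, not measured values): via resistance dominates short, via-rich M0/M1 paths, so halving it matters a lot there and very little on long upper-level routes:

Code:
# Total path resistance = vias plus the line segments between them.
def path_resistance(n_vias, r_via, seg_len_um, r_per_um, n_segments):
    return n_vias * r_via + n_segments * seg_len_um * r_per_um

# Assumed: 60 Ohm per conventional via vs 30 Ohm with cobalt fill, 100 Ohm/um line.
short = dict(n_vias=4, seg_len_um=1.0, r_per_um=100.0, n_segments=3)    # M0/M1-like
long_ = dict(n_vias=4, seg_len_um=10.0, r_per_um=100.0, n_segments=3)   # upper-layer-like
for name, kw in (("short via-rich path", short), ("long routing path", long_)):
    r_old = path_resistance(r_via=60.0, **kw)
    r_new = path_resistance(r_via=30.0, **kw)
    print("%s: %.0f -> %.0f Ohm (%.0f%% lower)" %
          (name, r_old, r_new, 100 * (r_old - r_new) / r_old))

With these made-up numbers the 2x via improvement buys roughly 20% on a short M0/M1 path but only a few percent on a long route, which is why it pays off most where the vias are.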
 
Thanks Scotten. Guess it's no big deal. The IITC 2018 paper is also interesting since it discusses M0/M1 a little more including the patterning.
 

Scotten, but does TSMC 7nm use cobalt for contacts? I think there is no confirmation of that. TSMC has not provided that information in their IEDM 2016 paper on N7. TSMC N7 seems to have underdelivered on performance to meet time-to-market goals. The CPU clock speed on Apple A series chips has literally stalled since the A10 on 16FF+ hit 2.34 GHz: the A11 on N10 ran at 2.34 GHz and the A12 on N7 runs at 2.5 GHz.

https://www.eetasia.com/news/article/18060102-arm-announces-high-performance-laptop-cpu

"There hasn't been much frequency benefit at all since 16 nm ... wire speed hasn’t scaled for some time," said Peter Greenhalgh, an Arm fellow and vice president of technology.

Since TSMC 7nm is the only 7nm foundry process in HVM, I think these statements by the ARM VP are directly addressed at TSMC 7nm. I hope N7 HPC has some more improvements to drive wire speed. The relaxed CPP, rumoured to be 64nm, and the 7.5T 3-fin cells should help. But without addressing contact resistance I think N7 HPC could also underdeliver. I am hoping N7+ brings some improvements for high-performance applications like CPUs. I think N5 will definitely use cobalt for contacts, and my guess is for at least 1-2 metal layers as well, otherwise the performance improvements are going to be even smaller than N10 to N7.
 
"The CPU clock speed on Apple A series chips have literally stalled after A10 on 16FF+ hit 2.34 Ghz. A11 clocked on N10 ran at 2.34 Ghz and A12 on N7 runs at 2.5 Ghz."

It's unhelpful to try to establish the limits of a process based on the choices made by a particular vendor. Oracle was achieving 5GHz on the same 20nm process that gave Apple 1.4GHz (for A8 in iPhones). This isn't because Oracle is so superior to Apple in CPU design, it's because they had a very different design target.

All we know is that Apple has figured that 2.5GHz gives them the best performance-power tradeoff for iPhones. We have ZERO idea what TSMC's process is capable of if Apple (or any other vendor) targeted 100W rather than 5W. We may have a better idea once AMD ships something...
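As a rough illustration of why the design target dominates here: dynamic power scales roughly as C·V²·f, and higher clocks need higher voltage, so the same process can sit at either corner. The capacitance and voltages below are purely illustrative assumptions, not published figures for any real chip:

Code:
# Dynamic power per core, roughly activity * C * V^2 * f (leakage ignored).
def dynamic_power_w(c_eff_farads, v_dd, freq_hz, activity=0.2):
    return activity * c_eff_farads * v_dd**2 * freq_hz

C_EFF = 5e-9   # assumed switched capacitance per core, farads
mobile = dynamic_power_w(C_EFF, v_dd=0.75, freq_hz=2.5e9)   # phone-style operating point
server = dynamic_power_w(C_EFF, v_dd=1.10, freq_hz=5.0e9)   # server-style operating point
print("mobile-style point: %.1f W per core" % mobile)
print("server-style point: %.1f W per core (%.1fx)" % (server, server / mobile))

Doubling the clock while raising the voltage already costs about 4x the dynamic power per core before leakage even enters, which is why a 2.5GHz phone SoC tells you very little about what the same process could do with a 100W+ budget.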

(Beyond the above, which are standard cautions that would be as true for comparing any other mobile vendor --- e.g. QC --- there is the additional fact that the A12 seems to be a drastically different sort of release from everything earlier. In particular it seems to be essentially the same micro-architecture, with the same performance characteristics, as the A11, buoyed only by the trivial addition of larger L1 caches and a few extra instructions from ARMv8.3.

There is plenty of room for speculation around this. One possibility is that the CPU designs have become so complex that Apple has switched to what is essentially a tick-tock model, and from now on we'll see a big jump every two years, along with a maintenance improvement on the alternating years? Another possibility is that this was a one-time resteering of the design to harden it --- everything looks the same as far as performance is concerned, but all sorts of structures (like branch predictor records) have additional tags to ensure that various SPECTRE-type bugs can't leak info? A third possibility is that the stars of the design team have been pulled off the iPhone, at least for a while, to design a SoC appropriate to Macs, and with a consequent much higher power budget.

Point is, especially with the A12 to A11 comparison, there's even less justification than usual in assuming that its frequency represents ANY sort of "best possible" from TSMC, even if you were targeting mobile. It more likely represents the "best possible" when you take a design optimized for 16nm and simply recompile it for 10nm, with a specific mandate to do absolutely zero beyond that bare minimum of work.)
 