
Intel CEO optimistic about CHIPS Act’s future after trading texts with JD Vance

Mark Bohr proves my point. Mark predicted the collapse of the fabless model, which was ridiculous and the result of wearing IDM blinders his entire career. Instead, it is the IDM business model that has collapsed:


12 years ago, Mark Bohr of Intel said:

“Being an integrated device manufacturer really helps us solve the problems dealing with devices this small and complex,” Bohr said. “The foundries and fabless companies won’t be able to follow where Intel is going.”

Twelve years later, someone at Intel says:

"Did our Mike Bohr really say that?"

Thanks to SemiWiki for recording history as it unfolded and evolved.
 
Mark Bohr proves my point. Mark predicted the collapse of the fabless model, which was ridiculous and the result of wearing IDM blinders his entire career. Instead, it is the IDM business model that has collapsed:

This was at a press event with the objective of talking about how super-duper awesome intel's technology is. It is hardly surprising that this was a narrative intel would tell their engineers to bandy around. It is not really any different from the mudslinging that TSMC or Samsung rattle around to discredit each other or intel. It is also natural for market leaders to want to present their position as unbreakable and to claim others will never be able to catch up to them. See GM in automobiles, IBM in compute, and Meta in social media as past examples, and TSMC and NVIDIA as current ones.

But market speak and sensational language aside, when I dig into the actual technical meat of what was said, I can't say I entirely disagree with Bohr's assessment. Reading the actual EE Times interview, I was kind of surprised how much of what was said came to pass; I expected to read about milk and I got wine. Bohr was 100% correct that design and fab needed to be more closely optimized together. The "traditional" fabless model of making one design, easily "tossing it over the wall", and porting the design across multiple foundries is dead. What's more, it was well and truly dead not long after the interview. All major fabless design houses now have dedicated foundry engagement teams staffed by employees sourced from the manufacturer, with deep, intimate knowledge of the process, device, and even the specific tooling used by their fab. Said teams also have input on process development (something completely unheard of back in the 2000s). In this day and age, all foundries are virtual IDMs with their fabless customers. Hyperbole aside, the only thing in Bohr's statements to EE Times that jumps off the page as not panning out was the assumption that IDM-like DTCO/collaboration could only happen at an actual IDM.

Note that the video is from 2012. Situations change; perhaps Bohr's opinion has too.
The situation certainly couldn't have been more different. Let's start with what was known to intel at the time. The rest of the world had announced that their 22/20nm processes would be planar. Intel correctly believed that 22/20nm bulk planar would not have sufficient short-channel control to offer a compelling process technology. They also knew that the rest of the world wouldn't have a functioning finFET process until 2015 (which at the time must have looked like a slowdown of RoW process development from 2 years to 3), and that rest-of-world 16/14nm nodes would have PPA characteristics similar to i22nm and far inferior to i14nm (which was doing a 2.4x density uplift rather than the usual 2x). TL;DR: intel figured they were 4 years ahead of the rest of the industry. Intel was also looking at how, when the rest of the industry finally caught up to 2011's i22nm, their customers would be seeing higher cost per FET than on their 22/20nm nodes. Meanwhile intel was trying to achieve faster-than-usual cost-per-FET declines. From what intel could see at the time, their already commanding 4-year lead looked to be growing.
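To make the density-vs-cost arithmetic above concrete, here is a back-of-envelope sketch. Only the 2.0x vs 2.4x density uplifts come from the discussion; the 30% wafer-cost growth per node is a made-up placeholder, purely to show the mechanics:

```python
# Back-of-envelope node math. Wafer-cost growth is a hypothetical placeholder;
# the density uplifts are the 2.0x "usual" and 2.4x i14nm figures from above.
usual_uplift = 2.0   # typical transistor-density gain per full node
i14_uplift = 2.4     # the larger jump attributed to i14nm

# Assume (hypothetically) wafer cost grows ~30% per node. Cost per FET scales
# as wafer cost divided by density, so a bigger density jump absorbs more of
# the wafer-cost growth.
wafer_cost_growth = 1.30
row_cost_per_fet = wafer_cost_growth / usual_uplift   # rest-of-world-style node
i14_cost_per_fet = wafer_cost_growth / i14_uplift     # i14nm-style node

print(f"RoW-style node, relative cost/FET:   {row_cost_per_fet:.3f}")  # 0.650
print(f"i14nm-style node, relative cost/FET: {i14_cost_per_fet:.3f}")  # 0.542
```

Under these toy numbers, both nodes cut cost per FET, but the 2.4x node cuts it noticeably faster, which is the "faster-than-usual cost-per-FET declines" intel was counting on.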

Under that list of assumptions, I don't think it is a particularly large logical leap to conclude that intel ripping mobile from the hands of QCOM and MTK should have been a cakewalk; and without those mobile customers, TSMC and to a lesser extent Samsung would have been unable to justify staying on the leading edge. Of course history didn't pan out that way, and even with the crutch of an overwhelming process lead, intel's chip designers flushed billions of dollars down the drain during BK's failed mobile push. There were so many baffling choices: a separate PCH die containing graphics and memory controllers, even after intel's PC parts had integrated those components to save power, because the split was hurting product competitiveness versus AMD's products on a far inferior process; and the call to rely on V scaling for IA cores, which required intel's process nodes to be optimized for high V in CCG's pursuit of high Fmax and maximum per-core performance.

I met Mark Bohr at a conference; he sat next to me and we chatted. My takeaway was that he was past his prime. This was during the whole 10nm debacle, which you failed to mention.
I wouldn't make such sweeping claims about Mark Bohr being responsible for the decade of seemingly never-ending debacles/delays. It is irksome when I see people who have never worked inside of LTD act like they "know exactly what happened inside of LTD", or that "they have the true 10nm story"; they usually end up looking foolish for it. And I know you're smart enough to know that the story is too complex/multifaceted to be as simple as "this one guy" or "this one mistake". For failures of that magnitude to happen, many factors, poor choices, the difficulty of diagnosing issues with the way pre-late-2020 LTD was operating, and many years of under-investment/decay/attrition all needed to occur simultaneously. Sunlin Chou's development model also had the critical Achilles' heel of innovation being done in a serial manner. When one part of that development process gets stuck, the whole technology development pipeline comes screeching to a halt until the problematic technology can move on to the next phase of development.
Intel 14nm was also a debacle, but it was the first FinFET process, so the delay and bad yield were much less of an issue than at 10nm.
intel 14nm was intel's 2nd finFET process and was comparable in PPA and fin profile to the rest of the industry's 10"nm" process technologies. Non-intel 16/14"nm" had fin profiles and PPA characteristics on par with, to slightly better than, intel 22"nm". With that context, when one considers that TSMC's 10FF was also troubled and never adopted beyond Huawei and Apple, that Samsung 10LPE/early 10LPP were as rough as they were, and that GF 10LP completely missed the target window for 10"nm"-class foundry nodes and had to be canned - intel 14nm has plenty of company in the "subpar early yield" club.

It was self-evident even then. AMD had already spun out GlobalFoundries at that point.
Hardly. When AMD spun out their manufacturing arm in 2008, AMD had one 12" fab and one 8" fab. Intel had three 8" fabs (one of them on loan to their jointly owned subsidiary Numonyx), a 49% stake in two JV NAND fabs with Micron, and 16 AMD-Fab30-equivalent 12" fabs running at full tilt. AMD was simply subscale, and this was made even worse by not being able to keep up with intel's process development team. Add in the crippling amount of debt/interest from the ATI purchase at a time when AMD's finances were already in poor shape, and AMD had no choice but to bow out. AMD is a case study for the fundamentals that govern the semiconductor manufacturing industry rather than proof of the IDM model's inferiority. Don't confuse TSMC with the foundry model, as they are not the same thing. Other logic IDMs continued along to newer process technologies past where AMD had to bow out (ST-M, Toshiba, NEC/Fujitsu, TI, Philips, Infineon, IBM, and Samsung). To add insult to injury, GF never even developed a more advanced (for the time) internal node past their 32nm SOI and 28nm bulk (the same place the non-intel/Samsung/IBM IDMs dropped out of the leading edge). Meanwhile IBM, and a bit later UMC, would independently join the finFET club with their own native 14"nm"-class processes. There is also the elephant in the room of intel/Samsung still duking it out on the leading edge with TSMC.
 

In April 2012, Mark Bohr predicted that the fabless model was “collapsing.” By then, it had already been five years since the iPhone was first released in June 2007. Sales of iPhones and other smartphones were growing rapidly, with foundries and fabless design companies sharing in the prosperity. Why couldn’t Mark Bohr see this? Why did he instead believe the fabless model was “collapsing”?

Did he and other Intel leaders fall into a groupthink trap, or was it something else?

 
What are the driving factors behind the election outcome? The economy—specifically, how people feel about inflation—not ideology or foreign policies.
11M who voted last time didn't vote this time.

How come high prices leading to record profits leading to stock market highs wasn't celebrated?

Isn't that the market in action?

I am wondering how private enterprise is going to be convinced to drop prices and/or raise salaries.
 
When AMD spun out their manufacturing arm in 2008, AMD had one 12" fab and one 8" fab. Intel had three 8" fabs (one of them on loan to their jointly owned subsidiary Numonyx), a 49% stake in two JV NAND fabs with Micron, and 16 AMD-Fab30-equivalent 12" fabs running at full tilt.
AMD also had the Spansion NOR flash partnership with Fujitsu, which came with its own fabrication facilities.
You could say AMD had half the logic fab capacity they needed at one point. Because of financial issues they never had more than two logic fabs (Austin, Texas and Dresden, Germany).

AMD was simply subscale, and this was made even worse by not being able to keep up with intel's process development team.
Sure, AMD had to bow out earlier because their scale was smaller than Intel's. And that is my point. Even Intel will have to bow out eventually; it will be later than AMD because their scale is larger, but it will still happen.
AMD at times had a more advanced process than Intel. For example, they introduced copper interconnect into their manufacturing process first; I think it was process technology licensed from Motorola at the 180nm node.
AMD merged process R&D with IBM and Samsung to be able to cope. And this was enough to provide those fabs with processes all the way into the FinFET era. Sure there were some delays; you could say IBM pushed the wrong materials at 28nm and that going with SOI was a waste of time, but the processes were developed eventually.

Add in the crippling amount of debt/interest from the ATI purchase at a time when AMD's finances were already in poor shape, and AMD had no choice but to bow out.
I already said as much. AMD wasted the funds they had to build their next generation fab buying ATI at an inflated price just before the stock market crashed. Many billions in cash were spent on that acquisition.
$4.2 billion USD in cash. Back then it was enough to build a whole fab.

AMD is a case study for the fundamentals that govern the semiconductor manufacturing industry rather than proof of the IDM model's inferiority. Don't confuse TSMC with the foundry model, as they are not the same thing. Other logic IDMs continued along to newer process technologies past where AMD had to bow out (ST-M, Toshiba, NEC/Fujitsu, TI, Philips, Infineon, IBM, and Samsung).
IBM sold its fabs to GlobalFoundries. I never heard of Toshiba again, and after they went bankrupt I doubt they still have their fabs. TI, STMicro, and Infineon bowed out of advanced process development and are making niche products for analog, MEMS, or the automotive sector with legacy processes. Fujitsu makes its leading-edge technical-compute A64FX processors at TSMC.

So you were saying.

To add insult to injury, GF never even developed a more advanced (for the time) internal node past their 32nm SOI and 28nm bulk (the same place the non-intel/Samsung/IBM IDMs dropped out of the leading edge). Meanwhile IBM, and a bit later UMC, would independently join the finFET club with their own native 14"nm"-class processes.
This UMC FinFET process seems to be a bit of a mirage. I keep going to their financial reports and they do not seem to manufacture anything with it. Have UMC ever produced anything with FinFET?

From what I understand GlobalFoundries hit delays with their 14nm process and they ended up licensing 14nm FinFET from Samsung to use in their fabs.

There is also the elephant in the room of intel/Samsung still duking it out on the leading edge with TSMC.
Samsung is fumbling horribly.
 
AMD also had the Spansion NOR flash partnership with Fujitsu, which came with its own fabrication facilities.
You could say AMD had half the logic fab capacity they needed at one point. Because of financial issues they never had more than two logic fabs (Austin, Texas and Dresden, Germany).


Sure, AMD had to bow out earlier because their scale was smaller than Intel's. And that is my point. Even Intel will have to bow out eventually; it will be later than AMD because their scale is larger, but it will still happen.
AMD at times had a more advanced process than Intel. For example, they introduced copper interconnect into their manufacturing process first; I think it was process technology licensed from Motorola at the 180nm node.
AMD merged process R&D with IBM and Samsung to be able to cope. And this was enough to provide those fabs with processes all the way into the FinFET era. Sure there were some delays; you could say IBM pushed the wrong materials at 28nm and that going with SOI was a waste of time, but the processes were developed eventually.


I already said as much. AMD wasted the funds they had to build their next generation fab buying ATI at an inflated price just before the stock market crashed. Many billions in cash were spent on that acquisition.
$4.2 billion USD in cash. Back then it was enough to build a whole fab.
Okay, we are on the same page, and good catch on Spansion. I forgot that they weren't bankrupt/absorbed by Cypress Semiconductor yet.

As for the IBM alliance, FWIW Samsung dumped the IBM finFET process and the IBM HKMG process for their own internally developed processes, and GF would use Samsung's finFET as their primary offering rather than their own or IBM's finFET process.

AMD did indeed introduce Cu before intel, but wasn't the performance and early yield significantly worse than intel's 180nm, while intel's first Cu process (130nm) had high yield from the beginning of product ramp? If that was the case, isn't that just a less exaggerated version of Samsung being "first to GAA"? But I digress; when I say AMD couldn't keep up with intel's process development team, what I am mainly referring to is intel being 1-2 years ahead on shrink and 4 years ahead on transistor technology in those final days of AMD being an IDM.
IBM sold its fabs to GlobalFoundries. I never heard of Toshiba again, and after they went bankrupt I doubt they still have their fabs. TI, STMicro, and Infineon bowed out of advanced process development and are making niche products for analog, MEMS, or the automotive sector with legacy processes. Fujitsu makes its leading-edge technical-compute A64FX processors at TSMC.

So you were saying.
My point was that the foundry model doesn't guarantee you scale or success, since GF bowed out of the leading edge around when most of the others did, despite being handed the third-largest foundry in the world at their founding and being paid to take IBM's logic business later down the track. Today TI/STM/Infineon all do power/RF/embedded stuff, which is exactly what a GF or UMC does, yet GF/UMC are less profitable ventures. Granted, that isn't really their fault, since they just sell wafers while those IDMs sell the product plus the wafer, but I suspect you already knew that. As for Fujitsu specifically, they did have their own process, with 28nm being their last before going to TSMC. I also mistakenly thought Fujitsu's manufacturing arm got merged into Renesas, but it was just NEC/Hitachi/Mitsubishi.
This UMC FinFET process seems to be a bit of a mirage. I keep going to their financial reports and they do not seem to manufacture anything with it. Have UMC ever produced anything with FinFET?
I did the same thing a while ago, because I also was like "How does this not exist on today's revenue sheets!" :LOL:. If memory serves, they had revenue on it for like the first three years and then it was completely deramped. So definitely not great, but at least it existed in the wild, unlike 12FDX, and it was somewhat advanced for the time, so I give it 2/3 points.
From what I understand GlobalFoundries hit delays with their 14nm process and they ended up licensing 14nm FinFET from Samsung to use in their fabs.
Yes, but I wouldn't exactly call that arrangement any different from an IDM such as Fujitsu or intel outsourcing to TSMC.
Samsung is fumbling horribly.
I don't disagree with you. However, it has to be worth something that they have gotten this far, where no other IDM or foundry besides TSMC or intel has been able to follow (or, I guess more accurately, Samsung has been able to follow, but you get the point). At this point SF4 seems at least usable and is maybe even fine now. SF7 and SF8 are also functioning. The SF3/2 family is looking like a clown show, but so did intel 10nm before it was fixed. I say all that to say this: they aren't out until they are out. Looking at history, Samsung also seems to do their best work when they are behind.

Back to the topic of scale, though: Samsung has less foundry revenue than intel. Samsung should be able to charge more for SF4/3 wafers than intel can charge for i7 wafers, and almost all of intel's wafer capacity is i7 and below, versus Samsung, where only around 1/3-1/2 of their logic capacity is SF7 and below. Between Samsung's advanced nodes likely carrying higher ASPs than intel's average wafer ASP, and much of Samsung's revenue and most of their wafer starts being on trailing-edge processes, it seems fair to assume that intel's IDM (even while outsourcing something like 40% of their wafers to TSMC) has significantly more scale than Samsung's IDM-plus-foundry. With that said, I suspect that Samsung would have to become subscale and bow out before it is possible for intel to become subscale (assuming intel doesn't run out of money first, which is not a given).
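The scale comparison above is really just a blended wafer-starts-times-ASP calculation. Here is a toy sketch of its shape; every wafer-start and ASP figure below is invented purely for illustration, and none is a real intel or Samsung number:

```python
# Crude scale model: total wafer revenue = sum over node buckets of
# (wafer starts x ASP per wafer). All numbers are hypothetical placeholders.
def wafer_revenue(buckets):
    """buckets: iterable of (monthly_wafer_starts, asp_per_wafer) pairs."""
    return sum(starts * asp for starts, asp in buckets)

# Hypothetical Samsung-like mix: a small leading-edge bucket at high ASPs,
# plus a large trailing-edge bucket at low ASPs.
samsung_like = wafer_revenue([(20_000, 12_000), (150_000, 3_000)])

# Hypothetical intel-like mix: nearly all capacity on i7-class and older
# nodes at a mid-range ASP, but at much larger volume.
intel_like = wafer_revenue([(250_000, 6_000)])
```

With these placeholder mixes, the higher-volume mid-ASP book ends up larger than the high-ASP-but-low-volume one, which is the gist of the argument that a leading-edge ASP premium alone doesn't buy scale.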

I believe all of the above drives home the point that AMD needing to spin out doesn't mean the IDM model itself is untenable or not worth pursuing. On the trailing edge it is a very profitable way to do business (more so than for many of the leading fabless firms). I also agree that without external contracts intel will eventually become subscale, but I don't think they are there yet, and pulling products back from TSMC further pushes out that timeline. This issue of scale is also only a concern because intel has failed at every opportunity to expand their TAM, while Xeon's TAM has greatly shrunk, CCG/DCAI have hemorrhaged MSS, and NEX hasn't visibly grown their MSS. While I do think foundry is a good play even if intel's product business weren't faltering, I suspect that if Intel hadn't fallen asleep at the proverbial wheel, then "IDM 1.0" could have been successful for the foreseeable technical horizon, rather than the current Intel, where operating in "IDM 1.0" mode likely means not having the horsepower to scale the summit into the CFET era.
 
For what it's worth, AMD tried post-acquisition to push the ATI engineers to use GlobalFoundries instead of TSMC. Initially those engineers did not want to do it, because the EDA environment for GlobalFoundries was far less developed. Then GlobalFoundries started fumbling, continuously delaying the process roadmap. So Team Red kept using TSMC, and later Team Green also dropped GlobalFoundries as a supplier.

I could see Intel foundry working for Intel design if they deliver on the process and Intel moves more wafers to its own fabs - not just the CPU dies but also the GPU dies and other large dies on leading-edge processes. But thus far it ain't happening. The fab in Ireland only has enough capacity for the server processors, or so it seems. Intel needs more fabs.
 
AMD wasted the funds they had to build their next generation fab buying ATI at an inflated price just before the stock market crashed. Many billions in cash were spent on that acquisition.
Ironically, the ATI purchase moved them into GPUs, which is now a saving grace that has helped them gain the 2nd-place position in GenAI hardware.

Even Intel will have to bow out eventually, it will be later than AMD because their scale is larger, but it will still happen.
As I have pointed out several times, at the leading edge it's all about ROIC - return on invested capital. Intel's ROIC used to be much higher than TSMC's, but TSMC eventually flipped the equation due to the far longer lifetimes of their fabs at a given or adjacent node.
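The fab-lifetime point can be sketched numerically. Every figure below is hypothetical, chosen only to show how stretching the revenue-earning life of a fab flips the ROIC comparison:

```python
# Toy ROIC comparison. All figures are invented placeholders, just to show
# why fab lifetime dominates return on invested capital at the leading edge.
def lifetime_return(capex, leading_profit, leading_years,
                    trailing_profit, trailing_years):
    """Total operating profit over the fab's life, per dollar of capex."""
    total = (leading_profit * leading_years
             + trailing_profit * trailing_years)
    return total / capex

# Hypothetical "retool when the node ages" model: the fab earns leading-edge
# profits for ~4 years, then needs fresh capital to stay useful.
idm_style = lifetime_return(capex=10.0, leading_profit=2.0, leading_years=4,
                            trailing_profit=0.0, trailing_years=0)

# Hypothetical "run it for decades" model: the same fab, fully depreciated,
# keeps selling trailing-edge wafers for another eleven years.
foundry_style = lifetime_return(capex=10.0, leading_profit=2.0, leading_years=4,
                                trailing_profit=1.0, trailing_years=11)

print(idm_style, foundry_style)  # 0.8 vs 1.9 per dollar of capex
```

Same capex, same four leading-edge years; the only difference is the long tail of depreciated trailing-edge output, and it more than doubles the lifetime return in this toy example.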
 