This was at a press event whose whole objective was to talk about how super duper awesome intel's technology is. It is hardly surprising that this is the narrative intel would tell their engineers to bandy around; it's not really any different from the mudslinging that TSMC or Samsung trot out to discredit each other or intel. It is also natural for market leaders to want to present their position as unbreakable and to insist that others will never be able to catch up to them. See GM in automobiles, IBM in compute, and Meta in social media as past examples, and TSMC and NVIDIA as current examples.
But market speak and sensational language aside, when I dig into the actual technical meat of what was said, I can't say I entirely disagree with Bohr's assessment. Reading the actual EETimes interview, I was kind of surprised how much of what was said came to pass; I expected to read about milk and I got wine. Bohr was 100% correct that design and fab needed to be more closely optimized together. The "traditional" fabless model of making one design that can easily be "tossed over the wall" and ported across multiple foundries is dead. What's more, it was well and truly dead not long after the interview. All major fabless design houses now have dedicated foundry engagement teams staffed by employees sourced from the manufacturer, with deep, intimate knowledge of the process, the devices, and even the specific tooling used by their fab. Said teams also have input into process development (something completely unheard of back in the 2000s). In this day and age all foundries are virtual IDMs with their fabless customers. Hyperbole aside, the only thing in Bohr's statements to EETimes that jumps off the page as not panning out is the assumption that IDM-like DTCO/collaboration could only happen at an actual IDM.
The situation certainly couldn't have been more different. Let's start with what was known to intel at the time. The rest of the world had announced that their 22/20nm processes would be planar. Intel correctly believed that 22/20nm bulk planar would not have sufficient short channel control to offer a compelling process technology. They also knew that the rest of the world wouldn't have a functioning finFET process until 2015 (which at the time must have looked like a slowdown of RoW process development from a 2 year cadence to 3), and that rest-of-world 16/14nm nodes would have PPA characteristics similar to i22nm and far inferior to i14nm (which was doing a 2.4x density uplift rather than the usual 2x). TLDR: intel figured they were 4 years ahead of the rest of the industry. Intel was also looking at how, by the time the rest of the industry finally caught up to 2011's i22nm, foundry customers would be seeing a higher cost per FET than on their 22/20nm nodes, while intel was trying to achieve faster than usual cost per FET declines. From what intel could see at the time, their already commanding 4 year lead looked to be growing.
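To make that cost-per-FET argument concrete, here is a rough back-of-the-envelope sketch (not anything from the interview, and every number in it is a hypothetical placeholder rather than a real wafer price, density, or yield figure): cost per transistor is roughly wafer cost divided by yielded transistors per wafer, so a 2.4x density uplift buys a steeper cost decline than the usual 2x as long as wafer cost doesn't grow proportionally faster.

```python
# Back-of-the-envelope cost-per-transistor comparison.
# Every number below is a hypothetical placeholder for illustration only,
# not a real wafer price, density, or yield figure.

def cost_per_mtx(wafer_cost, density_mtx_per_mm2, yielded_mm2_per_wafer):
    """Dollars per million transistors = wafer cost / yielded transistors on the wafer."""
    return wafer_cost / (density_mtx_per_mm2 * yielded_mm2_per_wafer)

# Hypothetical baseline node.
base = cost_per_mtx(wafer_cost=4000, density_mtx_per_mm2=15, yielded_mm2_per_wafer=50000)

# Hypothetical successor with the usual ~2.0x density uplift and a 30% pricier wafer.
usual = cost_per_mtx(wafer_cost=5200, density_mtx_per_mm2=30, yielded_mm2_per_wafer=50000)

# Hypothetical successor with a ~2.4x density uplift and the same 30% pricier wafer.
aggressive = cost_per_mtx(wafer_cost=5200, density_mtx_per_mm2=36, yielded_mm2_per_wafer=50000)

print(f"baseline node: {base:.4f} $/Mtx")
print(f"2.0x uplift:   {usual:.4f} $/Mtx ({usual / base:.0%} of baseline)")
print(f"2.4x uplift:   {aggressive:.4f} $/Mtx ({aggressive / base:.0%} of baseline)")
```

The toy numbers only matter for the shape of the comparison: the 2.0x node lands at roughly 65% of the baseline cost per transistor while the 2.4x node lands at roughly 54%, which is what a "faster than usual cost per FET decline" looks like in practice.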
Under that set of assumptions, I don't think it is a particularly large logical leap to conclude that intel ripping mobile from the hands of QCOM and MTK should have been a cakewalk, and that without those mobile customers TSMC, and to a lesser extent Samsung, would have been unable to justify staying on the leading edge. Of course history didn't pan out that way, and even with the crutch of an overwhelming process lead, intel's chip designers flushed billions of dollars down the drain during BK's failed mobile push. So many baffling choices: a separate PCH die containing the graphics and memory controllers, even after intel's PC parts had integrated those components to save power because the split was hurting product competitiveness versus AMD's products on a far inferior process; and the call to rely on V scaling for the IA cores, which required intel's process nodes to be optimized for high V in CCG's pursuit of high Fmax and maximum per-core performance.
I wouldn't make such sweeping claims about Mark Bohr being responsible for the decade of seemingly never ending debacles/delays. It is irksome when people who have never worked inside of LTD act like they "know exactly what happened inside of LTD" or that they "have the true 10nm story"; they usually end up looking foolish for it. And I know you're smart enough to know that the story is too complex/multifaceted to be as simple as "this one guy" or "this one mistake". For failures of that magnitude to happen, many factors had to occur simultaneously: poor choices, the difficulty of diagnosing issues with the way pre-late-2020 LTD operated, and many years of under-investment/decay/attrition. Sunlin Chou's development model also had the critical Achilles' heel of doing innovation in a serial manner: when one part of the development process gets stuck, the whole technology development pipeline comes screeching to a halt until the problematic technology can move on to the next phase of development.
(For background, the history of semiconductor research at Intel is discussed in a 2004 conversation with Sunlin Chou on www.chiphistory.org.)
intel 14nm was intel's 2nd finFET process and was comparable in PPA and fin profile to the rest of the industry's 10"nm" process technologies. Non-intel 16/14"nm" had fin profiles and PPA characteristics on par with or slightly better than intel 22"nm". With that context, when one considers that TSMC's 10FF was also troubled and never adopted beyond Huawei and Apple, that Samsung 10LPE/early 10LPP were as rough as they were, and that GF 10LP completely missed the target window for 10"nm" class foundry nodes and had to be canned, intel 14nm has plenty of company in the "subpar early yield" club.
Hardly. When AMD spun out their manufacturing arm in 2008, AMD had one 12" fab and one 8" fab. Intel had three 8" fabs (one of them on loan to their jointly owned subsidiary Numonyx), a 49% stake in two JV NAND fabs with Micron, and 16 AMD Fab30 equivalent 12" fabs running at full tilt. AMD was simply subscale, and this was made even worse by not being able to keep up with intel's process development team. Add in the crippling amount of debt/interest from the ATI purchase at a time when AMD's finances were already in poor shape, and AMD had no choice but to bow out. AMD is a case study for the fundamentals that govern the semiconductor manufacturing industry rather than proof of the IDM model's inferiority. Don't confuse TSMC with the foundry model, as they are not the same thing. Other logic IDMs continued along to newer process technologies past the point where AMD had to bow out (ST-M, Toshiba, NEC/Fujitsu, TI, Philips, Infineon, IBM, and Samsung). To add insult to injury, GF never even developed a more advanced (for the time) internal node past their 32nm SOI and 28nm bulk (the same place where the non-intel/Samsung/IBM IDMs dropped out of the leading edge). Meanwhile IBM, and a bit later UMC, would independently join the finFET club with their own native 14"nm" class processes. There is also the elephant in the room of intel and Samsung still duking it out on the leading edge with TSMC.