
Intel CEO Highlights the Company’s Top Three Mistakes

Those "mistakes" are all focusing on products, and ignoring the massive elephant in the room which was the multi-year 10nm process disaster...
Can somebody explain the largest contributors to this process disaster?

Too aggressive on design rules?
Difficulty dealing with rounded corners?
Contact over poly above the active layer?
Not taking baby steps (Mr. Ng once recommended this over a year ago)?
Trying to push the limits of DUV?

Intel certainly has the technology but they lacked the outbound foundry experience and ecosystem.
Logistics is more difficult than fighting physics? Interesting.
 
They thought replacing their most experienced engineers with H-1Bs and PhDs would shake things up and send the right message; a real bean-counter/HR sicko mentality running the show.
Mr. Blue. Please confirm. Was this the problem? Sounds a little like Bangalore (and woke) Instruments.
 
Here is another historical milestone that is typically forgotten: Intel licensed the Atom core to TSMC to compete with the ARM cores that were emerging around 2009.

I remember this one. TSMC was feuding with Arm at the time, so they jumped on the Atom train, which fell off a bridge. It was hard to watch. Thankfully TSMC and Arm kissed and made up in time for the Apple harvest.
 
Intel didn't miss mobile. Intel worked hard in mobile before mobile was cool, and Intel tried VERY hard at mobile. Intel failed in mobile. Intel also tried graphics multiple times and failed at graphics. I can give the models on why this happened and whether anything has changed. I can also discuss how Intel was doing mobile chips before the iPhone was even prototyped. Side note: not selling mobile chips to Apple was the smartest thing Intel did; there are simple reasons why.

IBM doesn't make phones
Ford doesn't make motorcycles
Nvidia doesn't make tablets
Apple doesn't own fabs
AMD doesn't make PCs

Pat says these things because it rationalizes what Intel is and is not.

Just an opinion!
 
Intel simply had no clue what the market for Optane might be, and no concept of its real value. They knew it could not beat NAND on price, but they did not understand that it could not match DRAM's value because of its performance. They thought it would sell like hotcakes because of persistence, at a price higher than DRAM, but the reality is that DRAM is effectively persistent with the power on and data is durable only when replicated to multiple separate systems, and software that worked that way was already mature, so the only folks interested in persistent main memory were running obscure experiments.
I think the engineering people knew what Optane would be good for; the product management and product marketing people IMO did not position it well, but that part came after I left Intel, so I don't know any details. I think Optane memory persistence was a definite opportunity for innovation, but we agree that Intel's decision to make it exclusive to their own CPUs turned off the major software providers (well, at least made them very hesitant to invest), and pissed off a lot of the open source community (especially Linux). Oracle, uniquely to my knowledge, made a big investment in Optane memory development for Exadata, and got burned.
When we told Intel the price had to undercut DRAM because the performance was poor and persistence was of little interest, my impression was that they thought we were lying and that it was just a negotiation tactic. They just kept going, and, just as predicted, it did not sell.

Then, the inability to attach their DIMMs to anything other than a Xeon made vendor lock-in a flashing red problem. It did not help that Optane sucked bandwidth away from the DRAM you needed to support it, and the first version had problems with mixed reads and writes that dragged performance way down, which we could have found and fixed if they had not been so secretive before releasing it. Not Intel's finest effort.

I still believe the tech could have won coming in at one third the cost of DRAM (about 20x the cost of NAND) if they had adopted CCIX (which already worked like CXL.mem) for open attach to a larger market, and worked openly with customers to try out the use cases. The manufacturing cost looked like that price would be OK, and nothing motivates customers like a price cut on their most expensive system component (memory).
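
As a quick back-of-envelope check of that price point, here is a small sketch; the $/GB figures are illustrative assumptions, not historical prices:

# Rough sanity check of the "one third of DRAM, about 20x NAND" pricing claim.
# The $/GB numbers below are illustrative assumptions, not real market data.

dram_per_gb = 6.00   # assumed DRAM price, $/GB
nand_per_gb = 0.10   # assumed NAND price, $/GB

optane_target = dram_per_gb / 3          # "one third the cost of DRAM"
print(f"Target Optane price: ${optane_target:.2f}/GB")
print(f"Multiple of assumed NAND price: {optane_target / nand_per_gb:.0f}x")

# With these assumptions the target lands at $2.00/GB, i.e. about 20x NAND,
# which matches the post's framing: the pitch is a price cut on the most
# expensive system component, not persistence.
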
CCIX was always a non-starter for Intel. Peer-to-peer open cache coherence was/is poisonous to them.
How is this relevant to their future? Well, that Optane-style inward-looking product development culture will not work well for success in the IFS market. The IDM model's weakness is how inward-looking it is: Xeons and fabs locked in an internal embrace which is stale and brittle. TSMC shows how much more vibrant the pure-play foundry model is, and Intel will need to split to really ensure they shift their culture.
I think IFS and Intel's x86 architectural insularity are orthogonal. Well, they had better be, or IFS is doomed. I have also thought for a long time that Intel's everything-looks-like-an-x86-CPU-application attitude must change, or it will put Intel on IBM's path. The problem, IMO, is that Gelsinger is one of the high priests of that attitude.
 
Mr. Blue. Please confirm. Was this the problem? Sounds a little like Bangalore (and woke) Instruments.
Not in my opinion. I was not employed by Intel at the time, but I had many, many friends at the grade 10 and 11 level who seemed to be the primary targets of the RIF. The informal narrative was that Intel's workforce had become top-heavy over the years, with too many so-called leaders and not enough doers. My view: I agree with the qualitative assessment Intel execs made, and though the implementation of the RIF was visibly clumsy and political, there was a legitimate problem. I don't think H-1B hiring, off-shoring, wokeness, or other conspiracy theories had anything to do with it. G10-G11 people at Intel were, and still are, quite expensive to have around, and IMO many people got into those positions more by managing upward well than by the intrinsic value they added.
 
Can somebody explain the largest contributors to this process disaster?

Too aggressive on design rules?
Difficulty dealing with rounded corners?
Contact over poly above the active layer?
Not taking baby steps (Mr. Ng once recommended this over a year ago)?
Trying to push the limits of DUV?
The officially stated reason seems to have always been that the targets were too aggressive and that SAQP was hard.

As a side note, I would love it if at IEDM or somewhere we could get all of the gory details in a presentation rather than only the high-level one-liner reasons we usually get from these firms. Once 10nm inevitably gets deramped I would enjoy seeing a no-punches-pulled 10nm postmortem, from process definition all the way to ICL's or TGL's launch. I would also love to see one for the 7LPP family and N3 (unsurprisingly I would be even more curious to see those, since I can only infer what went wrong from reverse-engineering the publicly known info for those nodes).
 
I'm sure we'll never get any public admission of *exactly* what went wrong; it would be too embarrassing for Intel and possibly helpful to IFS competitors -- the most we're likely to get is the current admission of "too hard"... ;-)

There were rumours about various problems with the process apart from pushing SAQP too hard -- "too narrow process window", like TSMC N3==>N3E -- including problems with the new cobalt interconnect/vias like bad yield and poor reliability over temperature cycling, but I doubt that Intel will ever confirm or deny this. I can't remember whether this process had COAG (Contact Over Active Gate) or not; that's another possible yield nightmare which can also cause degraded reliability at the high voltages where we know Intel go to maximize short-term clock rates.

The risk when you try and do too many new things all at the same time is not just that one (or more) of them goes wrong, but that they interact with each other especially over process variation so you get lots with a complete yield collapse, and that this is sensitive to the design/layout of each product especially if you're pushing the rules hard -- and you don't find this out until you get multiple designs into volume production.

It's why TSMC tend to spread major process changes over generations (e.g. half-nodes, or even more process iterations at a given nominal node), and because they also have a much bigger number of designs (including from multiple customers) going through the fab it gives them more opportunity to find such problems and less chance of them being catastrophic.
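
To make the compounding concrete, here is a minimal toy yield model; the defect densities and die area are made-up numbers, not data for any real node:

import math

def poisson_yield(defect_density: float, die_area_cm2: float) -> float:
    """Classic Poisson limited-yield approximation: Y = exp(-D * A)."""
    return math.exp(-defect_density * die_area_cm2)

die_area = 1.0  # cm^2, assumed large client die

# One new process module at a modest defect density...
one_module = poisson_yield(0.10, die_area)

# ...versus five immature modules introduced in the same node at once
# (think SAQP, a new interconnect metal, COAG, etc. -- numbers invented).
five_modules = 1.0
for d in (0.10, 0.08, 0.12, 0.09, 0.11):
    five_modules *= poisson_yield(d, die_area)

print(f"One new module:   {one_module:.1%}")    # ~90%
print(f"Five new modules: {five_modules:.1%}")  # ~61%

# Per-module yields multiply, so each extra immature module compounds the
# loss -- and this ignores the cross-interactions over process variation
# described above, which make the real picture worse.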
 
Great insight Mr. Ng and Ian.

DRMs provide choices (gate pitches, for example). We have been picking which rules to follow based on:

1) Ease of routing automation.
2) Spacing based on voltage trade-offs (1.2 vs 1.5 vs 1.8) -- see the sketch below.

We never considered yield and reliability. We just assumed that following the LOGIC rules meant good reliability and good yields. We also assumed that SRAMs and the aggressive standard cells were the ones pushing the design rules hardest for yield, at least on the lower levels.
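
Here is a minimal sketch of that rule-picking habit; the voltage bins and spacing values are invented placeholders, not any foundry's actual DRM numbers:

# Hypothetical voltage-binned metal spacing table of the kind a DRM encodes.
# All values are placeholders for illustration only.
MIN_SPACING_NM = {1.2: 40, 1.5: 48, 1.8: 56}   # net voltage (V) -> min spacing (nm)

def min_spacing_for(net_voltage: float) -> int:
    """Pick the spacing rule for the smallest table voltage that covers the net."""
    for v in sorted(MIN_SPACING_NM):
        if net_voltage <= v:
            return MIN_SPACING_NM[v]
    raise ValueError(f"No spacing rule covers {net_voltage} V")

print(min_spacing_for(1.2))   # 40 -- tightest rule
print(min_spacing_for(1.65))  # 56 -- bumped up to the 1.8 V rule

# Note this captures only the routing/voltage trade-off described above;
# it says nothing about yield or long-term reliability margins.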

this is sensitive to the design/layout of each product especially if you're pushing the rules hard -- and you don't find this out until you get multiple designs into volume production...

This and the voltage-related issues mentioned in your comments above are huge statements. Would you speculate on the following:

When a DRM provides a choice of distinct gate pitches, is the tightest pitch more risky for yield and long-term reliability?

Is staggering the vias rather than stacking them a good idea?

Are the majority of the yield issues due to the via extension rules and the spacing of two same-layer metals approaching each other? I ask this because of the dog-ear patches of the past (present?).

Are there additions the EDA companies should make to enhance yields rather than dumping the problem onto the post-GDS processors? Basically OPC vs. being conservative, or doing both? Are the voltage tags on wires strictly for arcing, or also for the OPC people?

Thank you. Your insights go beyond the manual.
 
Following the rules -- including voltage-dependent spacings and gate pitches -- should ensure a reliable and high-yielding design; that's the entire point of them. But this assumes that the foundry has spotted all the possible "gotchas" in the layout, which often involves trying out a huge number of test layouts, and the more different layouts that go through the better -- especially any "abnormal" layouts which meet the rules but don't look like conventional digital cells, e.g. in high-speed analog circuits.

Don't forget that what you get on silicon nowadays looks nothing like the nice uniform rectangles on the computer screen; huge amounts of OPC are applied to get anything to work at all, and this is part of the process optimization which is invisible to the end user and can change over time as the foundry improves the process.

On top of the rules you *must* meet for a tapeout to be accepted, there are also DFM rules which are only "recommended", but these can often have side effects like decreasing density, so it's down to the customer to decide which ones to follow.
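
As a minimal sketch of that mandatory-vs-recommended split (the rule names and values are hypothetical, not any real rule deck):

from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    min_value_nm: float
    mandatory: bool   # True = hard design rule (blocks tapeout), False = DFM recommendation

# Hypothetical rule deck for illustration only.
RULES = [
    Rule("M1.SPACE",       40, mandatory=True),
    Rule("VIA1.ENCLOSURE",  5, mandatory=True),
    Rule("DFM.M1.SPACE",   48, mandatory=False),  # relaxed spacing recommended for yield
]

def check(measured_nm: dict) -> None:
    """Report hard violations separately from DFM recommendations."""
    for rule in RULES:
        value = measured_nm.get(rule.name)
        if value is None or value >= rule.min_value_nm:
            continue
        if rule.mandatory:
            print(f"VIOLATION {rule.name}: {value} < {rule.min_value_nm} nm (must fix before tapeout)")
        else:
            print(f"warning   {rule.name}: {value} < {rule.min_value_nm} nm (DFM recommendation, density trade-off)")

# A layout that meets the hard rules but ignores the DFM recommendation.
check({"M1.SPACE": 40, "VIA1.ENCLOSURE": 5, "DFM.M1.SPACE": 40})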
 
It took us 2 additional years to tighten the pitch at 16nm and 14nm. We needed to handle density rules while floorplanning, and we ended up relaxing the pitch a little for our analog. The double-patterned FinFET processes are tough. Rules change based on density, which is not intuitive. Perhaps that is why 28nm fabs are still being built?
 
Intel didn't miss mobile. Intel worked hard in mobile before mobile was cool, and Intel tried VERY hard at mobile.
No. I worked on mobile from the late 90s and Intel was nowhere. They had the XScale processors in a few organizers like the HP ones (I've still got a couple of those dev kits somewhere), but those missed the point for phones, which were outselling organizers 10x, and Intel had nothing the phone makers were interested in at that point. They sold off XScale (I think the last gasp of those at Intel was for routers, not mobile, and IIRC the team had moved on to Apple anyway). By the time they woke up and tried to do modems it was desperately late, they had no suitable process, and they tried to pair the modem with Atom because they had lost their ARM mojo. Sad.

Phones were cool and inevitable long before that -- long before the iPhone, which captured the lead, but there would have been smartphones without Apple. Apple redefined the UX, but everything else already existed. That is how Android took off so fast: they just repurposed the already dominant feature-phone Linux hardware and slapped a graphical UX on top.

It is not like motorcycles vs. cars. It is like cars vs. trucks.
 
I think you need to look at what Intel was doing internally: great ideas and vision, but taking 2-3 times as long as others to execute killed them. There are great examples. Mobile voice phones and modems are not where the money was; smartphones and mobile computing are. Freescale/Motorola saw the decline and the end it deserved. Nothing Intel did was related to this. Intel had tons of internal projects and plans. It's about mobile processors, not modems.
 
All I remember were x86 projects, like Moorestown. Is that one of the great projects you're referring to? Still an Atom with an FSB architecture on an N-1 process, if my memory is still working. I think this link says it is.

 
You are missing all the Arm products: processors, early handhelds, mobile computers, etc. Intel spent billions annually on Arm products and mobile products with every major customer. Intel tried, it just failed. Lots of roadmaps and details. Intel had thousands of people working on Arm-based mobile products.
 
Thousands of people? When? Where? Are you thinking of the StrongArm acquisition from DEC in the 1990s?
 
StrongARM and the other acquisitions became XScale. I worked extensively in this area at Intel: Santa Clara, Folsom, Austin, New Mexico, Oregon, and of course the sales teams by geo.
 
I remember XScale, though mostly from the IXP products. I'm still not convinced Intel ever had thousands of people and billions of dollars of spending on those programs. When Intel sold XScale to Marvell in 2006, apparently about 1,400 Intel people moved to Marvell in that transaction, according to some sources I just looked at.

At the time this was going on, 2004-2006, Intel was in a world of hurt. AMD was taking market share in clients and servers by being first to market with x86-64 and by eliminating northbridges (integrating the memory controllers). Intel didn't get their act together until Nehalem in 2008, which tilted the advantage back to Intel. So I'm not surprised that Intel doubled down on x86 and divested itself of "distractions", because that's what Intel has always done, and they're still doing it.
 