> Right up Intel's alley, but RF stuff is not an Intel strong point.

Intel acquired a fantastic group (Motorola => Freescale => Fujitsu). The opportunity slipped away. Too bad.
> Those "mistakes" are all focusing on products, and ignoring the massive elephant in the room which was the multi-year 10nm process disaster...

Can somebody explain the largest contributors to this process disaster?
> Intel certainly has the technology, but they lacked the outbound foundry experience and ecosystem.

Logistics is more difficult than fighting physics? Interesting.
> They thought replacing their most experienced engineers with H1Bs and PhDs would shake things up and send the right message, a real bean counter/HR sicko mentality running the show.

Mr. Blue, please confirm. Was this the problem? Sounds a little like Bangalore (and woke) Instruments.
Here is another historical milestone that is typically forgotten: Intel used to license the Atom core to TSMC, to compete with the ARM cores that were emerging in 2009.
Intel outsourcing some Atom manufacturing to TSMC -- "The agreement illustrates the importance of Atom to Intel." (www.oregonlive.com)
> Intel simply had no clue what the market might be. And no concept of the real value. They knew it could not beat NAND on price, but they did not understand it could not match DRAM value because of perf. They thought it would sell like hotcakes because of persistence, at a price higher than DRAM, but the reality is that DRAM is persistent with the power on and durable only when replicated to multiple separate systems, and software that worked like that was mature, so the only folks interested in persistent main memory were running obscure experiments.

I think the engineering people knew what Optane would be good for; the product management and product marketing people IMO did not position it well, but that part came after I left Intel, so I don't know any details. I think Optane memory persistence was a definite opportunity for innovation, but we agree that Intel's decision to make it exclusive to their own CPUs turned off the major software providers (well, at least made them very hesitant to invest), and pissed off a lot of the open source community (especially Linux). Oracle, uniquely to my knowledge, made a big investment in Optane memory development for Exadata, and got burned.
> When we told Intel the price had to undercut DRAM, because the perf was poor and persistence of little interest, my impression was they thought we were lying and it was just a negotiation tactic. They just kept going, and just as predicted it did not sell.
> Then, the inability to attach their DIMMs to anything other than a Xeon made vendor lock-in a flashing red problem. It did not help that Optane sucked bandwidth away from the DRAM you needed to support it, and the first version had problems with mixing reads and writes that dragged perf way down, which we could have found and fixed if they were not so secretive before releasing it. Not Intel's finest effort.
> I still believe the tech could have won, coming in at 1/3rd the cost of DRAM (about 20x the cost of NAND), if they had adopted CCIX (which already worked like CXL.mem) for open attach to a larger market, and worked openly with customers to try out the use cases. The manufacturing looked like that price would be OK, and there is nothing that motivates customers like a price cut on their most expensive system component (memory).

CCIX was always a non-starter for Intel. Peer-to-peer open cache coherence was, and is, poisonous to them.
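To make the "persistent main memory" discussion above concrete, here is a minimal sketch of the programming model in Python. It uses an ordinary mmap'd file as a stand-in for pmem (an assumption made for portability; real Optane deployments used DAX mappings and cache-line flush instructions, e.g. via the PMDK libraries, rather than msync). The file name is invented for the example.

```python
import mmap
import os
import struct

PATH = "pmem_demo.bin"  # stand-in; real pmem would live on a DAX-mounted filesystem

# Create and size the backing file once (pretend this is persistent memory).
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.truncate(4096)

fd = os.open(PATH, os.O_RDWR)
buf = mmap.mmap(fd, 4096)

# Load-store access: update a counter in place, just like ordinary DRAM...
count = struct.unpack_from("<Q", buf, 0)[0]
struct.pack_into("<Q", buf, 0, count + 1)

# ...but durability requires an explicit flush. On real pmem this would be a
# cache-line write-back plus fence; here msync (via flush) is the portable analogue.
buf.flush()

buf.close()
os.close(fd)
```

The point the thread makes is visible in the last step: data is durable only after the explicit flush, and software stacks had to be restructured around that boundary, which is exactly the investment the major software providers hesitated to make.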
> How is this relevant to their future? Well, that Optane-style inward-looking product development culture will not work well for success in the IFS market. The IDM model's weakness is how inward-looking it is: Xeons and fabs locked in an internal embrace that is stale and brittle. TSMC shows how much more vibrant the foundry model is, and Intel will need to split to really ensure they shift their culture.

I think IFS and Intel's x86 architectural insularity are orthogonal. Well, they had better be, or IFS is doomed. I have also thought for a long time that Intel's everything-looks-like-an-x86-CPU-application attitude must change, or it will put Intel on IBM's path. The problem, IMO, is that Gelsinger is one of the high priests of that attitude.
> Mr. Blue, please confirm. Was this the problem? Sounds a little like Bangalore (and woke) Instruments.

Not in my opinion. I was not employed by Intel at the time, but I had many, many friends at the grade 10 and 11 level who seemed to be primary targets of the RIF. The informal narrative was that Intel's workforce had become top-heavy over the years, with too many so-called leaders and not enough doers. My view: I agree with this qualitative assessment Intel execs made, and though the implementation of the RIF was visibly clumsy and political, there was a legitimate problem. I don't think H1B hiring, off-shoring, wokeness, or other conspiracy theories had anything to do with it. G10-G11 people at Intel were, and still are, quite expensive to have around, and IMO many people got into those positions more by managing upward well than by the intrinsic value they added.
> Can somebody explain the largest contributors to this process disaster?
> Too aggressive on design rules?
> Difficulty dealing with rounded corners?
> Contact over poly above the active layer?
> Not taking baby steps (Mr. Ng once recommended this over a year ago)?
> Trying to push the limits of DUV?

The officially stated reason seems to have always been that the targets were too aggressive and that SAQP was hard.
As a side note, I would love it if, at IEDM or somewhere similar, we could get all of the gory details in a presentation rather than only the high-level one-liner reasons we usually get from these firms. Once 10nm is inevitably deramped, I would enjoy seeing a no-punches-pulled 10nm postmortem, from process definition all the way to ICL's or TGL's launch. I would also love to see one for the 7LPP family and for N3 (unsurprisingly, I would be even more curious to see those, since I can only infer what went wrong from reverse-engineering the publicly known info for those nodes).
> ...this is sensitive to the design/layout of each product, especially if you're pushing the rules hard -- and you don't find this out until you get multiple designs into volume production...
> Following the rules -- including voltage-dependent spacings and gate pitches -- should ensure a reliable and high-yielding design; that's the entire point of them. But this assumes that the foundry has spotted all the possible "gotchas" in the layout, which often involves trying out a huge number of test layouts, and the more different layouts that go through the better -- especially any "abnormal" layouts which meet the rules but don't look like conventional digital cells, e.g. in high-speed analog circuits.

Great insight, Mr. Ng and Ian.
DRMs provide choices (gate pitches, for example). We have been picking which rules to follow based on:
1) Ease of routing automation.
2) Spacing based on voltage tradeoffs (1.2 V vs. 1.5 V vs. 1.8 V).
We never considered yield and reliability. We just assumed that following the LOGIC rules meant good reliability and good yields. We also assumed that SRAMs and the aggressive standard cells were the ones pushing the design-rule yield limits, at least on the lower levels.
This, and the voltage-related issues mentioned in your comments above, are huge statements. Would you speculate on the following:
When a DRM provides a choice of distinct gate pitches, is the tightest pitch the riskier choice for yield and long-term reliability?
Is staggering the vias, rather than stacking them, a good idea?
Are the majority of the yield issues driven by the via-extension rules and by the spacing of two wires on the same metal layer approaching each other? I ask because of the dog-ear patches of the past (present?).
Are there additions the EDA companies should make to enhance yields, rather than dumping the problem onto the post-GDS processing? Basically, OPC vs. being conservative, or doing both? And are the voltage tags on wires strictly for arcing, or also for the OPC engineers?
Thank you. Your insights go beyond the manual.
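As a toy illustration of what "spacing based on voltage tradeoffs" means in the exchange above, here is a sketch of a DRC-style voltage-dependent spacing check. The rule table and numbers are entirely invented for illustration; real values come from the foundry's DRM and vary by layer and node.

```python
# Toy DRC-style check of voltage-dependent same-layer metal spacing.
# Rule table is hypothetical: minimum spacing (nm) keyed by the maximum
# operating voltage of the nets involved. Higher voltage -> wider spacing.
SPACING_RULES_NM = {
    1.2: 40,
    1.5: 48,
    1.8: 60,
}

def min_spacing(v_a: float, v_b: float) -> int:
    """Spacing for a pair of nets is set by the higher-voltage net."""
    v = max(v_a, v_b)
    # Pick the smallest rule voltage that still covers v.
    for rule_v in sorted(SPACING_RULES_NM):
        if v <= rule_v:
            return SPACING_RULES_NM[rule_v]
    raise ValueError(f"no spacing rule covers {v} V")

def check_pair(gap_nm: int, v_a: float, v_b: float) -> bool:
    """True if the drawn gap satisfies the voltage-dependent rule."""
    return gap_nm >= min_spacing(v_a, v_b)

# Two 1.2 V nets 45 nm apart pass; put a 1.8 V net next door and they fail.
print(check_pair(45, 1.2, 1.2))  # True
print(check_pair(45, 1.2, 1.8))  # False
```

This is the tradeoff the poster describes: routing the 1.2 V rules everywhere is denser and easier to automate, but the moment a higher-voltage net shares the layer, the wider rule applies, and picking the tighter option everywhere is exactly the kind of choice made without knowing its yield and reliability cost.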
> Intel didn't miss mobile. Intel worked hard in mobile before mobile was cool, and Intel tried VERY hard at mobile.

No. I worked on mobile from the late '90s and Intel was nowhere. They had the Freescale processors in a few organizers, like the HP ones (I've still got a couple of those dev kits somewhere), but those missed the point for phones, which were outselling organizers 10x, and Intel had nothing the phone makers were interested in at that point. They sold the Freescale line (I think the last gasp of those at Intel was for routers, not mobile, and IIRC the Freescale team had moved to Apple anyway). By the time they woke up and tried to do modems it was desperately late, they had no suitable process, and they tried to pair the modem with Atom because they had lost their ARM mojo. Sad.
> No. I worked on mobile from the late '90s and Intel was nowhere. [...]

I think you need to look at what Intel was doing internally: great ideas and vision, but taking 2-3 times as long as others to execute killed them. There are great examples. Mobile voice phones and modems are not where the money was at; smartphones and mobile computing were. Freescale/Motorola saw the decline and end it deserved; nothing Intel did was related to this. Intel had tons of internal projects and plans. It's about mobile processors, not modems.
Phones were cool and inevitable long before that. Long before the iPhone, which captured the lead, but there would have been smartphones without Apple. Apple redefined the UX, but everything else already existed. That is how Android took off so fast: they just repurposed the already dominant feature-phone Linux hardware and slapped a graphical UX on top.
It is not like motorcycles vs. cars. It is like cars vs. trucks.
> I think you need to look at what Intel was doing internally: great ideas and vision [...]

All I remember were x86 projects, like Moorestown. Is that one of the great projects you're referring to? Still an Atom with an FSB architecture on an N-1 process, if my memory is still working. I think this link says it is.
> All I remember were x86 projects, like Moorestown. [...]

You are missing all the Arm products: processors, early handhelds, mobile computers, etc. Intel spent billions annually on Arm products and mobile products with every major customer. Intel tried, it just failed. There were lots of roadmaps and details. Intel had thousands of people working on Arm-based mobile products.
> You are missing all the Arm products: processors, early handhelds, mobile computers, etc. [...]

Thousands of people? When? Where? Are you thinking of the StrongARM acquisition from DEC in the 1990s?
> StrongARM and the other acquisitions became XScale. I worked extensively in this area at Intel: Santa Clara, Folsom, Austin, New Mexico, Oregon, and of course the sales teams by geo.

I remember XScale, though mostly from the IXP products. I'm still not convinced Intel ever had thousands of people and billions of dollars of spending on those programs. When Intel sold XScale to Marvell in 2006, apparently there were 1,400 Intel people sent to Marvell in that transaction, according to some sources I just looked at.