Intel reducing staff by 15% and dividend suspended

I think AMD shed 10% of their employees (announced in 2011 under Rory) after having spun out the GlobalFoundries business in 2009, another 7% in 2014 (Lisa), and 5% in 2015, until they turned the corner with Zen in 2016. The nearest near-death experience was mid-2015, when the stock price was under $2. These ships turn slowly.
 

Is Intel too big to turn around at this moment?
 
The one blunder Intel made that no one has mentioned in this entire thread is not adopting EUV soon enough. To me this seems to be the most important miss, the one that led to many other issues. This led to all the debacles with completing their 10nm node in the middle of the last decade and the wall of not being able to go past 10nm. This in turn led to uncompetitive products, allowed AMD to have a better node and a better product for the first time, and allowed TSMC to leap ahead and stay ahead. It allowed any fabless company to have better nodes than Intel (AMD, Apple, QCOM, Nvidia, etc.).

Luckily, Pat has corrected this problem and Intel now has competitive nodes again. The Intel 3 node is already used in their latest Xeon data center chips. Reports I've seen suggest Intel 3 is lower in density than TSMC N3, but performance is as good if not better. Intel 18A is ahead of schedule, currently improving yield, and appears it will be a legitimate competitor for TSMC N2. I think once 18A is mature, it will allow Intel products to be competitive again and allow for a new revenue stream from foundry. Things can turn quickly. If Nvidia, Apple, AMD, or QCOM were to announce a big 18A deal, it would instantly change the mood and view of Intel. Zero to hero. Intel has estimated (for what it's worth) that Foundry will break even in 2027. I think Intel will hang on and turn itself around. It will be a tough 2-3 years, but they will do it.

Intel also appears not to be making this mistake again, buying up all of ASML's high-NA EUV tools while TSMC has said no thanks to high NA. Finally, Intel advanced packaging is impressive. Packaging is just as important these days as process technology and nodes. Foundry will score some packaging deals too (rumors are rampant about an Nvidia packaging deal).
 
Intel also needs to improve its messaging and marketing. They should ditch the famous four-chime audio jingle. It's 30 years old and gives the impression Intel is clinging to its glory days. Every time Pat speaks he mentions Andy Grove, Gordon Moore, and Moore's Law. This also gives the impression Intel is living in the past. Intel should stop mentioning x86. Yes, they are x86, but who cares. It doesn't matter if your product is x86, Arm, or RISC-V; if your product performs, no one cares. Show me the performance. I think the world has been led to believe Arm is fundamentally better in efficiency, but this is not really true. There is no fundamental reason why it would be true. It's likely due to Intel not being in mobile, Intel having inferior nodes for the last decade, and MacBooks, which use Arm but also have on-package memory and are designed solely for Apple. Intel should also improve its product launches and stage presence at big events. They are often clunky and feel old. The music is odd, the images are odd, the feel is odd. They should learn from Apple and Nvidia how to do this. Of course having great products is a must and more important, but this can also help.
 
You're correct. I didn't bring up the 10nm node failure because it isn't a "past" problem in my view; it is the root of the current fab process mess. Intel execs made the wrong decisions and fell behind. My point was that Intel has survived several missteps in the past, so I'm still hopeful. Actually, Intel arguably lost chip design leadership to AMD and others years ago, and it was only strong fab process leadership that kept the company competitive. Without process leadership the foxes are in the henhouse.
 
Blueone, totally agree. It's still a problem, but luckily one that has light at the end of the tunnel.
 
The one blunder Intel made that no one has mentioned in this entire thread is not adopting EUV soon enough. To me this seems to be the most important miss, the one that led to many other issues.
Okay, so Intel 10nm still doesn't come out until 2019/2020. Other than improving the cost structure for 10nm, EUV does NOTHING to solve Intel losing the process lead. If anything it guarantees the loss of the process lead. The main benefit is that maybe BK and BS would have finally built new fab shells to accommodate EUV scanners. But also maybe not; God forbid Intel builds a new shell after 2008. Much easier to just put your hands over your ears and say "lalala, I don't hear you! There won't be any capacity shortfall on 14nm or 10nm. Lalala."
This led to all the debacles with completing their 10nm node in the middle of the last decade and the wall of not being able to go past 10nm. This in turn led to uncompetitive products, allowed AMD to have a better node and a better product for the first time, and allowed TSMC to leap ahead and stay ahead. It allowed any fabless company to have better nodes than Intel (AMD, Apple, QCOM, Nvidia, etc.).
Is that why CCG has worse margins on TSMC-made parts, while having a node advantage over AMD, than back when they were stuck a node behind AMD? The problems at Intel run far deeper than the process node. The margin stacking, the process lead, and the resulting cost-per-FET advantage just covered up these issues. Pat said as much in an interview from maybe a year ago.
Luckily, Pat has corrected this problem and Intel now has competitive nodes again. The Intel 3 node is already used in their latest Xeon data center chips. Reports I've seen suggest Intel 3 is lower in density than TSMC N3, but performance is as good if not better. Intel 18A is ahead of schedule, currently improving yield, and appears it will be a legitimate competitor for TSMC N2. I think once 18A is mature, it will allow Intel products to be competitive again and allow for a new revenue stream from foundry. Things can turn quickly. If Nvidia, Apple, AMD, or QCOM were to announce a big 18A deal, it would instantly change the mood and view of Intel. Zero to hero. Intel has estimated (for what it's worth) that Foundry will break even in 2027. I think Intel will hang on and turn itself around. It will be a tough 2-3 years, but they will do it.
I have two big worries. The smaller of the two is filling 18A and Intel 3 fabs in the 10A and 8A timeframes as Intel products presumably move on to newer nodes. Since Intel is clearly positioning Intel 3 as a base die and chipset technology, that should be fine. 18A is the one I am more worried about, since Intel's SoC and CPU dies can't stay on 18A forever. Maybe 18A external foundry will be a "late bloomer" that fills Intel's 18A fabs once Intel proves out 18A and themselves, but there is no way to say for sure. The larger of the two issues I am worried about is ROIC on advanced node R&D. With all of Intel's products becoming disaggregated, the collapse of DCAI's competitive moat, and CCG buckling, the demand for advanced nodes is much lower than it was in the 14/10nm days and might even be lower than the days of unquestioned dominance. I feel this is reflected in the scale Intel is building out on their new nodes being smaller than what Intel would run historically (adjusting for increasing process flow lengths as nodes become more advanced), meaning that advanced node R&D is spread out over fewer wafers and takes longer to justify.
TSMC has said no thanks to high NA.
They haven't, though. They got the second unit like a quarter after Intel got theirs. Also, charging blindly ahead is no recipe for success; see Samsung 7LPP coming out later than N7. EUV scanner throughput was low, and supposedly that is what caused SF7 to be vaporware until 2019. Meanwhile TSMC only did a very limited EUV insertion on N7+/N6 in 2019/2020. Either way it is way too early to call Intel's 14A high-NA use good or bad, especially without knowing exactly how much (if any) and where 14A and A14 will implement high-NA.
Finally, Intel advanced packaging is impressive. Packaging is just as important these days as process technology and nodes. Foundry will score some packaging deals too (rumors are rampant about an Nvidia packaging deal).
My opinion is that people over-promise on packaging. For DC I think it deserves all the hype, if for no other reason than it lets you go beyond the litho field size and further condense the rack for better power efficiency. But when we are talking small-die, low-ASP products, you are adding packaging cost, test time, and power without really helping chip yield very much (assuming defect density isn't horrible), and the cost per function isn't really improved either. I can only really see it being a thing to maybe do an iPhone with N2 cores and L2 cache/GPU over an N5 base die with the LPDDR PHYs, SoC-level cache, USB, NPU, etc. Even then I am not totally convinced that this would be much cheaper than a monolithic chip. If SRAM cost per bit does start reducing (even a little) again, then I think even that idea falls apart from a manufacturing cost perspective. The only savings would be IP reuse (which I admit I might be undervaluing as a fab person, since I don't have much of a concept for how expensive that is relative to saving a few bucks on an SoC that ships in the hundreds of millions of units).
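To put rough numbers behind the small-die point, here is a toy Poisson yield model in Python; the die areas, chiplet count, and defect density are illustrative assumptions of mine, not figures from this thread.

import math

def silicon_per_good_set(total_area_cm2, n_chiplets, d0_per_cm2):
    """Expected silicon area started per complete set of good die, using the
    simple Poisson yield model Y = exp(-A * D0) and assuming chiplets are
    tested individually before packaging (a known-good-die flow)."""
    chiplet_area = total_area_cm2 / n_chiplets
    chiplet_yield = math.exp(-chiplet_area * d0_per_cm2)
    return total_area_cm2 / chiplet_yield

D0 = 0.1  # defects per cm^2, an assumed fairly healthy defect density

for area, label in [(6.0, "large server-class die"), (1.0, "small mobile-class die")]:
    mono = silicon_per_good_set(area, 1, D0)
    split = silicon_per_good_set(area, 4, D0)
    saved = (1 - split / mono) * 100
    print(f"{label}: monolithic {mono:.2f} cm^2 vs 4 chiplets {split:.2f} cm^2 "
          f"per good set ({saved:.0f}% silicon saved, before packaging/test overhead)")

Under those assumptions the large die saves roughly a third of its silicon by disaggregating, while the small die saves only single-digit percent, which the extra packaging cost, test time, and power mentioned above can easily eat.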
 
If the cost structure for Intel 7 were better, it would have been fine, but that's not the case, and it's hurting them; low Intel 7 utilization along with the cost is just a recipe for losses.
 
Don't get me wrong, things would be better for Intel. My point was that Intel 7 utilization would still be low, Intel 7 products would still have been forced to compete with N5 products, Intel would have been stuck on 14nm until 2019 at the earliest, outsourcing to TSMC would still have happened, and Intel would have been forced into an expensive Intel 4/3 and 20A/18A development and rollout acceleration.

It is also a hindsight-is-20/20 sort of thing. If Intel had known ahead of time with 100% certainty that SAQP would take so long to become robust that they might as well have just waited for EUV and gotten a better cost structure, then they might well have made the choice to go the EUV route, assuming BK wasn't a complete and utter moron (which is something I am not completely convinced of).

What I think is a much stronger argument is that the definition and risk management for 10nm were flawed from the beginning, even with the information Intel knew at the time. The M0 pitch is 40nm and the M1 pitch is 36nm. If instead of a 1.5 M1:CPP gearing Intel had gone for a 1:1 gearing (like they did on Intel 4/3), they should have been able to do everything with battle-tested SADP (no risky and expensive SAQP, and no rolling the dice on when EUV would finally be ready). This would have had zero impact on the size of the standard cells; it would just have given designers less metal to work with and meant popping up to M2 more often.

Speaking of which, they should have done vias on a grid. Having infinite possible via layouts when pushing as hard as you are is not worth the risk it introduces for the limited benefit. Intel came to this realization on Intel 4, but I don't think it is a stretch to say Intel should have known better back around 2012, when Intel would have been starting to conceive what 10nm "should be". Reading Intel's white papers on the topic and Q&As with the press gives me the vibe that the root of 10nm's problems was that LTD seemed to think they needed to give designers every possible knob in the service of "hand tuning", the "IDM advantage", squeezing out every last bit of unused area, etc. Meanwhile the foundries lock their processes down, but say that if you stay in their sandbox the transistors will perform and yield as promised. Even if you are a pure-play IDM (i.e. no foundry business) like Intel was or TI is, DFM and locked-down chip design are clearly the way to go. It's just a shame that Intel in that period seemed to disagree. If there were a hypothetical 10nm with a 54nm MMP M1 and locked down with DFM design rules, I lean towards 10nm still missing 2016. But 2017 seems very plausible, and 2018 a shoo-in.
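A back-of-envelope Python sketch of that gearing argument follows. The 54nm CPP and 36nm M1 pitch are the Intel 10nm numbers discussed above; the 193i patterning pitch floors are ballpark assumptions of mine, not figures from this thread.

# Intel 10nm per the post: 36nm M1 at a 1.5 M1:CPP gearing implies a 54nm CPP;
# a hypothetical 1:1 gearing would have put M1 at 54nm as well.
CPP = 54                    # nm, contacted poly pitch
m1_as_built = CPP / 1.5     # 36nm, what Intel actually did
m1_one_to_one = CPP         # 54nm, the hypothetical 1:1 gearing

# Assumed ballpark minimum pitches for 193i patterning schemes (not from the post).
SINGLE_EXPOSURE_MIN = 80    # nm
SADP_MIN = 40               # nm
SAQP_MIN = 20               # nm

def least_aggressive_scheme(pitch_nm):
    """Return the least aggressive 193i scheme that can resolve a given pitch."""
    if pitch_nm >= SINGLE_EXPOSURE_MIN:
        return "single exposure"
    if pitch_nm >= SADP_MIN:
        return "SADP"
    if pitch_nm >= SAQP_MIN:
        return "SAQP (or EUV)"
    return "EUV only"

print(f"M1 at {m1_as_built:.0f}nm -> {least_aggressive_scheme(m1_as_built)}")      # SAQP (or EUV)
print(f"M1 at {m1_one_to_one:.0f}nm -> {least_aggressive_scheme(m1_one_to_one)}")  # SADP

Under those assumed limits, the as-built 36nm M1 forces SAQP (or waiting on EUV), while the hypothetical 54nm M1 sits comfortably within SADP, which is exactly the trade-off described above.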
 
Thank you very much for your interest in a career at Intel! We have received your direct submission or referral for the following position:

Position: Foundry Engineering Careers – Join Intel Foundry
Job Number: JR0265541
Application Date: 2024 Jul 14

Unfortunately, this particular position has been cancelled. We apologize for the inconvenience.
 
On headcount: Intel doesn't seem to know what its headcount is today (it ranges from 115-130K depending on what is or is not included).

What is today's headcount?
Do you think we will be able to get it back to where it was when Pat started by the end of 2025?
 
Doing some quick and dirty math based on internet sources (sketchy, I know) indicates that much of Intel's problem in being cost competitive isn't the cost of labor, it is the amount of labor. Numbers for 2020 show Intel with 884K wafer starts per month of capacity and roughly 55,000 employees in manufacturing (Intel states ~50% of their workforce is involved in manufacturing). Numbers for TSMC show 2,719K wafer starts per month of capacity and 56,000 employees. This gives TSMC about a 3x advantage in wafer starts per employee.
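For anyone who wants to reproduce the quick-and-dirty math, a minimal Python sketch is below; the capacity and headcount figures are the same rough, internet-sourced 2020 numbers quoted above, not audited data.

# Rough wafer-starts-per-employee comparison using the 2020 figures cited above.
intel_wspm = 884_000        # Intel wafer starts per month (approx.)
intel_mfg_staff = 55_000    # roughly half of Intel's workforce, per Intel (approx.)
tsmc_wspm = 2_719_000       # TSMC wafer starts per month (approx.)
tsmc_mfg_staff = 56_000     # TSMC headcount used above (approx.)

intel_rate = intel_wspm / intel_mfg_staff   # ~16 wafer starts per employee per month
tsmc_rate = tsmc_wspm / tsmc_mfg_staff      # ~49 wafer starts per employee per month

print(f"Intel: {intel_rate:.0f} wafer starts per employee per month")
print(f"TSMC:  {tsmc_rate:.0f} wafer starts per employee per month")
print(f"TSMC advantage: {tsmc_rate / intel_rate:.1f}x")    # ~3.0x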

While there are certain OSHA restrictions that will cause Intel (and US TSMC facilities) to have higher headcount, that doesn't account for a 3x delta. My assumption is that while Intel was maintaining technical leadership through sheer brute force (i.e. headcount), TSMC was developing automated systems that allow them to do more with less. When the wheels fell off the bus at Intel they could no longer afford that level of brute force and are now having to align more closely to the TSMC model. Unfortunately, they haven't spent the last several decades investing in the automation systems that allow TSMC to run this efficiently. It is likely to get uglier before it gets better.
 
@Artificer60, your math is interesting, but I think your proximal cause, lack of automation, is off. My view is that the extra headcount stems from three sources:
* The inherent inefficiency of the IDM model when all your products require near-leading-edge processes to be competitive. Intel, the IDM, had very few ways to continue to monetize their fully depreciated fabs, which were essentially paid for but no longer useful to them. The headcount for a fab is very front-end loaded toward process and IP development plus bring-up, but an operating, yielding fab is highly automated. About half of TSMC's revenue, and I would guess 3/4 or more of their wafer capacity, comes from essentially mature processes and customer designs - very "cheap" when it comes to headcount versus wafer starts. The numbers are quite different for Intel.
* Combine that with scale - I think I did a back-of-envelope calculation that TSMC was doing 1.3x the number of leading-edge wafers that Intel was doing. So TSMC is amortizing the headcount of process and IP development, as well as process bring-up, over substantially more wafers (see the toy amortization sketch after this list).
* Intel's go-it-alone fab equipment, process, IP, and design methodology strategy. The outside foundry ecosystem has far greater scale and offers far better shared-resource efficiencies that Intel the IDM never wanted to avail itself of, for fear of giving up its secrets and competitive edge.
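Here is that toy amortization sketch in Python. The development cost and wafer volumes are invented, purely illustrative numbers; only the roughly 1.3x leading-edge volume ratio comes from the back-of-envelope estimate above.

# Toy amortization of a roughly fixed process-development + bring-up cost over
# lifetime wafer volume. Absolute numbers are invented for illustration only.
def dev_cost_per_wafer(node_dev_cost_usd, wafer_starts_per_month, node_life_months):
    lifetime_wafers = wafer_starts_per_month * node_life_months
    return node_dev_cost_usd / lifetime_wafers

NODE_DEV_COST = 10e9   # assumed $10B to develop and bring up a leading-edge node
NODE_LIFE = 48         # assumed months spent at the leading edge

for label, wspm in [("Intel-scale leading-edge volume (assumed)", 100_000),
                    ("TSMC at ~1.3x that volume", 130_000)]:
    per_wafer = dev_cost_per_wafer(NODE_DEV_COST, wspm, NODE_LIFE)
    print(f"{label}: ~${per_wafer:,.0f} of development cost carried by each wafer")

The front-end cost is roughly fixed either way, so every additional wafer it can be spread over directly lowers the burden per wafer, and that is before counting the mature-node volume TSMC also runs through the same organization.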
 
That is an interesting perspective. If those are the primary issues, then the IDM 2.0 model should address the first two of them if it is successful. Vendors also seem to indicate that Intel is much more willing to engage with them and leverage their learning than they were in the past. Now we'll see if Intel can make progress fast enough to turn things around before the money runs out. I do think Pat Gelsinger was way too optimistic about how quickly the new model could be implemented and customers would adopt a new foundry.
 
As you also noted, the billion-dollar question is whether the fabs will be viable before Intel runs out of money. If not, Intel will be the next Solyndra, and you will have a lot of unemployed fab engineers, or former Intel fab engineers working for $20/hr as technicians or night-shift engineers in the remaining US fabs.
 
I do think Pat Gelsinger was way too optimistic about how quickly the new model could be implemented and customers would adopt a new foundry.
I think Pat was over-optimistic about the speed of implementation, plus he didn't anticipate the huge upsurge in AI chip/system data center spending upending all other data center spending. If you look at the curve, NVIDIA has been on a total hockey-stick tear over the past year and a half, sucking billions from the traditional data center spend on Intel.
 
Intel was once the recipient of one of the luckiest decisions: the choice of x86 for the IBM PC. They don't seem to understand that, nor have they leveraged it.

Call them unlucky or stupid to discount low-power compute that became the foundation for the smartphone explosion. Likewise, call them stupid or unlucky not to realize GPUs represent another unique architecture that would be perfect for AI and ML.

Clearly no company was better positioned in the 90s to invest and get lucky; call that failed leadership and a failed BoD?
 
Intel was once the recipient of one of the luckiest decisions: the choice of x86 for the IBM PC. They don't seem to understand that, nor have they leveraged it.
This isn't correct. Intel leveraged the x86 architecture, I would say over-leveraged it, for 40+ years. When Intel x86 CPUs were competitive (which wasn't always the case), Intel had dominant market share. Even when their CPUs weren't competitive in design, fab processes 1-2 generations ahead of the competition kept them dominant anyway. Intel has made a lot of poor decisions over the years, but failing to leverage x86 wasn't one of them.
Call them unlucky or stupid to discount low-power compute that became the foundation for the smartphone explosion.
Certain Intel executives bet that they could compete with specialized low-power x86 designs. These decision-makers squashed competing internal projects.
Likewise, call them stupid or unlucky not to realize GPUs represent another unique architecture that would be perfect for AI and ML.
Intel's initial bet on GPUs was Larrabee, which was basically an x86 CPU with wide vector units. That project is one example of why your statement above, that Intel didn't seem to understand the value of the x86 architecture, is incorrect: they over-valued it. As for GPUs, some Intel leaders, including the current CEO IMO, disliked any architecture that wasn't an x86 CPU. Their proof points were Itanium and the i860, which were considered failures, and those were at least CPUs. Once you get into specialized processors with a completely different architecture, like systolic arrays with a SIMD runtime architecture (most GPUs), they run up against Intel's DNA, which only likes chips that sell in the tens of millions of units, and they require specialized software that Intel doesn't like. GPUs, FPGAs, SuperNICs and their ilk give some Intel leaders indigestion. For a long time GPUs were not high-volume chips, and Intel's integrated graphics processors helped make sure of that.
Clearly no company was better positioned in the 90s to invest and get lucky; call that failed leadership and a failed BoD?
Like I said, Intel leveraged its leadership position, which originated with the IBM PC, for almost four decades. Intel was constantly examined for antitrust issues. That's not failed leadership; that's leadership that got too entrenched in its own long-term strategy, stood still while an inflection point or two passed them by, and lost its fabrication advantage at the same time. To many senior leaders and technologists at Intel, one of the primary drivers of Intel's success was software compatibility. Everything was optimized for x86, so the saying went, and for a time that was actually correct. Now, of course, porting isn't very relevant (proof points are the Apple M-series and Qualcomm Snapdragon), Intel x86 chips arguably went too far in the CISC make-everything-an-instruction direction, and there they are, arguably selling less-than-leadership products against competition with some real competitive advantages. In my opinion, it's probably nothing that a two-generation fab process advantage wouldn't have covered up, but this time they didn't have that, and losing fab leadership at a more critical juncture would be difficult to imagine. You can have inefficient designs with the best fabs all to yourself and stay competitive, or you can have awesome designs with less than the best fabs, but not having either one is a recipe for losing market share. Perhaps Lunar Lake and Arrow Lake will even things up with AMD and Qualcomm, but I'm not making any bets. Product leadership is like human muscle mass: losing it is faster and easier than gaining it back.
 