
Forget the White House Sideshow. Intel Must Decide What It Wants to Be.

The M1 didn't pop out of another universe. Apple was clearly working on it for years. I think they always wanted their own SoCs. I'm most impressed by their using the same M-series CPUs in iPads. My wife's new 13" M3 iPad is pretty amazing.
Completely agree. But if Intel had stayed on the Moore's Law treadmill, it would have taken Apple longer to come up with a design that would have let them jettison Intel's chips.

Sorry for not making that clear.
 
Unequivocal answer on the Mac loss: Intel would still have been fabbing the M1 if they had fabbed earlier Apple SoCs.
Agreed, if Intel really became the foundry partner Apple wanted. But I suspect that would have had to be post-Jobs. I suspect the iPhone and saving Apple went to his head too much to partner with Intel at its peak.
re: 100X - my understanding (from former Intel employees on this forum) is that Intel was always inefficient at fab costs compared to TSMC because of practices/discipline, and they had little incentive to fix it because the product advantage paid so well. I'm saying that if foundry had become a large part of Intel's business, say around 2010, at some point even Intel would have figured out how to fab more cost efficiently.

When a previously small piece of a business becomes a large piece, things tend to change to support that piece.

I've heard this argument before, but I think Intel would have needed to change its manufacturing strategy and put fabs in low-cost geos back then. Times are somewhat different now with increased automation, since I've read here from people whose opinions I trust that TSMC's AZ costs are only about 10% higher than those in TW. But TSMC imported their corporate culture into AZ with numerous TW employees, correct? Intel doesn't have a lower-cost culture to import, nor, apparently, have they developed a strong customer-first culture either. So I'm not convinced of the they'll-figure-it-out-if-the-volume-is-there theory.
 
Completely agree. But if Intel had stayed on the Moore's Law treadmill, it would have taken Apple longer to come up with a design that would have let them jettison Intel's chips.

Sorry for not making that clear.
I don't think so. The M-series SoCs are architecturally superior, IMO. And not just a little bit. For example, my 2018 Mac Mini with an i7 always ran warm to hot, especially doing an OS upgrade or running an Objective-C compile. No matter what I do with my 2025 Mac Mini, which has a 2/3 smaller footprint on my desk than the 2018 version, it is always cool to the touch. I know there's a fan in the M4 Mini, but I literally can't get it to turn on. I also know I'm biased by my background, but I think architecture and design beats being a generation ahead on fab process. :)
 
I don't think so. The M-series SoCs are architecturally superior, IMO. And not just a little bit. For example, my 2018 Mac Mini with an i7 always ran warm to hot, especially doing an OS upgrade or running an Objective-C compile. No matter what I do with my 2025 Mac Mini, which has a 2/3 smaller footprint on my desk than the 2018 version, it is always cool to the touch. I know there's a fan in the M4 Mini, but I literally can't get it to turn on. I also know I'm biased by my background, but I think architecture and design beats being a generation ahead on fab process. :)
A little OT - but something weird happened with x86 desktop chips over the past 15 years. CPU idle power on desktop seemed to hit its lowest somewhere around Sandy Bridge or Haswell and then steadily increased over time. Some of that is because increasingly power-hungry I/O got integrated into the CPU SoC (faster PCIe, USB, etc.), but you can also see the additional power use with chiplets vs. monolithic, etc. Early Zen desktop chips idled higher than Haswell/Skylake because of the I/O overhead power penalty.

Apple M-series takes it a step further than monolithic with closely coupled memory, and large caches / lower frequencies.

I think x86 is somewhat stuck because the market won't let them just 'fall back and start over' like Apple was able to do (due to cost, performance, and other reasons).

I have a home server running the Core i3-N305 -- an 8-core E-core chip (same E-core uarch as in Alder Lake) -- and it idles at under 10W, and that's with a bunch of (solid state) drives, multiple 2.5G Ethernet ports, and 48GB of RAM. It's possible with x86 but unfortunately not a focus.
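For anyone who wants to sanity-check an idle-power figure like that on Linux, here's a minimal sketch (my illustration, not from the post above) that averages CPU package power from the intel-rapl powercap counters. It only covers the package domain, not wall power, so drives, NICs, and PSU losses are excluded, and the sysfs path assumes a kernel/CPU that exposes RAPL.

import time
from pathlib import Path

# Package-0 RAPL domain; the exact path depends on the kernel/CPU exposing it.
RAPL = Path("/sys/class/powercap/intel-rapl:0")

def read_energy_uj() -> int:
    # Cumulative energy counter in microjoules (wraps at max_energy_range_uj).
    return int((RAPL / "energy_uj").read_text())

def average_package_watts(seconds: float = 10.0) -> float:
    start = read_energy_uj()
    time.sleep(seconds)
    end = read_energy_uj()
    wrap = int((RAPL / "max_energy_range_uj").read_text())
    delta_uj = (end - start) % wrap      # handle counter wraparound
    return delta_uj / 1e6 / seconds      # uJ -> J -> W

if __name__ == "__main__":
    print(f"~{average_package_watts():.1f} W average package power over 10 s")

Measured at the wall, the number will obviously be higher, since everything outside the package is on top of this.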
 
Well, Lunar Lake is such a design, but the core part lacks in comparison to Apple.
Here are the battery life and idle power consumption figures:

[Attached charts: battery life and idle power consumption comparisons]
 
re: 100X - my understanding (from former Intel employees on this forum) is that Intel was always inefficient at fab costs compared to TSMC because of practices/discipline, and they had little incentive to fix it because the product advantage paid so well. I'm saying that if foundry had become a large part of Intel's business, say around 2010, at some point even Intel would have figured out how to fab more cost efficiently.
Corporate weaknesses often start out as strengths. Once upon a time, Intel's radical Copy Exactly! culture made it possible to produce at scale with less investment compared to custom-fitting each new node to each fab. I'll grant that this still helps Intel to some degree. TSMC will probably have to evolve in a similar way to produce at scale in TW, AZ, and DE.

BUUUUUTTTTTTT... it is a huge cost disadvantage that no one really discusses or cares about. It is absurdly expensive to be downgrading pieces of tools to 15-year-old firmware rather than taking them as-is from the manufacturer. But this is what Intel's radical CE! is. It also means 15 years of better firmware, with better functionality, is lost to Intel in the name of CE! Intel isn't in the smartphone era yet inside some of their fabs. NOT IN THE SMARTPHONE ERA!
 
This Apple moment has come and gone and is really irrelevant to today's situation. But let's go back to that Intel, Paul/Steve moment. Can someone spring back to the days when Paul was offered this and tell me: what was Intel's biggest challenge at the time, what was their gross margin, and what was their revenue growth? What was the obvious path to $100B revenue, and what was Intel's competitive position? And, given the most optimistic take on the Apple request at the time, what would it have amounted to?

This report is a detailed summary. I was working with the XScale teams on all of those products and with TD on all the processes. I can discuss the gory details verbally, but high level: Apple wanted to pay a price that was below Intel's MODEL for best cost.

At the same time, Intel XScale was losing tons of money despite being in the leading smartphones of the day. For a year or so, the best phone in the world was running Intel XScale, and the best processor in the world was Intel XScale. Intel had thousands of people working JUST on mobile. But the costs were insanely high and designs missed the window.

Intel decided to sell off XScale to Marvell and was convinced by some internal people that they could use Atom with x86 as a mobile processor (this failed... and again the costs were too high).

Personally, I then left FSM to go work at IM Flash and Micron, where I quickly learned how Intel wafer costs were way too high and exactly why they were too high. I had a long discussion with the head of TMG; he was open to the ideas and had benchmarking data showing the same thing. Managing for cost effectiveness is different from managing to not limit the latest product (Zinsner did a really good summary of this paradox 6-12 months ago).

Even if they are super cost-effective, people who sell to Apple do not make a lot of money. I have worked for or with 4 different companies selling to Apple. It's always the same story. TSMC today has a different, very strong partnership with Apple, which is great for both companies.

It's a good discussion, and I am open to phone or Zoom calls.
 
Well, Lunar Lake is such a design, but the core part lacks in comparison to Apple.
Here are the battery life and idle power consumption figures:

[Attached charts: battery life and idle power consumption comparisons]

The 370HX can go below 2W; I engineered hardware on it. Default BIOS settings come from AMD itself, and they don't tell ODMs much about how to customize them.

Windows also doesn't help, since it lacks support for a lot of up-to-date hardware features. Linux, for example, can switch performance profiles on the fly according to load, while Windows sticks to "average", and you can only use OEM software to "unlock" BIOS performance settings.
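As a rough illustration of the "switch profiles on the fly" point (my sketch, not something from the post): on Linux the firmware-level profile is exposed at /sys/firmware/acpi/platform_profile, and a tiny loop can flip it based on load. The profile names used here are assumptions; check /sys/firmware/acpi/platform_profile_choices on a given machine, and note that writing the node requires root.

import os
import time
from pathlib import Path

PROFILE = Path("/sys/firmware/acpi/platform_profile")  # root needed to write

def set_profile(name: str) -> None:
    # Only write when the profile actually changes, to avoid needless churn.
    if PROFILE.read_text().strip() != name:
        PROFILE.write_text(name)

def run(threshold: float = 2.0, poll_s: float = 5.0) -> None:
    # Crude policy: 1-minute load average above the threshold -> 'performance',
    # otherwise 'low-power'. Profile names vary by firmware; see
    # /sys/firmware/acpi/platform_profile_choices for what is supported.
    while True:
        load1, _, _ = os.getloadavg()
        set_profile("performance" if load1 > threshold else "low-power")
        time.sleep(poll_s)

if __name__ == "__main__":
    run()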
 
re: TSMC, yes agreed - 28nm was spectacular for example, but they were still behind Intel on fab tech for the first half of the 2010s and not significantly ahead until 5nm in 2020. In my fantasy universe they would have been slower going from 28nm to 5nm without Apple as a customer pushing them / writing them large checks.

I always thought Contra was PO; it's even funnier that it was BK.

Grove IMO was 'enlightened arrogant' - There's quite a few stories of OEMs (Compaq, etc) that hated dealing with Intel and Andy for arrogance reasons.. :)

I look forward to the presentation. I admit you and blueone and others are far more educated than me on this stuff, but I wanted to present some reasons why I think the Apple business loss was the beginning of the end for Intel fabs. I think it became inevitable after that, with the economic requirements of Moore's Law always trending towards 'last man standing' (whoever has the biggest foundry business will fab the final node).
Sorry, I forgot this: "In the end, Moore's Law will be gone."
Even the survivors will no longer be able to stay on Moore's Law.
 
re: TSMC, yes agreed - 28nm was spectacular for example, but they were still behind Intel on fab tech for the first half of the 2010s and not significantly ahead until 5nm in 2020. In my fantasy universe they would have been slower going from 28nm to 5nm without Apple as a customer pushing them / writing them large checks.

I always thought Contra was PO; it's even funnier that it was BK.

Grove IMO was 'enlightened arrogant' - There's quite a few stories of OEMs (Compaq, etc) that hated dealing with Intel and Andy for arrogance reasons.. :)

I look forward to the presentation. I admit you and blueone and others are far more educated than me on this stuff, but I wanted to present some reasons why I think the Apple business loss was the beginning of the end for Intel fabs. I think it became inevitable after that, with the economic requirements of Moore's Law always trending towards 'last man standing' (whoever has the biggest foundry business will fab the final node).
Surprisingly, it's been forgotten, but TSMC's process was very different from Intel's until N5 appeared.
There was a time when it couldn't even catch up to Intel.
It's surprisingly easy to forget, but it's a fact.
 
This report is a detailed summary. I was working with the XScale teams on all of those products and with TD on all the processes. I can discuss the gory details verbally, but high level: Apple wanted to pay a price that was below Intel's MODEL for best cost.

At the same time, Intel XScale was losing tons of money despite being in the leading smartphones of the day. For a year or so, the best phone in the world was running Intel XScale, and the best processor in the world was Intel XScale. Intel had thousands of people working JUST on mobile. But the costs were insanely high and designs missed the window.

Intel decided to sell off XScale to Marvell and was convinced by some internal people that they could use Atom with x86 as a mobile processor (this failed... and again the costs were too high).

Personally, I then left FSM to go work at IM Flash and Micron, where I quickly learned how Intel wafer costs were way too high and exactly why they were too high. I had a long discussion with the head of TMG; he was open to the ideas and had benchmarking data showing the same thing. Managing for cost effectiveness is different from managing to not limit the latest product (Zinsner did a really good summary of this paradox 6-12 months ago).

Even if they are super cost-effective, people who sell to Apple do not make a lot of money. I have worked for or with 4 different companies selling to Apple. It's always the same story. TSMC today has a different, very strong partnership with Apple, which is great for both companies.

It's a good discussion, and I am open to phone or Zoom calls.
Nice post. Maybe now people will stop posting made up fairy tales about Intel and Apple, and how Intel just lacked foresight and stupidly missed the iPad and iPhone opportunities. Intel has more than enough stupid mistakes over the years to talk about and learn from, so there's no need for ignorant guesses.
 
At the same time, Intel XScale was losing tons of money despite being in the leading smartphones of the day. For a year or so, the best phone in the world was running Intel XScale, and the best processor in the world was Intel XScale. Intel had thousands of people working JUST on mobile. But the costs were insanely high and designs missed the window.
I keep seeing continued reinforcement of how little Intel actually understood about the smartphone business and how much hubris they had for the "Intel way" at that point in time.
* They were way too focused on their ARM architectural license at the time and missed the swerve in the road that others were taking toward:
  * Synthesizable ARM-based systems leveraging the ARM7TDMI-S and the upcoming ARM1176JZF-S, rather than custom micro-architectures and custom designs that cost millions and produced little added value.
  * Their own Intel MMX extensions instead of NEON.

Basically, there was a whole compounding of bad (but Intel-centric) decisions that essentially precluded them from ever successfully entering the mobile app processor business.
 
The 370HX can go below 2W; I engineered hardware on it. Default BIOS settings come from AMD itself, and they don't tell ODMs much about how to customize them.

Windows also doesn't help, since it lacks support for a lot of up-to-date hardware features. Linux, for example, can switch performance profiles on the fly according to load, while Windows sticks to "average", and you can only use OEM software to "unlock" BIOS performance settings.
I don't doubt this if the correct settings are used, but Microsoft Windows sucks big time. Lisa Su and Lip-Bu Tan should contact Satya and take over some part of the Windows software themselves.
 