Forget the White House Sideshow. Intel Must Decide What It Wants to Be.

Memory is much less reliant on design capability or cutting-edge fabs, and is largely a commodity. If anything, it is mostly dependent on cost.

I think this is quite different from processors ;).
I agreed until high-profit, high-demand HBM came along, although cost is still tied to yields, and Moore's Law is not yet dead for DRAM. While the ecosystem argument for logic being fundamentally different is very strong, HBM is still in theory a standardized, substitutable commodity, and I've heard stories of newly graduating designers discovering there wasn't much demand for them in DRAM.

Is there a similarity between how the major non-Chinese DRAM manufacturers dwindled to three and how the leading-edge logic field narrowed? At the moment, the only proven logic leader is TSMC.

Intel is perhaps technologically close with Intel 3, and will arguably exceed TSMC next year if 18A works out, since TSMC is taking another node step beyond GAA N2 before adding BSPD, and BSPD is pragmatically optional. Samsung, meanwhile, has officially realized it has a logic problem after many issues; perhaps losing Google as a customer cost them face?

Of the three DRAM IDMs, we've got two doing well with high-profit HBM, while Samsung is reported on this forum to be losing talent to SK Hynix due to bad management, and to be buying EUV machines on top-down orders. That might be somewhat like Intel being the first to plunge into high-NA EUV while TSMC holds off.
 
All these stories, except for perhaps the Nokia one, have the same underlying theme: one or more disruptive new technology/market combinations arising, and the incumbent failing to figure out how to deliver products to serve those new markets in a way that met the margin, volume, and channel needs of its existing successful business.

An incumbent company encountering a disruptive technology and market often finds itself in a precarious position, famously described by Clayton Christensen as the "Innovator's Dilemma". These well-managed, successful companies often fail to adapt because their existing business models, which focus on pleasing their most profitable customers, are fundamentally ill-equipped to respond to a new market built around a simpler, cheaper technology.
IBM does not cleanly fit into the Clayton Christensen disruptive-innovation model, perhaps because they were always a systems company. They got their start in digital products with a punched-card vacuum-tube multiplying device, in part because Thomas Watson Jr. wanted them to start getting real-world experience in that domain. They sold far more units than expected, and then had a compelling case for a device that would also divide, which is much harder to do electromechanically.

You might view them as following the disruptive-innovation path: they supported people doing bigger things with punched-card technology, contributed much more than they were credited for to the Harvard Mark I, and really jumped into the field with what they initially called the 1952 Defense Calculator, AKA the 701.

But at the same time they developed the lower-end 1954 IBM 650, and continued that high-through-low business model through at least the business System/32, the scientific IBM 1130, and the lowest-end 1964 System/360 Model 20, which was a 16-bit minicomputer with an instruction subset of the bigger models.

In general they slugged it out with the minicomputer manufacturers, who did do the disruptive-innovation thing, although haphazardly for a while. DEC was the best at it, but the PDP-6 was an industrial-design disaster, the PDP-10 KA was very limited, and the family in general was doomed by a small, roughly 1 MB address space. The VAX "super minicomputer" really got them moving into mainframe technology, along with VMS software to make the machines reliable, including in clusters.

I think it was more IBM's salesman-driven culture narrowing its focus to the highest-end stuff, as you point out, plus general old-corporation degradation, perhaps similar to Intel's, especially after Thomas Watson Jr. retired in 1971 due to a heart attack. They made a number of moves that were very bad for their customers, but eventually destroyed themselves by switching entirely from leasing to purchases. That gave them a one-time earnings boon, which they had no succession plan for.

They did redefine what a business PC was, away from the Z-80 and CP/M standard (although those were still cheaper, capable, and healthy enough for a long while), but never really got the concept. The minicomputer and workstation industries, which had eventually conquered IBM's higher end except for the sticky installed base, were in turn conquered by fleets of PC-derived systems. But that took decades.

Kodak just couldn't survive the disappearance of its razor-blade business model; they made good hardware, but except for things like pure scanners, it always depended on selling film, and there was no way to keep being a big company. Today the camera companies are dying or becoming very niche, because smartphones have become good enough for most everything a normal consumer needs.

You are right about Nokia: Apple redefined what a smartphone was, and Apple and Android defeated the older incumbents, including BlackBerry and the Microsoft-based phones. Another part of that redefinition was creating third-party ecosystems, and as we see with logic fabs, there are a very limited number of ecosystems the third parties can afford to support.
 
Some might dismiss these failures or mistakes as 20/20 hindsight. But after so many serious missteps by a long list of Intel’s CEOs, board of directors, managers, and engineers, I have to ask: is this the inevitable fate of the IDM business model? If not, then someone must explain why Intel has shown such a persistent pattern of incompetence and poor leadership over the past 20 to 30 years.

Is Intel a magnet that only attracts incompetents, scammers, and crooks who made bad decisions - for more than 20 years? I don't believe so.
No, the employees and executives were no good, because they made the wrong choices.
Are you saying that IDM is the only cause?
 
Switching topic order to Intel then IBM:
Agreed, especially the torture bit. Going deeper, I see routine, extreme high-level engineering incompetence that eventually extended to, and destroyed, the leading-edge logic fabrication golden goose via the failed transition to 10 nm/Intel 7.

Off the top of my head, older examples include their RISC attempts, Itanium, Rambus RDRAM (which resulted in two million-part recalls, one of them motherboards Dell was about to ship), front-side bus vs. ccNUMA, and AMD64. NetBurst (P4) was at minimum a fabrication failure, and as a "marketing architecture" was predicated on getting parts up to 10 GHz, just before Dennard scaling failed.

No surprise that a company with such sustained horrible high level technical management would ignore their teething problems with 14 nm and fail for years with the next node. I also see this as illuminating the IDM or not question:

Would Intel be succeeding well enough today if they hadn't failed so hard with 10 nm, and BK hadn't gutted their verification function? Those together trashed their x86 business line for a long time, which, after their exit from memory, we believe was the only thing they were institutionally able to do. It's a path they couldn't take, which leads to:

Another way to look at this is risk management: the success of the IDM model depends on sufficient success at both design and fabrication. Whatever you can say about AMD's fabrication problems before and after the spinoff, not being locked into GlobalFoundries allowed them to eventually move to the pure-play foundry TSMC. A company can be good at only so many things; for AMD, see their GPGPU software problems.

That structure was a big historical change for IBM. Their punched-card computing history was much more in the minicomputer style, and their history of minicomputers goes back much further.

The 1954 IBM 650, not a mainframe, was the first mass-produced computer, and at the low end it was followed by the very clever 1959 IBM 1620 scientific computer. The first version of the 1620 spent a great deal of its budget on core memory, which IBM was very good at, vs. the drum memory used in the 650. Thus its internal name, CADET, was backronymed into "Can't Add, Doesn't Even Try": math was done with tables in memory rather than logic.

According to Wikipedia, citing memories of the people doing it, the other half of the phrase "space cadet" was the code name for the very successful IBM 1401 for business, also released in 1959.

The 1401 continued to be produced after the System/360 consolidation of IBM's previously numerous computer lines, and was the father of the System/3, which led to four System/3X systems, the last of those, the ambitious System/38, being the ancestor of the AS/400, now the "i" line.

On the scientific end, the CADET, which according to Wikipedia sold about 2,000 units, was followed by the IBM 1130, based on System/360 technology, with about 10,000 sold. It was the introduction to programming for a very large number of people, including myself at the end of its run. A lot of cleverness was used to make it as inexpensive as possible, like its very slow entry-level printer. Wikipedia says it was followed by the System/7 and Series/1, all three from Boca Raton, Florida, home of the IBM PC.

As a programming and systems type, I see the major IBM inflection point as the internal politics that accompanied the "bet the company" System/360 effort. It was so difficult, from hardware production to software, that IBM swore never to try that again, so it eventually became today's high-end "z" series for them.

I believe that environment allowed the decision to eschew virtual memory to become iron dogma, all the way through the first set of System/370 models. That immediately lost them almost all of the higher-education market, much of which moved to DEC's first mainframe lines, the PDP-6/10/20s, with their dead-end address space of 18 bits of 36-bit words, or about 1 MB (IBM's pre-64-bit mainframes addressed 31 bits of 8-bit bytes, or 2 GB). Along with very bad systems software, this funneled their high end into an eventual big-business ghetto with no mind share.
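The address-space figures above can be sanity-checked with a few lines of arithmetic; this is just an illustration, with the word and byte sizes taken from the text:

```python
# Back-of-the-envelope check of the address-space sizes mentioned above.
# An n-bit address reaches 2**n words; multiply by the word size in bytes.

def addr_space_bytes(addr_bits: int, word_bits: int) -> int:
    """Total addressable storage for 2**addr_bits words of word_bits each."""
    return (2 ** addr_bits) * word_bits // 8

# DEC 36-bit family: 18-bit word addresses over 36-bit words.
pdp10 = addr_space_bytes(18, 36)
# IBM pre-64-bit mainframes: 31-bit addresses over 8-bit bytes.
s370 = addr_space_bytes(31, 8)

print(pdp10, pdp10 / 2**20)  # 1179648 bytes, about 1.1 MiB -- "roughly 1 MB"
print(s370, s370 / 2**30)    # 2147483648 bytes, exactly 2 GiB
```

So the PDP-10 family's "1 MB" is really 1.125 MiB (256 K words of 4.5 bytes), and the 31-bit IBM figure comes out to exactly 2 GiB.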

At the lowest end, IBM, using the Boca Raton crew, launched the Intel 8088-based IBM PC. But except for the minor XT revision, the company never really understood the importance of compatibility. They were able to move the whole industry once more with the 286-based IBM AT. What followed was very ugly: Compaq didn't care about IBM's internal infirmities and launched the first major 386 desktop, and see Micro Channel, and its echoes at DEC following the success of the Unibus and Q-bus.

IBM, having bluewashed Boca Raton, realized its design cycle was about four years long compared to the industry's two years or less, so they designed a high-speed 16/32-bit bus and tried the old trick of initially using and licensing it only at half speed. Other companies were not so constrained.
And even if such a terrible company abandons the IDM business, the only future ahead of it is bankruptcy.
 
Just my opinion, but Intel will be split eventually. The vertical integration model is not financially viable at the volumes Intel has IMO.

"Intel" has done things that are arguably "stupid". I would never state that an entire company is simply "stupid".

Intel has a great deal of value. Anyone that believes otherwise is mistaken. Intel's current state is a result of poor strategy and risk management IMO.
The IDM model does not work.
And Intel won't make it work even if it abandons the vertically integrated model.
They are incompetent, too incompetent.
There's no way this could improve.

Why do you think it will improve just by quitting IDM?
The decisions that made the IDM model unstable remain unchanged, inherited.
Incompetence is inherited.
 
No, the employees and executives were no good, because they made the wrong choices.
Are you saying that IDM is the only cause?

The IDM business model isn’t the only reason for Intel’s troubles, but it’s a major one, perhaps the primary one. Why has there been a whole trainload of Intel leaders, managers, and employees making big mistakes that critically damaged the company? And why has Intel kept hiring and promoting them?

Meanwhile, AMD, Qualcomm, Nvidia, Broadcom, MediaTek, GlobalFoundries, and TSMC are doing just fine.

Were all those ‘bad’ Intel employees and executives over the past 20 years secretly part of some society dedicated to destroying Intel?

If the answer is no, then we’re forced back to the fundamentals: Intel’s IDM business model. As long as Intel refuses to confront this core problem, it will keep hiring and promoting talented people who will inevitably be boxed into making bad decisions that continue to hurt Intel.
 
Were all those ‘bad’ Intel employees and executives over the past 20 years secretly part of some society dedicated to destroying Intel?

If the answer is no, then we’re forced back to the fundamentals: Intel’s IDM business model.
Too binary in the possibilities. Bad corporate culture is entirely capable of producing the divergent outcomes of Intel vs. the successful fabless and foundry companies you cited, "AMD, Qualcomm, Nvidia, Broadcom, MediaTek, GlobalFoundries, and TSMC."

You'd also need to look at companies like them that failed or have not done well, and I'd put GlobalFoundries in the latter category based on what I've read in this forum and their financials.

I'm willing to agree being an IDM has been a factor, but let's look at what put Intel into its current mess, the near destruction of half of that business model. As far as I can tell, it can be boiled down to one CEO, Brian Krzanich, and the board that enabled him. After Intel initially stumbled on 14 nm, he infamously fired the people necessary to make 10 nm a success!

We know from the initial, pre-EUV success of TSMC's N7 that the target was not insurmountable. This was also Intel's third FinFET node, so that alone shouldn't explain it; also see how their first 22 nm node went very smoothly viewed from the outside. Compare Ivy Bridge to Broadwell, with its limited, late, and I think smaller L3 cache budgets, and then Cannon Lake. It may be no coincidence BK was fired five weeks after that single-SKU launch.

He also gutted Intel's FDIV-scarred verification function, which was the final straw for Apple with Skylake, and this has to be part of the Sapphire Rapids debacle: way too many metal steppings, which continued even after the first lot fit for some purpose was put into the very late Aurora supercomputer.

A single man can maim or even destroy a high tech company, and rather quickly. For another decline and fall, see DEC under Ken Olsen, who got his start in this field as a graduate student who reduced 3D core memory to practice. He was hardly alone in never grasping the consequences of moving CPUs to single dies, which killed a host of companies along with the challenges of moving operating systems to multiple processors.

That's a story just by itself, management was generally presented with two groups of developers, one saying it was easy, the other saying it required a lot of foundational tooling work and would be hard. You can guess which most companies picked, and what it cost them when the second group had left the company and the first group's product crashed too often in the field under real world loads. Very few super-minicomputer companies survived that.
 
The IDM business model isn’t the only reason for Intel’s troubles, but it’s a major one, perhaps the primary one. Why has there been a whole trainload of Intel leaders, managers, and employees making big mistakes that critically damaged the company? And why has Intel kept hiring and promoting them?

Meanwhile, AMD, Qualcomm, Nvidia, Broadcom, MediaTek, GlobalFoundries, and TSMC are doing just fine.

Were all those ‘bad’ Intel employees and executives over the past 20 years secretly part of some society dedicated to destroying Intel?

If the answer is no, then we’re forced back to the fundamentals: Intel’s IDM business model. As long as Intel refuses to confront this core problem, it will keep hiring and promoting talented people who will inevitably be boxed into making bad decisions that continue to hurt Intel.
I guess you said IDM was the cause after all?
Do you think they can make decent decisions without IDM?
They can't make decent decisions because of IDM? I'll tell you something different:
the bad things will be passed on forever.
Even if the cancer called IDM is removed, the cancer will remain.
 
I guess you said IDM was the cause after all?
Do you think they can make decent decisions without IDM?
They can't make decent decisions because of IDM? I'll tell you something different:
the bad things will be passed on forever.
Even if the cancer called IDM is removed, the cancer will remain.
You said hindsight, but this is too dualistic.
Some companies have failed...
and you don't take that into account.
In my personal interpretation, Intel is a company that sits between these two extremes.
 
Too binary in the possibilities. Bad corporate culture is entirely capable of producing the divergent outcomes of Intel vs. the successful fabless and foundry companies you cited, "AMD, Qualcomm, Nvidia, Broadcom, MediaTek, GlobalFoundries, and TSMC."

You'd also need to look at companies like them that failed or have not done well, and I'd put GlobalFoundries in the latter category based on what I've read in this forum and their financials.

I'm willing to agree being an IDM has been a factor, but let's look at what put Intel into its current mess, the near destruction of half of that business model. As far as I can tell, it can be boiled down to one CEO, Brian Krzanich, and the board that enabled him. After Intel initially stumbled on 14 nm, he infamously fired the people necessary to make 10 nm a success!

We know from the initial, pre-EUV success of TSMC's N7 that the target was not insurmountable. This was also Intel's third FinFET node, so that alone shouldn't explain it; also see how their first 22 nm node went very smoothly viewed from the outside. Compare Ivy Bridge to Broadwell, with its limited, late, and I think smaller L3 cache budgets, and then Cannon Lake. It may be no coincidence BK was fired five weeks after that single-SKU launch.

He also gutted Intel's FDIV-scarred verification function, which was the final straw for Apple with Skylake, and this has to be part of the Sapphire Rapids debacle: way too many metal steppings, which continued even after the first lot fit for some purpose was put into the very late Aurora supercomputer.

A single man can maim or even destroy a high tech company, and rather quickly. For another decline and fall, see DEC under Ken Olsen, who got his start in this field as a graduate student who reduced 3D core memory to practice. He was hardly alone in never grasping the consequences of moving CPUs to single dies, which killed a host of companies along with the challenges of moving operating systems to multiple processors.

That's a story just by itself, management was generally presented with two groups of developers, one saying it was easy, the other saying it required a lot of foundational tooling work and would be hard. You can guess which most companies picked, and what it cost them when the second group had left the company and the first group's product crashed too often in the field under real world loads. Very few super-minicomputer companies survived that.
I agree with your opinion.
Thank you for bringing the word "dualism" back to mind.

Well, the "dualism" I'm talking about here is a human, social dualism, such as the dualism of good and evil. The word's original sense is slightly different, like the first version of Moore's Law.

Well, personally, if they had made better decisions between 2010 and 2015 (or a little later), they would have made enough money to make the earlier iPhone order failure seem like a small deal.

After all, this person put into words what I wanted to say.
Thank you.
 
They still had funds in the early and middle 2010s, and investing in and introducing EUV would not have been that big a burden.
It's bad that they made the wrong choice here.
If they had introduced EUV by 2020...
 
Nokia had a huge installed base on the Symbian OS. Symbian was great when it came out but hopelessly obsolete in the late 2000s. They tried developing several replacement software platforms, but none really stuck. I think they got the right idea with Qt and a Linux OS, which later led to MeeGo. But then Elop came along with his idea of outsourcing software development to Microsoft. It was he who killed Nokia.

Microsoft also had an obsolete software platform, Windows Mobile, and came up with a stupid plan in which they would develop an interim OS incompatible with all the legacy Windows Mobile apps they had, an OS which was also incompatible with apps for the future mobile OS they were designing.

Nokia was made to commercialize phones with this Microsoft OS, which was dead on arrival. Which, surprise, did not sell much at all. Would you buy a smartphone with next to no apps, one that would be obsoleted shortly?

Elop also killed the Symbian phones while they were still selling, and went all in on his Windows Mobile misadventure.
 