AMD Taking Server Market From Intel, Many Questions

Arthur Hanson

Well-known member
This is but one more reason not to subsidize Intel. It looks like Intel is still falling and has to come up with a survival plan different from the one it currently has. Do any of the readers have any thoughts on how Intel can reclaim its past glories, or has it fallen too far behind? Is there any viable way Intel can stay a force in the industry, even with government help? Does Intel have a viable survival plan now that the competition is definitely gaining speed? Is Gelsinger the right man to do a turnaround, or is there someone better? Any thoughts on what would be the best use of the government's money to advance semiconductor manufacturing in the US? Will the TSMC and Samsung fabs in the US put us in the position we need to be in?

 
AMD has consistently beaten Intel on cost/performance for far longer than it has been the net performance leader.

What we see now is just Intel's network of coercive bulk purchase pricing agreements unravelling. Big server makers, OEMs, and direct buyers get Xeons many times cheaper than the retail price, but even the deepest discounts can't beat AMD now.
 

Thank you for the Stratechery link. From there I went to watch a 2019 interview with Pat Gelsinger by the Computer History Museum.



Beginning around 59:00, Pat Gelsinger talks about the RISC vs. CISC battle and debate.

After watching this interview, it seems to me Pat Gelsinger was part of the Intel leadership team that helped Intel win the battle but caused it to lose the war.

For the past 16 years, Intel has not been a meaningful contender in the smartphone revolution. The smartphone market has become much bigger than the PC market, and now every smartphone on the market uses a RISC chip.
 
Intel is not eligible for federal grants until the Intel Inside federal procurement overcharge 'price fix' recovery is resolved. Tax incentives for equipment procurement are likely an indirect method of securing federal subsidies, and I think Intel would prefer something other than grants, to keep the government's nose out of Intel's business. Intel is also reconfiguring from producing for supply, which relied on holding the channel financially, to producing for demand, subject to cost optimization and unnecessary-cost avoidance, and it needs to for IFS.

Intel Xeon production volume has halved since Q3 2021, dropping considerably from the Skylake-to-Cascade Lake peak production volume. The broad market has standardized on dirt-cheap Skylake and Cascade Lake, presenting minimally a 300M-unit upgrade market. Haswell (648M) followed by Broadwell (368M units of production) are the two largest installed bases of enterprise compute, and the optimized application core sweet spot, depending on the application, clusters around 8C, 12 to 16C, and 20/24/28C.

25% of Ice Lake production volume is Silver, 2P up to 20C per socket. Ice Lake Gold 6330 28C is the single most prolific SKU at 10.43% of full-line production. On Ice Lake volume weight, 32C (all SKUs in the category) represents 10.18% of the full line, and all core grades above 32C are another 13%: 36C = 6%, 38C = 2%, 40C = 5%. AMD holds the high-ground VM market, where Milan 64C = 30% of the full line; 56/48C = 6%, 32C = 15%, 24C = 17%, 16C = 12%.

All the volume markets are enterprise resource management and traditional Fortune OLTP, in relation to hyperscale / public cloud. I suspect Sapphire Rapids and Genoa are too complex for the mainstream compute market, with their emphasis on business-of-compute, code-your-own-application customers. Dell is the only volume off-the-shelf, in-a-box Ice Lake vendor; Fujitsu and Lenovo Ice Lake volume is minuscule. Of course the Intel market is waiting for Sapphire Rapids, but again it may be too complex for the vast majority, who are waiting on whole-platform validation for plug-and-play deployment. AMD Milan is simple enough, if AMD can supply the VAR Skylake/Cascade Lake replacement market alongside Intel Silver, leaving the massive multicores for the business of compute. AMD Milan volume surpassed Ice Lake volume in Q1.

I predicted over a year ago that if Sapphire Rapids does not begin volume shipment capable of filling the dealer channel with an off-the-shelf solution by Q3 2022, Intel would manage itself into Chapter 11, with DCG no longer capable of subsidizing the other divisions behind the books. mb
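As a quick sanity check on the mix cited above, here is a small C tally (the bucket percentages are the post's own claims, not something I can verify independently); it just shows that the cited Milan buckets cover about 80% of AMD's full line, while Ice Lake at 32C and above is only about 23% of Intel's:

```c
/* Tally of the core-count mix cited in the post above. */
#include <stdio.h>

int main(void) {
    /* AMD Milan full-line share by core-count bucket, per the post:
       64C, 56/48C, 32C, 24C, 16C */
    double milan[] = {30.0, 6.0, 15.0, 17.0, 12.0};
    double sum = 0;
    for (int i = 0; i < 5; i++)
        sum += milan[i];
    printf("cited Milan buckets: %.0f%% of full line\n", sum);      /* 80% */

    /* Intel Ice Lake at 32C (10.18%) plus everything above it (13%) */
    printf("Ice Lake >= 32C: about %.1f%% of full line\n", 10.18 + 13.0);
    return 0;
}
```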
 
Thank you for the Stratechery link. From there I went to watch a 2019 interview with Pat Gelsinger by the Computer History Museum.



Beginning around 59:00, Pat Gelsinger talks about the RISC vs. CISC battle and debate.

After watching this interview, it seems to me Pat Gelsinger was part of the Intel leadership team that helped Intel win the battle but caused it to lose the war.

For the past 16 years, Intel has not been a meaningful contender in the smartphone revolution. The smartphone market has become much bigger than the PC market, and now every smartphone on the market uses a RISC chip.
I don't think it is a reasonable conclusion that, because RISC architectures, namely Arm, won the mobile market, RISC is an inherently superior architecture to CISC. As with most architecture strategies, a large part of success is the design choices you make and the technical quality of the implementation. Intel x86 CISC starts with a concept that is more difficult to implement and less power efficient: variable-length instructions. Variable-length instructions were a cool optimization when transistors, especially for caches, were precious and expensive, but they make instruction decoding harder to implement. And the CISC strategy of specialized instructions eventually gets out of hand when you think CPUs are the center of the universe (IMO, because engineers think it's the pinnacle of accomplishment to get a new instruction approved). Of course, the more specialization you put in circuitry, the more power is consumed, and before you know it you need, voila, microcode engines (talk about RISC...) to implement your bloated instruction set that isn't practical to commit to state machine logic.
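To make the decode point concrete, here is a minimal sketch with an invented toy encoding (not real x86; real length decoding has to parse prefixes, opcodes, and ModRM bytes, which is far worse). With fixed 4-byte RISC-style instructions, the start of instruction i is simply base + 4*i, so a wide decoder can attack all of them in parallel; with variable-length instructions, the start of instruction i+1 is unknown until instruction i's length has been decoded:

```c
/* Toy variable-length ISA: the low 2 bits of the first byte give the
   instruction length (1-4 bytes). Finding instruction boundaries is
   inherently serial: each iteration depends on the previous length. */
#include <stdio.h>
#include <stdint.h>

static size_t toy_insn_length(uint8_t first_byte) {
    return (size_t)(first_byte & 0x3) + 1;    /* 1..4 bytes */
}

int main(void) {
    uint8_t code[] = {0x02, 0xAA, 0xBB,        /* 3-byte instruction */
                      0x00,                     /* 1-byte instruction */
                      0x01, 0xCC,               /* 2-byte instruction */
                      0x03, 0x11, 0x22, 0x33};  /* 4-byte instruction */
    size_t off = 0, i = 0;
    while (off < sizeof code) {
        size_t len = toy_insn_length(code[off]);
        printf("insn %zu: offset %zu, length %zu\n", i++, off, len);
        off += len;   /* next start depends on this decode: no parallelism */
    }
    return 0;
}
```

Real x86 decoders attack this with predecode/length-marking bits and parallel speculative decode, which is exactly the extra power and complexity being described.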

In the last decade, because Intel wanted to lock Microsoft into their x86 instruction set strategy (remember that for many years, by microprocessor sales volume, Windows PCs were the only market that mattered), and the Wintel partnership was the most profitable in computer systems history, nothing - absolutely nothing - was allowed to rock that boat. And Intel's fabrication lead was an important factor in keeping their relatively clunky x86 architecture alive longer than technical reasons alone would have justified. As we all know, business reasons trump technical reasons every time, and they should, but to succeed long-term you have to know when the technical risks are piling up, or the market is changing so much, that you need to revisit your assumptions and strategies.

Many senior executives at Intel didn't believe that tablets and smartphones were going to be more important than desktops and laptops. Many worried that mobile devices would have lower margins than PC and server CPUs, and that Intel should focus its fabulous fabs on the high-margin markets. Remember, Intel allocated fab capacity by projected product gross margin. Were these decisions a failure of imagination? Yeah, but Intel had, and has, a corporate environment where failure is not a valuable learning experience; it often ends careers. So conservative decision-making is the norm.
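A hypothetical sketch of that allocation rule, with invented products and numbers: rank products by projected gross margin and hand out wafer starts greedily, and the lower-margin mobile part never gets capacity, no matter how big its market might become:

```c
/* Greedy capacity allocation by projected gross margin (illustrative;
   all names and figures are made up). */
#include <stdio.h>

struct product { const char *name; double margin; int wafers_wanted; };

int main(void) {
    /* sorted by projected gross margin, highest first */
    struct product p[] = {
        {"server CPU",  0.60, 60000},
        {"desktop CPU", 0.45, 50000},
        {"mobile SoC",  0.25, 40000},
    };
    int capacity = 100000;   /* wafer starts available */
    for (int i = 0; i < 3; i++) {
        int grant = p[i].wafers_wanted < capacity ? p[i].wafers_wanted
                                                  : capacity;
        capacity -= grant;
        printf("%-12s margin %2.0f%%: granted %6d of %6d wafers\n",
               p[i].name, p[i].margin * 100, grant, p[i].wafers_wanted);
    }
    return 0;   /* mobile SoC ends up with 0: the Atom story in miniature */
}
```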

Of course, let's not forget Intel's biggest CPU fumble, Itanium, which was intended to be the 64-bit server processor of the future, while x86 was to remain 32-bit for the foreseeable future. Who would ever need a PC with more than 4GB of memory? And Itanium, a VLIW design... was that the 64-bit future intended for general purpose applications? Really? Designed by HP? Ridiculous. "Databases are Itanium forever." Uh-huh.


Unbelievable.

For a long time Intel x86 CISC CPUs were winning because no other company could match Intel's R&D investments in CPU design and fabrication, and computing was very CPU-oriented. Now transistors are more plentiful and cheaper, the markets are bigger, and CPUs are more and more just for mainline application logic execution, while difficult application-specific work is offloaded to different kinds of processors (GPU, AI, etc.) and accelerators (security, virtualization, compression, video processing, network protocols, etc.). Getting all of this stuff to work together is more important and a bigger win than worrying about RISC versus CISC. That's why everyone is talking about CXL and UCIe. IMO, RISC is taking over because it's good enough, and now there's a hell of a lot more software in the world than Windows and databases.

CXL is also important because DRAM has been taking an increasing share of wallet in servers for years, and if you're a cloud datacenter, one of your biggest efficiency challenges is making the DRAM sequestered in individual servers a shareable datacenter resource. CPUs just aren't as important as they used to be, so why not use an easier-to-implement strategy and focus on where the new wins are?
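For a sense of what the host-side plumbing can look like, here is a minimal sketch, assuming the pooled memory is surfaced to a Linux host as a device-dax node (the /dev/dax0.0 path and the 1 GiB size are assumptions; CXL memory can also show up as a CPU-less NUMA node instead). Mapping the far memory is the easy part; the hard part alluded to above is the datacenter-level allocation and sharing policy:

```c
/* Map a slice of (hypothetical) pooled CXL memory exposed as device-dax. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/dax0.0", O_RDWR);       /* assumed device path */
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 1UL << 30;                     /* 1 GiB of far memory */
    void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    memset(mem, 0, 4096);   /* far memory is now plain load/store memory */

    munmap(mem, len);
    close(fd);
    return 0;
}
```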

I hope for Intel's sake that Gelsinger isn't stuck in the past.
 
I don't think it is a reasonable conclusion that, because RISC architectures, namely Arm, won the mobile market, RISC is an inherently superior architecture to CISC. As with most architecture strategies, a large part of success is the design choices you make and the technical quality of the implementation. Intel x86 CISC starts with a concept that is more difficult to implement and less power efficient: variable-length instructions. Variable-length instructions were a cool optimization when transistors, especially for caches, were precious and expensive, but they make instruction decoding harder to implement. And the CISC strategy of specialized instructions eventually gets out of hand when you think CPUs are the center of the universe (IMO, because engineers think it's the pinnacle of accomplishment to get a new instruction approved). Of course, the more specialization you put in circuitry, the more power is consumed, and before you know it you need, voila, microcode engines (talk about RISC...) to implement your bloated instruction set that isn't practical to commit to state machine logic.

In the last decade, because Intel wanted to lock Microsoft into their x86 instruction set strategy (remember that for many years, by microprocessor sales volume, Windows PCs were the only market that mattered), and the Wintel partnership was the most profitable in computer systems history, nothing - absolutely nothing - was allowed to rock that boat. And Intel's fabrication lead was an important factor in keeping their relatively clunky x86 architecture alive longer than technical reasons alone would have justified. As we all know, business reasons trump technical reasons every time, and they should, but to succeed long-term you have to know when the technical risks are piling up, or the market is changing so much, that you need to revisit your assumptions and strategies.

Many senior executives at Intel didn't believe that tablets and smartphones were going to be more important than desktops and laptops. Many worried that mobile devices would have lower margins than PC and server CPUs, and that Intel should focus its fabulous fabs on the high-margin markets. Remember, Intel allocated fab capacity by projected product gross margin. Were these decisions a failure of imagination? Yeah, but Intel had, and has, a corporate environment where failure is not a valuable learning experience; it often ends careers. So conservative decision-making is the norm.

Of course, let's not forget Intel's biggest CPU fumble, Itanium, which was intended to be the 64-bit server processor of the future, while x86 was to remain 32-bit for the foreseeable future. Who would ever need a PC with more than 4GB of memory? And Itanium, a VLIW design... was that the 64-bit future intended for general purpose applications? Really? Designed by HP? Ridiculous. "Databases are Itanium forever." Uh-huh.


Unbelievable.

For a long time Intel x86 CISC CPUs were winning because no other company could match Intel's R&D investments in CPU design and fabrication, and computing was very CPU-oriented. Now transistors are more plentiful and cheaper, the markets are bigger, and CPUs are more and more just for mainline application logic execution, while difficult application-specific work is offloaded to different kinds of processors (GPU, AI, etc.) and accelerators (security, virtualization, compression, video processing, network protocols, etc.). Getting all of this stuff to work together is more important and a bigger win than worrying about RISC versus CISC. That's why everyone is talking about CXL and UCIe. IMO, RISC is taking over because it's good enough, and now there's a hell of a lot more software in the world than Windows and databases.

CXL is also important because DRAM has been taking an increasing share of wallet in servers for years, and if you're a cloud datacenter, one of your biggest efficiency challenges is making the DRAM sequestered in individual servers a shareable datacenter resource. CPUs just aren't as important as they used to be, so why not use an easier-to-implement strategy and focus on where the new wins are?

I hope for Intel's sake that Gelsinger isn't stuck in the past.

Gelsinger was busy performing a Christmas skit saying AMD is in the rear-view mirror - a few months later, AMD's market cap overtook Intel's. Now who's enjoying more of the "competitive fun" here?
 
This is but one more reason not to subsidize Intel. It looks like Intel is still falling and has to come up with a survival plan different from the one it currently has. Do any of the readers have any thoughts on how Intel can reclaim its past glories, or has it fallen too far behind? Is there any viable way Intel can stay a force in the industry, even with government help? Does Intel have a viable survival plan now that the competition is definitely gaining speed? Is Gelsinger the right man to do a turnaround, or is there someone better? Any thoughts on what would be the best use of the government's money to advance semiconductor manufacturing in the US? Will the TSMC and Samsung fabs in the US put us in the position we need to be in?

If the goal is to have a greater semiconductor manufacturing share in the US, then the current plan of action seems to be effective. Government incentives are leading Global Wafers and Micron to build up in the USA, and expanding the investments that Samsung, GF, TI, TSMC, and Intel are making within the USA. As for whether Intel should get any money, I don't see why not. It's not the TSMC Act, and only benefiting the current largest/most advanced player doesn't exactly seem like a strong long-term move.

As for Intel, I think the plan to bank on IFS is a brilliant move. As Intel inevitably loses market share (when you are at 90+% you can only go down), they need some way to amortize the cost of their fabs and process R&D. Without this extra revenue stream, Intel would probably slide into a position similar to the one AMD fell into. Unlike AMD, Intel's fabs are too large to be spun off, and I don't think any oil barons plan on making the same mistake twice. Losing the fabs would also shutter the parts of Intel's business that rely on their pricing advantage. It seems (time will tell) that the thing holding Intel up right now is their design teams rather than their fabs, so their ability to spit out chips on nodes that are at least roughly competitive with AMD's older nodes should stay intact. After all, right now AMD cannot meet Epyc demand, so until AMD can get the capacity they need, Intel can keep some of their market share by default.

As for the client segment, Intel seems to be much healthier, as their products are actually competitive in this area. If Intel can close the gap with TSMC, and their client design teams deliver in a timely manner and create new value-adds for users and OEMs, then they should be able to at least maintain their current market share.

As for who should lead Intel, I think what is most critical right now is consistency. No more flip-flopping around. As for my opinion of Pat Gelsinger, I think he has that old Intel spunk/energy, which is probably good for team morale, and he has had exposure to many of Intel's business units over the years. However, after all of the attrition Intel has had over the years, I don't know if there are enough star middle/lower managers and industry veterans to rebuild this company to the level of expertise it once had any time soon. In other words, I am worried that Intel's talent pool might be too watered down.

While I doubt the Intel of the 2000s and early 2010s can come back, I think it is certainly possible that Intel can stabilize and carve out nice niches that keep the company going. What is for sure is that the next year or two for Intel will be rough, and we won't have a good outlook until we start hitting major deadlines in 2023-2025. If they can make their deadlines, then Intel will have greatly reduced their slide while generating new revenue streams. If they fail to meet many of their goals, then they will slide farther, and some even tougher decisions than axing Optane will need to be made.
 
As for Intel, I think the plan to bank on IFS is a brilliant move.
I agree, it's the right move. In fact, the only thing I agree with Gelsinger about. The trend towards increasing in-house chip development in the largest cloud companies, and Apple's Mac moves, is reducing the market for merchant ASICs in the IT hardware industry. Foundries are where the semiconductor industry looks the brightest.
As for my opinion of Pat Gelsinger, I think he has that old Intel spunk/energy, which is probably good for team morale, and he has had exposure to many of Intel's business units over the years. However, after all of the attrition Intel has had over the years, I don't know if there are enough star middle/lower managers and industry veterans to rebuild this company to the level of expertise it once had any time soon. In other words, I am worried that Intel's talent pool might be too watered down.
I am not a Gelsinger fan; I only agree with him about the foundry decision. His personality and demeanor are also big negatives for a lot of people I know, and he apparently hasn't changed much. I'm also concerned he's living in the past, product-wise.

I'm not sure who these middle/lower star managers are at Intel; I never met many who were very good. Many I've known are terrible. On the other hand, Intel hired some of the best and brightest engineers for decades, and IMO the brain drain from Intel to competitors and cloud companies and start-ups is very concerning.
 
I agree, it's the right move. In fact, the only thing I agree with Gelsinger about. The trend towards increasing in-house chip development in the largest cloud companies, and Apple's Mac moves, is reducing the market for merchant ASICs in the IT hardware industry. Foundries are where the semiconductor industry looks the brightest.

I am not a Gelsinger fan; I only agree with him about the foundry decision. His personality and demeanor are also big negatives for a lot of people I know, and he apparently hasn't changed much. I'm also concerned he's living in the past, product-wise.

I'm not sure who these middle/lower star managers are at Intel; I never met many who were very good. Many I've known are terrible. On the other hand, Intel hired some of the best and brightest engineers for decades, and IMO the brain drain from Intel to competitors and cloud companies and start-ups is very concerning.
I am definitely not a fan of the mouthing off, only to walk the dribble back. But I do think a leader should at least project an aura of confidence and have a strong vision and credentials (things Intel hasn't had in a while). As for being stuck in the past, I think maybe a little. But it seems he is willing to modernize Intel's business model for the current environment, and it doesn't seem like Intel is underinvesting in emerging markets like they did with Atom. The angle I can see for him being stuck in the past is that he wants to bring back the old Intel culture and dominance, and I don't think that is possible, given that Intel has more than 0.5 competitors now and how little of that old DNA is left within the company. But if he can turn that oil tanker around, it would probably be one of the biggest turnarounds in a long time. But maybe you think there is some other area where he is holding back the company that I don't see; if so, I'd love to hear it :)
 
maybe you think there is some other area where he is holding back the company that I don't see; if so, I'd love to hear it :)
Pat has had 18 months now as CEO, and considering that he thinks of himself as an expert in CPU design and development, I'm not seeing any improvement in development rigor, schedule integrity, and especially software development excellence. The latter tells me he learned less than I would have thought at VMware. I expected him to make a lot of headway in the development rigor of these massive drivers that GPUs and other accelerator and networking chips are dependent on. And it's even more surprising since he appointed a corporate CTO, Greg Lavender, who has an impressive software background, but no hardware development experience whatsoever, at least according to his Intel bio. I'm sure he's a smart guy, but I thought he was a curious pick for that position.
 
Pat has had 18 months now as CEO, and considering that he thinks of himself as an expert in CPU design and development, I'm not seeing any improvement in development rigor, schedule integrity, and especially software development excellence. The latter tells me he learned less than I would have thought at VMware. I expected him to make a lot of headway in the development rigor of these massive drivers that GPUs and other accelerator and networking chips are dependent on. And it's even more surprising since he appointed a corporate CTO, Greg Lavender, who has an impressive software background, but no hardware development experience whatsoever, at least according to his Intel bio. I'm sure he's a smart guy, but I thought he was a curious pick for that position.

Greg Lavender followed Pat Gelsinger from VMware, where he was CTO under Gelsinger as CEO.
 
Greg Lavender followed Pat Gelsinger from VMware, where he was CTO under Gelsinger as CEO.
I know. Nonetheless, given that Intel gets most of its revenue from chips and other hardware sales, and Lavender has no hardware development background at all, I thought he was a curious choice.
 
Pat has had 18 months now as CEO, and considering that he thinks of himself as an expert in CPU design and development, I'm not seeing any improvement in development rigor, schedule integrity, and especially software development excellence. The latter tells me he learned less than I would have thought at VMware. I expected him to make a lot of headway in the development rigor of these massive drivers that GPUs and other accelerator and networking chips are dependent on. And it's even more surprising since he appointed a corporate CTO, Greg Lavender, who has an impressive software background, but no hardware development experience whatsoever, at least according to his Intel bio. I'm sure he's a smart guy, but I thought he was a curious pick for that position.
Valid criticisms, and if we want to be really technical, it seems like execution has gotten worse during his tenure (from a chip design perspective, anyway). For what it is worth, the mistakes that were made with Sapphire Rapids were likely made before Pat could have had much impact on Intel's design methodology (if memory serves, it was originally supposed to come out in 2021).
 
Valid criticisms, and if we want to be really technical, it seems like execution has gotten worse during his tenure (from a chip design perspective, anyway). For what it is worth, the mistakes that were made with Sapphire Rapids were likely made before Pat could have had much impact on Intel's design methodology (if memory serves, it was originally supposed to come out in 2021).
I think it's likely still too early to pass meaningful judgement on what effect Pat Gelsinger has had - and will have - on design and project execution. Clearly these have not yet improved, though it is not yet clear why not.

I don't know enough about his history at Intel to know how much of his time was spent managing a more or less "steady state" development environment, in which a fairly well-defined process was followed, and how much was spent driving radical change. Being good at the first is no guarantee of being good at the second - and it is the second skill which is now required.

Reading the earlier comments, I would summarise by saying that we shall find out in the next year or two if Pat Gelsinger is the turnaround expert needed or yesterday's man.
 
Just a dumb old man's view:

1. Pursue a low-power CPU SoC with in-package memory like Apple's M2 for laptops. I'm guessing the PC laptop suppliers would love this. I'm sorry Apple beat you to the punch, but don't let that stop you. Remember when AMD beat you to x86-64? Suck it up.

2. Clean up the expensive mess they have in the networking group. Lots of products (client NICs, server NICs, offload NICs, IPUs, programmable switches), but no visible over-arching strategy. Playing in everything, leading in nothing. Consider getting on board with Google's Aquila, assuming it's real, to further cement the Google relationship. This group should be a key differentiator from AMD, but it's not. Nvidia kicks their butt.


3. Design a kick-ass single-socket cloud server CPU with a lot of CXL/PCIe and DDR connectivity and as many cores as you can fit in a package without having it melt.

4. Integrate a CXL distributed memory solution with the cloud CPU, including an open software stack. Every cloud datacenter will probably be using CXL distributed memory when it's available.
 
Just a dumb old man's view:

1. Pursue a low-power CPU SoC with in-package memory like Apple's M2 for laptops. I'm guessing the PC laptop suppliers would love this. I'm sorry Apple beat you to the punch, but don't let that stop you. Remember when AMD beat you to x86-64? Suck it up.

2. Clean up the expensive mess they have in the networking group. Lots of products (client NICs, server NICs, offload NICs, IPUs, programmable switches), but no visible over-arching strategy. Playing in everything, leading in nothing. Consider getting on board with Google's Aquila, assuming it's real, to further cement the Google relationship. This group should be a key differentiator from AMD, but it's not. Nvidia kicks their butt.


3. Design a kick-ass single-socket cloud server CPU with a lot of CXL/PCIe and DDR connectivity and as many cores as you can fit in a package without having it melt.

4. Integrate a CXL distributed memory solution with the cloud CPU, including an open software stack. Every cloud datacenter will probably be using CXL distributed memory when it's available.
1) Apple can get away with this because they can charge 2 grand for a laptop. I don't know how big a market there would be for premium Intel CPUs with exotic features (cool as that would be), so I feel the current method of focusing on the more mainstream offerings makes sense. As for efficiency, I feel like AMD's and Intel's mobile offerings are pretty power-competitive as is (considering they are a full node behind Apple and don't have total control over the OS).

2) Agreed.

3) I suppose that is what Sapphire Rapids was supposed to be, but by the time it comes out it will be underwhelming compared to AMD CPUs with the same features, higher core counts, lower power, and similar per-core performance.

4) Presumably this is already in the works; combine this with their excellent new Atom architectures, and this could be a home run with CSPs (assuming the stuff can come out in a reasonable time).
 
1) Apple can get away with this because they can charge 2 grand for a laptop. I don't know how big a market there would be for premium Intel CPUs with exotic features (cool as that would be), so I feel the current method of focusing on the more mainstream offerings makes sense. As for efficiency, I feel like AMD's and Intel's mobile offerings are pretty power-competitive as is (considering they are a full node behind Apple and don't have total control over the OS).
I figure it would result in cheaper, lighter, and more power-efficient products, just like the M1/M2 does. Apple sells M1 versions for $900, and you never hear the fan or get a hot flash on your lap. Lots of people seem to like that, a lot. And the Intel-based version would run Windows natively, which the Apple stuff doesn't, and have a touch screen. (I don't like touch screens, but a lot of people do.) I think PC laptops are clunky by comparison, and it doesn't have to be that way. And, no, these wouldn't replace gaming laptops, but even Apple's best can't do that either.
2) Agreed.

3) I suppose that is what Sapphire Rapids was supposed to be, but by the time it comes out it will be underwhelming compared to AMD CPUs with the same features, higher core counts, lower power, and similar per-core performance.
This is the Ampere CPU model, and given how successful Ampere is, I think they have a winning concept. This appears to be what the cloud guys really want. The new Amazon Nitro sorta-kinda looks like this too. Intel server CPUs are still enterprise CPUs sold to cloud companies.

4) Presumably this is already in the works; combine this with their excellent new Atom architectures, and this could be a home run with CSPs (assuming the stuff can come out in a reasonable time).
It is so obvious that if Intel isn't doing this someone senior should get fired. But they need to do a CXL switch and a fabric manager, so I have my doubts. Intel has more corporate mental blocks than breakthroughs. That's one reason why their current product line is in the doldrums. Many of the people innovating at other companies used to work for Intel, but I bet you know that.
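For what it's worth, the fabric manager piece is conceptually just bookkeeping: track which slices of the pooled memory belong to which host and service grant/release requests. A toy sketch (all names and sizes invented; a real CXL fabric manager speaks the standardized management API, which this does not attempt):

```c
/* Toy CXL-style fabric manager bookkeeping: a pool of fixed-size memory
   extents assigned to hosts on request. Purely illustrative. */
#include <stdio.h>

#define EXTENTS 16            /* pool = 16 extents (say, 16 x 16 GiB) */
#define FREE    -1

static int owner[EXTENTS];    /* owner[i] = host id, or FREE */

static int grant_extent(int host) {      /* assign one free extent */
    for (int i = 0; i < EXTENTS; i++)
        if (owner[i] == FREE) { owner[i] = host; return i; }
    return -1;                           /* pool exhausted */
}

static void release_extent(int idx) { owner[idx] = FREE; }

int main(void) {
    for (int i = 0; i < EXTENTS; i++) owner[i] = FREE;

    int a = grant_extent(1);   /* host 1 asks for memory */
    int b = grant_extent(2);   /* host 2 asks for memory */
    printf("host 1 got extent %d, host 2 got extent %d\n", a, b);

    release_extent(a);         /* host 1 gives it back... */
    printf("host 3 got extent %d\n", grant_extent(3)); /* ...host 3 reuses it */
    return 0;
}
```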
 
I figure it would result in cheaper, lighter, and more power-efficient products, just like the M1/M2 does. Apple sells M1 versions for $900, and you never hear the fan or get a hot flash on your lap. Lots of people seem to like that, a lot. And the Intel-based version would run Windows natively, which the Apple stuff doesn't, and have a touch screen. (I don't like touch screens, but a lot of people do.) I think PC laptops are clunky by comparison, and it doesn't have to be that way. And, no, these wouldn't replace gaming laptops, but even Apple's best can't do that either.

This is the Ampere CPU model, and given how successful Ampere is, I think they have a winning concept. This appears to be what the cloud guys really want. The new Amazon Nitro sorta-kinda looks like this too. Intel server CPUs are still enterprise CPUs sold to cloud companies.


It is so obvious that if Intel isn't doing this someone senior should get fired. But they need to do a CXL switch and a fabric manager, so I have my doubts. Intel has more corporate mental blocks than breakthroughs. That's one reason why their current product line is in the doldrums. Many of the people innovating at other companies used to work for Intel, but I bet you know that.


There is a challenge in that Intel is operating under the IDM + foundry business model. From software to hardware, from design to manufacturing, from desktop CPUs to server CPUs, from AI/ML to high-performance computing, from in-house clients to external foundry customers, Intel is running too many things and pursuing too many targets, IMO.

It is "Everything Everywhere All at Once".

Can Intel cut something out in order to be more focused?
 