Taking Intel Private?

Well, the IDM 2.0 honeymoon is officially over. I have seen this with CEO changes in the past: you get 2-3 years to make it work. I never agreed with the 5 Nodes in 4 Years strategy; it was doomed from the beginning. Ramping 5 nodes to HVM in four years? :ROFLMAO: AMD is not in Intel's rearview mirror. Nvidia owns AI. Dozens of new chip companies are circling Intel like vultures picking at the carcass. Worst of all, the semiconductor industry is not as healthy as it seems.

Again, Intel can claim process supremacy all they want, but unless the financials measure up, someone is going to get fired. I think taking Intel private would buy them more time, but it would not solve the problems they face, if it is even possible.

Bottom line: the semiconductor industry moves very fast and Intel does not, in my opinion.
I don't think people can discount Pat Gelsinger's achievements that easily, nor should they.

And there was never any "ramping 5 nodes to HVM in four years." Intel has always said "manufacturing-ready," which is very different from HVM.

And in 2021, AI wasn't a buzzword; most people didn't care about it. That only changed when OpenAI gave it a real-world application, and that's when things suddenly turned negative.

When he took over as chief of Intel, I remember exactly who the main competitors were at that time: AMD in the data center, Apple at the edge, and TSMC in manufacturing. Nvidia wasn't a competitor at all. Sure, its success in GPUs made Pat envious, but the GPU was still treated as a complement to the Intel CPU. Nvidia at that time had this strange coopetition with Intel.

Those challenging periods should not be talked about so lightly. I was there; I watched the industry go through the ups and downs.
 
I don't think people can discount Pat Gelsinger's achievements that easily, nor should they.

And there was never any "ramping 5 nodes to HVM in four years." Intel has always said "manufacturing-ready," which is very different from HVM.
Just to source this: https://www.anandtech.com/show/1682...nm-3nm-20a-18a-packaging-foundry-emib-foveros

“As always, there is a difference between when a technology ramps for production and comes to retail; Intel spoke about some technologies as 'being ready', while others were 'ramping', so this timeline is simply those dates as mentioned. As you might imagine, each process node is likely to exist for several years, this graph is simply showcasing the leading technology from Intel at any given time.”

The included roadmap chart showed Intel 7, 4, 3, 20A, and 18A arriving in Q3 of 2021, 2022, 2023, 2024, and 2025, respectively.

..

Unfortunately, I think it was left a little ambiguous whether these were volume-ramp dates or base-availability dates; they didn't necessarily say each node was getting a volume ramp.

As for actual availability of retail product, we got Meteor Lake in Dec 2023, about 15 months after the implied Q3 2022 date. TSMC typically announces volume production 9+ months before we see an Apple iPhone with that tech available, so this implies Intel was a bit late if volume was the goal.

Intel 3 products started shipping Q2 this year (Sierra Forest), 12 months after implied date.

IMO, we should be seeing 18A products in Q3 next year if this roadmap held.
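
Just to put rough numbers on those gaps, here's a quick back-of-the-envelope sketch in Python. The exact launch dates and calendar-quarter boundaries are my assumptions, and the spread shows how much the figure swings depending on whether you count from the start or the end of the roadmap quarter:

    # Back-of-the-envelope gap between the roadmap's Q3 slots and actual product.
    # Assumptions: calendar quarters, Meteor Lake retail on 2023-12-14,
    # Sierra Forest launch taken as early June 2024.
    from datetime import date

    def months_between(start: date, end: date) -> float:
        """Approximate number of months from start to end."""
        return (end.year - start.year) * 12 + (end.month - start.month) + (end.day - start.day) / 30.0

    q3_2022_start, q3_2022_end = date(2022, 7, 1), date(2022, 9, 30)  # Intel 4 roadmap slot
    q3_2023_start, q3_2023_end = date(2023, 7, 1), date(2023, 9, 30)  # Intel 3 roadmap slot
    meteor_lake   = date(2023, 12, 14)  # first Intel 4 retail product
    sierra_forest = date(2024, 6, 4)    # first Intel 3 product (assumed date)

    print(f"Intel 4 -> Meteor Lake:   {months_between(q3_2022_end, meteor_lake):.0f}"
          f"-{months_between(q3_2022_start, meteor_lake):.0f} months after the roadmap slot")
    print(f"Intel 3 -> Sierra Forest: {months_between(q3_2023_end, sierra_forest):.0f}"
          f"-{months_between(q3_2023_start, sierra_forest):.0f} months after the roadmap slot")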
 
The most serious mistake was missing the AI trend. If they had persisted with the AXG group, it might have been different. A lot of money was wasted on various projects. Both PG and Koduri thought it was Nvidia, not AMD, that was their number one threat, yet they spread their resources across many other things. They should have just copied what Jensen is doing instead of trying to be clever.
Do you know why AXG was essentially nerfed and Raja was fired? Supposedly he was lying to the rest of the company, and after Arc GPUs were finally released a year behind schedule, with software that was clearly only half-baked, Intel finally decided to pull the plug on funding AXG. All the delays, uncompetitive products, and missed deadlines led to AXG getting cut hard. Business-wise, this seems to make sense: Intel already knew 2024-26 would be harder, so they had to trim some of the fat, and the first thing that got cut was AXG, which had delivered underperforming, constantly late products. Of course, the current market for AI is such that any product will sell at a high price (or at least with good margins), but when these groups were cut it was done with the future in mind.
 
Do you know why AXG was essentially nerfed and Raja was fired? Supposedly he was lying to the rest of the company, and after Arc GPUs were finally released a year behind schedule, with software that was clearly only half-baked, Intel finally decided to pull the plug on funding AXG. All the delays, uncompetitive products, and missed deadlines led to AXG getting cut hard. Business-wise, this seems to make sense: Intel already knew 2024-26 would be harder, so they had to trim some of the fat, and the first thing that got cut was AXG, which had delivered underperforming, constantly late products. Of course, the current market for AI is such that any product will sell at a high price (or at least with good margins), but when these groups were cut it was done with the future in mind.
I tend to agree with you. Raja needed to compete with Jensen, but he was also working on a movie as a side project. I kept wondering how he could compete with Nvidia that way… Those were the red flags. But when Intel cut the funding to AXG, ChatGPT was already out. Management was not flexible enough to adjust to the market. That is my complaint.

I think for Intel to compete with Nvidia, they need to have a person who is at least as good as Jensen (vision and execution).
 
Intel 3 products started shipping Q2 this year (Sierra Forest), 12 months after implied date.
That's the problem with giving a six-month-wide window: it can be 6 months, but it can be 12 months as well, depending on which date you calculate from 🤣
IMO, we should be seeing 18A products in Q3 next year if this roadmap held.
We will see if 18A is good enough for them to go back
 
Do you know why AXG was essentially nerfed and Raja was fired? Supposedly he was lying to the rest of the company, and after Arc GPUs were finally released a year behind schedule, with software that was clearly only half-baked, Intel finally decided to pull the plug on funding AXG. All the delays, uncompetitive products, and missed deadlines led to AXG getting cut hard.
I don’t have first-hand knowledge, but from the technical deep dives I’ve seen on Alchemist, there were a few basic mistakes made in the architecture that someone with Raja’s experience should have prevented.

I would need to search hard for the technical articles on this, but the basic mistakes were along these lines: if you take an iGPU and scale it up to a ‘big GPU’, you end up with bottlenecks in some areas that didn’t exist when the GPU was small. They built Alchemist like a scaled-up iGPU but didn’t address those bottlenecks, so they are using a lot more silicon for the same performance than if they had made a few changes back in design. You might expect a company new to GPUs to make mistakes like this, but Raja having sign-off should have caught it early on.

Alchemist under Raja also went for too many software features at once (“check every box that Nvidia and AMD have”) instead of focusing on making a certain set of features work really well. You only get the first-impressions “benefit” once, and a more seasoned leader (Raja or someone under him) should have looked more closely at quality vs. quantity here. (For example, it’s really cool that Intel offered the ability to overclock your GPU... except it didn’t work at first, and the software was very buggy. Maybe that feature should have been held back until 6 months after launch.)

Also, I think he didn’t have enough contingency plans for when things ran late. I saw a few articles saying some of the software was being developed in Russia, for example, and that delayed the maturity of the SW. With the obvious Russia situation after Crimea in 2014, if Intel was developing a critical new product’s SW stack there, they should have been ready for possible geopolitical scenarios.

You may also see a pattern with Raja of overhyping and under-delivering, as he did a few times (“Poor Volta”) when the Vega architecture launched (Vega also had a ROP count or some other limitation preventing performance from scaling beyond a certain point).

I don’t know the guy, but from a million miles away, he looked a little in over his head with the scope of the project. It’s possible that Intel’s culture set him up to fail, too, of course, but when you’re paid that kind of money you really should figure out a way to succeed.

(Note: he had a herculean task in trying to launch a third major discrete GPU company, though I think Intel several years ago was in good shape to do so: engineering talent galore, a cost model that would have allowed sharing with existing iGPUs, and fabs that could be reused cheaply to get some GPUs out. It’s hard to tell if anyone could have succeeded, but we do know Alchemist under Raja came out very late and underperformed.)
 