
Palmisano: Intel Needs to Revisit its Strategy

XYang2023

Active member


Maybe the BoD can let PG retire gracefully.

The BoD can promote the CCG head, MJH, to be the CEO. Given that she already has a relationship with TSMC, she can start working on a deal with TSMC to transition Intel into a design-focused company.

The relief from the capital burden would let it focus on GPU/semi-custom accelerator lines, next-gen CPUs, edge/automotive products, and software. It also needs to integrate its different assets (Altera, Mobileye, Habana, etc.) to produce better products, like what CCG did with the Arc graphics team.

It should then let investors know that once the strategy is stabilised, it will start share buybacks strategically instead of dividend payments to reverse the current trend.
 


And Intel can simply write off the fabs, shut them down, sell the equipment, and lay off the fab engineers. One benefit is that it would help solve the fab engineer shortage and allow TSMC and Samsung to offer internationally competitive salaries to US engineers. If US-based engineers are not capable of doing technology development, then Intel needs to abandon it.
 
Working out a deal with TSMC is better and more efficient; it benefits both sides more. Intel can deleverage (shore up its finances) and TSMC can be more diversified with less capital.
 
I am sure TSMC would be willing to give Intel's products a good deal, knowing that Intel shutting down its fabs would ensure that a potential competitor is killed off forever.
 
It is way too early to change Intel management. The company is still in the middle of the change of direction to IDM 2.0. Unfortunately, these Wall Street people do not understand industry business cycles and expect everything to be done in a year or two max. It takes at least two years to build a fab and another year to ramp up a process there, and that is without the time to design the process. Intel has huge amounts of liquid cash on its balance sheet, so it has enough money to execute its strategy. Eventually the CHIPS Act money will come as well.

Intel was something like 5 years behind TSMC on process technology when Pat came back, and that cannot be solved in a single process cycle. If there is something I think could have been done differently, it would be starting that US fab earlier, having it operating simultaneously with the Irish fab, and getting the money to do this by stopping dividend distributions years ago.

But ironically, while that could have prevented them from needing to outsource as much fabrication to TSMC, that US fab would have been built before the CHIPS Act money was available, so Intel would have needed to get more funding from other sources.
 
The problem is that PG is losing the trust of many people. His strategy is very expensive, yet at the same time he has not been focused on controlling costs. He also could have done something about the internal inefficiency early on, but he didn't. Also, since he needed to deal with so many aspects of the company (the US government for the CHIPS Act, the progress of the technology, the factory build-out), how could he have had the time to do something to catch up with the AI opportunity? He is the person who cancelled Rialto Bridge, the successor to the PVC GPU. They bet on the wrong product for the current AI wave (they decided to focus on Gaudi). This really showed they lacked an understanding of the market. Could this be the reason the previous DCAI head was replaced?

Anyway, I think Intel needs to rebuild credibility regardless of which option it takes.
 
I don't know the answer to your question, but Sandra Rivera, the previous DCAI EVP, was assigned to be CEO of Altera, which looks like a challenging assignment and is probably considered a promotion. More concerning, Rivera's replacement, Justin Hotard, has never led a large-scale chip design and development group. I've never understood Gelsinger's staff selection criteria.
 


Regarding Sandra, I remember that at one of the Intel Innovation events she described Nvidia's GPUs as power hungry and unnecessary. I was quite shocked to hear that. One of the reasons for using GPUs for the current AI workloads is programmability/flexibility. Once the workloads become more mature, maybe ASICs will be more appropriate, I think.
 
GPUs are programmable, but they are also SIMD devices, which makes their applicability more limited than for CPUs, and programming them demands sophisticated software development tools. Nvidia GPUs can include three types of cores, CUDA, Tensor, and RT, which are used by different application workloads. Also, GPUs from different manufacturers have different low-level programming models, so until you get to the higher levels of programming abstraction (PyTorch, for example), there's no compatibility between GPUs from different companies. (Unfortunately for software developers, CPUs have also diverged in recent years, so a simple recompile doesn't ensure portability, as Microsoft has recently found between x86 CPUs and Qualcomm's CPUs.)
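
To illustrate that framework-level compatibility point, here is a minimal sketch of device-agnostic PyTorch: the high-level model code is identical across vendors, and everything vendor-specific lives in the backend underneath. It assumes a reasonably recent stock PyTorch install; the fallback order below is just for illustration.

```python
import torch
import torch.nn as nn

# Pick whichever accelerator backend this PyTorch build exposes.
# The model code below is the same regardless of vendor; the
# vendor-specific pieces live in the backend libraries underneath.
if torch.cuda.is_available():             # Nvidia builds (AMD ROCm builds reuse the "cuda" device name)
    device = torch.device("cuda")
elif torch.backends.mps.is_available():   # Apple-silicon backend, listed only as another example
    device = torch.device("mps")
else:
    device = torch.device("cpu")

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
x = torch.randn(32, 512, device=device)
y = model(x)                               # identical high-level code on any of the devices above
print(y.shape, device)
```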

Nonetheless, I don't agree with the comments from Rivera you heard. CPUs with vector instructions aren't as efficient or performant as GPUs, no matter how much Intel has ever wanted to believe that.

As for ASICs, there's no question they are the ultimate for power efficiency, and they can be the best for performance. But ASICs are designed for specific algorithms or functions, much like the old floating-point accelerators, which limits their applicability and complicates their utilization by applications. I'm not hopeful that AI applications will use ASIC accelerators in any widespread sense anytime soon.
 
This will happen at the edge first, where power consumption and transistor budgets are more limited.
 
You can't do massive layoffs while at the same time asking for money from the government and assuring employees that they are valuable and will be well taken care of. Doing massive layoffs before 18A comes out is a dangerous act. Not only that, 2021 was still a good year. Everything was put in motion until 2022's collapse.
 
And there was no hype for PVC, and Rialto Bridge would have been no different. Only Gaudi was competitive in some metrics. Name one competitive advantage PVC has; it has lots and lots of chiplets, for sure.
 

Quite a few people think Gaudi is ASIC-like:

In one of the earnings calls, PG admitted that Gaudi is less flexible than a GPU in terms of programming, and that once they launch Falcon Shores, they could participate in the AI market more meaningfully.

At the same time, they still need time to work on their software stack (oneAPI). If they had launched Rialto Bridge instead of cancelling it, even if it had limited volume, and if any CSP had purchased it, they could have accelerated their software readiness for Falcon Shores. Koduri, in one of his Twitter posts, stated that Rialto Bridge would have sold well given the current market conditions.
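
As a rough sketch of what that "software readiness" means at the PyTorch level: moving a model to Gaudi involves a vendor-specific bridge and an explicit graph-flush step that a CUDA user never sees. This assumes the Intel Gaudi (Habana) PyTorch bridge is installed; the module and function names are as I recall them from Habana's public documentation, so treat it as illustrative rather than definitive.

```python
import torch
import torch.nn as nn

# Assumption: the Intel Gaudi (Habana) PyTorch bridge is installed.
# Module/function names below follow Habana's docs as I remember them.
import habana_frameworks.torch.core as htcore

device = torch.device("hpu")        # Gaudi device, where an Nvidia box would use "cuda"

model = nn.Linear(1024, 1024).to(device)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

x = torch.randn(64, 1024, device=device)
loss = model(x).square().mean()
loss.backward()
htcore.mark_step()                  # Gaudi-specific: flush the lazily accumulated graph
opt.step()
htcore.mark_step()
```

The point is that even with PyTorch doing the heavy lifting, there is vendor-specific plumbing that has to be mature and well documented before customers will move workloads over.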
 
I don't think the current gen or the cancelled gen could have sold well. PVC is in the market now, but no major hyperscaler has ever used it at scale.
 
You think Intel will give up its 18A and close its fabs? And totally depend on TSMC like all those other fabless firms? Think again.
Intel is winning; that is why it has so many people "worried".
 
I didn't say Intel should give up its fabs; I mean that once they need to expand capacity, they can turn to Samsung for additional capacity.
 
Patrick Moorhead is generally smarter than to make architectural generalizations like "Gaudi is ASIC-like", especially when I'm not convinced he really knows what he's talking about.

I do agree that Gaudi 3 looks more like a Google TPU than it does an Nvidia GPU, but beyond that there are some significant architectural and run-time differences between the tensor cores in an Nvidia GPU and the TPCs in Gaudi 3. Nvidia GPUs get their parallelism from large numbers of single-threaded tensor cores executing a global SIMD run-time model. Gaudi 3 uses a smaller number of Tensor Processor Cores (TPCs), each of which handles internal parallelism with a VLIW SIMD runtime architecture, and then the 64 TPCs work in parallel. Depicting an Nvidia GPU as a processor and Gaudi as an ASIC seems arbitrary in a technical sense. Gaudi also has dedicated Matrix Multiplication Engines, which are similar to the accelerators we see in some CPUs.

Which is better? I haven't seen comparative test results on the same applications, but there are some differences between Nvidia Blackwell and Gaudi 3 that seem interesting. Gaudi communicates with a CPU driving the overall application using PCIe, which means the communications are via I/O operations. Blackwell uses a cache coherent version of NVLink (NVLink-C2C) to communicate with the Grace CPU, so exchanges are made at the instruction level, which is far more efficient than I/O operations. Also, inter-node communications are over non-coherent NVLink for Blackwell, along with IB and Ethernet support for scale-out via state of the art Mellanox technology, while Gaudi uses 200Gb/sec Ethernet with RoCE support of unknown (to me) origin and latency. (For those who aren't fascinated by the networking differences, RoCE is the InfiniBand transport layer encapsulated in UDP/IP over Ethernet. RoCE latencies are typically higher than native InfiniBand latencies, because Ethernet switches use a switching table architecture that is inherently less efficient than InfiniBand's. On the other hand, InfiniBand takes special network management expertise that many data centers don't have and don't want.)
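
For anyone who wants to see why those link differences matter, a simple alpha-beta cost model (time ≈ latency + bytes/bandwidth) captures the shape of it: latency dominates small, chatty exchanges, while bandwidth dominates large tensor transfers. A quick sketch follows; the latency and bandwidth figures are placeholder assumptions I made up to show the math, not measured or published numbers for any product.

```python
# Toy alpha-beta model: transfer_time = latency + size / bandwidth.
# All figures below are illustrative placeholders, NOT vendor specifications.
links = {
    "coherent CPU-GPU link (NVLink-C2C style)": (2e-6, 400e9),   # (latency s, bandwidth B/s)
    "PCIe-attached accelerator":                (10e-6, 64e9),
    "native InfiniBand between nodes":          (2e-6,  50e9),
    "RoCE over 200GbE between nodes":           (5e-6,  25e9),
}

for size in (4_096, 268_435_456):            # a small control message vs. a 256 MiB tensor
    print(f"\ntransfer of {size:,} bytes:")
    for name, (lat, bw) in links.items():
        t = lat + size / bw
        print(f"  {name:42s} ~{t * 1e6:10.1f} us")
```

The exact numbers aren't the point; the point is that an instruction-level coherent link and an I/O-based path sit in different latency classes for small exchanges, while for big transfers the bandwidth term takes over.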

Does it matter which is better? I suspect that if a data center has been using Nvidia to run AI apps, they're going to continue to use Nvidia, because any sort of porting to a different hardware architecture can result in a lot of expensive and time-consuming testing, tuning, and verification. On the other hand, for new applications in new installations, Gaudi 3 will certainly be cheaper than anything with the Nvidia name on it, and the performance may be good enough for a lot of apps, especially if the focus is PyTorch. I don't know enough about the AI development market to know how open application developers are to change.
 