Intel Corporation (INTC) Q3 2023 Earnings Call

Daniel Nenni

Admin
Staff member


Interesting call. Pat certainly has a lot to say:

Pat Gelsinger

Thank you, John, and good afternoon, everyone. Before we begin, given our significant and now almost 50-year presence in Israel, we are deeply saddened by the recent attacks and their impact on the region. Our utmost priority is the safety and welfare of our people in Israel and their families. But I also want to recognize the resilience of our teams as they have kept our operations running and our factory expansion progressing. Our thoughts are with all of those affected by the war, and I am praying for a swift return to peace.

Turning to our results, we delivered an outstanding Q3, beating expectations for the third consecutive quarter. Revenue was above the high-end of our guidance and EPS benefited from both strong operating leverage and expense discipline. More important than our standout financial performance were the key operational milestones we achieved in the quarter across process and products, Intel Foundry Services, and our strategy to bring AI everywhere.

Simply put, this quarter demonstrates the meaningful progress we have made towards our IDM 2.0 transformation. The foundation of our strategy is reestablishing transistor power and performance leadership. While many thought our ambitions were a bit audacious when we began our five nodes and four-year journey roughly 2.5-years ago, we have increasing line of sight towards achieving our goal. Intel 7 is done with nearly 150 million units in aggregate of Alder Lake, Raptor Lake, and Sapphire Rapids already in the market. In addition, Emerald Rapids has achieved product release and began shipping this month.

In Q3, we began initial shipments of Meteor Lake on Intel 4, which we are now aggressively ramping on the most productive fleet of EUV tools in the industry, providing us with a greater than 20% capital efficiency advantage, compared to when EUV tools were first launched. High volume EUV manufacturing is well underway in Oregon and more recently in Ireland. Our FAB 34 in Ireland represents the first high volume EUV production in Europe, underscoring our commitment to establish geographically diverse and resilient supply. We are the only leading-edge semiconductor manufacturer at scale in every major region of the globe.

Our Intel 3 process is tracking to be manufacturing ready by year-end, supporting our first two Intel 3 products, Sierra Forest and Granite Rapids. In fact, our production stepping of Sierra Forest is already out of fab, and what we expect to be the production stepping of Granite Rapids has already taped in and is in the fab now. We are particularly excited by our move into the Angstrom era with Intel 20A and Intel 18A. Adding to our accelerating adoption of EUV are two key new innovations, RibbonFET and PowerVIA, representing the first fundamental change to the transistor and process architecture since we commercialized FinFET in 2012.

I have been studying SEM diagrams for almost 40-years. RibbonFET and PowerVIA are true works of art, the most exquisite transistors ever created. We expect to achieve manufacturing readiness on Intel 20A in the first-half of 2024. Arrow Lake, our lead product on 20A, is already running Windows and demonstrating excellent functionality. Even more significant, we hit a critical milestone on Intel 18A with the 0.9 release of the PDK with imminent availability to external customers.

In simple terms, the invention phase of RibbonFET and PowerVIA is now complete, and we are racing towards production-ready, industry-leading process technology. Our first products on Intel 18A will go into fab on schedule in Q1 ‘24 with Clearwater Forest for servers, Panther Lake for clients, and of course a growing number of IFS test chips. We expect to achieve manufacturing readiness for Intel 18A in second-half 24, completing our incredible five nodes and four years journey on or ahead of schedule.

While Intel 18A reestablishes transistor leadership, we are racing to increase that lead. We announced at Innovation our plans to lead the industry in a move to glass substrates for high density, performance, and unique optical capabilities. We also announced our plans to begin installation of the world's first high-NA EUV tool for commercial use by the end of the year as we continue our modernization and infrastructure expansion of our Gordon Moore Park in Oregon, home of our technology development team.

Moore's Law continues to be the foundational driver of semiconductor technology and economics, which, in turn, fuels broader innovation in every industry across the globe. We remain committed to be good stewards of Moore's Law and drive advancements until we have exhausted every element on the periodic table.

Importantly, our progress on process technology is now being well validated by third parties. We have made great progress with early IFS customers this quarter, which we expect to only accelerate with the release of the 0.9 PDK for Intel 18A.

A major customer committed to Intel 18A and Intel 3, which includes a meaningful prepayment that expedites and expands our capacity corridor for this customer. The customer is seeing particularly good power, performance and area efficiency in their design. This opportunity is very significant and highlights our full systems foundry capabilities in high-performance computing: big die designs, leadership performance and area-efficient transistors, advanced packaging, and systems expertise.

In addition, we are extremely pleased to announce today that we have signed with two additional 18A customers. Both are particularly focused in areas of high-performance compute and benefiting from power performance per unit silicon area. We have also made substantial progress with our next major customer and are expecting to conclude commercial contract negotiations before year-end.

Finally, we were also very happy to expand our growing foundry ecosystem by completing our strategic partnership with Synopsys in Q3 to include IP for Intel 3 and Intel 18A for both Intel internal and external foundry customers. With the rise of AI and high-performance computing applications, our advanced packaging business is proving to be yet another unique advantage. We have seen a surge of interest in our advanced packaging from most leading AI chip companies. With capacity corridors quickly available, this is proving to be a significant accelerant and on-ramp for Intel foundry customers.

During the quarter, we were awarded two customer AI designs for our advanced packaging and with an additional six customers in active negotiations, we expect several more awards by year-end. We have also established an important business relationship with Tower Semiconductor, utilizing our manufacturing assets in New Mexico, along with Tower investing capital expenditures of roughly $300 million for its use in this facility. This represents an important step in our foundry strategy, improving cash flows by utilizing our manufacturing assets over a significantly longer period of time.

Finally, we have submitted all four of our major project proposals in Arizona, New Mexico, Ohio, and Oregon, representing over $100 billion of U.S. manufacturing and research investments to the CHIPS Program Office and are working closely with them as they review these proposals. We look forward to providing a deeper update on our foundry business during our planned IFS industry event in Q1 of 2024.

We are on a mission to bring AI everywhere. We see the AI workload as a key driver of the $1 trillion semiconductor TAM by 2030. We are empowering the market to seamlessly integrate and effectively run AI in all their applications. For the developer working with multitrillion parameter frontier models in the cloud, Gaudi and our suite of AI accelerators provides a powerful combination of performance, competitive MLPerf benchmarks and a very cost-efficient TCO. However, as the world moves towards more AI-integrated applications, there's a market shift towards local inferencing. It's a nod to both the necessity of data privacy and an answer to cloud-based inference cost.

With AI accelerated Xeon for enterprise, Core Ultra launching the AIPC generation and OpenVINO enabling developers seamless and versatile support for a range of client and edge silicon, we are bringing AI to where the data is being generated and used rather than forcing it into the cloud. Our expansive footprint spanning cloud and enterprise servers to volume clients and ubiquitous edge devices positions us well to enable the AI continuum across all our market segments. The AI continuum enables AI everywhere.

DCAI exceeded our forecast this quarter with server revenue up modestly sequentially. We continue to see a strong ramp of our 4th Gen Xeon processor with the world's top 10 CSPs now in general availability and improving strength from MNCs. During the quarter, we shipped our 1 millionth 4th Gen Xeon unit and are on track to surpass 2 million units next month. 4th Gen Xeon includes powerful accelerators, demonstrating best-in-class CPU performance for AI, security and networking workloads.

Our AI-enhanced Xeons are primed for model inferencing, enabling seamless infusion of AI into existing workloads. This was visible this quarter with over one-third of 4th Gen Xeon shipments directly related to AI applications. We are the clear leader in AI CPU results as seen in MLCommons benchmarks today, and our road map provides significant further improvements with Granite Rapids expected to deliver an additional 2 times to 3 times AI performance on top of our industry-leading 4th Gen Xeon.

We continue to make excellent progress with our Xeon road map. Our 5th Gen Xeon processor code-named Emerald Rapids is in production and ramping to customers and will officially launch on December 14 in New York City. Sierra Forest, our first E-core Xeon is on track for first-half '24 with customers well into their validation process. Sierra Forest will feature up to 288 E-cores targeting next-generation cloud-native workloads, delivering even more price performance and power efficiency for our customers.

Granite Rapids, which shortly follows Sierra Forest, is also well into our validation cycle with customers. While the industry has seen some wallet share shifts between CPU and accelerators over the last several quarters, as well as some inventory burn in the server market, we see signs of normalization as we enter Q4 driving modest sequential TAM growth. Across most customers, we expect to exit the year at healthy inventory levels, and we see growth in compute cores returning to more normal historical rates off the depressed 2023.

More importantly, our successful road map execution is strengthening our product portfolio with Gen 4 and Gen 5 Xeon, Sierra Forest and Granite Rapids positioning us well to win back share in the data center. In addition, we expect to capture a growing portion of the accelerator market in 2024 with our suite of AI accelerators led by Gaudi, which is setting leadership benchmark results with third parties like MLCommons and Hugging Face. We are pleased with the customer momentum we are seeing from our accelerator portfolio and Gaudi in particular, and we have nearly doubled our pipeline over the last 90 days.

As we look to 2024, like many others, we now are focused on having enough supply to meet our growing demand. Dell is partnering with us to deliver Gaudi for cloud and enterprise customers with its next-generation PowerEdge systems featuring Xeon and Gaudi AI accelerators to support AI workloads ranging from large-scale training to inferencing at the edge.

Together with Stability.ai, we are building one of the world's largest AI supercomputers entirely on 4th Gen Xeon processors and 4,000 Intel Gaudi2 AI accelerators. Our Gaudi road map remains on track with Gaudi3 out of the fab, now in packaging and expected to launch next year. And in 2025, Falcon Shores brings our GPU and Gaudi capabilities into a single product.

Moving to the client. CCG delivered another strong quarter, exceeding expectations for the third consecutive quarter, driven by strength in commercial and consumer gaming SKUs where we are delivering leadership performance. As we expected, customers completed their inventory burn in the first-half of the year, driving solid sequential growth, which we expect will continue into Q4. We expect full-year 2023 PC consumption to be in line with our Q1 expectations of approximately 270 million units.

In the near-term, we expect Windows 10 end-of-service to be a tailwind, and we remain positive on the long-term outlook for PC TAM returning to plus or minus 300 million units. Intel continues to be a pioneer in the industry as we ushered in the era of the AIPC in Q3 when we released the Intel Core Ultra processor code-named Meteor Lake.

Built on Intel 4, the Intel Core Ultra has been shipping to customers for several weeks and will officially launch on December 14 alongside our 5th Gen Xeon. The Ultra represents the first client chiplet design enabled by Foveros Advanced 3D packaging technology, delivering improved power efficiency and graphics performance.

It is also the first Intel client processor to feature our integrated neural processing unit, or NPU, that enables dedicated low-power compute for AI workloads. Next year, we will deliver Arrow Lake as well as Lunar Lake, which offers our next-gen NPU, ultra-low power mobility and breakthrough performance per watt.

Panther Lake, our 2025 client offering, heads into the fab in Q1 '24 on Intel 18A. The arrival of the AIPC represents an inflection point in the PC industry, not seen since we first introduced Centrino in 2003. Centrino was so successful because of our time-to-market advantage, our embrace of an open ecosystem, strong OEM partnerships, our performance silicon and our developer scale. Not only are these same advantages in place today, they are even stronger as we enter the age of the AIPC.

We are catalyzing this moment with our AIPC acceleration program with over 100 ISVs already participating, providing access to Intel's deep bench of engineering talent for targeted software optimization, core development tools and go-to-market opportunities. We are encouraged and motivated by our partners and competitors who see the tremendous growth potential of the PC market. NEX is also seeing early signs of the benefits from growing AI use cases.

Our Ethernet and IPU businesses are well suited to support the high I/O bandwidth required by AI workloads in the data center, with growth expected to accelerate for both in 2024. Additionally, at the edge, as part of Intel's focus on every aspect of the AI continuum, NEX launched OpenVINO 2023.1, the latest version of the AI inferencing and deployment runtime of choice for developers on client and edge platforms, with AI.io and Fit Match demonstrating how they use OpenVINO to accelerate their applications at our innovation conference.

We have leadership developer software tool chains that have seen a doubling of developer engagements this year. While NEX entered its inventory correction after client and DCAI, Q3 results beat our internal forecast and grew sequentially. We see continued signs of stabilization heading into Q4.

Finally, our Smart Capital strategy underpins our relentless drive for efficiency and our commitment to be great allocators of our owners' capital while consistently looking for innovative ways to unlock value for all our stakeholders. We remain on track to reduce costs by $3 billion in 2023, and we continue to see significant incremental opportunities for operational improvement as we execute on our internal foundry model.

In addition, in Q3, we made the decision to divest the pluggable module portion of our silicon photonics business, allowing us to focus on the higher-value component business and optical I/O solutions to enable AI infrastructure scaling. This marks the tenth business we have exited in the last 2.5 years, generating $1.8 billion in annual savings, a testament to our efforts to optimize our portfolio and drive long-term value creation.

Mobileye's solid Q3 and Q4 outlook continue to underscore the benefits of increased autonomy afforded by our initial public offering last year. In addition, we added TSMC as a minority investor in our IMS nano fabrication business in Q3. And earlier this month, we announced our plans to operate PSG as a stand-alone business beginning January 1. Similar to Mobileye and IMS, this decision gives PSG the mandate, focus and resources to better capitalize on their growth opportunities. We plan to report PSG results as a stand-alone segment in Q1, to bring in private investors in 2024 and to create a path to an initial public offering over the next two to three years.

In summary, we continue to deliver tangible progress 2.5 years into our transformation journey. We are on track with five nodes in four years. We are hitting or beating all our product road map milestones. We are establishing ourselves as a global at-scale systems foundry for both wafer processing and advanced packaging. We are unlocking new growth opportunities fueled by AI. And we are driving financial discipline and operational efficiencies as we continue to unlock value for our shareholders.

While we are encouraged by our progress to date, we know we have much more work in front of us as we continue to relentlessly drive forward with our strategy, maintain our execution momentum and deliver our commitments to our customers. I'd like to personally thank the Intel family for all their efforts.

With that, let me turn it over to Dave to go through our results in more detail and provide guidance for Q4.
 
Question about TSMC's comment on Intel 18A. I knew that would be a big one:

Timothy Arcuri
I do, I do, yes. Pat, can you just talk about the dynamics and maybe the allocation that you're getting from your major foundry partner? They went kind of out of their way to sort of go at the idea that your -- that 18A is going to be comparable to what they'll have in that same time frame. So now that it's becoming a little more apparent that you are making progress, has there been any change in that relationship and the allocation that you're getting from them? Thanks.

Pat Gelsinger
Yes. And first, I'd say we've come to a different conclusion than what you might have heard from them. We feel that our five nodes in four years, the leadership position that we expect with Intel 18A, this is a remarkable set of work. And as you heard me say in my formal comments, we think of 18A as a work of art. This is the finest transistor, right? And we've invented the last 30 years of transistors. This is the best one that's ever been built, right? And that and PowerVia, we feel very confident that we are on track to the leadership position that we described.

And as we've also said, hey, we're well underway on the things after that. And things like high-NA, the next generation of EUV, or advanced packaging with glass, all of these are now being backed up and reinforced by customer commitments. Three 18A customers are now making commitments: the prepay customer I spoke about earlier and two additional customers. On partners, we announced the Arm relationship in April, and they're now seeing very positive results on power, performance and area from 18A.

And as we think about the relationship with TSMC, hey, this is a great company and one that we partner with, one that we are a competitor to, one that we're a customer of. We collaborate with them. As you saw, they became an investor in the IMS business this quarter as well. And as a customer of theirs, we're very happy with how they're supporting us and our products as we're racing many of these products forward. They're a critical supplier to us, as we're a critical supplier to them. This is one of the most critical relationships in the industry. I spend a lot of time personally on it.

We're very confident in our road map. And this is really an exceptional quarter for five nodes in four years and getting back to process leadership. We are well on our way to doing exactly what we said we would.
 
The Arm question:

Ross Seymore
Hi, guys. Thanks for asking a question. Pat, I wanted to follow-up on one of the topics you just mentioned about your partnership with ARM. The flip side of that is there's been reports recently of a number of people entering the CPU business for PCs using ARM architectures similar to what we've seen over the last couple of years on the data center side of things. Can you just talk about the competitive landscape of x86 versus ARM? And potentially, more importantly, if in fact, ARM was gaining traction, would you consider using that architecture and kind of broaden your technology internally?

Pat Gelsinger
Yes. Thank you, Ross. And overall, I think what you're seeing is the industry is excited around the AIPC. And as I declared this generation of AIPC at our Innovation Conference a couple of months ago, we're seeing that materialize and customers, competitors seeing excitement around that. ARM and Windows client alternatives, generally, they've been relegated to pretty insignificant roles in the PC business.

And we take all competition seriously. But I think history as our guide here, we don't see these potentially being all that significant overall. Our momentum is strong. We have a strong road map, Meteor Lake launching this AIPC generation December 14. Arrow Lake, Lunar Lake, we've already demonstrated the next-generation product at Lunar Lake, which has significant improvements in performance and capabilities.

We'll be seeing Panther Lake, the next generation, in the fab in Q1 on Intel 18A. We announced our AI Acceleration program, which already has over 100 ISVs as part of it. We expect over 100 million x86 AI-enhanced PCs in the marketplace in the next two years. This is just an extraordinary amount of volume, with all the ecosystem benefits that brings into the marketplace.

When thinking about other alternative architectures like ARM, we also say, wow, what a great opportunity for our foundry business. And given the results I referenced before, we see that as a unique opportunity to participate in the full success of the ARM ecosystem, in whatever market segments that may be, as an accelerant to our foundry offerings, which we think are now becoming very significant around the ARM ecosystem with our foundry packaging and 18A wafer capabilities as well.
 
CC Wei said two things on the TSMC call that impact Intel. Here's the thing: CC Wei talks to TSMC's top customers, so he knows what is going on, and if he says something on an investor call that proves not to be true, it could cost him his job or cause TSMC some serious legal issues. CC Wei is too smart for that, in my opinion.

CC said TSMC N3P is competitive with 18A in regard to PPA. 18A may be a work of art, but the feedback from TSMC customers says otherwise.

CC also said TSMC N2 customer adoption is higher than N3 was at a similar stage. This is significant, since the top fabless semiconductor companies have to commit to the next node by year-end, so CC Wei knows who his N2 customers are. TSMC N2, Intel 18A, and Samsung 3nm are the choices, and from what CC Wei said, N2 adoption is going VERY well. Remember, TSMC N3 dominates the market, so if N2 adoption is greater than or equal to N3, that is huge!

Even if Intel has three 18A customers, those same companies may also be using TSMC N2 for other products (Nvidia, QCOM, MediaTek, etc.).

Bottom line: PR spin aside, if you compare IFS 18A and TSMC N2 customer silicon in volume production, TSMC/Apple will win this race, absolutely.
 
CEOs need to tell investors the truth on earnings calls. But sometimes it will not be the "whole truth," and that truth can also change over time.
 
I think it is worth considering what CC said, as I don't think anything that was said is contradictory. TSMC said that N2 has 13% better performance at iso-power than N3E at 0.9V. They also say that N3P is 5% better than N3E on this metric. By extension, N2 is only ~8% better than N3P (see the quick arithmetic below). The density bump of N2 vs N3P is also pretty small compared to what we are used to (~11%). In the past, Pat said he thinks 18A will have comparable density to N2 (which sounds an awful lot like "close but no cigar" to me).

Going back to TSMC: if TSMC thought N3P was equal to or better than 18A in any metric, I assume CC would have said as much. The fact that he said it was only "comparable in PPA" to me means that TSMC thinks 18A has better PPA than N3P. I have no idea how accurate the competitive analysis teams at Intel and TSMC are, but with how close N3P and N2 are, and with TSMC's projections seeming to indicate that 18A PPA is somewhere in the middle, these projections should be within the error bars.

CC also didn't specify what exactly "N2 being more advanced than 18A" means. We know it is not from an architectural perspective, as N2 will be inferior there until at least N2+BSPD, so we can assume that is not what he meant. Is it from a power-performance perspective, HP density, HD density, SRAM, all of the above, some combination of the above? I couldn't tell you. Assuming TSMC is right and 18A is somewhere in the middle, couldn't Intel just say the same thing TSMC said, that "18A is comparable to N2 and comes out earlier with greater maturity"?

The only thing I can say without a doubt is that Dan, you are right on the money when it comes to customer Si. They are the ones who will have the final say in the matter.
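To make that step explicit, here is a minimal sketch of the arithmetic, using only the percentages quoted above (TSMC's own iso-power claims, not measurements):

```python
# Relative iso-power performance, normalized to N3E = 1.00,
# using the figures quoted in the post above.
n3e = 1.00
n2 = n3e * 1.13    # N2 quoted as ~13% faster than N3E at iso power (0.9V)
n3p = n3e * 1.05   # N3P quoted as ~5% faster than N3E

n2_vs_n3p = n2 / n3p - 1.0
print(f"N2 vs N3P at iso power: {n2_vs_n3p:.1%}")  # ~7.6%, i.e. the ~8% cited above
```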

As for N2 having greater MSS than N3, I don't think that is literally possible. Even if Intel and Samsung only sell a single wafer, that is more market share than they have at 3 "nm" right now. As I said before, I suspect the "greater engagement" is due to higher technical complexity necessitating earlier collaboration. I suppose one alternate explanation could be that TSMC expects a big increase in the number of fabless houses, which strikes me as unlikely given the cost trend of leading-edge designs and tape-outs.

On a side note, one thing that interested me was Pat talking up area as well as performance. I don't really remember Intel ever really talking about density. They even went so far as to remove the "Intel 4 is 2x Intel 7" density claim from their public roadmap when they unveiled the original 5N4Y plan, back when 10nm SF was the first node and 20A was the last node :LOL:. From the beginning it was always power-performance; then performance leadership got thrown in with it.
 
I think what Pat G and CC Wei said are both true, as we have seen in the performance/density comparisons across all three foundries: Intel wins on performance, and TSMC wins on density. I cannot find that chart now, but I remember that on performance Intel 3 slightly beats TSMC N2, while on density TSMC N3 still beats Intel 18A by a big margin. On performance, Intel is far ahead with 18A. I think the following statement from Pat says it all.

In addition, we are extremely pleased to announce today that we have signed with two additional 18A customers. Both are particularly focused in areas of high-performance compute and benefiting from power performance per unit silicon area.


But if I have to pick one who's underestimating who, I think it's CC Wei.

NVDA probably didn't make that statement to TSMC, as Jensen has said publicly that the Intel 18A test chips 'look good'. I think TSMC may be getting its signals from Qualcomm, MediaTek and others that are heavily focused on power efficiency. AMD doesn't work with IFS, which makes those sources a lot less reliable, I think. You also have Broadcom in HPC, who have already said they are not working with IFS. Qualcomm was early into IFS but later backed off and never mentioned IFS again. That may suggest 18A is just not for mobile, which would make TSMC less impacted.

But its other platforms may not be doing as well. Those customers will feel more of a threat from Intel, which benefits from vertical integration, making Intel more capable of launching a price war as more capacity comes online. Plus, Intel will pull Falcon Shores, its AI/HPC product, back to its own factories.

This poses a question for AMD and TSMC, because they don't have complete information about 18A. And apparently Intel is gaining traction, which is nothing AMD/TSMC want.



 
I think what Pat G and CC Wei said are both true, as we have seen in the performance/density comparisons across all three foundries: Intel wins on performance, and TSMC wins on density. I cannot find that chart now, but I remember that on performance Intel 3 slightly beats TSMC N2, while on density TSMC N3 still beats Intel 18A by a big margin. On performance, Intel is far ahead with 18A. I think the following statement from Pat says it all.
I don't have any problems with TechInsights. But I feel like TSMC would have a better idea of what Intel is up to. They would of course also know exactly what N2's capabilities are. Either way, I suppose you could be right on this. If we take TechInsights' projections as correct, and 18A leads in performance at high voltages while losing to N3E in density, you could say "PPA is comparable". And because N2 in the TechInsights projections is a good bit denser with better performance, you could very well say it is "more advanced than 18A".
But if I have to pick one who's underestimating who, I think it's CC Wei.
Why is that? The current TSMC doesn't strike me as arrogant.
NVDA probably didn't make that statement to TSMC, as Jensen has said publicly that the Intel 18A test chips 'look good'. I think TSMC may be getting its signals from Qualcomm, MediaTek and others that are heavily focused on power efficiency. AMD doesn't work with IFS, which makes those sources a lot less reliable, I think. You also have Broadcom in HPC, who have already said they are not working with IFS. Qualcomm was early into IFS but later backed off and never mentioned IFS again. That may suggest 18A is just not for mobile, which would make TSMC less impacted.
I think it is worth pointing out that Jensen didn't specify which Intel node they had test chips for (granted, I think smart money is on 18A rather than Intel 3). In TSMC's defense, 18A "looking good" is not mutually exclusive with anything they said, and it doesn't confirm anything Pat/Intel have said.
But its other platforms may not be doing as well. Those customers will feel more of a threat from Intel, which benefits from vertical integration, making Intel more capable of launching a price war as more capacity comes online. Plus, Intel will pull Falcon Shores, its AI/HPC product, back to its own factories.
When did Intel announce that Falcon Shores was moving to 18A? Since it was Habana/AXG stuff, I had just assumed it was at TSMC.
This poses a question for AMD and TSMC, because they don't have complete information about 18A. And apparently Intel is gaining traction, which is nothing AMD/TSMC want.
Intel's TD cannot have complete information either, because just as IFS customers have NDAs with Intel, Intel BUs and IFS customers have NDAs with TSMC.
 
When did Intel announce that Falcon Shores was moving to 18A? Since it was Habana/AXG stuff, I had just assumed it was at TSMC.
When they cancelled the product between PVC and Falcon Shores, as I recall. I don't remember the product name, but it was on Intel 7, so pretty insignificant.
 
Intel's TD cannot have complete information either, because just as IFS customers have NDAs with Intel, Intel BUs and IFS customers have NDAs with TSMC.
I think even if that is true, it's not in effect yet, since the internal foundry model officially starts in Q1 next year. Intel is still very integrated today, though they are making the move. But also, Pat Gelsinger was the 486 lead architect. He definitely has insider information on both IFS and TSMC and knows how to compare the two, right?
 
Just based on process specs:

TSMC N3 – 293MTx/mm2 – performance 1.82
TSMC N2 – 313MTx/mm2 – performance 2.05
Intel 18A – 238MTx/mm2 – performance 2.78

Intel does not release power numbers, but from what I have heard inside the ecosystem, TSMC leads in power savings, so TSMC can claim leadership in two out of the three elements of PPA. They also lead in cost, so three out of four in PPAC.
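As a quick way to read those specs, here is a minimal sketch that turns the density and performance figures quoted above into head-to-head ratios (the numbers are taken at face value from the list; nothing else is assumed):

```python
# Quoted process specs from the list above: (density in MTx/mm^2, relative performance).
nodes = {
    "TSMC N3":   (293, 1.82),
    "TSMC N2":   (313, 2.05),
    "Intel 18A": (238, 2.78),
}

base = "TSMC N2"
base_density, base_perf = nodes[base]
for name, (density, perf) in nodes.items():
    if name != base:
        print(f"{name} vs {base}: density {density / base_density:.0%}, "
              f"performance {perf / base_perf:.0%}")

# Intel 18A vs TSMC N2 comes out to roughly 76% of the density and ~136% of the
# performance -- consistent with "TSMC leads in density, Intel in performance" above.
```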

As the PDKs near production (they are at v0.9 now), we know PPA numbers based on specs and simulations. What happens with the actual silicon is still unknown, since design and yield are always factors.

There are some interesting papers at IEDM in December. Scotten, Paul, and I will be there so stay tuned.
 
Based on Intel's Q4 2023 revenue guidance of $14.6-15.6 billion, Intel's full-year 2023 revenue will be around $53.9 billion (give or take roughly $500 million). That means Intel's 2023 revenue is dropping back to its 2011 or 2012 level. It's a serious problem and can lead to many negative consequences. Will Intel's 2024 revenue get better?
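For reference, here is roughly how that full-year figure falls out of the guidance. The Q1-Q3 values below are the approximate reported revenue figures (rounded); the Q4 value is the guidance midpoint:

```python
# Approximate Intel 2023 revenue by quarter, in billions of dollars.
# Q1-Q3 are rounded reported figures; Q4 uses the guidance range above.
q1, q2, q3 = 11.7, 12.9, 14.2
q4_low, q4_high = 14.6, 15.6
q4_mid = (q4_low + q4_high) / 2            # 15.1

full_year_mid = q1 + q2 + q3 + q4_mid      # ~53.9
full_year_low = q1 + q2 + q3 + q4_low      # ~53.4
full_year_high = q1 + q2 + q3 + q4_high    # ~54.4
print(f"Implied FY2023 revenue: ~${full_year_mid:.1f}B "
      f"(${full_year_low:.1f}B to ${full_year_high:.1f}B)")
```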


 

Great graphic, thank you. Intel lost 10 years of revenue ramp!
 
Secondly, Intel IFS is going to be considerably more expensive than TSMC, due to labor costs, etc. in each country.
You don't know that, since IFS pricing is unknown, and is probably subject to a strict NDA. And labor costs are only a relatively small portion of fabrication costs, as widely discussed on this forum. Most likely, for several years, Intel's foundry gross margins will just be lower than TSMC's, which is logical, since everyone's margins are lower than TSMC's.
 
You don't know that, since IFS pricing is unknown, and is probably subject to a strict NDA. And labor costs are only a relatively small portion of fabrication costs, as widely discussed on this forum. Most likely, for several years, Intel's foundry gross margins will just be lower than TSMC's, which is logical, since everyone's margins are lower than TSMC's.
It's quite well known that Intel is more expensive than other foundries. Also, other foundries have to charge a lot more for chips made in US because of labour and other costs: https://www.tomshardware.com/news/tsmc-to-charge-extra-for-us-made-chips
 
It's quite well known that Intel is more expensive than other foundries. Also, other foundries have to charge a lot more for chips made in US because of labour and other costs: https://www.tomshardware.com/news/tsmc-to-charge-extra-for-us-made-chips
We've seen that article before. IMO, it's clickbait. Short on facts, long on conjecture, as is the DigiTimes article the Tom's article quotes. "Well known"? How? Personally, I think it's likely true that Intel's US costs are higher than TSMC's costs in Taiwan, but we don't know what the facts are, and probably never will. As for pricing, producers make special deals for customers they consider critical, and less special deals for customers they feel aren't worth fighting for. And it's probably true that reducing perceived geopolitical risk has a monetary value. Companies might be willing to pay more for fabs outside of Taiwan, even if the fab margins really aren't much different.
 
We've seen that article before. IMO, it's clickbait. Short on facts, long on conjecture, as is the DigiTimes article the Tom's article quotes. "Well known"? How? Personally, I think it's likely true that Intel's US costs are higher than TSMC's costs in Taiwan, but we don't know what the facts are, and probably never will. As for pricing, producers make special deals for customers they consider critical, and less special deals for customers they feel aren't worth fighting for. And it's probably true that reducing perceived geopolitical risk has a monetary value. Companies might be willing to pay more for fabs outside of Taiwan, even if the fab margins really aren't much different.

We did a cost study a while back (Scotten Jones and I compared notes). This was done after Morris Chang mentioned a 50% cost uplift for US-made wafers. We feel there will be a 20% manufacturing cost difference between wafers manufactured in Taiwan and in the US. The actual cost to the customer may vary, of course. We also feel that the wafer manufacturing cost between Intel and TSMC in the US will be comparable, so Intel could be competitive with TSMC. TSMC has a cost advantage, but it is not double digit. Samsung, on the other hand, has a similar cost structure to TSMC: cheaper in Korea than in the US, but Samsung does not report wafer margins. At 5nm I was told Samsung wafers were quoted at 10%-15% less than TSMC. Samsung 3nm is probably even less expensive since they are still trying to get that node established. Based on a teardown of a crypto mining chip, it is not a PPA-competitive node versus Intel and TSMC, not even close.
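To make those percentages concrete, here is a purely illustrative sketch that normalizes a Taiwan-fabbed TSMC wafer cost to 1.0 and applies the figures discussed above; the baseline is arbitrary and none of these are real prices:

```python
# Purely illustrative: relative wafer manufacturing cost, TSMC Taiwan = 1.0.
tsmc_taiwan = 1.00
tsmc_us_chang_claim = tsmc_taiwan * 1.50   # Morris Chang's ~50% US uplift comment
tsmc_us_estimate    = tsmc_taiwan * 1.20   # ~20% uplift estimated above
samsung_5nm_quote   = (tsmc_taiwan * 0.85, tsmc_taiwan * 0.90)  # quoted 10-15% below TSMC at 5nm

print(f"US-made wafer, 50% uplift claim:     {tsmc_us_chang_claim:.2f}")
print(f"US-made wafer, ~20% uplift estimate: {tsmc_us_estimate:.2f}")
print(f"Samsung 5nm quote range:             {samsung_5nm_quote[0]:.2f}-{samsung_5nm_quote[1]:.2f}")
```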

It cracks me up when people reference articles by Anton Shilov. He is a master clickbaiter of the highest level. Rarely will you see original material from him, just rehashed words from other authors who may or may not know what they are talking about. These guys get paid per click, so what do you expect?
 