
Hock Tan on Semiconductors Q2 2024

Daniel Nenni

Admin
Staff member
Hock Tan
Turning to semiconductors, let me give you more color by end markets. Networking. Q2 revenue of $3.8 billion grew 44% year-on-year, representing 53% of semiconductor revenue. This was again driven by strong demand from hyperscalers for both AI networking and custom accelerators. It's interesting to note that as AI data center clusters continue to deploy, our revenue mix has been shifting towards an increasing proportion of networking.

We doubled the number of switches we sold year-on-year, particularly the Tomahawk 5 and Jericho3, which we deployed successfully in close collaboration with partners like Arista Networks, Dell, Juniper, and Supermicro. Additionally, we also doubled our shipments of PCI Express switches and NICs in the AI backend fabric. We're leading the rapid transition of optical interconnects in AI data centers to 800-gigabit bandwidth, which is driving accelerated growth for our DSPs, optical lasers, and PIN diodes. And we are not standing still. Together with these same partners, we are developing the next-generation switches, DSPs, and optics that will drive the ecosystem towards 1.6-terabit connectivity to scale out larger AI accelerated clusters.

Talking of AI accelerators, you may know our hyperscale customers are accelerating their investments to scale up the performance of these clusters. And to that end, we have just been awarded the next-generation custom AI accelerators for these hyperscale customers of ours. Networking these AI accelerators is very challenging, but the technology does exist today in Broadcom, which has the deepest and broadest understanding of what it takes for complex, large workloads to be scaled out in an AI fabric. Proof point: seven of the largest eight AI clusters in deployment today use Broadcom Ethernet solutions.

Next year, we expect all mega-scale GPU deployments to be on Ethernet. We expect the strength in AI to continue, and because of that, we now expect networking revenue to grow 40% year-on-year, compared to our prior guidance of over 35% growth. Moving to wireless. Q2 wireless revenue of $1.6 billion grew 2% year-on-year, was seasonally down 19% quarter-on-quarter, and represented 22% of semiconductor revenue.

And in fiscal '24, helped by content increases, we reiterate our previous guidance for wireless revenue to be essentially flat year-on-year. This trend is wholly consistent with our continued engagement with our North American customer, which is deep, strategic, and multiyear and represents all of our wireless business. Next, our Q2 server storage connectivity revenue was $824 million, or 11% of semiconductor revenue, down 27% year-on-year. We believe, though, that Q2 was the bottom in server storage. And based on updated demand forecasts and bookings, we expect a modest recovery in the second half of the year. Accordingly, we forecast fiscal '24 server storage revenue to decline around the 20% range year-on-year.

Moving on to broadband. Q2 revenue declined 39% year-on-year to $730 million and represented 10% of semiconductor revenue. Broadband remains weak on the continued pause in telco and service provider spending. We expect broadband to bottom in the second half of the year, with a recovery in 2025. Accordingly, we are revising our outlook for fiscal '24 broadband revenue to be down high 30s year-on-year, from our prior guidance for a decline of just over 30% year-on-year.

Finally, Q2 industrial resale of $234 million declined 10% year-on-year. And for fiscal '24, we now expect industrial resale to be down a double-digit percentage year-on-year, compared to our prior guidance for a high single-digit decline.

So to sum it all up, here's what we are seeing. For fiscal '24, we expect revenue from AI to be much stronger at over $11 billion. Non-AI semiconductor revenue has bottomed in Q2 and is likely to recover modestly for the second half of fiscal '24.
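The segment figures quoted in these remarks can be cross-checked against each other. A quick sketch (using only the dollar amounts stated above, in billions) recovers the stated percentage shares and the implied total semiconductor revenue of roughly $7.2 billion:

```python
# Sanity-check the segment shares quoted in the transcript above.
# All dollar figures (in $B) are taken directly from the remarks.
segments = {
    "networking": 3.8,        # stated as 53% of semiconductor revenue
    "wireless": 1.6,          # stated as 22%
    "server_storage": 0.824,  # stated as 11%
    "broadband": 0.730,       # stated as 10%
    "industrial": 0.234,      # resale; share not stated, ~3%
}

total = sum(segments.values())  # implied semiconductor revenue, ~$7.2B
for name, rev in segments.items():
    print(f"{name:>14}: ${rev:.3f}B = {rev / total:.0%} of semis")
print(f"{'total':>14}: ${total:.3f}B")
```

The rounded shares all match the percentages Hock Tan gives, which suggests the five segments listed here are essentially the whole semiconductor segment for the quarter.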
 
Hock Tan
Talking of AI accelerators, you may know our hyperscale customers are accelerating their investments to scale up the performance of these clusters. And to that end, we have just been awarded the next-generation custom AI accelerators for these hyperscale customers of ours. Networking these AI accelerators is very challenging, but the technology does exist today in Broadcom, which has the deepest and broadest understanding of what it takes for complex, large workloads to be scaled out in an AI fabric. Proof point: seven of the largest eight AI clusters in deployment today use Broadcom Ethernet solutions.
Broadcom Ethernet switch ASICs are the highest-volume data center switch ASICs, but the rest of this is bravado. Google, for example, has proprietary all-optical switch technology in its own network, called Jupiter.


Broadcom also thinks Ethernet needs a redesign for AI use, because they're a founding member of the Ultra Ethernet Consortium:


This is what the Ultra Ethernet guys say they're doing:

  • Deliver a complete architecture that optimizes Ethernet for high performance AI and HPC networking, exceeding the performance of today’s specialized technologies. UEC specifically focuses on functionality, performance, TCO, and developer and end-user friendliness, while minimizing changes to only those required and maintaining Ethernet interoperability.

Which directly implies that Ethernet doesn't have what it takes yet.

And of course Nvidia is offering InfiniBand, which Broadcom doesn't build ASICs for. The last estimate I saw was that Nvidia is selling just under $3B/quarter in InfiniBand NICs and switches. It's hard to tell how much of that is for AI and how much is for HPC (like weather forecasting), but I'd be surprised if Nvidia's AI networking revenue doesn't exceed Broadcom's.
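A back-of-the-envelope comparison makes the point. This sketch uses only the two estimates quoted in this post (the ~$3B/quarter InfiniBand figure and Broadcom's "over $11 billion" fiscal '24 AI guidance); neither is a confirmed company disclosure, and the AI/HPC split of the InfiniBand number is unknown:

```python
# Annualize the quoted Nvidia InfiniBand run rate and compare it to
# Broadcom's fiscal '24 AI revenue guidance. Both inputs are estimates
# from the post above, not confirmed figures.
nvidia_ib_quarterly = 3.0  # $B/quarter, "just under $3B/quarter"
broadcom_ai_fy24 = 11.0    # $B, "over $11 billion", includes accelerators

nvidia_ib_annualized = nvidia_ib_quarterly * 4  # ~$12B/year
print(f"Nvidia InfiniBand, annualized: ~${nvidia_ib_annualized:.0f}B/yr")
print(f"Broadcom AI total, fiscal '24: ~${broadcom_ai_fy24:.0f}B "
      "(networking is only part of this)")
```

Even before carving out the HPC portion, the annualized InfiniBand figure alone exceeds Broadcom's total AI number, which itself includes custom accelerators, not just networking; that is what the comparison hinges on.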

Is it just me, or does Nvidia look like the only chip design company not telling BS stories lately?
 
Answering my own question, it looks like Cerebras is the real thing.

AI has been a boon to companies like Groq, Cerebras, Ampere, etc... I have not seen so many pivots since a Veterans Day parade! :ROFLMAO: Whatever happened to the AI companies Intel bought for billions of dollars?

Plus others.
 
AI has been a boon to companies like Groq, Cerebras, Ampere, etc... I have not seen so many pivots since a Veterans Day parade! :ROFLMAO: Whatever happened to the AI companies Intel bought?

And others.
Intel has never leveraged acquisitions well. Can you name one that was successful in the context of Intel's business strategy? I can't.
 
AI has been a boon to companies like Groq, Cerebras, Ampere, etc... I have not seen so many pivots since a Veterans Day parade! :ROFLMAO: Whatever happened to the AI companies Intel bought for billions of dollars?
Habana was the source for Gaudi - a pretty good AI chip, but it got lost for a while in Intel's diffuse AI strategy.

Notwithstanding the claims of IntelOne, Intel has been running 4 different HW strategies for the data center for the past 5 years - FPGA, onboard Xeon acceleration, GPU, and Gaudi (treating Mobileye AI as a separate beast). But running 4 different strategies means you are focusing nowhere. I'm guessing we'll see this whittled down to a Xeon/Gaudi combo strategy for the data center, and an AI PC strategy for client, soon.
Movidius is the only successful acquisition

Not sure that Movidius has made it into any significant Intel product - I don't think the Movidius-based Intel NCS2 is generating much volume or market pull.
 
So what about that? Why do all these multibillion-dollar acquisitions end up with nothing to show? Equally importantly, what happened to the new businesses like GPU? Why is Intel consistently unable to deliver in an environment where their competitors are all killing it?

Feels like this goes beyond the simple story of a single point of failure at the fab around the 14nm-10nm nodes, due to a lack of scale and/or lack of capex. And it implies there are serious problems at Intel that even an IFS split won't be able to address.

What do you think?
 
AI has been a boon to companies like Groq, Cerebras, Ampere, etc... I have not seen so many pivots since a Veterans Day parade! :ROFLMAO: Whatever happened to the AI companies Intel bought for billions of dollars?

Plus others.
In a sense, it wasn't wrong to do what they did. Yes, in hindsight, it looks stupid that they bought a bunch of useless companies. But back then, it would have been difficult for anyone to predict which company would succeed in AI. Since they had so much capital, these companies were all relatively cheap. So they bet on all of them. Sort of like venture capitalists bet on many companies, hoping one will succeed.
 
In a sense, it wasn't wrong to do what they did. Yes, in hindsight, it looks stupid that they bought a bunch of useless companies. But back then, it would have been difficult for anyone to predict which company would succeed in AI. Since they had so much capital, these companies were all relatively cheap. So they bet on all of them. Sort of like venture capitalists bet on many companies, hoping one will succeed.
They placed a bunch of hardware bets, via their acquisitions and organic growth (Xeon and new GPU), but then tried to wrap them all under a single general-purpose AI/HPC software infrastructure, so they could cover all possible growth areas at the edge, in client, and at the data center. Unfortunately, the big revenue opportunity, GenAI training in the data center, was concentrated in an area where they were only minimally prepared, compared to the likes of NVIDIA and even AMD, who both had far larger-scale GPU experience than Intel. I think they hamstrung themselves by trying to make Xeon solve AI problems instead of investing far more in GPU (or Gaudi). But it's like I said earlier, when you mainly make money on your hammer, all problems look like nails.
 
So what about that? Why do all these multibillion-dollar acquisitions end up with nothing to show? Equally importantly, what happened to the new businesses like GPU? Why is Intel consistently unable to deliver in an environment where their competitors are all killing it?

Feels like this goes beyond the simple story of a single point of failure at the fab around the 14nm-10nm nodes, due to a lack of scale and/or lack of capex. And it implies there are serious problems at Intel that even an IFS split won't be able to address.

What do you think?
Because flawed strategies came and went at Intel. Every one of them was someone's notion of the next big thing, hated by someone else who was next in line for the job, either because it really was a bad idea or because it just wasn't their initiative. The merger of communications and computing (a big acquisition incubator) including WiMAX, Itanium (developed by the old DEC Alpha team), multiple Ethernet switch companies, FPGAs, Optane (a.k.a. PCMS), StrongARM - the list is long and embarrassing. GPUs have an especially ugly history. Many people tried to convince senior management that CPU cores were about to become near-commodities, and that Intel needed to broaden the portfolio. Intel apparently had only one pivot in its DNA, from memory to logic. The evidence so far is that it can't do any others.
 
The biggest weakness of Intel and AMD in GPUs is the software base. It is much weaker than NVIDIA's. And NVIDIA has been accumulating software for over a decade at this point.

StrongARM ended up landing in Intel's lap as part of the DEC lawsuit settlement. They never particularly wanted it. But it had some success in devices like the Compaq iPAQ.

It could have been leveraged to make an SoC for Apple later, when they made the iPhone, had Intel wanted to. Well, we know what happened.

Itanium was successful in that it killed most of the RISC server vendors. HP and SGI were dumb enough to make a roadmap to move to the platform. Both are gone. Oracle has not released a new SPARC processor in years. Fujitsu moved from SPARC to ARM. That is how they are still around. IBM lucked out and gave up on Itanium early. So of the original RISC server architectures only Power is still around.

Optane is something they spent a long time developing. It was initially called Ovonyx memory. The problem with Optane is that it fell into a weird place where it wasn't competitive against either DRAM or NAND: it had neither the speed of the former nor the capacity of the latter.
 