Sustained rise in chip inventories amid sluggish smartphone and PC demand

Daniel Nenni

Admin
Staff member

Partly owing to the influence of generative AI, the semiconductor sector has stabilized after its decline. Nonetheless, the industry is still waiting for a complete rebound, as both the smartphone and PC markets remain in a downturn.

VentureBeat, citing Omdia data, reported that semiconductor industry revenue grew 3.8% quarter-on-quarter in the second quarter of 2023, reaching US$124.3 billion. This is the first growth in five quarters, after global chip demand fell sharply over the past one to two years.

According to Omdia, AI-related fields led the recovery in the chip industry, as shown by the data processing segment, where sales grew 15% quarter-on-quarter. NVIDIA's second-quarter sales, in particular, rose by almost 50% quarter-on-quarter.

However, the chip industry has yet to emerge from the trough. According to Bloomberg data, median inventory days have kept rising since the first quarter of 2021 among major chip suppliers based in China, Japan, South Korea, Taiwan, and the USA, with no signs of decline.

 
AI is big now, but once everyone realizes it'll be like self-driving cars, that euphoria will turn into a more practical consumption of GPUs and balance out.

Upgrade cycles for desktops and laptops will continue to push out, as they are simply good enough. For phones, Apple has its thing and everyone else gets the leftovers.

There is real risk for Apple if Xi and the CCP, as the whole world turns against them, decide to make Apple the Huawei example to the West. If it weren't for the love of Apple products and the company's huge manufacturing base in China, a ban on Apple in China would have happened already. Push a country too far and anything is possible.

There will be a lot of empty shells going up. But if you are a construction worker, these are good times.
 
AI is big now, but once everyone realizes it'll be like self-driving cars, that euphoria will turn into a more practical consumption of GPUs and balance out.
An interesting perspective, but my sense is that we're so early in the adoption curve for AI of any type that it could drive demand for over a decade. The question in my mind is whether GPUs continue to be the center of attention, or whether they're just the best hammer currently available, so every problem looks like a nail. Color me a GPU skeptic in the long run.
 
An interesting perspective, but my sense is that we're so early in the adoption curve for AI of any type that it could drive demand for over a decade. The question in my mind is whether GPUs continue to be the center of attention, or whether they're just the best hammer currently available, so every problem looks like a nail. Color me a GPU skeptic in the long run.
I agree that when the market gets this big, everyone chases the dollars and the business.

But Nvidia may indeed have an impenetrable moat for this round. Think back to x86 and WINTEL: how many runs were made at them by IBM, SUN, DEC, Motorola, and consortiums? Once a moat is assembled, breaching it can be difficult to impossible until there is an inflection, but hope and money always chase the dream.
 
There will be a lot of empty shells going up. But if you are a construction worker, these are good times.

The China situation is a big concern in regard to demand. I saw the empty-shell problem coming last year, but now I think it will be much worse. Given TSMC's broad customer base, other fabs are going to get hit harder, in my opinion.
 
I agree that when the market gets this big, everyone chases the dollars and the business.

But Nvidia may indeed have an impenetrable moat for this round. Think back to x86 and WINTEL: how many runs were made at them by IBM, SUN, DEC, Motorola, and consortiums? Once a moat is assembled, breaching it can be difficult to impossible until there is an inflection, but hope and money always chase the dream.
Yeah, I don't think anyone can compete with Nvidia in AI. It's not just their chips; all the AI libraries are written for CUDA, which Nvidia owns. There is a massive chain of dependencies in modern AI software development stacks that runs back to Nvidia chips.
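
To make that concrete, here's a minimal illustration (mine, not from the thread) of the kind of hard-coded dependency that litters published model code; the .cuda() call assumes an NVIDIA build of PyTorch and fails outright on anything else:

    import torch
    import torch.nn as nn

    # A typical research-code idiom: the device is hard-wired to NVIDIA.
    # On a PyTorch build without CUDA support, this line raises immediately.
    model = nn.Linear(8, 8).cuda()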
 
Why did the industry just hand AI to Nvidia, rather than developing multiple vendors and thus preserving some control of pricing?

Alternatively, if CUDA is that critical, what prevents Intel and AMD from implementing their version of it?
 
Why did the industry just hand AI to Nvidia, rather than developing multiple vendors and thus preserving some control of pricing?

Alternatively, if CUDA is that critical, what prevents Intel and AMD from implementing their version of it?
Well, Nvidia did a good job of pushing CUDA into universities back when that was where the action was, and a good job of investing in it. In return, they got feedback.

But this lock-in is nothing like it used to be, though some commentators still repeat the old line. TensorFlow proved competitive performance was possible, and most modelling is now done in PyTorch or other higher-level environments which compile down to intermediate forms. CUDA competes at the intermediate level with TensorFlow, ROCm, and some others. CUDA still has a maturity advantage in its quality of optimization and the size of its library of special transforms, but the gap has closed a lot. A key event will be when AMD releases production-quality ROCm tuned for the MI300, which is expected in the next few months. Intel also seems to be getting competitive results, and they too now invest in the compilation of intermediate forms.
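
As a minimal sketch of that backend-agnostic style (assuming a stock PyTorch install; ROCm builds of PyTorch deliberately reuse the "cuda" device name, so the same script runs unchanged on AMD hardware):

    import torch
    import torch.nn as nn

    # The model is written once against the framework, not against a vendor API.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(),
                          nn.Linear(256, 10)).to(device)

    # The forward/backward passes are lowered to whichever backend owns
    # `device` - CUDA, ROCm masquerading as "cuda", or the CPU fallback.
    x = torch.randn(32, 512, device=device)
    model(x).sum().backward()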

Nvidia also has the advantage of mature manufacturing and supply-chain capacity for integration on silicon interposers, but Intel has invested there too (including the use of EMIB instead of big monolithic silicon), and AMD has recently been ramping up its silicon-interposer work to complement its use of organic interposers. This is probably driven by the need to use HBM for competitive bandwidth.

Nvidia is definitely ahead, but the customer base has been working on diversification for years now, and the latest round of MPUs looks likely to bring some serious alternatives in 2024.
 
Being the first adopter, getting ahead, and building that MOAT before people realize what it is, is critical. Think IBM 360, WINTEL x86, ARM, iOS and Android. Others come along cheaper and maybe as good, but the leader remains unless they really screw it up.
 
Being the first adopter, getting ahead, and building that MOAT before people realize what it is, is critical. Think IBM 360, WINTEL x86, ARM, iOS and Android. Others come along cheaper and maybe as good, but the leader remains unless they really screw it up.

Nvidia is going to be hard to catch. Their leadership is the best in the business and Nvidia also has one of the best foundry teams. Let's hope they spread the silicon success amongst the foundries because Nvidia truly is a whale!
 
Being the first adopter, getting ahead, and building that MOAT before people realize what it is, is critical. Think IBM 360, WINTEL x86, ARM, iOS and Android. Others come along cheaper and maybe as good, but the leader remains unless they really screw it up.
And those examples show the exact avenue that other firms use to pry open the clam. Wintel was based on an open hardware ecosystem vs the proprietary IBM PC/mainframe world. ARM's story was also all about offering an ease of use and openness that no prior ISA had. Linux won the Unix wars, as well as practically all datacenters, off of its openness and decentralized development model; it also helped that it wasn't bound to vendor hardware like SUN/IBM/etc. Finally, Android replaced all other non-iOS mobile OSs off the back of offering one common software/app platform that was vendor-agnostic. A closed hardware and software ecosystem is always bound to be broken open eventually. The question is how well Nvidia can sell its products as that moat weakens. Given how well firms like Red Hat do based purely on service, how big the datacenter part of the AI bowl will be, and how excellent Nvidia is on the chip-design front, I have to assume the answer is that Jensen will be doing just fine.
 
Think IBM 360, WINTEL x86, ARM, iOS and Android.
AI does not fit that pattern. The few models which were coded directly to Nvidia CUDA are now old and unimportant. New and future models are written with high-level frameworks like PyTorch or ONNX, and the underlying math has been evolving so fast that even past generations of Nvidia GPUs do not run them efficiently. So there really is no architecture lock-in as there was with CPUs and gaming GPUs.

The real resource each player needs is an experienced team driving implementation and the commitment to stay at the leading edge. This will settle down to an oligopoly of 3 or 4 at some point, due to the usual economics of the cost of being a player, modulated by the market's desire to spread purchases over several competing suppliers. You can already see this dynamic in the work on hardware-agnostic modelling, with multiple intermediate forms feeding several proprietary optimizers, while the hardware at the leading edge is so complex that only a few players are keeping up. But the current field is wider than the eventual oligopoly will be, because the models are still evolving so fast that any particular hardware will lose its advantage to newcomers, and many investors are willing to take a risk on scoring an outsider win. Startups still feel their raw innovation and agility may be enough to beat corporate giants moving slowly - and the giants are desperate to keep moving fast.
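
As a small illustration of that hardware-agnostic hand-off (a sketch only; the file name is a placeholder), PyTorch's built-in ONNX exporter turns a model into exactly the kind of intermediate form described above, which any vendor's optimizer can then consume:

    import torch
    import torch.nn as nn

    # Build a toy model and export it to the ONNX intermediate form.
    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(),
                          nn.Linear(256, 10)).eval()
    dummy = torch.randn(1, 512)
    torch.onnx.export(model, dummy, "model.onnx",
                      input_names=["x"], output_names=["y"])

    # "model.onnx" is now vendor-neutral: it can be handed to TensorRT,
    # AMD's MIGraphX, Intel's OpenVINO, or any ONNX Runtime backend.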
 
AI does not fit that pattern. The few models which were coded directly to Nvidia CUDA are now old and unimportant. New and future models are written with high-level frameworks like PyTorch or ONNX, and the underlying math has been evolving so fast that even past generations of Nvidia GPUs do not run them efficiently. So there really is no architecture lock-in as there was with CPUs and gaming GPUs.

The real resource each player needs is an experienced team driving implementation and the commitment to stay at the leading edge. This will settle down to an oligopoly of 3 or 4 at some point, due to the usual economics of the cost of being a player, modulated by the market's desire to spread purchases over several competing suppliers. You can already see this dynamic in the work on hardware-agnostic modelling, with multiple intermediate forms feeding several proprietary optimizers, while the hardware at the leading edge is so complex that only a few players are keeping up. But the current field is wider than the eventual oligopoly will be, because the models are still evolving so fast that any particular hardware will lose its advantage to newcomers, and many investors are willing to take a risk on scoring an outsider win. Startups still feel their raw innovation and agility may be enough to beat corporate giants moving slowly - and the giants are desperate to keep moving fast.
PyTorch pretty much requires CUDA. You can use it with ROCm, but ROCm is not well supported yet. Intel GPUs are not supported at all except through a third-party (Intel) extension.

I have done a fair bit of work with PyTorch, and trying to build models with anything but CUDA is a serious PITA. If you don't believe me, go try to train a few complex models yourself in PyTorch using an AMD or Intel GPU and report back to me. It can be done, but let's say you are a product team trying to get to market as quickly as possible. You are not going to want to spend all the extra time and effort getting your environment to work when you could have something that works perfectly out of the box.
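
For what it's worth, here is a rough sketch of that portability gap (assuming the Intel path still goes through the separate intel_extension_for_pytorch package; the API names are per Intel's docs and should be treated as assumptions):

    import torch

    # NVIDIA "just works"; AMD works too *if* you installed a ROCm build,
    # which also reports itself through torch.cuda.
    if torch.cuda.is_available():
        device = "cuda"
    else:
        try:
            # Intel GPUs need a separately installed extension package,
            # which registers the "xpu" device with PyTorch.
            import intel_extension_for_pytorch  # noqa: F401
            device = "xpu" if torch.xpu.is_available() else "cpu"
        except ImportError:
            device = "cpu"

    print(f"training on: {device}")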
 