
Opinions split over AI bubble after billions invested

Daniel Nenni

Admin
Staff member
[Illustration: AI (Artificial Intelligence) letters and a robot hand]


(Reuters) — Companies announcing multi-billion-dollar investments in artificial intelligence have sparked debate over whether the sector is entering a bubble reminiscent of the dot-com boom. Investors are increasingly cautious, watching for signs that AI demand or returns might not justify the record-level spending.

A Bank of America survey found that 54% of fund managers believe AI stocks are in a bubble, compared with 38% who do not. The Bank of England recently warned that a sharp correction in AI-related markets could have “material” spillover effects on global financial stability. Singapore’s GIC investment chief Bryan Yeo noted that early-stage AI startups are being “valued right up there at huge multiples,” echoing bubble-era behavior.

Amazon founder Jeff Bezos said that while investor enthusiasm often blurs the line between good and bad ideas, such industrial bubbles can still produce lasting innovation. In contrast, “Big Short” investor Michael Burry has taken bearish positions against AI leaders like Nvidia, warning of overvaluation.

Others remain optimistic. Goldman Sachs economist Joseph Briggs argues AI investments are sustainable, while ABB CEO Morten Wierod sees long-term infrastructure constraints rather than overexuberance. OpenAI’s Sam Altman summarized the tension best: investors may be “overexcited,” he said, but fortunes — both gained and lost — will shape AI’s future.

 
Everyone is just learning to work with AI/ML at this point, and we are still in the very, very early stages of development. Once more people learn to really work with it and unlock its full potential, the world will change in ways most haven't even imagined. Everything has been developing at an accelerating pace for many years, and AI/ML will be no exception.
 
Burry's housing bubble call was about 2 years early, and I think his AI bubble call is at least 1 year too early as well.

I think AI is certainly a bubble. It's also a profound and revolutionary technological change.

I would not be betting against AI regardless of whether I think it's a bubble, because bubbles can always grow larger before popping.
 
There's no way of knowing when it'll happen; it could be tomorrow, or it could be in a year's time.
 
The amount of investment in AI data centers is ridiculous. There is no economic rationale for it whatsoever.
All this money is being spent on perishable goods: hardware that will become obsolete after a couple of years. This isn't a bridge or a railroad.

In my opinion, once the costs come down and hardware and software become more efficient, it will be highly profitable, but that clearly is not the case right now.
 
Bubbles like this could hit different companies at different times.

Nvidia getting hit is simply "a large drop in demand for their products". Nvidia is executing extremely well, and (lucky for them) because AI is still figuring out "the optimal approach", obsolescence of two-year-old Nvidia products is automatic. IMO there's enough corporate cash, even without the "investment circle", to sustain this for years.

The software companies using AI are going to be the most chaotic -- some will go bust (or bubble up further) as things happen, and quickly. But if one large company gets hit by a sell-off, it's likely to affect all other traded software companies -- my personal opinion of course.

Next - infrastructure (Oracle, Azure, etc.); these companies are probably more stable than the AI HW or SW makers since (like foundries) there are always many other uses for their products, or other ways for them to sell capacity. When the AI bubble bursts, these will probably be "slow" walkdowns in valuation.

Finally - foundries - Samsung (RAM, products), TSMC, Intel, etc. -- I don't know how it'll go vs. the other types. They will probably either lag or lead what happens to Nvidia at this stage (lagging in situations where others pick up the slack, say via a push for new mobile products; leading if forecasted AI demand, say for Samsung's high-end memory, is expected to drop in the future). The opportunity for foundries is that as the AI bubble grows, other markets become underserved (e.g. PC and gaming markets are getting less volume today), so there will be pent-up demand for other types of chips, which still require lots of wafers.
 
I think the energy infrastructure that is being built will last a long time and create economic opportunities for decades to come.

I also think that AI will create new industries and business models that could not even be conceived of today.

In the early days of the internet, people were simply taking offline business models and putting them online (eZines, eCommerce), but as the internet grew and evolved, entirely new business models appeared, like SaaS and social media with its army of influencers.

Similarly with AI, people right now are mostly looking at it as a way to automate human workflows (even though, as I've said, in real use most AI is not really being used for that). And while it's hard to even imagine what the world is going to look like 10 years from now, I am sure it's going to be a world that has been completely reshaped by AI.
 
I think asking fund managers and economists about the development of AI technologies and applications is like asking me about women's cosmetics. I know lipstick is generally red and greasy. I think my knowledge of lipstick probably exceeds their knowledge of AI.
 
I watched a panel last night with noted semiconductor professionals, including Dr. Ann Kelleher (retired Intel), talk about the AI bubble. I lost a couple of IQ points watching these people babble on about something they were not qualified to articulate, and I can assure you they had that question in advance. Sorry to knock Ann K, but Intel failed during her tenure. I am always hoping she will let people learn from her mistakes, but she has not owned up to them publicly yet. Given that Intel missed AI, why is Ann the one to answer a question about an AI bubble?

AI is a disruption like the smartphone or any other technology that changes how we work or do business. It will be a long and wide disruption that we are just beginning. Maybe there is an AI training bubble based on all of the press releases, but AI inference will continue to grow and bring AI to the edge like smartphones did for social media. Just my opinion of course, but I am not a luminary; I am in the trenches with the other 99%ers.
 
I would guess AK didn't know how to just say "I don't know" in a panel context. I saw a link to a video of that panel, but now I can't find it. (If you can, I'd appreciate you posting it.) If I were her, I would have discussed some points about how AI chips have different requirements on chip fabrication than CPUs and mobile chips, and how that is affecting the industry. (Some differences that come to my mind is the increasing importance of near-compute memory, distributed memories, and high-speed inter-chip interconnects.) In other words, I think there's evidence that being a good foundry for AI chips is probably different than being a good foundry for CPUs, which are both different than being a good foundry for mobile SoCs. Personally, I think AK's views on those issues would have been interesting.
I think it's safe to say we are very early in AI hardware and software development. I'd be amused to read an argument that AI technologies in ten years won't be much different than the currently available technologies.
 

This was last night at the Computer History Museum, so it was probably a different panel. She is on the speaking circuit I guess.
 
If inference is going to pervasively take place on edge clients, then do we still need all these multi-GW datacenters?

If inference must mainly run on GW-scale data centers, then will AI really be that pervasive?
 
The other interesting question that came up was when the semiconductor industry will hit $1T. It has been asked many times over the last couple of years. Last night the answers were much more positive, meaning it will happen before 2030. My guess was 2030 a while back because I like even numbers. I would never say 2029 or 2031. I also do not do odd numbers on my TV remote for sound or odd-numbered headings when I am sailing, unless of course it is a 5. I like 5s. It is so much fun being me. :ROFLMAO:
 
If inference is going to pervasively take place on edge clients, then do we still need all these multi-GW datacenters?
It depends on the inference application. If it relates to realtime activity like driving assists, language translation, email filters, photo or video editing, grammar checking, as a few examples, the requirement for low latency pushes the inference processing to the edge, but then you're dependent on the edge device for processing capability, and your low-latency data accessibility is modest. For cars and desktop computers, or local servers in an office or an enterprise datacenter, the inference processing available can be very high. (It needs to be for self-driving.) For applications where latency requirements are more relaxed, like multiple seconds, cloud datacenters have huge advantages for complex inference processing. Asking an AI application for an overview of a complex topic on your phone will work better in the cloud for (I'm guessing) several years to come. Also, enterprise inference applications which require potentially thousands or even millions of inference results per second will probably only be serviceable in the cloud for a long time to come.
If inference must mainly run on GW-scale data centers, then will AI really be that pervasive?
Like it or not, I think the evidence is that AI inference will be in almost everything we do. Even watching TV. For example: asking your edge video device (like a Roku unit) to play the episode of the old Perry Mason TV show where Robert Redford was a guest star. That request seems unlikely to be an edge-only operation.
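As a minimal sketch of the placement tradeoff described above, the Python snippet below routes a single request to the edge or the cloud based on its latency budget, model size, and sustained request rate. Everything here (the InferenceRequest fields, the threshold values, the function name) is an illustrative assumption rather than anything taken from the thread.

```python
# Toy sketch of the edge-vs-cloud placement logic described above.
# All names and threshold values are illustrative assumptions, not measurements.

from dataclasses import dataclass


@dataclass
class InferenceRequest:
    latency_budget_ms: float    # how long the caller can wait for a result
    model_params_b: float       # model size in billions of parameters
    requests_per_second: float  # sustained throughput the application needs


EDGE_MAX_PARAMS_B = 8.0   # assume the edge device can hold roughly 8B parameters
EDGE_MAX_RPS = 50.0       # assume modest sustained throughput on-device
LOW_LATENCY_MS = 100.0    # assume "realtime" means responses well under ~100 ms


def place_inference(req: InferenceRequest) -> str:
    """Return 'edge' or 'cloud': tight latency pushes work to the edge,
    while large models or very high request rates push it to the cloud."""
    needs_low_latency = req.latency_budget_ms < LOW_LATENCY_MS
    fits_on_device = req.model_params_b <= EDGE_MAX_PARAMS_B
    modest_throughput = req.requests_per_second <= EDGE_MAX_RPS

    if needs_low_latency and fits_on_device and modest_throughput:
        return "edge"
    return "cloud"


# A realtime translation request vs. a long research-style query:
print(place_inference(InferenceRequest(50, 3, 1)))        # -> edge
print(place_inference(InferenceRequest(5000, 400, 200)))  # -> cloud
```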
 

“Captain” Jonah Cheng Says the AI Bubble Is Two Years Away — Watch for These Two Warning Signs

True to his famously blunt — some would say tactless — style, Cheng didn’t hold back. He believes the historic AI bubble is at least two years away from bursting.

One early sign to watch, he said, would be the emergence of large-scale AI-themed financial derivatives.

 
Seems very similar to the dot-com bubble. We all knew the companies were vastly overvalued and the crash was coming, but the stocks kept going up regardless.
 