
Anti AI hype article in WSJ

milesgehm

Probably paywalled. Good read:

Nvidia reported eye-popping revenue last week. Elon Musk just said human-level artificial intelligence is coming next year. Big tech can’t seem to buy enough AI-powering chips. It sure seems like the AI hype train is just leaving the station, and we should all hop aboard.

But significant disappointment may be on the horizon, both in terms of what AI can do, and the returns it will generate for investors.

The rate of improvement for AIs is slowing, and there appear to be fewer applications than originally imagined for even the most capable of them. It is wildly expensive to build and run AI. New, competing AI models are popping up constantly, but it takes a long time for them to have a meaningful impact on how most people actually work.

These factors raise questions about whether AI could become commoditized, about its potential to produce revenue and especially profits, and whether a new economy is actually being born. They also suggest that spending on AI is probably getting ahead of itself in a way we last saw during the fiber-optic boom of the late 1990s—a boom that led to some of the biggest crashes of the first dot-com bubble.

 
 
A lot of the proposed future "awesomeness of AI" -- self-driving cars, robots that perform work -- are things that probably require piles of compute to figure out in the first place, and then refine a bit. But once they're working, there's no real need for the level of investment we're seeing.

This seems like it could come in waves, though -- progress hits a wall after a few advancements are made, then eventually someone figures out what to do with it next.

I think the spending is getting ahead of itself, but it has some legs left to run first, especially once the military finds more obvious applications.
 
Some thoughts on the points from the article:

- The pace of improvements in LLMs is slowing:
    - Maybe it slowed in the last 6 months, but is that representative?
    - Have they seen GPT5? (I haven't seen GPT5, but rumor has it that it's impressive.)
- We need more and more data to train, and there's no more text on the internet:
    - Perhaps we are indeed running out of text, but there's still video, which contains a lot of new information.
    - Synthetic data is not as useless as it might seem. For example, in areas where we have solid scientific theories (e.g. physics), I'd expect we can use synthetic data effectively to give LLMs good intuitions.
    - People are working on data efficiency as well as on better architectures generally.
- AI is too expensive:
    - Current LLMs are something like not-very-bright but extremely knowledgeable and tireless interns that can work day and night for $20 a month. That seems like a bargain to me.
    - There are also crazy cases like AlphaFold converting what was a PhD-like amount of work into a few calls to the model.
- The use cases are too narrow:
    - I use LLMs for more than half of the things I do for work, and also for some hobbies -- that seems broad enough.
    - ChatGPT is the fastest-growing app ever. Perhaps people are just curious, but then why are they paying for it when you can chat with a robot for free?

Overall, I’m not 100% confident that we won’t have another AI winter, but the arguments here don’t seem very strong.
 
I think the challenge is not "AI" but who is making money on it. Nvidia is dominating by making the fastest hardware, integrating the systems, and holding 90% of the market during the fastest growth period. Who will get weeded out (hint: it won't be Nvidia)? The rational question for investors is: "What is the real financial impact, other than for Nvidia?"

And yes, we will see the AI "digestion phase" from hyperscalers in the next 2 years. It will occur one week after processor and memory companies confidently state, "we don't see any digestion phase coming."
 