There can be a bubble and there can still be high-potential AI investments at the same time. The bubble, if there is indeed one, is mostly related to Large Language Models, especially their training. As an LLM skeptic, I am concerned that some of the huge datacenter investments might not be so relevant in post-LLM strategies. For me, it's too soon to tell.
Agreed re: bubble, but with opportunities. You can already see DC investments turning sour as this space matures - take a look at Apple switching gears to increasingly leverage Google/Gemini rather than its own product, which presumably was/is using a lot of data center capacity.
I have found it interesting that LLM implementation improvements, which lately come mostly from increasing the number of parameters they're trained with, don't seem to be forecastable. The only predictor seems to be "more is better". The primary reasoning improvements I've read about seem to come from large-scale human "prompting" and "tuning" of certain results with targeted editing.
I'm seeing a split here:
The large providers - OpenAI, xAI (Grok), etc. - are all definitely going down the path of "give me more compute/memory capacity, and I can make a bigger model". We're even seeing this play out in real-world applications of AI models -- see Tesla FSD -- it is getting better over time, but the compute requirements are continually increasing to make that happen. But we're also seeing diminishing returns -- like almost everything else that becomes "better" through increased complexity (high-tech goods in general).
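To put a rough number on "diminishing returns": here's a minimal sketch in Python using a Chinchilla-style power-law loss fit. The constants are illustrative (approximately the fit reported by Hoffmann et al., 2022), not a forecast for any particular vendor's model:

```python
# Chinchilla-style scaling law: loss(N, D) = E + A / N**alpha + B / D**beta
# Constants are roughly the published Hoffmann et al. (2022) fit -- illustrative only.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def predicted_loss(n_params, n_tokens):
    return E + A / n_params**alpha + B / n_tokens**beta

# Repeatedly double both parameters and training tokens (~4x the compute each step)
# and note how little the predicted loss improves per step.
n, d = 1e9, 20e9  # start: 1B parameters trained on 20B tokens
for _ in range(5):
    print(f"{n/1e9:7.0f}B params, {d/1e9:7.0f}B tokens -> predicted loss {predicted_loss(n, d):.3f}")
    n, d = n * 2, d * 2
```

Each doubling buys a smaller absolute improvement than the last, which is the basic economic problem with "just make it bigger".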
On the flip side, "Local" LLMs are largely used at consistent, fixed sizes, and the capability at a given size is greatly improving. I think we'll see a world with a few large "MCP" AI models (Tron reference), and then a plethora of local models that are either wholly or partially derived from the larger models. That will reduce datacenter demand at some point, but not necessarily quickly.
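For the "derived from the larger models" part, the usual mechanism is knowledge distillation: a small student model is trained to match a frozen, larger teacher's output distribution. A minimal sketch (toy tensors stand in for real models, and the vocabulary size is just an assumption):

```python
# Minimal knowledge-distillation loss: a small "local" student learns to match
# a frozen larger teacher's output distribution (Hinton et al., 2015).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence from teacher to student, scaled by T^2 per the original paper.
    return F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * temperature ** 2

# Toy example: a batch of 4 token positions over an assumed 32k-token vocabulary.
vocab_size = 32_000
teacher_logits = torch.randn(4, vocab_size)                      # frozen big model
student_logits = torch.randn(4, vocab_size, requires_grad=True)  # small local model
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()   # gradients flow only into the student
print(loss.item())
```

The point for datacenter demand: the heavy compute is spent once on the teacher, while the many local deployments run the cheap student.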
The future AI supply chain model is still up for grabs.
The lack of determinism in output-quality improvement reminds me a bit of mineral extraction: you don't really know what you'll get until you dig or drill. And LLM qualitative performance predictions don't seem even that good. I think this is the real AI risk - you don't really know what you'll get until you've spent hundreds of billions on datacenters.
John Carmack has suggested that at some point in the future we'll look back on this period and realize we already had the ingredients for AGI, and didn't need nearly the compute or complexity we're spending to make it work.
I think the crash is just going to come down to economics. For me, LLMs are "fun", and help me reduce the amount of time to (re)search topics I want to learn about, but I'm not going to open my wallet very deeply for this "privilege". However, the applications of LLMs - such as unsupervised full self-driving, or semi-autonomous robots - might make me spend more money than I'm willing to spend today.
I would like to hear success stories of LLMs improving logistics - which would matter to everyone.
...
P.S. "Ethical/Societal" challenge here -- what will the impacts be on users around the world if the LLM they depend on was trained only in English or Chinese, but live-translates to their language? Will there be cultural or other backlashes, or a demand for "natively trained" AIs? Are LLMs trained on texts in specific languages likely to be biased towards the cultural ties of the users of those languages?