Just How Bad Would an AI Bubble Be?

XYang2023

Well-known member
The entire U.S. economy is being propped up by the promise of productivity gains that seem very far from materializing.

If there is any field in which the rise of AI is already said to be rendering humans obsolete—in which the dawn of superintelligence is already upon us—it is coding. This makes the results of a recent study genuinely astonishing.

In the study, published in July, the think tank Model Evaluation & Threat Research randomly assigned a group of experienced software developers to perform coding tasks with or without AI tools. It was the most rigorous test to date of how AI would perform in the real world. Because coding is one of the skills that existing models have largely mastered, just about everyone involved expected AI to generate huge productivity gains. In a pre-experiment survey of experts, the mean prediction was that AI would speed developers’ work by nearly 40 percent. Afterward, the study participants estimated that AI had made them 20 percent faster. In fact, the study found the opposite: the developers were roughly 19 percent slower when using AI.

 
It's different from the dot-com crash because back then there were multiple sectors holding up the US economy. Nowadays it's seven tech companies hoovering up all the profits, and they're all in on AI, which has yet to really be monetized for anything other than displacing workers (a financial gain, not a productivity gain) and making the internet even worse for your mental health. Whether it pops is irrelevant to 95% of the population, because for them it will be bad either way.
 
The Atlantic article linked above was written by a "staff writer", which means he doesn't know what he's talking about, and the article is meant to be sensational and catch eyes and clicks. "The entire US economy" is not being propped up by the promise of AI. More like the stocks of some tech companies like Meta and Microsoft are being propped up by the hope of new applications and revenue streams, but the notion that the entire US economy is being propped up by current LLM and agent technology is just silly. If anything, the entire US economy is being propped up by deficit spending by the federal, state, and local governments much more than AI speculation and spending.

This is the research article referred to in The Atlantic article:


The paper has four authors, all of whom work for a non-profit founded by one of them, Beth Barnes, a former OpenAI employee. The paper reads well, until I looked for details. First of all, much of the code being discussed seems to be written in Python, because that's the only programming language I see discussed in the article. I queried some of the repositories listed in the appendix, and Rust was listed, but Rust is so different from Python, and so much more complex to use, that I am skeptical the results for Rust (or Go, C, or C++) would be comparable to the results for Python.

Python is a simplistic interpreted language with automatic memory management and only partial support (I'm being generous) for parallelism. It is simplistically object-oriented, but simple enough to use that it is taught in high schools (and even earlier in many private schools). Open source code is widely available in Python for use as a basis for modification, or just to download for use as code snippets or subroutines. Using AI for code generation for Rust, Go, C, C++, or assembly languages is at another level altogether, and can be very useful for generating examples for specific problems.
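
To make the parallelism point concrete, here is a rough sketch (my own illustration, not something from the METR paper): under the standard CPython interpreter, the global interpreter lock keeps threads from running CPU-bound Python code in parallel, so thread pools mostly help with I/O-bound work, and process pools are the usual workaround for CPU-bound work.

Code:
# Sketch of why Python's parallelism support is "partial" on the default
# CPython interpreter: threads share one GIL, so CPU-bound work does not
# scale with thread count, while each process gets its own interpreter.
# Timings are illustrative, not a benchmark.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def cpu_bound(n: int) -> int:
    # Pure-Python arithmetic loop; it holds the GIL the whole time.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls, workers: int, n: int) -> float:
    start = time.perf_counter()
    with executor_cls(max_workers=workers) as pool:
        list(pool.map(cpu_bound, [n] * workers))
    return time.perf_counter() - start

if __name__ == "__main__":
    N = 2_000_000
    print("threads:  ", timed(ThreadPoolExecutor, 4, N))   # little or no speedup (GIL)
    print("processes:", timed(ProcessPoolExecutor, 4, N))  # scales across cores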

I haven't talked to any currently active software engineers who think AI is especially helpful for debugging, unless the problem is in language syntax, or simple stuff like mistakes in variable usage (e.g., static versus stack-based temporary allocation). There is no enthusiasm for AI debugging of the subtle problems in very large programs that drive software engineers nuts.

This paper smacks of attention-getting for the authors and their research organization. METR is referred to as a non-profit, but that isn't surprising given the obscure organizational structure of OpenAI itself.
 
Bad article but good topic.

Just from my point of view, today AI is an excellent research tool for people who are already experts in the field and can filter out the AI hallucinations. Companies are deploying it internally by data mining terabytes of company data to help employees ramp up and be more productive, which will boost profits and reduce headcount. Customer service is where I am seeing it on the personal side. My car dealership has gone AI for appointments and such.

For semiconductor design, AI will help with the employment gap we are currently experiencing. Fewer people will be needed for sure. EDA companies are already using Generative AI to help customers be more productive by using 30 years of design data. Some companies will need fewer engineers; others will use the same amount of engineering resources but just be more productive.

This is all Generative AI. Agentic AI is next where we will move from creating content to taking action. We are already seeing this in EDA with some start-up companies so stay tuned.

AI has been compared to the California gold rush days, which I can see with AI company valuations. It is not a dot-com type of thing, but there are some seriously overvalued companies that will come crashing down.
 

I think the article made some valid points. IMO, using gen AI well requires an understanding of software engineering, which is not trivial.

I think using Python is also appropriate. Although Python is simple, using it well is not trivial. Google, OpenAI, Anthropic, etc. all use Python extensively. Most machine learning research activities and projects are based in Python.
 
We're going to have to agree to disagree. Using AI to assist with programming such a simplistic language is inefficient, which is why productivity went down. Python is popular with the AI applications crowd because the target users are often not computer scientists; they are scientists and experts in other fields, like pharma, biology, and medicine, so using more complex languages is a non-starter. This is why SQL is used as a query language even for NoSQL databases. If Python really was the primary subject of the paper, that was silly.
 
I tend to disagree.

I use C++, Java, and Python. On the surface, Python seems simple, but it can be quite complex. The engineers at Google are certainly computer scientists.


The threading mechanisms in the latest Python releases are becoming more like those in other languages.
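
For context on that point, a minimal sketch (my assumption about which release is meant, not something stated above): CPython 3.13 ships an optional free-threaded build (PEP 703) in which the GIL can be disabled, which is what brings Python threads closer to threads in languages like Java or C++. The snippet just detects which kind of interpreter it is running on.

Code:
# Sketch, assuming CPython 3.13+: detect whether this interpreter is the
# experimental free-threaded build (PEP 703), where threads can run Python
# bytecode in parallel instead of taking turns on the GIL.
import sys
import sysconfig

# Py_GIL_DISABLED is set to 1 only in free-threaded builds.
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
print("free-threaded build:", free_threaded_build)

# On 3.13+, sys._is_gil_enabled() reports whether the GIL is active right now;
# guarded with hasattr so the snippet also runs on older interpreters.
if hasattr(sys, "_is_gil_enabled"):
    print("GIL currently enabled:", sys._is_gil_enabled())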

Ideally, Python should adopt an approach similar to Mojo's to improve performance.
 
The reason Python is suited for gen AI is its extensive ecosystem, which helps with verification. Without verification, gen AI is less useful.
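
To illustrate what that verification can look like, a minimal sketch (the generated_median function is a hypothetical stand-in, not from the thread): treat AI-generated code as untrusted and check it against a trusted standard-library reference plus an edge case before accepting it.

Code:
# Sketch: verifying a hypothetical AI-generated helper before trusting it.
# generated_median stands in for code produced by an LLM; statistics.median
# from the standard library is the trusted reference implementation.
import random
import statistics

def generated_median(values):
    # Pretend this body came back from an AI coding assistant.
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def verify(trials: int = 1000) -> None:
    # Randomized comparison against the reference, plus a fixed edge case.
    for _ in range(trials):
        data = [random.randint(-100, 100) for _ in range(random.randint(1, 50))]
        assert generated_median(data) == statistics.median(data), data
    assert generated_median([42]) == 42
    print("generated_median agrees with statistics.median")

if __name__ == "__main__":
    verify()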
 
Start-ups and meme stocks will all crash and it won't matter; the VCs lose money.

But if we enter a period of digestion and then 0-10% growth, the impact will be large.

Listen for terms like "digestion" and "we are evaluating our growth plans" from the hyperscalers.

That said, Apple, Nvidia, TSMC, and Broadcom will survive ... they are great companies and can adjust to whatever the new market is.

Intel might do great since they have no significant AI product sales ...
 
Intel has a fairly comprehensive product suite for running AI locally. Personally, I prefer this approach, as I'm not comfortable with others indexing my code.

 