
AI/ML Chips will create new demand

Arthur Hanson

Well-known member
As the cost of AI/ML computing falls, it becomes economical to apply AI/ML to more and more tasks, creating new demand. With AI/ML being used to write more of the software for everything in the virtual world and to automate the physical world, AI/ML semis will create demand where none existed. Based on what I have seen, I expect the market for AI/ML chips, of increasingly varied types and applications, to double within three years. Because AI/ML can feed on itself like no other discovery in history, compounding to a degree nothing else mankind has produced comes close to matching, the market will hold many surprises to the upside as applications never before considered become reality. Thoughts, comments, and additions are sought and welcome, as are views on which companies may create and dominate these new areas.
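For concreteness, the growth arithmetic behind that estimate: doubling in three years implies roughly 26% annual growth. A minimal sketch in Python; the $50B base market size is a purely hypothetical placeholder, not a figure from this thread:

# Growth arithmetic for "doubling in three years".
def implied_cagr(multiple: float, years: float) -> float:
    """Annual growth rate implied by reaching `multiple` x in `years` years."""
    return multiple ** (1.0 / years) - 1.0

rate = implied_cagr(2.0, 3.0)
print(f"Doubling in 3 years implies ~{rate:.1%} annual growth")  # ~26.0%

base = 50.0  # $B, purely illustrative base market size
for year in range(1, 4):
    print(f"Year {year}: ${base * (1 + rate) ** year:.1f}B")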
 
You are a little late to the party: it has already begun, and we are tens of $Bn into the frenzy. Advances in "classical" computers are likely to be sidelined simply because so many engineers are being siphoned into the new hotness. Sure, there are new server chips from AMD, Intel, Ampere, etc. that will get designed in, but the number of permutations at each vendor will be cut back, with more buyers just taking an off-the-shelf (OTS) design from an ODM; and heck, even those will be few to choose from, since the ODMs are designing AI gear too.

It is a positive-sum game in revenue but a near-zero-sum game in engineering resources. Do the math. And memory, curiously, may be negative-sum, since AI machines have lower ratios of DRAM than classic servers; if server numbers stagnate, so will DRAM.
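To make that concrete, a toy model of the DRAM point; every number here is an illustrative assumption, not data from this thread or any vendor:

# Toy model: aggregate DRAM demand as the server mix shifts toward AI nodes.
CLASSIC_DRAM_GB = 1024  # hypothetical DRAM per classic server
AI_DRAM_GB = 512        # hypothetical host DRAM per AI node (budget goes to HBM)

def total_dram_gb(total_servers: int, ai_share: float) -> float:
    """Aggregate DRAM (GB) for a fixed server count at a given AI mix."""
    ai_nodes = total_servers * ai_share
    classic_nodes = total_servers - ai_nodes
    return classic_nodes * CLASSIC_DRAM_GB + ai_nodes * AI_DRAM_GB

servers = 1_000_000  # flat unit count, per the stagnation scenario
for share in (0.0, 0.2, 0.4):
    print(f"AI share {share:.0%}: {total_dram_gb(servers, share) / 1e9:.2f} EB")
# Output falls from ~1.02 EB to ~0.82 EB: flat unit counts plus a lower
# DRAM-per-node mix means less DRAM in total.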

Mobile will continue for the near term with the trend Apple, Google, and Qualcomm have already figured out: lots of die area given over to inference acceleration, less interest in the next greatest core, and more memory bandwidth at lower power per bit moved, probably pressing the memory vendors for more DRAM capacity at lower battery drain.
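A back-of-envelope for the bandwidth-versus-power trade, with purely illustrative figures (the pJ/bit values are assumptions, not vendor specs):

# Interface power scales as bandwidth x energy per bit moved.
def dram_interface_power_mw(bandwidth_gb_s: float, pj_per_bit: float) -> float:
    """Power in mW from bandwidth (GB/s) and interface energy (pJ/bit)."""
    bits_per_s = bandwidth_gb_s * 8e9
    return bits_per_s * pj_per_bit * 1e-12 * 1e3

# Doubling bandwidth at half the energy per bit holds power flat:
print(dram_interface_power_mw(50, 4.0))   # 50 GB/s at 4 pJ/bit  -> 1600.0 mW
print(dram_interface_power_mw(100, 2.0))  # 100 GB/s at 2 pJ/bit -> 1600.0 mW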
 
Tanj, are you saying that there aren't enough engineering resources? In what areas? Can you expand?
 
I suspect the full-scale launch of MSFT Copilot will lead to a PC refresh cycle - driven by higher compute needs.
 
Tanj, are you saying that there aren't enough engineering resources? In what areas? Can you expand?
Imagine that you are in the business of kitting out data centers - the dominant market for servers. You and your competitors have spent the last 10 years balancing your resources so that you have just enough engineers to design the boards, chassis, network, racks, power, etc. that let you offer a full catalog of VM and container types to the market. You have been lulled by 6 years of crawling progress at Intel, which allowed you to actually reduce the number of new designs your teams make because, heck, what was new?

Then AMD comes along and you get busier. First you sample Rome. Then, darn, you realize it really is a good CPU and AMD has some momentum. You scramble to hire a few more engineers and bring them up to speed. Maybe you tinker with Ampere or in-house ARM, maybe RISC-V. Darn, you are spending a lot on engineering all of a sudden, and ouch, it is hard to recruit for hardware.

But that barely compares to the horror of realizing that there is now a land grab going on in AI, both training and inference, and the machines you desperately need to stake a claim in this new world are nothing like the ones you were strolling along with just last year. You need a whole bunch of Nvidia, but you want some edge of your own, so you pillage your prior projects and put all your aces on figuring out how to gain an advantage: cooling, networking, host servers, reliability, memory, storage, more networking, privacy, partitioning, assigning and tearing down VMs in entirely new ways, finding the sites, deciding which other assets they need to work with. And your hiring is almost all going to be backfill, because it is not like there is a pool of engineers who know both the AI modules and your cloud - no, your best bet is to reassign some of your best people, maybe even shelve a project or two to get entire teams who know how to work together. You may find a few great hires to add to them - probably recruited from competitors, as they recruit from you, because where else? The rest of the new hires go into patching up the traditional server projects you had going, though a few of those are rationalized below the cut line.

And as for that capex you were going to spend on the new ordinary servers: well, you need capex for the hot newness in AI and inferencing and LLMs and... oh, thankfully the customers are more interested in those too.

To be clear, I have been out of that world for nearly 3 years now; I left while AI was just beginning to curve up and still had to fight for budget. I don't ask my old buddies what they do now - no insider knowledge. I am just guessing how it goes when $10Bn or more of new stuff suddenly becomes the highest priority at a finite org with a serious need for competent engineers.
 
I see huge opportunities to capitalize on the obscene amount of technical advancement. We are jumping on it.

Competency needs to be home-grown; it isn't coming out of the universities. They suck.
 
Competency needs to be home-grown; it isn't coming out of the universities. They suck.

Taiwan is not just a chip fab colossus; it also has most of the ODMs for computers, from mobile to server. Indeed, it built that competency well before TSMC rose to the top.

Google has engineering there, too. Not sure about AWS.
 
I was not late to the party; I invested years ago in TSM, AMAT and MU, knowing whoever won would need to buy the picks and shovels.
 