Will Fab Building drive down prices? Effective life span of the cutting edge

Arthur Hanson

Well-known member
Will supply and demand dictate prices, or is market demand high enough to maintain ever-increasing prices? Also, is price per performance increasing at a rate fast enough to keep demand rising? What do readers feel about the effective lifespan of new processors before they become obsolete to the point they can no longer be repurposed and need to be replaced? Do older chips have a second life, or are they just tossed? Thanks; any similar views or observations are appreciated. Will the market support all the new fabs being built around the world?
 
Honestly, since the Conroe, there have been no really major improvements in processors. In fact, I have a Penryn (which is a minor upgrade) and it does most things fine. Obviously, there are different use cases, and additional cores and performance can make a difference for some people, but even processors 10 to 15 years old are not useless in the sense that no one can use them. But the better question is: are the computers they are in still useful? Maybe not.

I say that because with an old computer, you have a greater possibility of failure. Is it worth repurposing these machines when you're suffering with a mechanical hard drive that could go at any time? So, I don't know that processors necessarily become too slow before the computer just becomes old and needs more memory, a faster HD, etc. I typically don't recommend upgrades for older computers, for the simple reason that you never know what will go next.

Now, I can do it, because it costs me nothing to replace parts except the cost of the parts themselves. But for people who have to pay someone to do it? Replace the computer.

We're not in the days of the 286, when you got a new computer and were just shocked at how fast it was. Nor are computers made like IBM's, built to last 300 years. They are disposable, and the processors go with them.

BTW, the computer I use the most is a quad-core Tremont NUC. It's more than fast enough despite being a 10-watt processor; it has no fan, and it's tiny and agreeable. My Alder Lake is faster, but it really makes no difference for most of the stuff I do.
 
Honestly, since the Conroe, there have been no really major improvements in processors.
Seriously? Conroe used an FSB architecture; had a discrete memory controller supporting much slower memory (DDR2) than the latest CPUs; had only PCIe Gen 2 and no credible SSD support; and you might get Win 8 to run, but it would run like a pig compared to a $350 Intel NUC. Ugh.
 
Conroe was the last big jump in CPUs. The rest have been much more iterative. Even then, it was tiny compared to the 286 jump, or even the 486 jump.

And it's also older than 10-15 years. I still have a high-end Penryn, and it's fine for a lot of stuff. An i7-4790 is about 10 years old, and is more than fast enough for what most people use computers for. But, not all.

My Penryn is more than fast enough for browsing, watching movies, editing files, etc... Do I want to use it for compiling? No. How many people compile code though?

Conroe proved what I was arguing for a long time before it came out: the integrated memory controller wasn't a big part of what made Pentium 4 slow. I think they added it to Nehalem, if memory (forgive the pun) serves me correctly. But every release after Conroe was relatively minor, compared to the dramatic improvement Conroe offered. Now, enough of those do add up, but my point is that it's the least obsolete part in that computer, and the computer is getting wonky because of its age.

Of course, my IBM PC and PC/AT still work fine. IBM quality at that time ... Except the cases rust if you're not careful.
 
Conroe was the last big jump in CPUs. The rest have been much more iterative. Even then, it was tiny compared to the 286 jump, or even the 486 jump.
The word you're looking for is incremental. Yes, going from 16-bit to 32-bit to 64-bit instruction cores is a big deal, but I still completely disagree with your premise. You can have the fastest cores in the world, but they're useless without faster memory, memory access paths, and I/O, and those improvements were not incremental.
And it's also older than 10-15 years. I still have a high-end Penryn, and it's fine for a lot of stuff. An i7-4790 is about 10 years old, and is more than fast enough for what most people use computers for. But, not all.
A Penryn and an i7 of any vintage are not comparable in performance.
My Penryn is more than fast enough for browsing, watching movies, editing files, etc... Do I want to use it for compiling? No. How many people compile code though?
Imagine a Win11 OS update on it. Come back in a few days when it's done. Lots of people play games on their computers. Not me, but lots of people. And video editing.
Conroe proved what I was arguing for a long time before it came out: the integrated memory controller wasn't a big part of what made Pentium 4 slow. I think they added it to Nehalem, if memory (forgive the pun) serves me correctly.
I think you have an error in grammar. The integrated memory controller made a big difference in performance, and it was first offered in Nehalem. Hundreds (thousands?) of Intel engineers argued for an integrated memory controller for a long time back then, but Intel had a captive multi-billion-dollar chipset business in clients and servers that was fab'd on the N-1 process. The chipsets were a huge financial win. Then AMD integrated the memory controller, and finally the Intel CPU designers were allowed to integrate, on Nehalem and then Itanium.

But every release after Conroe was relatively minor, compared to the dramatic improvement Conroe offered. Now, enough of those do add up, but my point is that it's the least obsolete part in that computer, and the computer is getting wonky because of its age.
I still think you're incorrect, big time.
 
Actually, you're completely wrong on 64-bit and 32-bit. The main purpose of 64-bit systems was so they could see more memory, not because x86-64 is faster. In fact, it's not for almost all workloads, and can be slower. It also slightly limits the clock speed. You don't typically need to do arithmetic on numbers bigger than 2 billion, and the code density is slightly less. Either way, it's well known that 64-bit is not significantly faster than 32-bit; it always has been known. Now, updates to processors not related to that did increase performance, and since 32-bit processors went away, you could say 64-bit processors were faster, but not because they were 64-bit. If you have something that does huge integer mathematics, it could be faster, but that's very limited. Most times when it was tested, it was about 1% slower, plus or minus.
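To make the data-footprint side of that claim concrete, here is a minimal back-of-envelope sketch (the 12- and 20-byte node layouts are hypothetical, chosen purely for illustration; it ignores alignment padding and the extra registers x86-64 adds, which pull performance the other way):

```python
# Back-of-envelope: how pointer width alone inflates pointer-heavy data.
# Hypothetical binary-tree node: a 4-byte payload plus two child pointers.

def node_bytes(pointer_size: int, payload: int = 4) -> int:
    """Bytes per node: payload plus two child pointers (padding ignored)."""
    return payload + 2 * pointer_size

for name, ptr in (("32-bit", 4), ("64-bit", 8)):
    size = node_bytes(ptr)
    per_line = 64 // size  # whole nodes per 64-byte cache line
    print(f"{name}: {size} bytes/node, {per_line} nodes per cache line")

# 32-bit: 12 bytes/node, 5 nodes per cache line
# 64-bit: 20 bytes/node, 3 nodes per cache line
```

Fewer nodes per cache line means more cache misses for the same traversal, which is one mechanism behind the "can be slower" observation.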

You're missing the point about an i7 and a Penryn. I'm not sure why it's so difficult to understand, but I'll try again. If you're not waiting on the processor in the first place, there's no substantial difference. If my Penryn easily meets the needs of what I'm doing with it, then no, there's not a significantly better experience with an i7. But I'm not sure why you're even bringing that up, since I never said they had comparable performance. Maybe you're not reading what I'm writing carefully? It's pretty clear to me I never said they had comparable performance, so why are you struggling with it?

Yes, and percentage-wise, most people don't play games that require a high-end CPU. I have been clear to say some people can benefit, but there are a lot who can do fine with an older processor. Again, please actually read what I'm writing, instead of trying to change it so it fits your narrative.

I think you have a problem understanding what I'm writing; the grammar is correct. Many people were saying the Pentium 4 was slow and had high latency because the memory controller was still on the chipset. In fact, Conroe had significantly lower latency than Pentium 4, and its memory controller was on the chipset as well. Oh, and it blew the doors off the Athlon 64, which had the memory controller integrated.

You're completely wrong about memory, particularly about latency, which has improved only very slowly, and that's what CPUs care more about. Last I checked, DDR5 still has higher latency than DDR4. iGPUs like the memory bandwidth, and more cores benefit a bit more, but then larger caches make it less important. But in any case, memory performance relative to CPUs has not improved dramatically. It's been slow going, which has been the case for CPU performance in general for a while.

Now, you have to keep in mind, I may be older than you, and really saw big changes between generations. That's the only way I can make sense of your responses. I've seen generations increase by 3x (286) and 2x (486), and the Pentium was also around there. Oh, and by the way, none of those were changes from 16-bit to 32-bit. The 386 was, and it showed the lowest improvement in performance (but added two important new modes, and got rid of the dreaded 16-bit segments). In terms of performance, you're wrong: the 286 was huge, the 486 was huge, the Pentium was huge.

Then we got stuck in a bit of a malaise, until Conroe came out and instantly blew the doors off existing processors. And every processor after it was iterative, whereas Conroe was not an iteration on the Pentium 4, so it was the correct term. (Sandy Bridge pulled a decent amount of tech from the Pentium 4, though, and Conroe could be said to be based on the mobile line, so it wasn't purely a new design like Pentium 4 was.) Look up the Conroe reviews if you don't remember; they were shocking. It instantly killed the Pentium 4, and put AMD in a far second place until Ryzen. And yes, they got improvements in each generation, which over time added up, but nothing like Conroe over Pentium 4, or even its direct ancestor, Yonah. And no, Nehalem didn't have that level of improvement.
 
Actually, you're completely wrong on 64-bit and 32-bit. The main purpose of 64-bit systems was so they could see more memory, not because x86-64 is faster.
I never said 64bit cores were inherently faster than 32bit cores.
In fact, it's not for almost all workloads, and can be slower. It also slightly limits the clock speed. You don't typically need to do arithmetic on numbers bigger than 2 billion, and the code density is slightly less. Either way, it's well known that 64-bit is not significantly faster than 32-bit; it always has been known. Now, updates to processors not related to that did increase performance, and since 32-bit processors went away, you could say 64-bit processors were faster, but not because they were 64-bit. If you have something that does huge integer mathematics, it could be faster, but that's very limited. Most times when it was tested, it was about 1% slower, plus or minus.
I never said 64bit cores were inherently faster than 32bit cores.
You're missing the point about an i7 and a Penryn. I'm not sure why it's so difficult to understand, but I'll try again. If you're not waiting on the processor in the first place, there's no substantial difference. If my Penryn easily meets the needs of what I'm doing with it, then no, there's not a significantly better experience with an i7. But I'm not sure why you're even bringing that up, since I never said they had comparable performance. Maybe you're not reading what I'm writing carefully? It's pretty clear to me I never said they had comparable performance, so why are you struggling with it?
I'm struggling with your manner of discussion. You're arguing just on the basis of your personal use model, which is very mundane, and seemingly extending it to the general case of most personal computers. But, obviously, just having this discussion with you about ancient CPUs makes my judgement questionable.
Yes, and percentage-wise, most people don't play games that require a high-end CPU. I have been clear to say some people can benefit, but there are a lot who can do fine with an older processor. Again, please actually read what I'm writing, instead of trying to change it so it fits your narrative.
You mean most people in your age group? I'll try not to be so narrow-minded.
I think you have a problem understanding what I'm writing; the grammar is correct. Many people were saying the Pentium 4 was slow and had high latency because the memory controller was still on the chipset. In fact, Conroe had significantly lower latency than Pentium 4, and its memory controller was on the chipset as well. Oh, and it blew the doors off the Athlon 64, which had the memory controller integrated.
Were you on the Conroe design team? It sounds like you were.
You're completely wrong about memory, particularly about latency, which has improved only very slowly, and that's what CPUs care more about. Last I checked, DDR5 still has higher latency than DDR4.
DDR5 is optimized for increasing throughput by having two sub-channels with independent clocks, and does indeed have slightly higher latency (maybe 10%) depending on the specific implementation. Of course, CPUs don't access DRAM directly; they only see caches.
iGPUs like the memory bandwidth, and more cores benefit a bit more, but then larger caches make it less important. But in any case, memory performance relative to CPUs has not improved dramatically. It's been slow going, which has been the case for CPU performance in general for a while.
OK, you do know about caches. DDR2 did about, what, 800 MT/s? And DDR5 does what, up to 6400 MT/s? Not an inconsiderable improvement.
Now, you have to keep in mind, I may be older than you, and really saw big changes between generations. That's the only way I can make sense of your responses. I've seen generations increase by 3x (286) and 2x (486), and the Pentium was also around there. Oh, and by the way, none of those were changes from 16-bit to 32-bit. The 386 was, and it showed the lowest improvement in performance (but added two important new modes, and got rid of the dreaded 16-bit segments). In terms of performance, you're wrong: the 286 was huge, the 486 was huge, the Pentium was huge.
OK boomer.
Then we got stuck in a bit of a malaise, until Conroe came out and instantly blew the doors off existing processors. Look it up if you don't remember. It instantly killed the Pentium 4, and put AMD in a far second place until Ryzen. And yes, they got improvements in each generation, which over time added up, but nothing like Conroe over Pentium 4, or even its direct ancestor, Yonah. And no, Nehalem didn't have that level of improvement.
Nehalem did, for servers. I admit to personal bias, I'm a server/datacenter sort of guy. I've never had anything to do with client system development. Hmmm, that's a tiny bit of a lie, but just a rounding error.
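To put rough numbers on the latency-versus-bandwidth exchange above, here is an illustrative calculation (the CAS timings below are typical retail-kit values assumed for the example, not measurements from either poster's systems):

```python
# First-word CAS latency: CL cycles at the I/O clock, where the I/O clock
# in GHz is half the MT/s rate (DDR transfers twice per clock).
def cas_ns(mt_per_s: int, cl: int) -> float:
    return cl / (mt_per_s / 2 / 1000)

# Peak bandwidth of one 64-bit (8-byte-wide) channel.
def peak_gb_s(mt_per_s: int) -> float:
    return mt_per_s * 8 / 1000

for name, mt, cl in [("DDR2-800", 800, 6),
                     ("DDR4-3200", 3200, 16),
                     ("DDR5-6400", 6400, 40)]:
    print(f"{name}: ~{cas_ns(mt, cl):.1f} ns CAS, {peak_gb_s(mt):.1f} GB/s peak")

# DDR2-800:  ~15.0 ns CAS,  6.4 GB/s peak
# DDR4-3200: ~10.0 ns CAS, 25.6 GB/s peak
# DDR5-6400: ~12.5 ns CAS, 51.2 GB/s peak
```

On these assumed parts, peak bandwidth is up 8x from DDR2 to DDR5 while first-word latency has barely moved, and the DDR5 kit is indeed a touch slower than the DDR4 one, which is consistent with both sides of the exchange.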
 
You did say, "Yes, going from 16-bit to 32-bit to 64-bit instruction cores is a big deal." It really wasn't, although 16-bit to 32-bit was, because of the 16-bit segments. Look it up, Mille.

My personal use model involves not only being part of a group that developed an operating system (OS/2, look it up, Mille), but also supporting thousands of machines across jobs. So I have a pretty good idea of what people need, but I always make the caveat that different usage patterns are different. I'm not claiming that EVERYONE can use an older processor, only that a lot of people will not have problems with one. Oh, and by the way, Intel still sells Atom-based processors, and sells a lot of them, because they are fine for a lot of people. But you know better?

With regards to Conroe, no, I have never worked for Intel, but I'd have been damn proud to have been part of that team. I just know what I'm talking about, and know how dramatic an improvement it was. Some of us speak the truth as best we know it, while others question its veracity based on their own personal view of the world. As they say, a thief thinks everyone steals.

But instead of sounding like you don't know anything about the chip, do a search on it. There are still articles on it, and people at the time were astounded. It completely altered the CPU landscape. Do you know how to do a search on The Google? I strongly recommend it, rather than disparaging this incredible design.

Again, you are struggling with comprehension when you talk about DDR5. My point was that DDR4 still had better latency, and many applications (including games) did better with DDR4, or the same. And latency matters more, in general, for CPUs, which is why they don't use GDDR memory. Thus, as most people know, RAM hasn't really kept up with CPUs very well. Bandwidth is easy compared to latency, just as adding cores is easy next to faster single-threaded performance, but both are less broadly useful.

I wrote an article on mainframes (which were the first computers I worked with) on Tom's, about 10 or so years ago. You know, real servers, not toys. Nehalem was a nice chip, by the way, but it was not a huge leap forward like Conroe was. There hasn't been one since.

But I'll help you with The Google, since I know you youngsters don't like to exert yourselves too much: "After years of wandering in the wilderness, Intel has recaptured the desktop CPU performance title in dramatic fashion. Both the Core 2 Extreme X6800 and the Core 2 Duo E6700 easily outperform the Athlon 64 FX-62 across a range of applications—and the E6600 is right in the hunt, as well. Not only that, but the Core 2 processors showed no real weaknesses in our performance tests." From https://techreport.com/review/intels-core-2-duo-and-extreme-processors/

From AnandTech: "Intel's Core 2 Extreme X6800 didn't lose a single benchmark in our comparison; not a single one. In many cases, the $183 Core 2 Duo E6300 actually outperformed Intel's previous champ: the Pentium Extreme Edition 965. In one day, Intel has made its entire Pentium D lineup of processors obsolete. Intel's Core 2 processors offer the sort of next-generation micro-architecture performance leap that we honestly haven't seen from Intel since the introduction of the P6.

Compared to AMD's Athlon 64 X2 the situation gets a lot more competitive, but AMD still doesn't stand a chance. The Core 2 Extreme X6800, Core 2 Duo E6700 and E6600 were pretty consistently in the top 3 or 4 spots in each benchmark, with the E6600 offering better performance than AMD's FX-62 flagship in the vast majority of benchmarks. Another way of looking at it is that Intel's Core 2 Duo E6600 is effectively a $316 FX-62, which doesn't sound bad at all." https://www.anandtech.com/show/2045/19

Conroe WAS that good. Now stop diminishing it, or I'll yell at you to get off my lawn.
 
Yeah, Conroe was that good of a jump. Intel was stuck on NetBurst: high clocks, low performance per clock, high TDP. Then Conroe comes out: low TDP, low clocks, high performance per clock. The lowest Conroe, at 1.83 GHz, was beating Intel's previous high end clocked at 3.73 GHz. The cause? Competition. AMD was eating into Intel's market share, consumer AND server, because AMD had made, like Intel with Conroe, processors that were based on high performance per clock, not the highest clocks with low performance per clock.
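A quick sanity check on that per-clock claim, assuming for illustration that the two chips land at roughly equal benchmark scores:

```python
# If a 1.83 GHz Conroe roughly matches a 3.73 GHz NetBurst part,
# the implied per-clock advantage is simply the clock ratio.
netburst_ghz, conroe_ghz = 3.73, 1.83
print(f"~{netburst_ghz / conroe_ghz:.2f}x work per clock")  # ~2.04x
```

Roughly doubling work per clock in one generation is the scale of jump being described.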

Intel knuckled down, had thousands of people working 24/7 on Conroe and the next chips, and came out with the tick-tock strategy that worked brilliantly: a new CPU architecture one year, a new node shrink the next. It allowed Intel to coast with a lot of money, and they got lazy, making pretty much just tiny iterations each year, because after Conroe came out, AMD dived into their Bulldozer era of slowness. The biggest leap Intel made on the journey since Conroe was the change to FinFET transistors (that arrived with Ivy Bridge, not Nehalem, and was still a fairly small leap). But the largest speedup in using computers came from the introduction of the SSD.

But ta152h is right. From Nehalem onwards, until competition started showing up again with AMD's Ryzen, it was slow going with Intel. Smallish 5%/10% incremental leaps for far too many years. Intel became arrogant and spent waaay too much money on share buybacks while offering minimal gains.

It's only since competition became a thing again, and Intel started losing ground to it, that Intel has had to push forward and innovate: change, create new architectures, and get them ready to roll out against the competition that is taking away their market share.

That competition started with Ryzen gen 1: +50% gains over the Bulldozer microarchitecture (which put them just under Intel's performance), then Ryzen gen 2 (the same performance as Intel), then Ryzen gen 3 (more performance than Intel, at a lower TDP...). Now, quite a few years after Ryzen came onto the scene and Intel started shaking their booty to compete, we are in those lovely days of competition where consumers benefit from increased performance, and sometimes costs get lower too. AMD's CPU prices at the moment are super low.
 
Will supply and demand dictate prices, or is market demand high enough to maintain ever-increasing prices? Also, is price per performance increasing at a rate fast enough to keep demand rising? What do readers feel about the effective lifespan of new processors before they become obsolete to the point they can no longer be repurposed and need to be replaced? Do older chips have a second life, or are they just tossed? Thanks; any similar views or observations are appreciated. Will the market support all the new fabs being built around the world?

It will not drive down prices. I keep repeating this for the 100th time: most of the shortage is on 200mm nodes, not at the leading edge, which is where most of the fab expansion money is going.

The leading edge is almost irrelevant outside of the laptop/PC/server CPU and telecom gear space.
 
ADAS?
 

You don't need the leading edge even for military radars, but what will be pushed up to the physical limit of supply is RF semiconductors. They mostly come from specialty and mixed-signal fabs, unless it's so-so ISM-band electronics.
 
You're thinking about the radar and mixed-signal requirements. I'm thinking about the pure computational/memory requirements; they don't need to be in the same silicon as the mixed-signal stuff, and I'd suspect the computational requirements will make up more of the cost.
 

The needs of a 2D radar (azimuth plus velocity towards the receiver), even with FMCW and some interference protection, will likely be sufficiently addressed by the simplest of ASICs. Compared to aircraft radars, that's peanuts.
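As a rough illustration of why that's peanuts, here is a sketch of the arithmetic for one range-Doppler frame of a hypothetical automotive FMCW radar (all sizes assumed for the example: 512 samples per chirp, 256 chirps per frame, 20 frames/s, one receive channel):

```python
import math

samples, chirps, fps = 512, 256, 20

# Common rule of thumb: a complex N-point FFT costs ~5*N*log2(N) real FLOPs.
def fft_flops(n: int) -> float:
    return 5 * n * math.log2(n)

range_pass   = chirps * fft_flops(samples)   # one FFT per chirp (range)
doppler_pass = samples * fft_flops(chirps)   # one FFT per range bin (Doppler)
gflops = (range_pass + doppler_pass) * fps / 1e9

print(f"~{gflops:.2f} GFLOP/s sustained")    # ~0.22 GFLOP/s
```

A fraction of a GFLOP/s fits trivially in a small ASIC, whereas the compute behind the driving decisions themselves is a very different budget, which is the point the next reply makes.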
 
But I'm not talking about that either; I'm talking about the computation behind self-driving vehicles.
 