"Is moore's law ending/slowing" - why is it even a debate ?

Joe Talin

Member
There is an ongoing debate about whether Moore's law (i.e. cost per transistor dropping 2x per node) has stopped or slowed.

There are experts supporting both sides.

But it seems like a relatively objective thing to check (just look at transistors per chip, die size, and cost per wafer for a few representative chips across several nodes, and maybe factor in yield). So why is there even a debate, and why so many opinions?
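Roughly, the kind of check I mean is a back-of-envelope calculation like the one below (all numbers are made up for illustration, since I don't have real foundry data):

# Rough cost-per-transistor check for one hypothetical chip on one node.
# All inputs are illustrative guesses, not real foundry numbers.
import math

wafer_cost_usd      = 7000.0   # assumed price per processed 300mm wafer
die_area_mm2        = 100.0    # die size of the representative chip
transistors_per_die = 3e9      # transistor count from the vendor's own figures
yield_fraction      = 0.85     # assumed fraction of good dies

wafer_area_mm2 = math.pi * (300.0 / 2) ** 2   # ignores edge loss and scribe lines
gross_dies     = wafer_area_mm2 / die_area_mm2
good_dies      = gross_dies * yield_fraction

cost_per_die        = wafer_cost_usd / good_dies
cost_per_transistor = cost_per_die / transistors_per_die

print(f"good dies/wafer: {good_dies:.0f}")
print(f"cost/die:        ${cost_per_die:.2f}")
print(f"cost/transistor: ${cost_per_transistor:.2e}")
# Repeat for a few representative chips on each node and compare the trend.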
 
Daniel, Scott did a very good post, but my question is different - why is there so much debate about something that's relatively easy to measure - cost per transistor, at least for chips/nodes we already have?

Debates about Moore's law are much like debates about the constitution: there are many different interpretations. Let me provide a few:

1) The strictest meaning of the words. This seems to be where you are, and by this reading it is surely fair to call it dead.

2) The intent of the law, so the timescales are a bit longer. But why coin a new phrase just to express 'doubling every x years' as opposed to what we had before? 'Moore's law' is a perfectly good name for the idea of exponential growth over a given length of time.

3) The benefit it was intended to provide: computation getting cheaper/faster/denser/more efficient. Not all of these metrics scale at the same rate at the same time, and cost can be driven more by the market than by manufacturing.

4) Whose law is it anyway? So Intel was years ahead of the pack and is stumbling, but the other guys are still keeping their original pace, and by all accounts they could marathon past Intel at that pace for at least another 10 years.


I'm very optimistic for a completely different reason, automation + *very* cheap solar = extremely low manufacturing cost.
 
First of all, for a long time people got used to Moore's law as a prediction of how fast our silicon would improve. If we take Intel, their processors are now only ~15% faster with each new generation. I recently saw a benchmark of 32nm Sandy Bridge vs. 14nm Kaby Lake and, 6 years later, a Kaby Lake system is not even twice the speed of a Sandy Bridge one.
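Just to put rough numbers on that (the generation count and the per-generation gain are my own rough assumptions):

# Crude compounding check: ~15% per generation over the roughly five
# generations between Sandy Bridge (32nm) and Kaby Lake (14nm).
per_gen_gain = 1.15
generations  = 5
years        = 6

observed = per_gen_gain ** generations   # about 2x over ~6 years, at best
expected = 2 ** (years / 2)              # the old "2x every two years" pace: 8x

print(f"~15% per gen over {generations} gens: {observed:.1f}x")
print(f"2x every 2 years over {years} years: {expected:.0f}x")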

I would say Intel is the main reason we feel today that progress has stalled, and hence why the question of whether or not Moore's law is still valid keeps coming up.

Second, cost/transistor is no longer the only metric that matters.
While cost/transistor keeps going down, the development cost for FinFET nodes is huge and grows with each new node. So huge that I would guess it could significantly impact those cost-per-transistor charts if one could factor in all R&D and CAPEX and divide that by the total units manufactured. This is much harder to compute and is not how accounting works, so it is not as objective and easy to measure.
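Something like this crude sketch is what I mean (every number here is a placeholder, and real accounting is far messier):

# Crude "loaded" cost per transistor: add amortized R&D and CAPEX on top
# of the per-wafer manufacturing cost. All figures are placeholders.
rd_plus_capex_usd   = 10e9      # assumed node R&D plus fab build-out
lifetime_wafers     = 2.0e6     # assumed wafer output over the node's life
wafer_cost_usd      = 7000.0    # incremental processing cost per wafer
good_dies_per_wafer = 500.0
transistors_per_die = 3e9

amortized_per_wafer = rd_plus_capex_usd / lifetime_wafers
loaded_wafer_cost   = wafer_cost_usd + amortized_per_wafer

raw_cost    = wafer_cost_usd    / good_dies_per_wafer / transistors_per_die
loaded_cost = loaded_wafer_cost / good_dies_per_wafer / transistors_per_die

print(f"raw cost/transistor:    ${raw_cost:.2e}")
print(f"loaded cost/transistor: ${loaded_cost:.2e}")
# Halve the assumed lifetime volume and the loaded figure moves a lot,
# which is exactly why these charts are hard to pin down.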

Third, because development costs are so high, there are many segments where improvement has stalled; they have been stuck at 28nm for many years already. This is where 22nm and 12nm FD-SOI from GloFo and others will be exciting to watch.

I would summarize it this way:
1. The big move from planar to bulk FinFET left behind too many people who could not justify the big entry fee;
2. All the recent focus on bulk FinFET, while it allowed the foundries to make impressive advances in little time (to the point that they are finally taking the lead from Intel), shifted investment away from planar, which in turn saw a much slower pace of development and left all those people from #1 with no feasible upgrade path;
3. Those that moved to bulk FinFET have taken part of the gains in cost/transistor not to make faster/cheaper silicon, but to boost their operating margins in order to make up for the increased R&D expenses. I am looking at Intel here (and maybe that is why the industry no longer says how many transistors are in each new chip generation);

A lot of this is speculation on my part and I am out of my league on some of these topics, but I would love to hear your reactions to these thoughts.
 
Daniel, Scott did a very good post, but my question is different - why is there so much debate about something that's relatively easy to measure - cost per transistor, at least for chips/nodes we already have?

The debate occurs because "Moore's law" is a vaguely defined thing that gets viewed many different ways. For example:

- What do you mean by "cost/transistor"? Is that the price the end user pays? The price the fab pays? Using what sort of amortization schedule to cover the cost of building the fab, acquiring the equipment, manufacturing the masks, etc.? Do we track leading-edge nodes or some sort of average over all nodes? Do we care about comparisons within a single company (all-Intel, all-TSMC) or whoever happens to be leading at some point in time? (The toy calculation after this list shows how much these choices move the number.)

- Do we talk about logic transistors, SRAM transistors, DRAM transistors, or flash transistors? (Soon, supposedly, to be joined by Optane transistors whatever they are -- ReRAM?)

- Even if you insist on logic transistors only, what's the time frame of relevance? Process transitions do not happen continuously, so how much slack are you going to allow before insisting that the next data point is "not on the line" rather than just following more or less the usual noisy scatter? And many who make one claim or another seem to limit themselves to CMOS (or, even more aggressively, to CMOS finFETs on traditional Si), and insist that this particular technology is reaching some limit.
Even if that's true, there's no obvious reason that broadening the claim (allow GAA or QW transistors, allow III-V materials, allow TFETs, allow 2D materials or topological insulators) doesn't remove the supposed limit. And no one can be sure when these alternatives will kick in, with opinions, of course, varying from RSN to never.
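To show how much those definitional choices matter, here is a toy calculation with invented numbers: the same fab spend amortized over a 3-year versus a 5-year schedule gives a noticeably different per-wafer (and hence per-transistor) cost.

# Toy example: identical fab CAPEX, different amortization schedules.
# All numbers are invented purely for illustration.
fab_capex_usd    = 12e9       # invented fab + equipment spend
wafers_per_year  = 600_000    # invented output
incremental_cost = 6000.0     # per-wafer processing cost, excluding CAPEX

for years in (3, 5):
    capex_per_wafer = fab_capex_usd / (wafers_per_year * years)
    total = incremental_cost + capex_per_wafer
    print(f"{years}-year schedule: ${total:,.0f}/wafer "
          f"(CAPEX share ${capex_per_wafer:,.0f})")
# The "cost per transistor" you quote follows directly from which of these
# you pick, before even getting to fab cost vs. the price the end user pays.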

Part of the problem is that different people look at different things and extrapolate far too aggressively from their little corner to the entire universe. And so we see people (explicitly or implicitly) saying things like "III-V materials will never happen because there's no obvious way to get them into the mainline leading nodes that makes economic sense at that particular transition". The problem with these sorts of claims is that even if a particular technology is, right now, less ideal than Si CMOS for mainline CPUs, it may be better than all alternatives for something else. That something else might be memory or storage related; it might be space related; it might be related to displays or sensors or to specialized scientific equipment. The point is, there may be an ALTERNATIVE path to commercialization that pays for the R&D needed to truly understand the material or new device, without having to immediately replace Si CMOS.

In other words
- there isn't agreement about what the law "is"
- the data isn't available (for pretty much any claim about what the law "is") to make a clear judgement call
- there are perfectly reasonable grounds for claiming that even if there's a slowdown of a year or three along one particular dimension of the IC world (e.g. leading-edge CMOS logic density), that slowdown does not mean the end of the road; we have no idea how the many alternatives to Si CMOS will play out.
 
Daniel, Scott did a very good post, but my question is different - why is there so much debate about something that's relatively easy to measure - cost per transistor, at least for chips/nodes we already have?
Great topic, and I did read Scott’s blog and extensive Q&A in the comments.

The naive technologists believe that Moore's Law is all about performance (MIPS, capacity, and power), and that these attributes will keep improving until physical limits are met. If that's how they want to look at it, that's fine.

Scott’s original blog pertains to the physical limits noted above and the incremental cost of production of a transistor or gate. It does not include the NRE of chip development which needs to be amortized, although this is covered in the comments. The comments also reflect that there will be some products that don’t move beyond a particular node, or are slow to move.

Moore’s Law has been driven by the assumption that historically all products, no matter how simple, would migrate to the next node in order to realize a loaded cost benefit. It is this assumption that drove investment into new nodes and factories… that once the leading-edge chips move on, the lesser chips would backfill when the cost was right. With each node that's harder to do due to NRE costs, and when the backfill isn't guaranteed to happen the risk of the investment is higher.
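A sketch of why the backfill is no longer automatic (numbers invented; only the structure of the decision matters):

# Break-even volume for migrating a design to the next node.
# Invented numbers; the point is the shape of the decision.
nre_new_node_usd = 50e6   # invented: masks, design port, IP, qualification
unit_cost_old    = 4.00   # invented loaded silicon cost per unit, old node
unit_cost_new    = 3.20   # invented loaded silicon cost per unit, new node

savings_per_unit = unit_cost_old - unit_cost_new
breakeven_units  = nre_new_node_usd / savings_per_unit

print(f"break-even volume: {breakeven_units:,.0f} units")
# As NRE grows node over node, this threshold climbs beyond the reach of
# more and more of the lesser chips that used to backfill.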

There is an entire industry that depends on Moore’s Law continuing, therefore they continue to promote it as a self-fulfilling prophecy. As long as the industry can continue to attract investors who believe in the dream it will continue. But to me it feels a bit like a Ponzi scheme.
 
mgsporer, about NRE costs:

Yes, they are an issue for all but a few key applications. But assuming the benefit of having the latest node is large enough, I imagine some platform will make it accessible and affordable to lower-volume applications, be it via highly efficient FPGAs (Zvi Orbach did some work on that), structured ASICs, 3D assembly (where you buy some IP already in silicon form), some sort of higher-level design platform that drives abstraction and reuse higher (Berkeley did some interesting work on that), design optimization at the level of a human engineer (for math-based designs), or automating more of the design process (I hear deep learning may be useful for PCB layout, so why not for chip layout?) - or even just the theoretical goal of human-level AI.

So technically I think the NRE (for chip design) could become reasonable again (although that may take some time and slow things down). I wonder, though, about process design. Even assuming we had human-level AI, process design would still be awfully expensive, right?
 
As others mentioned, there are a lot of interpretations of Moore's law. For me in the end it translated into "How long will scaling continue?".

There is an entire industry that depends on Moore’s Law continuing, therefore they continue to promote it as a self-fulfilling prophecy.

This reminds me of a conference I attended about 4 years ago on CMOS image sensors. The resolution race was at its peak and pixel size had reached about 1.1um. The question came up whether pixel shrinking would continue.

The CTO of one of the leading image sensor makers said: "Of course it will continue, otherwise most of us will lose our jobs."

What happened? Although there may be some sensors with pixel sizes just below 1um, pixel shrinking stopped and efforts concentrated on other topics like improving image quality, increasing readout speed, etc.

The already achieved resolution was simply good enough for most applications, and the drawbacks of smaller pixels were too big.

My personal feeling is that this translates somehow to the current situation in the semiconductor industry (but I have neither real insight nor am I an expert):
- design/process costs are so high that more or less only cell phones and Intel can pay for them
- even Intel does not seem to be in a real rush to push to the next node; maybe the gains are too small?
- this leaves cell phones: while it is fascinating to have a supercomputer in your pocket (with I-do-not-know-how-many billion transistors), I do not know why it has to be that way, and even less why I would need even more processing power. If anyone says "But it is still a bit slow to react", I say "That is really a software issue; faster hardware will not fix it". I guess most use cases could be served with 1/10 of the processing power. Software guys mainly use the processing power because it is there (no need for them to think about how to optimize things). Similarly, I think the hardware would not be much slower if it were better optimized for area; I guess hardware designers also use the transistors because they are there.
- I think this scaling will stop soon, because the mass market has no need for it, the niche markets cannot pay for it, and the gains from smaller transistors (power, speed) are low anyway
- Maybe then more effort will go into making logic smaller by design (e.g. "can I achieve a faster H.265 encoder with half the transistor count?") - which in turn might further reduce the desire for more transistors...
- Process technology might concentrate on improving existing processes (power consumption, speed) by other means than shrinking

BTW: I have to admit that I had the same thoughts 1-2 years ago already... So it is hard to predict when the stop will come, or whether it will be a hard or a soft stop.

Regards,

Thomas
 
mgsporer, about NRE costs:

Yes, they are an issue for all but a few key applications. But assuming the benefit of having the latest node is large enough, I imagine some platform will make it accessible and affordable to lower-volume applications, be it via highly efficient FPGAs (Zvi Orbach did some work on that), structured ASICs, 3D assembly (where you buy some IP already in silicon form), some sort of higher-level design platform that drives abstraction and reuse higher (Berkeley did some interesting work on that), design optimization at the level of a human engineer (for math-based designs), or automating more of the design process (I hear deep learning may be useful for PCB layout, so why not for chip layout?) - or even just the theoretical goal of human-level AI.

So technically I think the NRE (for chip design) could become reasonable again (although that may take some time and slow things down). I wonder, though, about process design. Even assuming we had human-level AI, process design would still be awfully expensive, right?

The problem with flexible/programmable solutions is that they're much less efficient than fixed/custom ones, even if you consider things like fine-grained programmable signal-processing fabrics, because there's a tradeoff between efficiency and flexibility. Starting from a custom solution, a programmable heterogeneous fine-grained fabric aimed at a single purpose (e.g. signal processing) will typically be ~3x bigger/higher power, multi-purpose reconfigurable hardware (e.g. FPGA) will be ~10x bigger, software-programmable targeted hardware (e.g. GPU) will be ~30x bigger, and fully flexible software-driven hardware (e.g. CPU) will be ~100x bigger -- in all cases the power costs scale similarly.

For example, even a bleeding-edge 7nm FPGA would struggle to match the power of a cheap 28nm ASIC, and would cost more even allowing for NRE -- so what's the point?

Of course, if you must have much more flexibility/reconfigurability then a programmable solution (hardware or software) is needed, but it will take more power and cost more, and those are the two things that drove Moore's Law.
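A rough illustration of that cost trade-off, using the ~10x FPGA penalty from above and invented unit costs and NRE:

# Total-cost comparison: flexible part with no NRE vs. custom part with NRE.
# Unit costs and NRE are invented; the ~10x factor is the ballpark penalty
# mentioned above for an FPGA vs. custom silicon.
asic_nre_usd   = 5e6                    # invented NRE for the 28nm ASIC
asic_unit_cost = 3.0                    # invented per-unit silicon cost
fpga_unit_cost = asic_unit_cost * 10    # flexibility penalty, per unit

for volume in (10_000, 100_000, 1_000_000):
    asic_total = asic_nre_usd + asic_unit_cost * volume
    fpga_total = fpga_unit_cost * volume
    winner = "ASIC" if asic_total < fpga_total else "FPGA"
    print(f"{volume:>9,} units: ASIC ${asic_total/1e6:.1f}M vs "
          f"FPGA ${fpga_total/1e6:.1f}M -> {winner}")
# Somewhere in the low hundreds of thousands of units the custom part wins
# outright on cost, and it wins on power at any volume -- hence "what's the point?".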
 