End of Moore's Law due to industry structure?

ippisl (Guest)
With the current competitors on the top-end process being TSMC and the GF/Samsung alliance, and with Intel probably not being a real competitor (almost no fabless company is working with them), that's only two companies. Such an industry structure offers very little competition.

Could this mean that future improvements in manufacturing cost will be mostly absorbed by the foundries, and that we're really at the end of Moore's Law ("price per transistor") due to economics, even if the technology can still be improved?
 
Remember, Moore's Law was really just an observation, and the foundries never really practiced it. Intel, however, seems bound by Moore's Law, and it may be their undoing. For example, one of the tactics Intel uses to follow Moore's Law is restrictive design and manufacturing practices. This may continue to work for internal CPU designs, but not so much for SoCs, and certainly not for the wide range of products seen by the foundries.

A simple comparison is cars: Intel makes muscle cars with raw horsepower, where the foundries make hybrids that are more focused on cost of ownership. Unfortunately for Intel, Samsung and TSMC are now making Tesla-class SoCs that are defying Moore's Observation. The Apple A8X is the latest example, and the A9X will continue that trend, absolutely.

The foundry formula is a 30% performance gain and a 25% power saving per node, based on the leading customer's application, and right now that is SoCs. Let's call this Morris Chang's Law, which I believe is much more sustainable than Moore's given the current time-to-market requirements of smartphones.
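
As a rough illustration of what that cadence implies if it compounds node over node (an assumption -- the post only states the per-node figures), here's a toy calculation; the node list is just for illustration:

```python
# Toy sketch: compounding the quoted foundry cadence over several nodes.
# Assumes the 30% performance gain and 25% power saving apply
# multiplicatively at every node, which is an idealization.
perf, power = 1.0, 1.0
for node in ["28nm", "20nm", "16nm", "10nm", "7nm"]:
    print(f"{node}: relative performance {perf:.2f}, relative power {power:.2f}")
    perf *= 1.30   # +30% performance per node
    power *= 0.75  # -25% power per node
```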
 
Intel are not competing with the foundries either on process or on products. So long as Intel are the performance leaders for x86 CPUs, and that's what the market wants (servers, desktops, laptops) and where Intel make their money, they're absolutely right to carry on being ultra-aggressive on process and keeping restrictive design and manufacturing rules. The foundries don't want or need such a process (nor could they afford it), and neither do their big SoC customers.

How long the power savings per node can continue is an interesting question, because operating at lower and lower voltages -- which is where power savings have partly been coming from -- is starting to hit a brick-wall limit due to process and device variation. The oft-quoted "x% power saving" means that you can get the same speed as the old node at maybe 100mV lower Vdd (e.g. 0.7V instead of 0.8V), but this isn't always cumulative -- when the next node comes along it's often the same comparison again (e.g. new node at 0.8V, newer node at 0.7V). Once you can't drop the voltage any more, you can still get more performance, but the power saving per operation starts to level off.
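
To put a number on the Vdd point: dynamic power scales roughly as C*V^2*f, so dropping from 0.8V to 0.7V at the same frequency gives (0.7/0.8)^2 ≈ 0.77, i.e. roughly the 25% per-node saving quoted earlier. A minimal sketch, using the example voltages above and ignoring leakage:

```python
# Dynamic power scales roughly as P = a * C * V^2 * f.
# Uses the example voltages from the post; leakage is ignored.
def dynamic_power_ratio(v_new, v_old):
    """Relative dynamic power at the same capacitance and frequency."""
    return (v_new / v_old) ** 2

print(dynamic_power_ratio(0.7, 0.8))  # ~0.77 -> roughly a 23% saving
# Once variation sets a floor on Vdd, this ratio stays at 1.0
# and per-operation power stops improving.
```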

The FinFET card has already been played (which is why 14nm/16nm FinFET is so attractive compared to 28nm), but from now on there's really only device shrinkage, which in turn now only helps save power if capacitances go down. Higher integration (density) isn't of any benefit in itself unless it saves money; it's already difficult to find anything useful to do with all that silicon. (8 cores in a phone? Why?)

It will be very interesting to see how many customers think the cost of 10nm (and beyond) is worth paying if for each node the cost increase goes up and the power saving goes down...

(of course there will always be some like the fruity company, but you've got to sell a *lot* of chips to make it worth it)
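
To see why the cost question matters: cost per transistor only keeps falling if density improves faster than wafer cost rises. A toy sketch with entirely hypothetical wafer-cost and density numbers, purely to illustrate the crossover:

```python
# Toy model: cost per transistor = wafer cost / transistors per wafer.
# All numbers below are hypothetical, chosen only to illustrate the
# trend where rising wafer cost catches up with density gains.
nodes = [
    # (node, relative wafer cost, relative density)
    ("28nm", 1.0, 1.0),
    ("16nm", 1.6, 1.9),   # density gain still beats the cost increase
    ("10nm", 2.3, 2.6),
    ("7nm",  3.4, 3.3),   # the cost increase starts to win
]
for name, cost, density in nodes:
    print(f"{name}: relative cost/transistor = {cost / density:.2f}")
```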
 
Daniel, are people really seeing a 30% performance increase per dollar, per node? And if so, why does Nvidia complain so loudly about 20nm?

And wasn't it 50% performance improvement per node in the past? Does the 30% reflect more market power for TSMC?
 
IanD, you could build GPUs and CPUs with all that new silicon. Virtual reality will eat those like peanuts (currently, to get top-of-the-line VR, you need $1000 in GPUs and a $1000 PC), and it could become very popular very fast.
 
Not in smartphone SoCs, which are the biggest market for the foundries right now. And nobody can compete with Intel on x86 CPUs. That leaves many-core ARM servers, which could indeed do what you suggest, but right now their market is infinitesimal -- of course, if that changes, Intel could be in big trouble...

(but that's been predicted for years and still hasn't happened)
 