
TSMC Says Moore's Law is not Dead in first blog

Daniel Nenni

Admin
Staff member
2019/08/14

by Godfrey Cheng, Head of Global Marketing, TSMC

It has been nearly three months since I joined TSMC. As with anyone joining a new company, I have been drinking from a firehose of information and data. One of the key topics I first dug into was Moore's Law, which, simply stated, says that the number of transistors in an integrated device or chip doubles about every two years.
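As a quick illustration of that doubling rule, here is a minimal sketch of the arithmetic (not from the blog itself; the starting count and time horizon are hypothetical example values):

    # Illustrative sketch of Moore's Law as a doubling rule.
    def projected_transistors(initial_count, years, doubling_period=2.0):
        """Project transistor count assuming a doubling every doubling_period years."""
        return initial_count * 2 ** (years / doubling_period)

    # Example: a hypothetical 1-billion-transistor chip, projected 10 years out.
    print(f"{projected_transistors(1_000_000_000, 10):,.0f}")  # ~32,000,000,000

Ten years at a two-year doubling period is five doublings, i.e. a factor of 32.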

Moore's Law is actually misnamed as a law; it is more accurately described as a guideline, a historical observation and a prediction of the future number of transistors in a semiconductor device or chip. These observations and predictions have largely held true for the past several decades. As we approach a new decade, some appear to share the opinion that Moore's Law is dead...

 
I read something like an $11 billion investment into a coalition. It looks like you were right about the fabless ecosystem running things.
 
Automata will totally eliminate the distinction between separate memory and processing. It will also be interesting to see how much of the nanotechnology being developed is applied to MEMS, sensors, and especially integration and use in biological applications. The numerous technologies developed at tremendous cost are far too valuable to have their uses limited to specific areas. Morris Chang has clearly stated that MEMS are one of mankind's greatest opportunities.
 
Not having trained as an engineer, I am continually surprised that, despite the importance of data access and latency, the approach to transferring data has not changed for over a century. This seems particularly important in AI implementations, where fixed data paths are still typically preconfigured. With memory taking up so much of a semiconductor design, I am surprised we haven't done more to improve it. Bringing memory closer to compute is only part of the solution. We need to enable intelligent memory, reducing the need to seek information or even remember where it is. Systems have evolved to the point that they can interact less like machines and more like the intelligent actors that they are.
 
Processing data in isolation has limited value; the most interesting things require joins, and joins require data to be moved. Selection can be done by indexing, which is pretty flexible and effective. So processing in memory has a steep hill to climb before it becomes proven. At the moment, most proposals just amount to wishes by Flash vendors that they could have a more profitable commodity. The mantra that it saves power is true, but only for bulk data moves; in higher-level algorithms the move may be worth the energy. Unfortunately, the vast quantities of data slopping around demand the cheapest storage that can be found, so that will continue to dominate.
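To make the selection-versus-join distinction concrete, here is a minimal sketch (the table names and rows are hypothetical, and a Python dict stands in for an index): an indexed selection touches only the matching row, while a join needs rows from both tables brought to the same place before they can be combined.

    # Hypothetical tables; a dict plays the role of an index.
    orders = [{"order_id": 1, "cust_id": 7, "total": 30.0},
              {"order_id": 2, "cust_id": 9, "total": 12.5}]
    customers = [{"cust_id": 7, "name": "Acme"},
                 {"cust_id": 9, "name": "Globex"}]

    # Selection via an index: a single lookup, no bulk movement of rows.
    orders_by_cust = {o["cust_id"]: o for o in orders}
    selected = orders_by_cust.get(7)

    # Join: qualifying rows from both sides must be co-located to be matched.
    names_by_cust = {c["cust_id"]: c["name"] for c in customers}
    joined = [{**o, "name": names_by_cust[o["cust_id"]]} for o in orders]

    print(selected)
    print(joined)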

Now, what Mr. Cheng is describing in his blog is packaging memory close to the processing elements, which is rather different from smart storage. This is just blending technologies (a memory-optimized wafer vs. a compute-optimized one) in close proximity, so that the working set of the computation expands and bandwidth increases, with the shorter distances reducing power. It is a subtly different and far more sensible cost proposition, because it multiplies the value of the computation. Data has value when it is being used, and the compute elements are where data gets used, so the compute elements have the highest value density. Augmenting them with additional local memory allows that specialized memory to bathe in the flow of money, not just data.

Engineers need to think about value and cost, not just function.
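As a back-of-envelope illustration of the proximity argument above, here is a minimal sketch with purely hypothetical link parameters (the bandwidth and energy-per-byte figures are illustrative assumptions, not measured values from TSMC or anyone else): a shorter, wider path moves the same working set in less time and for less energy.

    # Purely hypothetical numbers, for illustration only (not measurements).
    def transfer_cost(bytes_moved, bandwidth_gb_s, pj_per_byte):
        """Return (seconds, joules) to move bytes_moved over the given link."""
        seconds = bytes_moved / (bandwidth_gb_s * 1e9)
        joules = bytes_moved * pj_per_byte * 1e-12
        return seconds, joules

    working_set = 1e9  # a 1 GB working set, hypothetical

    far = transfer_cost(working_set, bandwidth_gb_s=50, pj_per_byte=20)   # off-package link
    near = transfer_cost(working_set, bandwidth_gb_s=500, pj_per_byte=2)  # in-package link
    print(f"far : {far[0] * 1e3:.1f} ms, {far[1] * 1e3:.1f} mJ")
    print(f"near: {near[0] * 1e3:.1f} ms, {near[1] * 1e3:.1f} mJ")

With these assumed numbers the in-package link moves the same gigabyte roughly ten times faster and for a tenth of the energy, which is the qualitative point about distance and bandwidth, not a prediction of real figures.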
 