Can Intel recover even part of their past dominance?

Beyond a given point, lowering voltage further increases power consumption per operation.

If you look at power-delay product (PDP), this tells you the energy needed to do something. Dynamic power (CV^2) drops with lower voltage (leakage less so), but so does clock speed, and more rapidly as you get closer to the threshold voltage. For a given gate type (e.g. ELVT, ULVT, LVT, SVT), clock speed and activity percentage, there is a supply voltage where PDP reaches a minimum, and this is where power consumption is also at a minimum -- as VDD drops you have to run slower but with more parallel circuits, which works for many things but not all. And if you're really bothered about power efficiency, you also need to vary VDD with process corner and temperature, and with circuit activity and clock speed.
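
As a purely illustrative Python sketch of that minimum-energy point: the threshold voltage, switched capacitance, leakage current, logic depth and alpha-power delay constant below are made-up assumptions, not N3/N2 data.

```python
# Toy model of energy per operation vs. supply voltage for one pipeline
# stage: CV^2 switching energy falls as VDD drops, but the cycle stretches
# near threshold so leakage energy per operation grows, giving a minimum.
# Every number is an illustrative assumption, not data for a real process.
import numpy as np

VTH    = 0.30     # effective threshold voltage (V), assumed
C_SW   = 1e-12    # switched capacitance of the stage (F), assumed
ALPHA  = 0.2      # activity factor (fraction of C_SW switching per cycle)
I_LEAK = 30e-6    # total leakage current of the stage (A), assumed
DEPTH  = 30       # logic depth in gate delays, assumed
K_DLY  = 5e-12    # gate-delay scale constant (s), assumed

def cycle_time(vdd):
    """Alpha-power-law gate delay (exponent 1.5) times logic depth."""
    return DEPTH * K_DLY * vdd / (vdd - VTH) ** 1.5

def energy_per_op(vdd):
    e_dyn = ALPHA * C_SW * vdd ** 2           # switching (CV^2) energy
    e_leak = I_LEAK * vdd * cycle_time(vdd)   # leakage over one (slower) cycle
    return e_dyn + e_leak

vdd = np.linspace(0.35, 1.0, 500)
energy = np.array([energy_per_op(v) for v in vdd])
print(f"minimum energy/op at VDD ≈ {vdd[energy.argmin()]:.2f} V")
```

With these toy numbers the minimum lands a little above 0.4V; push the activity up or the leakage down and it moves, which is the "it all depends" point below.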

For the circuits we've looked at in N3 and N2, which are relatively high activity (e.g. DSP, FEC...), the lowest PDP is usually with ELVT, but it has never been as low as 0.225V -- for lower-activity circuits where ELVT leakage is too high compared to dynamic power, ULVT can be better. But there's no single "best" answer (transistor type, voltage, frequency); it all depends on what the circuits are doing... ;-)
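
Continuing the same kind of toy model, here's a sketch of that activity effect on transistor-type choice: at a fixed clock the lower-Vth device meets timing at a lower VDD (less CV^2 energy) but leaks more per cycle, so which one gives lower energy flips with activity. The "ELVT"/"ULVT" parameters are invented for illustration, not library data.

```python
# Compare energy per cycle for a low-Vth/high-leakage device vs. a
# higher-Vth/low-leakage device at a fixed clock, across activity factors.
# All device numbers are illustrative assumptions.
import numpy as np

T_CLK = 1e-9          # target clock period (s), assumed
C_SW  = 1e-12         # switched capacitance of the stage (F), assumed
DEPTH = 30            # logic depth in gate delays, assumed
K_DLY = 1.5e-11       # gate-delay scale constant (s), assumed

# (Vth in V, leakage current in A) -- purely illustrative numbers
DEVICES = {"ELVT": (0.20, 40e-6), "ULVT": (0.30, 2e-6)}

def gate_delay(vdd, vth):
    """Alpha-power-law gate delay (exponent 1.5)."""
    return K_DLY * vdd / (vdd - vth) ** 1.5

def min_vdd_for_clock(vth):
    """Lowest VDD on a coarse sweep where DEPTH gate delays fit in T_CLK."""
    for vdd in np.arange(vth + 0.05, 1.0, 0.005):
        if DEPTH * gate_delay(vdd, vth) <= T_CLK:
            return vdd
    return 1.0

def energy_per_cycle(activity, vth, i_leak):
    vdd = min_vdd_for_clock(vth)
    return activity * C_SW * vdd ** 2 + i_leak * vdd * T_CLK

for activity in (0.5, 0.05):
    best = min(DEVICES, key=lambda d: energy_per_cycle(activity, *DEVICES[d]))
    print(f"activity {activity}: lower energy with {best}")
```

With these numbers the high-activity case comes out in favour of the "ELVT" device and the low-activity case in favour of the "ULVT" one, matching the behaviour described above.
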
Electricity accounts for 80% of BTC mining cost.
You can try a better solution and there's a lot of money to be made.
Intel was once a wannabe player.
 
Which is true -- I was trying to correct the misapprehension that lower voltage is always better for efficiency/energy use, because it's not. However, the minimum-PDP VDD is well below anything used by chips like CPUs and GPUs today, where lower voltage does always improve efficiency. Depending on process corner (and circuit, and clock speed, and activity, and transistor type, and phase of the moon...) it's usually around 0.4V or a bit lower, which is in the depths far below where CPUs lurk... ;-)

But you can't just take a chip designed for "normal-voltage" operation and drop the supply voltage massively, because it won't work, at least not reliably -- if you want to operate down in this region you need to use special libraries, extra tool precautions and new timing checks, because gate delay variation and sensitivity to supply voltage drops get rapidly worse. TSMC enforce special rules for ULV operation, and the voltage where this kicks in varies with transistor type (ELVT, ULVT, LVT, SVT) -- which then causes bigger issues with mixing transistor types (e.g. uncorrelated Vth), because the delay tracking between types gets worse and worse.

All this imposes some penalties on design which reduce performance and increase area (as does going slower and more parallel), so you don't want to do it for a chip which spends most of its time (and dissipates most of its power) at higher VDD (e.g. 0.5V and above), like a CPU. However, if you have a chip which has one job to do, where power consumption is all-important, and you're willing to use adaptive supply voltage, it's a price worth paying -- we've been doing this for some time now; the typical power saving is similar to a complete process node step, and the worst-case saving is closer to two process nodes... :-)
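
On the adaptive-supply-voltage part, a minimal conceptual sketch (not any vendor's or library's actual scheme): a replica-path delay monitor is read back and VDD is stepped down until only a small timing margin is left, so each die settles at the lowest voltage its own silicon and temperature allow. The delay model, margin and step size are assumptions.

```python
# Minimal adaptive-supply-voltage loop: lower VDD while a monitored replica
# path still meets timing with some slack. All numbers are assumptions.
T_CLK  = 1.0e-9    # clock period (s)
MARGIN = 0.05      # keep 5% timing slack, assumed
VTH    = 0.30      # effective threshold of the monitored path (V), assumed

def monitored_delay(vdd):
    """Stand-in for a hardware delay monitor (replica critical path)."""
    return 2.5e-10 * vdd / (vdd - VTH) ** 1.5

def settle_vdd(vdd=0.9, step=0.005):
    """Step VDD down while the monitored path still meets timing with margin."""
    while monitored_delay(vdd - step) <= T_CLK * (1 - MARGIN):
        vdd -= step
    return vdd

print(f"settled VDD ≈ {settle_vdd():.3f} V")
```

In practice the monitor and regulator run continuously, so the operating point also tracks process corner and temperature as described above; this sketch only shows the settling direction.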
 
If Intel design an SRAM that scales, they could regain the lead. But SRAM is like other memory: scaling seems to have ended. SRAM is 90% of the area of a logic chip.

This leads to an observation: who knows memory better than Samsung? Maybe Samsung is a dark horse in the battle to scale SRAM.
 