How much real variation is there in frequency/voltages on a modern high volume process?


Just curious -

How much *real* variation is there in clock speeds, or in the voltage needed at a given clock speed, on modern processes? (And/or does leakage vary heavily chip to chip -- i.e., does the power wasted just to make a chip function differ meaningfully between dies?)

Back in the old days, Intel would sell a wide variety of clock speeds -- 20, 25, 33, 50 MHz -- and the lower-end chips often had tons of additional margin. It definitely felt like artificial spread "back then," meant to sell as many chips as possible at the highest prices possible.

Fast-forward to today, and I see two scenarios (hence my question).

One approach is "the smartphone approach": one chip for millions of phones, with no marketing emphasis on clock speeds or other binned traits -- just expected performance and battery life. Do these OEMs just pick a very conservative spec on the basis that "almost all chips will hit this frequency at this voltage/power level"? And are we at a point where, if you had 10 iPhones, some would have noticeably (more than 3%?) better battery life than others doing the exact same tasks, purely because of CPU differences? Would/could some iPhones actually clock higher than others and complete tasks faster, all else equal? (Why don't we see "Pro" iPhones with 10% more clock speed for a higher price?)

The other approach is the PC market, where you see a lot of core-count differentiation but only minor clock speed differences. The core differences make logical sense to me: every process has some defects per wafer, so if something doesn't work you either disable that part or throw away the chip. The minor clock speed differences, though, still often feel like the old-school artificial segmentation.

I'm skipping dense server chips since I'm less familiar with that market, but I recognize it could add even more considerations.

So: is the frequency/voltage curve variation on modern processes only visible at the very top end of frequency pushes (e.g., Intel trying to launch 5.3 GHz chips), or is it present across the whole range? And is it substantial? Does leakage vary enough per chip, independently of these parameters, to require strict binning?
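To make the two strategies in my question concrete, here's a minimal simulation sketch. Every number in it is a made-up assumption (a ~3 GHz mean Fmax with ~5% die-to-die sigma, normally distributed), not data from any real process -- it just shows how a single conservative spec compares with multi-bin segmentation over the same hypothetical distribution:

```python
import random

# Illustrative only: model per-die max frequency (Fmax) at a fixed voltage
# as roughly normal. All parameters below are assumptions for the sketch.
random.seed(0)

MEAN_FMAX_GHZ = 3.0   # assumed mean Fmax
SIGMA_GHZ = 0.15      # assumed die-to-die sigma (~5%)
N_DIES = 100_000

fmax = [random.gauss(MEAN_FMAX_GHZ, SIGMA_GHZ) for _ in range(N_DIES)]

# "Smartphone approach": one conservative spec nearly every die meets
# (mean - 3 sigma here); faster dies just ship with unused margin.
one_bin_spec = MEAN_FMAX_GHZ - 3 * SIGMA_GHZ
one_bin_yield = sum(f >= one_bin_spec for f in fmax) / N_DIES

# "PC approach": multiple speed bins; each die sells at the highest
# bin it passes, and dies below the lowest bin are scrapped.
bins_ghz = [3.2, 3.0, 2.8, 2.6]
bin_counts = {b: 0 for b in bins_ghz}
scrap = 0
for f in fmax:
    for b in bins_ghz:
        if f >= b:
            bin_counts[b] += 1
            break
    else:
        scrap += 1

print(f"one-bin spec {one_bin_spec:.2f} GHz, yield {one_bin_yield:.1%}")
for b in bins_ghz:
    print(f"bin {b:.1f} GHz: {bin_counts[b] / N_DIES:.1%} of dies")
print(f"below lowest bin: {scrap / N_DIES:.1%}")
```

Under these assumed numbers the single conservative bin yields essentially everything but leaves the top dies' headroom on the table, while the multi-bin scheme puts roughly the top decile into a premium bin -- which is the economic argument for binning whenever the real spread is wide enough to matter.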