…then you should read about this benchmark result showing how digital power varies with process corners for a high-speed data networking chip, not exactly the type of IC targeting mid-performance mobile applications. Before discussing the benchmark results, we need some background about this kind of ASIC. Such a chip really runs at the edge in terms of performance (clock frequency), which is why freshly processed wafers are tested and binning is exercised. Intel, for example, uses binning to categorize the maximum frequency a CPU IC can reach: the higher the frequency, the more expensive the chip will be sold. The goal here is to extract as many ICs as possible from each wafer, in order to keep the chip price as (reasonably) low as possible. When binning ranks chips in the "slow" category, these chips are not trashed; instead, they are corrected in the field by applying an adaptive supply voltage (ASV). At this point, you may suspect that exercising a higher VDD on such a chip has a negative impact on power consumption (according to the VDD² law). In other words, binning lets you compensate for process variations and keep a chip running at the desired high frequency by applying a higher VDD, at the cost of higher dynamic power consumption.
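The VDD² dependence is easy to see numerically. The sketch below uses the textbook dynamic power relation P ≈ activity · C · Vdd² · f; the capacitance, frequency, and voltage values are illustrative assumptions of mine, not figures from the benchmark.

```python
# Hedged sketch of the Vdd^2 law behind ASV. All numbers are illustrative,
# not taken from the benchmark discussed in the article.

def dynamic_power_mw(c_eff_nf, vdd_v, freq_mhz, activity=1.0):
    # P_dyn = activity * C * Vdd^2 * f ; with C in nF and f in MHz the
    # result comes out directly in mW.
    return activity * c_eff_nf * vdd_v ** 2 * freq_mhz

# A "typical" die hits the target clock at 0.90 V; a "slow" die needs
# ASV to raise Vdd to 1.00 V to reach the same frequency.
p_typ = dynamic_power_mw(c_eff_nf=50.0, vdd_v=0.90, freq_mhz=500.0)
p_slow = dynamic_power_mw(c_eff_nf=50.0, vdd_v=1.00, freq_mhz=500.0)
print(f"typical: {p_typ:.0f} mW, slow+ASV: {p_slow:.0f} mW "
      f"(+{100 * (p_slow / p_typ - 1):.0f}%)")
```

With these assumed voltages, correcting the slow corner by ASV alone costs about 23% extra dynamic power for the exact same clock speed.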
Those who still think that power consumption is only an issue for mobile applications should imagine dozens (if not hundreds) of high-performance chips closely packed in racks. The "cost of ownership" of such a system grows with chip power consumption: you need to guarantee excellent power dissipation at the chip level (a more expensive package, thermal drain, etc.), and at the system level you may have to deploy a costly cooling strategy (from Wikipedia we learn that for every 100 watts dissipated in a server, you have to spend another 50 watts to cool it!). At the end of the day, somebody pays the electricity bill! Add to these pure dollar expenses the degradation of the company's image in the eyes of eco-concerned customers, and you finally realize that lowering power, or increasing power efficiency, should be the next concern of the whole semiconductor industry, not only for mobile applications…
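To make that cost-of-ownership arithmetic concrete, here is a small back-of-the-envelope calculation. It reuses the 50-watts-of-cooling-per-100-watts figure quoted above; the rack size, per-chip power, and electricity tariff are assumptions of mine.

```python
# Illustrative cost-of-ownership arithmetic. The 0.5 cooling overhead is
# the Wikipedia figure cited in the text; rack size, per-chip power, and
# the electricity tariff are assumed values, not benchmark data.
chips_per_rack = 100
watts_per_chip = 30.0
cooling_overhead = 0.5          # 50 W of cooling per 100 W dissipated
tariff_usd_per_kwh = 0.15      # assumed electricity price
hours_per_year = 24 * 365

it_power_w = chips_per_rack * watts_per_chip
total_power_w = it_power_w * (1 + cooling_overhead)
annual_cost = total_power_w / 1000.0 * hours_per_year * tariff_usd_per_kwh
print(f"rack draws {total_power_w:.0f} W, ~${annual_cost:,.0f}/year")
```

Even a single modest rack under these assumptions burns thousands of dollars per year in electricity, a bill that scales directly with chip power.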
The main conditions for this benchmark are listed below; if you want the complete picture, I suggest you read the full article from Ian Dedic, Chief Engineer at Fujitsu Semiconductor Europe, posted in the LinkedIn "FD-SOI design community" group here:
- The benchmark uses extracted parasitics with a typical clock rate and an optimized library mix (different Vth and gate lengths)
- Fanout and tracking load taken from a high-speed data networking chip
- High gate activity and 100% duty cycle
- Maximum Tj, because this is the maximum power condition needed for system design
- The supply voltage is adjusted for each case (ASV) to normalize the critical path delay (clock speed) to the same value as the slow-corner 28nm
- The FD-SOI forward body biasing (FBB), used to decrease Vth, is adjusted to get minimum power across process corners
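The ASV normalization step in the list above can be sketched with the classic alpha-power delay model (delay ∝ Vdd / (Vdd − Vth)^α): find the Vdd that brings a slow-corner path back to the typical-corner delay. The model choice and every parameter value below are illustrative assumptions, not data from the article.

```python
# Hedged sketch of ASV normalization: pick the Vdd that matches a target
# critical-path delay, using the alpha-power law delay ~ Vdd/(Vdd-Vth)^a.
# Model choice and all parameter values are illustrative assumptions.

def path_delay(vdd, vth, alpha=1.3, k=1.0):
    return k * vdd / (vdd - vth) ** alpha

def vdd_for_delay(target_delay, vth, lo=0.5, hi=1.3):
    # Bisection: delay decreases monotonically as Vdd rises above Vth.
    for _ in range(60):
        mid = (lo + hi) / 2
        if path_delay(mid, vth) > target_delay:
            lo = mid          # still too slow -> needs more voltage
        else:
            hi = mid
    return (lo + hi) / 2

# Normalize a "slow" corner (higher Vth) to the delay a "typical" corner
# achieves at 0.90 V -- this is the ASV adjustment from the list above.
target = path_delay(0.90, vth=0.35)
vdd_slow = vdd_for_delay(target, vth=0.42)
print(f"slow corner needs about {vdd_slow:.3f} V to match the typical delay")
```

The point of the exercise: once every corner is pinned to the same critical-path delay, the voltage (and hence the power) becomes the variable being compared, which is exactly how the benchmark tables are built.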
From the first table, showing how the digital supply voltage (Vdd) varies with process conditions to keep the critical path delay at the same value, we can already see that enabling FBB keeps Vdd almost flat. Such an effect can only occur on FD-SOI technology, whether with a regular (planar) transistor architecture or with FinFET. Now if we look at the results in terms of maximum power consumption (dynamic + leakage), the impact of forward body bias is very impressive: we see a 31% difference for the same technology node (14FD-SOI) at the slow corner… and more than twice the power consumption for the device in 28HPM with ASV.
Unfortunately, there is no benchmark point for 14nm bulk FinFET, but the author makes the assumption that 14FD-SOI (standard transistor architecture) with ASV only behaves very similarly to 14FD-SOI with FinFET. What we can say for sure is that you cannot exercise the FBB effect on 14nm bulk FinFET technology. Thus the great improvement in maximum power consumption on FD-SOI technology is clearly due to the forward body bias effect, and such an improvement is a great benefit for high-performance chips. Eliminating the "slow" process corner devices would be possible, but extremely costly, as a chip maker pays for the complete wafer. Using an adaptive supply voltage is a way to bring every chip, even one coming from a slow process corner, to the same high performance level, but at the expense of a higher maximum power consumption. Finally, FD-SOI is the only way to both keep the device cost at a minimum (thanks to ASV) and minimize the power consumption (thanks to FBB).
As a reminder, or if you did not read one of the previous blogs about FD-SOI, you can visualize the forward body bias effect in the picture above. Applying a forward bias to the "body", i.e. the substrate of the wafer, is only possible with silicon-on-insulator (SOI) technology, as the buried oxide plays a role similar to that of a gate in a standard architecture, except that it only changes the threshold voltage (Vth). If the threshold becomes lower than nominal, it becomes possible to lower Vdd (or not to increase it) while getting the same performance as at a higher Vdd. Because dynamic power consumption is a function of the square of Vdd, the impact of FBB on power consumption is terrific…
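A one-line worked example of that square law, with purely illustrative voltages (not the article's measured values):

```python
# Square-law impact of lowering Vdd (made possible by FBB lowering Vth).
# Both voltages are illustrative assumptions, not benchmark figures.
vdd_asv_only = 1.00   # slow corner corrected with ASV alone
vdd_with_fbb = 0.85   # FBB lowers Vth, so the same speed needs less Vdd
saving = 1 - (vdd_with_fbb / vdd_asv_only) ** 2
print(f"dynamic power saving from the Vdd^2 law: {saving:.0%}")
```

A 15% voltage reduction already buys roughly a 28% dynamic power saving, which is why the square law makes FBB so attractive.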