Intel has been incredibly successful at designing high-performance server SoCs for the data center market segment, and the chance of the company losing significant market share is low, at least in the short term. But if we look at the really long term, 2030 or even 2040, as the Semiconductor Industry Association (SIA) did in a recent report ("Rebooting the IT Revolution: A Call to Action," published in September 2015), we realize that the current way of designing chips will have to change drastically. Designing SoCs for performance only, even on the most advanced technology nodes, even by moving to a smaller node whenever possible, will simply not be sustainable.
If you don’t trust me, just take a look at the diagram below: the total energy of computing (Benchmark curve) would surpass the world’s energy production by 2037 if the way we design computing systems doesn’t change.
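The shape of this argument is easy to sketch: if computing energy grows at a steady exponential rate while world energy production grows only slowly, the two curves must cross. Here is a back-of-the-envelope illustration; every number in it is an assumption of mine for illustration, not the SIA report’s actual data.

```python
# Back-of-the-envelope crossover estimate: exponential computing energy
# vs. slowly growing world energy production.
# All figures below are illustrative assumptions, NOT the SIA report's data.

def crossover_year(start_year=2015,
                   computing_energy=1e19,    # J/year, assumed starting point
                   world_production=6e20,    # J/year, assumed starting point
                   computing_growth=0.20,    # 20%/year, assumed exponential growth
                   production_growth=0.02):  # 2%/year, assumed
    """Return the first year computing energy exceeds world production."""
    year = start_year
    while computing_energy <= world_production:
        computing_energy *= 1 + computing_growth
        world_production *= 1 + production_growth
        year += 1
    return year

print(crossover_year())  # with these assumptions, the curves cross in 2041
```

The exact crossover year is very sensitive to the assumed growth rates, which is precisely the report’s point: only the exponential-versus-linear mismatch matters, and it guarantees a crossing somewhere.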
To be fair, the report’s authors did not limit their investigation to SoC design approaches: they also evaluated yet-to-come non-silicon devices, the impact of 3-D design, and near-threshold operation, to name a few. For my part, I propose to investigate a field I know: silicon devices, SoC design techniques, and silicon fabrication technology.
During the last 15 years, we have seen two types of chip makers, both very successful, developing SoCs for two completely different markets. One group, led by Intel and Cisco, develops SoCs targeting data centers or networking, chasing ever-higher performance (computing power or bandwidth capacity) regardless of power consumption, as long as that power doesn’t prevent the chip from running normally.
The other group, led by Qualcomm and Apple, develops application processor SoCs for battery-powered mobile systems. This group has learned how to provide the highest CPU, GPU, or DSP performance while keeping power consumption as low as possible, using design techniques like clock gating or power islands at the chip level and power management units (PMUs) at the system level. We should not forget their technology partners: the foundries, TSMC, Samsung, and GlobalFoundries, which have systematically developed low-power technology options, as well as the IP vendors providing low-power versions of the foundation IP.
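A reminder of why these techniques pay off: dynamic power in CMOS scales roughly as P = α·C·V²·f, so cutting switching activity (clock gating) helps linearly, while lowering the supply voltage (power islands) helps quadratically. The sketch below uses illustrative figures I picked for the example, not measured silicon:

```python
# Dynamic CMOS switching power: P = alpha * C * V^2 * f
# All figures are illustrative assumptions; real SoC numbers vary widely.

def dynamic_power(alpha, capacitance_f, vdd, freq_hz):
    """Switching power in watts, given activity factor, switched
    capacitance (farads), supply voltage (volts), and clock (hertz)."""
    return alpha * capacitance_f * vdd ** 2 * freq_hz

baseline = dynamic_power(alpha=0.2, capacitance_f=1e-9, vdd=1.0, freq_hz=2e9)
# Clock gating: idle logic stops toggling, cutting the effective activity factor.
gated = dynamic_power(alpha=0.05, capacitance_f=1e-9, vdd=1.0, freq_hz=2e9)
# Power island running at a lower voltage: the V^2 term gives a quadratic win.
low_v = dynamic_power(alpha=0.05, capacitance_f=1e-9, vdd=0.8, freq_hz=2e9)

print(baseline, gated, low_v)  # 0.4 W, then 0.1 W, then 0.064 W
```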
It’s interesting to note that Intel’s attempts to penetrate the mobile segment have been frequent but never successful. Is this due to a kind of “company culture” focused on pure performance, preventing the company from supporting the right technology option (low power)? Or to the designers themselves, reluctant to adopt design techniques radically different from what they have used for decades to create successful CPU SoCs?
Whatever the reasons, probably a mix of company culture and short-term marketing (Intel holds 99% of the data center segment, according to Bloomberg, so why change now?), data manipulation, computing, and networking are growing exponentially, in every category. The diagram below, extracted from the SIA report (and very similar to forecasts built by Cisco that you can easily find on the web), clearly shows that data growth is exponential. If you look at the top three contributors, multimedia, consumer IoT, and industrial IoT, the industry consensus is that they will continue to grow; in fact, for IoT and IIoT we are only seeing the beginning of a much larger deployment! Considering that a large part of the world is not yet involved but strongly desires to participate in the data feast, the exponential growth trend will only be reinforced. If no action is taken in the mid term, the computing industry will face a real problem by 2035-2040…
As of today, a data center is a building full of server racks that must be cooled by an expensive air-conditioning system. The electricity bill is high, and more than 50% of that electricity is used by the cooling system itself. Now look at the server chips: they need a package designed for efficient power dissipation, plus an additional heat sink. In other words, at every step you pay a price penalty for the chip’s high power dissipation.
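This overhead is usually captured by the Power Usage Effectiveness (PUE) metric: total facility power divided by IT equipment power. If more than half the electricity goes to cooling and other overhead, the PUE is above 2.0. A minimal sketch, with numbers I made up to match the “more than 50%” figure above:

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
# A PUE of 2.0 means every watt of compute costs a second watt of overhead.
# Figures below are illustrative assumptions, not measured data.

def pue(it_power_kw, cooling_kw, other_overhead_kw=0.0):
    """Ratio of total facility power to useful IT power (>= 1.0)."""
    total = it_power_kw + cooling_kw + other_overhead_kw
    return total / it_power_kw

# A facility where cooling and overhead consume more than the IT load itself:
print(pue(it_power_kw=1000, cooling_kw=1100, other_overhead_kw=100))  # 2.2
```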
If we want the companies managing data centers (Google, Amazon, etc.) to switch radically to a power-conscious architecture, don’t expect them to make the change out of altruism: the proposed solution must provide a lower Cost of Ownership (CoO). This means the overall cost should be lower at the end of the year. Could we define a server architecture providing equivalent performance (MIPS, latency, bandwidth) but with much lower power dissipation, leading to a drastically lower electricity bill? I don’t know, but this could be a research track to explore immediately; I mean searching for a solution that could be implemented in the next 3 to 5 years, rather than waiting for the emergence of a magic material to replace silicon (which may yet arise). If you look forward, 2037 is not so far away from now. It’s as close as 1995…
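The CoO argument can be made concrete with a simple annual-cost comparison: hardware amortization plus electricity, with the facility overhead folded in via the PUE. Every price and power figure below is a hypothetical assumption of mine, chosen only to show the mechanics of the trade-off:

```python
# Simple annual Cost-of-Ownership comparison: a conventional high-power
# server vs. a hypothetical lower-power equivalent-performance design.
# All prices, power figures, and rates are illustrative assumptions.

KWH_PRICE = 0.10        # $/kWh, assumed electricity rate
HOURS_PER_YEAR = 24 * 365
FACILITY_PUE = 2.0      # assumed overhead multiplier (cooling, etc.)

def annual_coo(server_price, lifetime_years, power_w):
    """Yearly cost: hardware amortization + electricity incl. overhead."""
    energy_kwh = power_w / 1000 * HOURS_PER_YEAR * FACILITY_PUE
    return server_price / lifetime_years + energy_kwh * KWH_PRICE

high_perf = annual_coo(server_price=6000, lifetime_years=4, power_w=500)
low_power = annual_coo(server_price=7000, lifetime_years=4, power_w=200)

# Even at a higher purchase price, the low-power design wins on the
# yearly bill under these assumptions:
print(round(high_perf), round(low_power))  # 2376 vs 2100
```

The point of the exercise: with a PUE around 2, every watt saved at the chip is doubled at the meter, which is what could make such an architecture attractive without any altruism involved.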