Does the evil-sounding phenomenon known as Dark Silicon create a big opportunity for FPGA vendors, as Pacific Crest Securities recently predicted? John Vinh posits that using multiple cores as a method of scaling throughput is flattening out, and that using FPGAs to off-load computation can help overcome this issue.
The root cause of the so-called Dark Silicon phenomenon has nothing to do with evil Sith Lords or a post-apocalyptic Mad Max world left without any power to run ICs. Any ASIC designer will tell you that it is essential to manage on-chip power by controlling clocks and voltages and by turning modules on and off as needed. Clock gating is the main tool in this camp, given that clock trees and flops consume a tremendous share of an ASIC’s power. Of course lowering clock rates helps too, but it comes at a direct cost in performance, the very thing we are trying to squeeze out of these designs.
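To see why clocks and voltage are the levers that matter, it helps to recall the usual first-order model of dynamic CMOS power, P ≈ α·C·V²·f. The sketch below is only a back-of-envelope illustration with made-up numbers, not a model of any real chip or process node.

```python
# First-order CMOS dynamic power: P_dyn ~ alpha * C * V^2 * f
# All values below are illustrative assumptions, not figures for any real chip.

def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    """Switching power in watts for a block of logic."""
    return activity * capacitance_f * voltage_v ** 2 * freq_hz

# A hypothetical block switching normally at 3 GHz and 1.0 V:
baseline = dynamic_power(activity=0.15, capacitance_f=2e-9, voltage_v=1.0, freq_hz=3e9)

# Clock gating an idle block drives its switching activity toward zero:
gated = dynamic_power(activity=0.01, capacitance_f=2e-9, voltage_v=1.0, freq_hz=3e9)

# Halving the clock halves power, but directly costs performance:
half_clock = dynamic_power(activity=0.15, capacitance_f=2e-9, voltage_v=1.0, freq_hz=1.5e9)

print(f"baseline {baseline:.2f} W, gated {gated:.2f} W, half clock {half_clock:.2f} W")
```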
Dark Silicon is more of an effect than a cause. When there are more gates on an ASIC than can be run within the thermal constraints of the design, the silicon that cannot be run is called Dark Silicon. It is really better to think of it as a percentage of the chip that must be switched off at any given time rather than as specific blocks that never run. However, this is nothing new.
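To make that percentage framing concrete, here is a rough sketch of the arithmetic: if the die would blow past its thermal budget with everything switching, the excess fraction is what has to stay dark. The 250 W and 100 W figures below are assumptions for illustration, not measurements from any real design.

```python
# Illustrative "dark fraction" arithmetic; the wattages are assumptions, not measurements.

def dark_fraction(full_chip_power_w, thermal_budget_w):
    """Fraction of the die that must sit dark (powered or clocked off) at any instant."""
    if full_chip_power_w <= thermal_budget_w:
        return 0.0
    return 1.0 - thermal_budget_w / full_chip_power_w

# A die that would draw 250 W with every gate switching, under a 100 W thermal budget:
print(f"dark fraction: {dark_fraction(250.0, 100.0):.0%}")  # 60% of the silicon stays dark
```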
What is new is the disparity between the gates available and the ability to run them. Multicore designs helped push through the performance barrier when CPU clock rates plateaued between 3 and 4 GHz, but multicore scaling is also running out of steam. Does adding FPGAs programmed to perform computation in server farms really solve this problem?
The rule of thumb for converting a software task to an FPGA is that it provides about a 10X improvement in performance. But when Microsoft used a hybrid FPGA-CPU combination for its Bing search engine, it realized a 2X improvement.
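One plausible explanation for the gap is Amdahl's law: only the portion of the workload actually offloaded to the FPGA sees the 10X, while everything left on the CPU, plus the cost of shuttling data back and forth, caps the overall gain. The numbers below are illustrative assumptions, not Microsoft's figures.

```python
# Amdahl's-law style estimate of whole-task speedup when only part of the work
# is offloaded; the 55% fraction and the 10X figure are assumptions for illustration.

def overall_speedup(offloaded_fraction, accel):
    """Overall speedup when `offloaded_fraction` of the task runs `accel` times faster."""
    return 1.0 / ((1.0 - offloaded_fraction) + offloaded_fraction / accel)

# Offload 55% of the work to a kernel that is 10X faster on the FPGA:
print(f"{overall_speedup(0.55, 10.0):.2f}X overall")  # roughly 2X, not 10X
```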
So there is clearly a cost in the hybridization process. What the Pacific Crest piece overlooks is the silicon utilization question for FPGAs. Yes, they can deliver higher throughput than a general-purpose CPU running software algorithms, but where do they stand with regard to gate utilization and power consumption per computational operation? I would bet that a CPU can perform more work per unit of power and silicon than an FPGA. They are optimized to do just that. FPGAs certainly run at lower clock rates than dedicated CPUs.
FPGAs in this case are simply able to solve an algorithm with less computation. So they have an advantage, but apparently not as big a one as you'd expect: witness the 2X gain rather than the 10X the rule of thumb suggests. And, as you might guess, an ASIC would do even better for a hard-wired algorithm. The easiest example for understanding this is Bitcoin mining. People quickly stopped using processors and moved to FPGAs for generating Bitcoin hashes. FPGAs were a lot faster than software at generating hashes, but the dedicated ASICs are orders of magnitude faster still.
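For reference, the kernel everyone raced to accelerate is just SHA-256 applied twice to an 80-byte block header. The sketch below (using a dummy header) shows how small and fixed the computation is, which is exactly why it maps so well onto FPGA and then ASIC pipelines.

```python
import hashlib

# Bitcoin's proof-of-work hash: SHA-256 applied twice to the 80-byte block header.
# The header below is a dummy placeholder, not a real block.

def block_hash(header: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

header = bytes(80)           # placeholder 80-byte header
digest = block_hash(header)
print(digest[::-1].hex())    # Bitcoin convention displays the hash byte-reversed

# Mining is just sweeping a nonce field in the header until this digest falls
# below a difficulty target: a tiny, fixed-function loop that hard-wired ASICs
# execute orders of magnitude faster than FPGAs, which in turn beat software.
```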
Dark Silicon is real and is causing people to design differently, but the move to FPGAs is more about how many gates need to be toggled to solve a particular problem, which depends on how general-purpose the hardware is. Companies like Google, Microsoft and the other search and big-data providers have enough clout to build their own ASICs for search and computation. And let's not forget Oracle, which has significant in-house chip design expertise. These ASICs will probably run pretty fast, and yet their designers will still need to worry about Dark Silicon.