Most of the buzz on network-on-chip is around simplifying and scaling interconnect, especially in multicore SoCs where AMBA buses and crossbars run into issues as more and more cores enter a design. Designers may want to explore how NoCs can help with a more power-aware approach.
An initial reaction from non-NoCers might be: How can adding a layer of interconnect infrastructure possibly reduce power consumption? It does seem counterintuitive at first, until you consider one point: power consumption and power management are not the same thing. Rather than just optimizing power consumption at the RTL, IP block, or domain level, power management implies a system-level view and an understanding of how performance and power are coupled.
Adding a NoC to provide high-speed interconnect between IP blocks provides a high-level communication channel for managing the entire SoC, with all operations supervised in software. The NoC already knows what the traffic pattern looks like system-wide, information that can be used in power decisions. This makes it a much simpler approach to power management than trying to use a low-level protocol like JTAG, or implementing user-defined sideband signals in AXI.
One breakthrough approach is DVFS – dynamic voltage and frequency scaling. With a single CPU core, DVFS is relatively straightforward. A core is usually aware enough of its own utilization to respond to changes in workload, using just enough voltage and clock frequency to get the job done in the allotted time.
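The single-core case can be sketched in a few lines: pick the lowest operating point whose frequency still gets the work done before the deadline. The operating-point table and workload numbers below are purely illustrative, not drawn from any real part.

```python
# Hypothetical per-core DVFS policy: choose the lowest (voltage, frequency)
# operating point that can still complete the work within the deadline.

# (voltage in V, frequency in MHz) pairs, ordered lowest-power first
OPERATING_POINTS = [(0.8, 200), (0.9, 400), (1.0, 800), (1.1, 1200)]

def pick_operating_point(cycles_needed, deadline_ms):
    """Return the lowest (V, f) pair that finishes cycles_needed in time."""
    for volts, mhz in OPERATING_POINTS:
        cycles_available = mhz * 1000 * deadline_ms  # MHz -> cycles per ms
        if cycles_available >= cycles_needed:
            return volts, mhz
    return OPERATING_POINTS[-1]  # saturate at the highest point

# A light task stays at the lowest point; a heavy one scales up.
light = pick_operating_point(100_000, 1)        # -> (0.8, 200)
heavy = pick_operating_point(1_000_000_000, 1)  # -> (1.1, 1200)
```

The system-level problem discussed next is harder precisely because no single table like this captures the interactions between heterogeneous cores.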
“Analysis of Dynamic Voltage Scaling for System Level Energy Management”, Dhiman et al., usenix.org
At the system level, things change. Combinations of operations spread across cores of various types make the performance and power optimization problem much more interesting. ARM big.LITTLE takes care of deciding which thread to schedule on which CPU core. What about GPUs, or audio processing, or always-on DSP cores, or a DSP managing baseband? We’ve seen the efficiency equation for microcontrollers many times, and the notion that sleeping is good:
Efficiency = work per unit time per unit of energy
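A toy calculation shows why sleeping helps. All the power and timing numbers here are made up for illustration: the point is only that when sleep power is near zero, finishing fast and sleeping can beat running slowly for the whole window.

```python
# Toy "race-to-idle" comparison with invented numbers.
# Active power grows superlinearly with frequency (voltage rises too),
# while sleep power is close to negligible.

def energy_mj(active_mw, active_ms, sleep_mw, sleep_ms):
    """Total energy in millijoules over an active phase plus a sleep phase."""
    return (active_mw * active_ms + sleep_mw * sleep_ms) / 1000.0

WINDOW_MS = 10
# Fast strategy: burn 400 mW for 2 ms, then sleep at 1 mW for the rest
race = energy_mj(400, 2, 1, WINDOW_MS - 2)    # 0.808 mJ
# Slow strategy: run at 90 mW for the entire 10 ms window, never sleep
steady = energy_mj(90, WINDOW_MS, 1, 0)       # 0.900 mJ
```

With these assumed numbers the race-to-idle strategy wins; with different power curves the slow-and-steady DVFS strategy can win instead, which is exactly why the choice is a system-level decision.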
In an SoC, sleep is usually not an all-or-nothing endeavor. An operating system running over a NoC can predict which resources will be needed next as current tasks pass their results along. Rather than being reactive, power management using a NoC can be proactive, improving responsiveness and resulting in an overall reduction in system power consumption as IP blocks work together effectively.
The technique involves a power disconnect protocol. Pulling the plug on a power domain abruptly is really undesirable, especially for an interconnected resource that may be accessed by multiple cores. A disconnect protocol deals with two concepts: fencing and draining. Fencing involves the handling of incoming requests to powered-down blocks, while draining involves the completion or state storage of in-flight operations in blocks about to be powered down.
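The fence-then-drain sequence can be modeled as a small state machine. This is an illustrative sketch, not any vendor's actual API: the states and method names are invented to show the ordering the protocol enforces.

```python
from enum import Enum, auto

class DomainState(Enum):
    RUNNING = auto()
    FENCED = auto()   # NoC boundary rejects or holds new requests
    DRAINED = auto()  # all in-flight transactions have completed
    OFF = auto()

class PowerDomain:
    """Toy model of a NoC power-disconnect sequence (hypothetical API)."""

    def __init__(self):
        self.state = DomainState.RUNNING
        self.in_flight = 0

    def accept_request(self):
        if self.state is not DomainState.RUNNING:
            return False            # fenced: the NoC bounces the request
        self.in_flight += 1
        return True

    def complete_request(self):
        self.in_flight -= 1

    def power_down(self):
        self.state = DomainState.FENCED   # 1. fence: stop new traffic
        while self.in_flight:             # 2. drain: let pending work finish
            self.complete_request()
        self.state = DomainState.DRAINED
        self.state = DomainState.OFF      # 3. now safe to cut power
```

The key property is the ordering: no request is lost, because the domain is fenced before it drains and drains before it loses power.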
Since a NoC is a reliable transport mechanism, and talks to all the IP blocks, it provides the basis for effective and safe power-up and power-down of blocks and domains without losing transactions in the process. A NoC also makes scaling much easier as designs evolve; rather than completely redesigning the power management strategy each time, new IP blocks can be snapped into a higher-level power-aware infrastructure.
A power disconnect protocol is only one idea. Arteris has outlined several other ideas for using NoCs to enhance SoC power management in a new online post, exploring some in a bit more detail. The message: NoCs are for more than optimizing interconnect.
What’s your experience in system-level power management? Have you seen benefits in using a NoC to manage power? Are there considerations in Android or other operating systems in how SoC power should be managed?