
RedHawk Excels – Customers Endorse
by Pawan Fangaria on 05-28-2014 at 11:00 am

For the past few years I have been following the Ansys Apache tools for semiconductor design, verification and sign-off. RedHawk is the most prominent tool platform from Ansys, specifically for power, noise and reliability sign-off. It has earned many open endorsements from several Ansys customers through public presentations, which I have written about in the past. For a product, what better promotion can there be than its users speaking out for it? This is a win-win situation where a product earns revenue by satisfying its customers’ needs and in turn uses that revenue to enhance the product further to satisfy new customer needs, i.e. continuous improvement. One doesn’t need to go and acquire something new from the market to satisfy those needs. This philosophy is truly reflected in RedHawk’s development as I have seen it over the last few years.

While RedHawk has led the power, noise and reliability sign-off space for a long time, last year Ansys added significant capabilities to RedHawk, improving its capacity and performance to handle large designs with billions of transistors at sub-20nm and at very high clock speeds on the order of 3+ GHz. That was the fourth-generation release (named RedHawk™-3DX), arriving right when customers needed it. A major extension for 3D-ICs supported both concurrent and model-based multi-die simulation of designs with silicon interposers and TSVs. While all chips could be simulated at full layout detail, the model-based approach allowed a CPM™ (Chip Power Model) to be used for some of the chips. A multi-tab, multi-pane GUI was provided to view and analyze voltage-drop hotspots and other characteristics across the whole 3D stack at once. Sign-off accuracy and coverage were enhanced with new event and state propagation engines that could be used in vector-based, VectorLess™ and mixed-excitation modes, to gain maximum coverage without loss of accuracy.

This month, Ansys announced the RedHawk 2014 platform, which supports FinFET-based semiconductor design (along with designs in all earlier process technologies). As FinFET-based designs operating at low voltages exhibit lower noise and reliability margins, greater emphasis has been placed on the accuracy of analysis. The platform adds a ‘Distributed Machine Processing’ (DMP) capability which improves memory footprint and simulation runtime each by about 2-3x over the previous release, and handles large designs on the order of billions of transistors at flat simulation accuracy.

RedHawk-CPA is another great and unique capability in the 2014 platform, providing chip-package co-simulation and co-analysis. This is done by merging a fully distributed package parasitic network with the on-die power delivery network, allowing the tool to provide immediate feedback on the quality of the package design as well as the impact of package parasitics on chip performance.
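To see why merging the package network with the on-die grid matters, here is a minimal, purely illustrative Python sketch of a lumped IR-drop estimate (all voltage, resistance and current values are invented for the example, and this is not RedHawk’s algorithm): a die-only analysis misses the drop contributed by the package parasitics.

```python
# Illustrative lumped IR-drop estimate (made-up numbers, not RedHawk's method).
# A die-only analysis sees just R_DIE; chip-package co-analysis adds R_PKG in series.

VDD = 0.8      # supply at the board, volts (assumed FinFET-era rail)
R_PKG = 0.010  # effective package parasitic resistance, ohms (hypothetical)
R_DIE = 0.015  # effective on-die grid resistance, ohms (hypothetical)

def v_at_devices(i_load_amps, include_package=True):
    """Voltage actually reaching the transistors for a given load current."""
    r_total = R_DIE + (R_PKG if include_package else 0.0)
    return VDD - i_load_amps * r_total

for i in (2.0, 5.0, 10.0):
    die_only = v_at_devices(i, include_package=False)
    co_analysis = v_at_devices(i, include_package=True)
    print(f"I={i:4.1f}A  die-only={die_only:.3f}V  with package={co_analysis:.3f}V")
```

Even this toy model shows the die-only view being optimistic by 20-100 mV across these loads, which is exactly the kind of error chip-package co-analysis is meant to expose.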

Testimony to RedHawk 2014’s handling of the tighter EM (electromigration) limits and new EM rules posed by FinFETs (such as those considering current direction, metal topology and via types for power as well as signal nets) is its certification by TSMC. To counter thermal reliability issues, a novel concept of a CTM (Chip Thermal Model) has been introduced, which can very accurately capture the thermal distribution of FinFET devices and enable thermal-aware EM analysis.

Also, ESD integrity, an important step in reliability sign-off, has been enhanced with careful ESD design planning to check for degraded diode protection and reduced wire capacity.

Looking at Ansys’s DAC agenda, where they will provide more detailed views and information about this new RedHawk 2014 platform release, I noticed several customer presentations scheduled at DAC in which the actual users of these tools will speak about how they benefited from RedHawk and other Ansys tools. It will be interesting to watch those presentations closely to learn the real value of these new enhancements in RedHawk, at least in RedHawk-3DX. Of course, a few of them, if they have used the customer beta of RedHawk 2014, may reveal something about that as well.

Here are some of the important customer presentations I noted, which will primarily cover experiences with RedHawk, results and recommendations:

Jun 2, 1:00 PM – Samsung: Chip-Package-System based Power Integrity Analysis Flow for 14nm Mobile Designs
Jun 3, 3:00 PM – NXP: Noise Coupling Analysis for Advanced Mixed-Signal Automotive ICs
Jun 4, 12:00 PM – STMicroelectronics: Designing Smart Power-Grid with Reduced Die-Area Using RedHawk
Jun 4, 1:00 PM – LSI: Silicon Correlation of RedHawk Dynamic Voltage Drop in High Power SoC for Storage Application

There are other interesting presentations by Applied Micro, Ciena and Synapse. Also, there are product-specific sessions and multiple other customer presentations at various locations within the DAC premises. Look at the Ansys page here for more details.

Register for any of these presentations. Ansys is exhibiting at booth #1413; it will be worthwhile to stop by. Stay tuned to hear from me on details of some of these interesting presentations at a later date.

More Articles by Pawan Fangaria…..



Understanding QoR in FPGA synthesis
by Don Dingee on 05-28-2014 at 8:00 am

We’ve all heard this claim: “Our FPGA synthesis tool produces better quality of results (QoR).” If you’re just hoping for a tool to do that automagically, you’re probably doing it wrong. Getting better QoR depends on understanding what an FPGA synthesis tool is capable of, and how to leverage what it tells you.


DRM2PDK: From design rule manual to process design kit
by Daniel Nenni on 05-28-2014 at 3:00 am

Exactly a year ago Sage Design Automation launched its revolutionary iDRM product, enabling users for the first time to graphically capture design rules and compile them into checks automatically – no programming required. Using the graphical design rule editor, users could draw the layout topology that describes the design rule, add measurements by drawing arrows between edges and objects, and then use these measurements as parameters in a logic expression representing the rule that must hold. Once a rule has been captured in iDRM, it serves not only as a clear visual description and formal documentation of the design rule, but also becomes an executable expression: push a button and the design rule compiler scans through a physical design database, finds all instances matching the rule’s pattern, and takes all the relevant measurements of the parameters (variables) used in the rule definition. The result is a complete list of all such instances, each with full information on the relevant measurements, orientations, location, etc.


Draw design rule / View matches and errors / Get report of all values
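iDRM itself is graphical, but the underlying idea – measurements between layout edges fed into a logic expression – can be sketched in a few lines of Python. This is a hypothetical illustration of the concept, not Sage-DA code; the rule, the matched pattern and the field names are all invented for the example.

```python
# Toy illustration of a "captured" design rule: measurements become parameters
# in a logic expression, which is then evaluated against matched layout instances.
from dataclasses import dataclass

@dataclass
class Match:
    """One layout instance matching the drawn rule topology (hypothetical)."""
    width: float     # measured wire width, nm
    spacing: float   # measured edge-to-edge spacing, nm
    location: tuple  # (x, y) for reporting

# The "logic expression that represents the rule": wide wires need more space.
def rule_ok(m: Match) -> bool:
    return m.spacing >= 50 or (m.width < 100 and m.spacing >= 32)

matches = [Match(80, 40, (0, 0)), Match(120, 40, (5, 9)), Match(120, 60, (7, 2))]
for m in matches:
    status = "pass" if rule_ok(m) else "VIOLATION"
    print(f"at {m.location}: width={m.width} spacing={m.spacing} -> {status}")
```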

iDRM now becomes a broad platform for anything design-rule related
The iDRM design rule compiler has already been used to develop design rules, analyze them, create checks and verify DRC decks. Now, for this upcoming DAC, Sage-DA has broadened the scope and function of the iDRM platform to also include PDK parameters. At DAC, Sage-DA will demonstrate how PDK parameters for generating parametrized cells (Pcells) can be automatically created and updated from the iDRM design rule source. The iDRM platform can thus be used as a single point of entry for design rules, enabling consistency and accuracy across a broad array of EDA tools and flows.

Process Design Kits (PDKs) include specific technology files for the creation of parametrized cells, e.g. Pcells and PyCells. Parametrized cells are widely used in custom digital, analog and mixed-signal design. They are pieces of program code that generate physical layout instances based on the Pcell parameter values. Pcells must obey all the relevant design rules in order to generate DRC-correct physical instances. Currently, the tech files for these cells are created manually based on the information in the design rule manual (DRM). Any time there is a change or update in the DRM, the relevant information must be updated in the respective Pcell technology file, a cumbersome and error-prone process. With this new capability of iDRM, the relevant tech file parameters and the resulting generated layout are updated automatically once the DRM is updated in iDRM. This not only saves time, but also ensures consistency and eliminates potential errors that could hinder the integrity of the physical design database.
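As a concrete (and simplified, hypothetical) picture of what “Pcell parameters derived from design rules” means, consider this Python sketch: the generator reads minimum width and spacing from a tech-file dictionary, so regenerating that dictionary from the DRM source automatically changes the layout the Pcell produces. All names and values here are invented.

```python
# Hypothetical sketch: a Pcell whose legal parameter range comes from tech-file
# entries that an iDRM-style flow would regenerate from the DRM automatically.

TECH = {"m1.min_width": 32, "m1.min_space": 32}   # invented tech-file params, nm

def resistor_pcell(length_nm, width_nm):
    """Generate rectangles for a toy serpentine resistor, clamped to the rules."""
    w = max(width_nm, TECH["m1.min_width"])        # obey min width by construction
    pitch = w + TECH["m1.min_space"]               # obey min spacing between legs
    legs = 4
    # each leg: (x1, y1, x2, y2) in nm
    return [(0, i * pitch, length_nm, i * pitch + w) for i in range(legs)]

print(resistor_pcell(length_nm=500, width_nm=20))  # width silently clamped to 32
```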

Where to see: Demos of this new DRM2PDK capability will be held at the Design Automation Conference (DAC) on June 2-4 in San Francisco, at the Si2 booth #1107 at 12:00 PM on Monday June 2nd, and at the Sage-DA booth (#1423) throughout the exhibit hours.

Sage Design Automation provides design rule consistency and closure between manufacturing process limitations, their respective DRM (design rule manual) representation and their DRC deck implementation. Sage-DA’s breakthrough iDRM (integrated design rule management) technology integrates easy-to-use graphical design rule capture with instantaneous checking capability. iDRM enables non-programmers to quickly capture design rules and generate correct-by-construction checks, accelerates the development and availability of design rule checks for new process technologies, and ensures their correctness and consistency, delivering higher yield and faster production ramp-up of integrated circuits (ICs) in advanced process technologies.

More Articles by Daniel Nenni…..



Two New ESL Tools for Power and Thermal at DAC
by Daniel Payne on 05-27-2014 at 6:47 pm

Gary Smith published a list of what to see at DAC, and I noticed that he listed DOCEA Power in the ESL Thermal category. I’ll be meeting the DOCEA engineers on Wednesday at DAC to learn more about their two newest ESL products:

  • Thermal Profiler
  • Power Intelligence

In general DOCEA Power tools allow you to manage power and thermal analysis at the ESL level, which is a higher level of abstraction than RTL or gates. This ESL approach can:

  • Save 40-70% on power consumption through early power architecture exploration
  • Secure your specification and avoid design re-spins
  • Improve power and thermal budget tracking throughout the design cycle
  • Speed up power and thermal management software validation and debug

deFacto Technologies and DOCEA are making a joint presentation on Monday, June 2 at 5PM in Room 258/260:

  • Joint design flow to fill the gap between architecture and RTL during low-power design exploration

Pascal Vivet from CEA-LETI is presenting in Session 16 of the Design Track on Tuesday, June 3 from 10:30AM – 12:00PM in Room 105:

  • Thermal Modeling Methodology for Fast and Accurate System Level Analysis. Application to a Memory-on-Logic 3D circuit.

On Thursday, June 5 from 9AM to 5PM in Room 202 there’s a workshop.

You can visit DOCEA at DAC in booth #2223, and look for my blog next week. To schedule a DAC meeting, use this online form.

Docea Power develops and commercializes a new generation of methodology and tools enabling faster, more reliable power and thermal modeling at the system level. Based on its Aceplorer platform, Docea Power’s solutions use a consistent approach for executing architecture exploration and optimizing the power and thermal behavior of electronic systems at an early stage of any electronic design project. The company is headquartered near Grenoble, France, and in San Jose, CA, and has sales and application support offices in Japan and Korea.



Different Approaches to System Level Power Modeling and Analysis for Early Design Phases
by Daniel Payne on 05-27-2014 at 3:14 pm

At DATE this year in Dresden, Bernhard Fischer from Siemens CT (Corporate Technology) presented an interesting summary of the various techniques used for power modeling and analysis at the architectural level. He went through the pros and cons of using spreadsheets, timed virtual platforms annotated with power numbers, and a dedicated system-level power modeling tool (Aceplorer). He also had an example application (a signal processing design) to show results, mostly using the latter two approaches.

We mostly agree with the outcome of the study, and we encourage anyone interested in power modeling at the architectural level to ask for a copy of the slides. I would like to add a few points to the presentation, collected through my experience working with Docea Power customers.

One general observation is that early power estimates are due well before any virtual platform is available. Most companies use a power model to answer Requests for Information (RFIs) or Requests for Quotation (RFQs), with statistical use-case descriptions or dynamic representations of the most important use cases. Power architects are of course interested in the availability of a timed virtual platform, as it provides a better use-case representation, but they can’t wait until it is available, so they need a model of their own. Besides, power estimations need to take into account the complete device, not just the most important functional blocks, and this often includes analog and RF blocks and voltage regulators. Adding these non-functional power consumers to the virtual platform (which is mostly developed for the use of system architects and software teams) further delays its delivery and makes it harder to debug.

Another observation is that power architects’ expertise is not in functionality or SystemC coding and compiling, but rather in modeling the power behavior of blocks (leakage vs. dynamic), in describing power management schemes, and in estimating the impact of different power reduction techniques. One core need we have found is tracking power numbers (and for a complex design there is a lot of data to manage), in order to be able to explore any newly requested configuration quickly.
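To make the “model of their own” idea concrete, here is a minimal Python sketch of a block-level power model of the kind a power architect might maintain before any virtual platform exists. It is my own illustration, not Aceplorer’s internals; the blocks, states and numbers are invented.

```python
# Minimal spreadsheet-style power model: per-block leakage plus state-dependent
# dynamic power, evaluated over a statistical use-case description.
# (Invented blocks and numbers; illustrates the idea, not Aceplorer itself.)

BLOCKS = {
    # block: (leakage_mW, {power_state: dynamic_mW})
    "cpu":   (5.0, {"off": 0.0, "idle": 20.0, "run": 300.0}),
    "modem": (2.0, {"off": 0.0, "rx": 80.0, "tx": 150.0}),
    "pmic":  (1.0, {"on": 10.0}),   # non-functional consumers count too
}

def avg_power_mw(use_case):
    """use_case: {block: {state: fraction_of_time}}, fractions summing to 1."""
    total = 0.0
    for block, (leak, dyn) in BLOCKS.items():
        residency = use_case.get(block, {})
        total += leak + sum(dyn[s] * f for s, f in residency.items())
    return total

video_call = {"cpu": {"run": 0.6, "idle": 0.4},
              "modem": {"rx": 0.5, "tx": 0.5},
              "pmic": {"on": 1.0}}
print(f"video call avg: {avg_power_mw(video_call):.1f} mW")
```

A what-if analysis (number of power domains, voltage clustering) then amounts to re-evaluating such a model over many candidate configurations, which is why tracking the underlying numbers matters so much.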

From a practical point of view, power architects have to perform a number of what-if analyses to find the best configuration for the number of power domains and the voltage clustering. This is true for most complex designs today. To enable this exploration, power architects need an easy way to define supply rails and power domains. This is natively supported in Aceplorer, as the tool is dedicated to power modeling. The methodology can deliver very accurate results, as described in this paper from Jongho Kim (Samsung SLSI) presented at DAC 2013.


Our approach with regard to the link with virtual platforms is to get the best of both worlds! Virtual platforms are an excellent way to describe the use cases. Whenever a virtual platform is available, with the right timing information and simulating use cases meaningful to the power architect, the activity of blocks can be monitored, captured in traces, and used to describe use cases in Aceplorer. With this approach, the non-functional blocks are represented only in the power model, where they are needed, and not in the virtual platform, where they add no value.
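The link the author describes, a virtual-platform activity trace replayed through the power model, can be sketched like this. The trace format and power numbers are invented; the real VP-to-Aceplorer coupling is far richer.

```python
# Toy replay of a virtual-platform activity trace through a power model:
# each trace event records which block entered which power state at what time.
# (Invented trace format; illustrates the coupling concept only.)

POWER_MW = {("cpu", "run"): 305.0, ("cpu", "idle"): 25.0,
            ("modem", "rx"): 82.0, ("modem", "off"): 2.0}

trace = [  # (time_ms, block, new_state), as captured from the virtual platform
    (0.0, "cpu", "idle"), (0.0, "modem", "off"),
    (2.0, "cpu", "run"), (5.0, "modem", "rx"), (10.0, "cpu", "idle"),
]

def energy_uj(trace, t_end_ms):
    """Integrate block power over the trace; mW * ms gives microjoules."""
    state, last_t, energy = {}, 0.0, 0.0
    for t, block, new_state in trace + [(t_end_ms, None, None)]:
        energy += sum(POWER_MW[(b, s)] for b, s in state.items()) * (t - last_t)
        if block is not None:
            state[block] = new_state
        last_t = t
    return energy

print(f"energy over 12 ms: {energy_uj(trace, 12.0):.0f} uJ")
```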

The Aceplorer model will probably pre-exist the virtual platform, with a specific team in charge of its development and maintenance. The two models can be kept in sync, and last year at DAC we demonstrated a mechanism to have the virtual platform drive the Aceplorer simulation and get power and temperature feedback. This mechanism is ideal for developing and debugging optimized power and thermal management policies, and it provides more accurate leakage power numbers and better what-if analysis capabilities. This is, in my humble opinion, how to get the best of both worlds…

Docea Power exhibits at DAC (booth #2223). If you are visiting the show in San Francisco next week, visit our booth to learn more.

Written by: Ridha Hamza, DOCEA Power



The Chip Design Game at the End of Moore’s Law
by Paul McLellan on 05-27-2014 at 2:58 pm

I just came across an interesting video from last year’s Hot Chips conference. Dr. Robert Colwell of DARPA discusses how the processor design industry is likely to change once it becomes too difficult to continue scaling transistors to ever-smaller dimensions. This is likely to occur sometime within the next decade, so companies need to be planning for the transition today.

Today Dr. Colwell heads up programs looking into promising future technologies, but his talk draws heavily on his many years at Intel as a CPU architect. He even points out that CPU microarchitecture has been comparatively ineffective: since he started designing processors that ran at 1MHz, performance has improved some 3,500X due to improvements in semiconductor performance, but maybe just 50X due to changes in how processors are architected (pipelines, branch prediction, caches, etc.).

The big challenge is that CMOS has been such a wonderful technology that we have had a free ride. Until recently, every process generation was faster, denser, lower power and lower cost. It is an exponential that has gone from the 6-transistor radio of his boyhood to multi-billion-transistor chips today. In future, the improvements from process generation to generation will be much smaller, and if we have to live primarily with architectural improvements then progress will be very slow. DARPA is looking at about 50 candidate technologies, but he reckons that only 2 or 3 are truly promising.

He sees 7nm as the end of the road, with Moore’s Law over around 2020 or 2022. Worse, Intel actually makes its real money moving its processors to the next node, and if 5nm is not coming, or is too expensive, then designing in 7nm is less attractive.

He has on one of his slides something that I’ve said for some time: When Moore’s Law ends it will be economics that stops it, not physics. Follow the money.

This has implications for many industries fed by semiconductors. For example, almost all improvements in cars for ages have been things like engine control units, navigation and so on – basically processors. So improvements in cars will get a lot slower.

In case the video above doesn’t work, it is here.


More articles by Paul McLellan…


SEMICON West 2014 Preview
by Paul McLellan on 05-27-2014 at 12:46 pm

There is a really big conference coming up in San Francisco…no, not DAC although of course that is coming up too. I’m talking about SEMICON West which is much bigger, filling all 3 Moscone exhibit halls. It is July 8-10th. It is, of course, the semiconductor equipment industry (and solar) show.

The opening keynote on Tuesday is by Mark Adams, the President of Micron. I am sure one of the things that he will talk about is the Micron Hybrid Memory Cube (HMC), one of the very first 3D chips to go into volume production. It consists of 4 DRAM die on top of a base logic die.

The Wednesday keynote is The Art of the Possible: How Manufacturers are Leveraging Digital Technologies to Drive Business Transformation in a Connected World by Sanjay Ravi of Microsoft where he is the Worldwide Managing Director, Discrete Manufacturing Industry.

One of the most interesting areas at SEMICON is always the TechXPOTs, two on-the-exhibit-floor presentation areas. Every morning and afternoon there are sessions on some particular area that is hot right now, with 6 or 7 presenters from different companies.

TechXPOT south on Tuesday has Next Generation MEMS in the morning and Variability Control in the afternoon. On Wednesday the session that most interests me is in the morning, on Supply Chain Challenges for 10nm and Beyond; in the afternoon it is Productivity Solutions for 300mm and Smaller. Finally, on Thursday it is 3D Printing in the morning and Breakthrough Research Technologies in the afternoon.

TechXPOT north has Testing into the Future on Tuesday morning and the Future of 3D NAND Flash in the afternoon. Wednesday morning is Bringing Silicon Photonics into Production, and the afternoon Automotive Innovation. Thursday morning is Disruptive Compound Semiconductor Technologies.

Also new this year is the Semiconductor Technology Symposium (STS). This is a comprehensive technology and business conference addressing the key issues driving the future of semiconductor manufacturing and markets, aligned with the latest inputs from the ITRS. Discover the trends shaping near-term semiconductor technology and market developments in areas including 450mm, advanced processes and materials, lithography, packaging, and test.

The website with full details, registration and more is here. Registration is discounted until June 6th.

SEMICON West is the flagship annual event for the global microelectronics industry. It is the premier event for the display of new products and technologies for microelectronics design and manufacturing, featuring technologies from across the microelectronics supply chain, from electronic design automation, to device fabrication (wafer processing), to final manufacturing (assembly, packaging, and test). More than semiconductors, SEMICON West is also a showcase for emerging markets and technologies born from the microelectronics industry, including micro-electromechanical systems (MEMS), photovoltaics (PV), flexible electronics and displays, nano-electronics, solid state lighting (LEDs), and related technologies.


More articles by Paul McLellan…


Mark Milligan Joins Calypto. Plus Google at DAC
by Paul McLellan on 05-27-2014 at 12:07 pm

I talked to Mark Milligan this morning, who has just joined Calypto as VP Marketing. I first met Mark back when he was at CoWare and I was at VaST or maybe it was Virtutech. Then he moved on and ran marketing at SpringSoft which, I’m sure you remember, Synopsys acquired. I asked him what encouraged him to join Calypto.

He said that there is a lot of technology that has been brewing for years. When Mark was at the Open SystemC Initiative (OSCI), he said, synthesis from C was the holy grail, but back then the technology was immature. The other big problem was that there wasn’t a very good verification flow: having simulated everything at the C (or SystemC) level, it all had to be resimulated at RTL to make sure that the high-level synthesis had done its job.

Four things have changed in the intervening years:

  • High-level synthesis technology in Catapult has become really good
  • The sequential equivalence checking (SLEC) technology that Calypto originally developed has matured, so that now there is a formal verification flow
  • Power has become the dominant constraint in algorithmic design, and in the FinFET era dynamic power in particular
  • The complexity of the designs that people are doing using this technology is mind-boggling

The approach is not applicable to every type of IP: often there is legacy RTL, or RTL is simply the right level at which to do the design. But modems often have changing specifications, and graphics is always about improving the algorithms. By being able to simulate the design at the C or SystemC level there is a huge gain in performance. But the biggest thing driving adoption is that there is serious pain in some of these areas and an RTL-based approach doesn’t work: it is simply not possible to iterate designs fast enough, to move them from one process generation to the next, and so on.

At DAC, Calypto has customer presentations on how companies are using the products for these types of design. Calypto has three products: Catapult high-level synthesis (which came from Mentor originally), SLEC sequential logical equivalence checking, and PowerPro sequential power reduction. Catapult and PowerPro allow design to be done at a high level, and SLEC gives a corresponding verification flow.
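To illustrate what sequential equivalence checking buys over plain resimulation, here is a toy Python analogy (my own sketch, not SLEC’s algorithm; SLEC proves equivalence formally rather than by sampling): an untimed C-level reference is compared against a pipelined “RTL-like” model whose outputs arrive some cycles later.

```python
# Toy analogy for sequential equivalence: an untimed reference vs. a 2-stage
# pipelined implementation of the same function, compared with a latency offset.
# (Illustration only; real SLEC proves this formally, it does not simulate.)
import random

def ref_mac(a, b, acc):                  # untimed C-level reference model
    return (acc + a * b) & 0xFFFF

class PipelinedMac:                      # "RTL-like" model: result 2 cycles later
    def __init__(self):
        self.stage1 = self.stage2 = 0
    def clock(self, a, b, acc):
        out = self.stage2                # output for the input of 2 cycles ago
        self.stage2 = self.stage1
        self.stage1 = (acc + a * b) & 0xFFFF
        return out

dut, expected = PipelinedMac(), []
for cycle in range(1000):
    a, b, acc = (random.randrange(256) for _ in range(3))
    out = dut.clock(a, b, acc)
    expected.append(ref_mac(a, b, acc))
    if cycle >= 2:                       # compare against input from 2 cycles ago
        assert out == expected[cycle - 2], f"mismatch at cycle {cycle}"
print("1000 cycles: pipelined model matches reference (latency 2)")
```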

But the big coup is that Google will be presenting how they use Catapult for the design of VP9 video processors. By doing it at the C level they can share code with partners in a quasi-open-source way, and their partners can feed improvements back to Google. Further, by moving up to the C level it is much easier for “software types” to do hardware design. It is all about being able to take the algorithm, implement it efficiently in silicon (either an FPGA, or gates, or even just running it on a processor), and meet the power budget.

Calypto’s DAC booth is #2333.


More articles by Paul McLellan…


Methodics @ #51DAC!
by Daniel Nenni on 05-27-2014 at 11:00 am

This is the biggest year ever for Methodics at DAC, with lots to show, and a team of people excited to talk to customers and potential customers alike. Methodics will also be giving away Pebble Smartwatches!

Methodics’ theme for DAC 2014 is “IP and design management done right”. A key part of this message is showing how their unique open approach – open interfaces, open architecture, and open data – not only helps customers and partners, but also sets them apart from competitors who follow the traditional closed EDA approach of locking customers into their solutions.

At their booth, Methodics will be demonstrating ProjectIC, their IP lifecycle management platform, and VersIC, their analog design data management platform.

For ProjectIC they will have three demos matching the different use models of the platform:

  • ProjectIC for Digital Design: Focuses on the design collaboration, release management and automatic bad-release rejection aspects of ProjectIC. These capabilities allow designers to focus on the core elements of design without needing to become DM and release experts. Bad-release rejection ensures that a bad check-in by a single designer does not stall the whole development team.
  • ProjectIC for Integrators: Shows how ProjectIC simplifies complex integration of modern SoCs containing tens or hundreds of IPs. Central, shared configurations help teams stay coordinated, while hierarchical release and bug tracking mean changes to underlying IP are propagated automatically (and correctly) up the design; a toy sketch of this propagation follows the list.
  • ProjectIC for Enterprise IP Management: Follows the lifecycle of an IP and shows how ProjectIC connects IP creators with consumers. A dynamic IP catalog is seamlessly integrated into a designer’s workspace to show the real-time status of the IP, while IP caching builds on capabilities in the underlying DM and file systems to make all IP appear local to designers and deliver real-time workspace creation, even for designs that pull IP from multiple repositories around the globe.
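As the toy illustration of the hierarchical release idea promised above (my own sketch; not ProjectIC’s actual data model or API), consider propagating a new IP release up a design tree:

```python
# Toy sketch of hierarchical IP release propagation: when a leaf IP gets a new
# release, every design that (transitively) uses it is flagged for a re-release.
# (Invented structure; ProjectIC's real model and APIs are richer than this.)

USES = {                      # design -> list of IPs/sub-designs it instantiates
    "soc_top": ["cpu_subsys", "ddr_phy"],
    "cpu_subsys": ["cpu_core", "l2_cache"],
}
releases = {"cpu_core": "1.2", "l2_cache": "2.0", "ddr_phy": "3.1",
            "cpu_subsys": "1.0", "soc_top": "1.0"}

def parents_of(ip):
    return [d for d, children in USES.items() if ip in children]

def propagate(ip, new_version):
    """Record a new release and walk up the hierarchy flagging consumers."""
    releases[ip] = new_version
    stale, frontier = [], parents_of(ip)
    while frontier:
        d = frontier.pop()
        if d not in stale:
            stale.append(d)
            frontier.extend(parents_of(d))
    return stale              # designs that need a coordinated re-release

print(propagate("cpu_core", "1.3"))   # -> ['cpu_subsys', 'soc_top']
```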

For VersIC they have a unified demo that shows DM and release management seamlessly integrated into the analog design environment (Cadence Virtuoso for the demo, but Synopsys Custom Designer is also supported) as well as their new verification management capabilities.

Beyond the demos, there are a number of other activities they are involved in:

  • IP Track Panel Discussion – Monday 4:00pm: The panel will discuss whether next-generation highly configurable IP is breaking design and verification. The other companies on the panel are Google, Codasip, and Duolog
  • Cadence Theater – Tuesday 12:30pm: Presents how Methodics IP and design management solutions seamlessly integrate into Cadence’s analog, digital, and mixed-signal design environments
  • ChipEstimate IP Talks – Wednesday 1:30pm: Explores the lifecycle of an IP and what is needed at each stage of the lifecycle

Throughout DAC they will be running a daily giveaway of a Pebble Smartwatch to a random person seen wearing their DAC button; the buttons can be collected at the Methodics booth (#1407).

Methodics delivers state-of-the-art semiconductor data management (DM) and IP lifecycle management for analog, digital and SoC design teams. Methodics’ clients for analog and digital designers integrate natively, making these capabilities seamless to users. Building its solutions on top of standard Subversion and Perforce infrastructure ensures data is safe and always available, and that the tools can take advantage of the latest advancements from the software configuration management community. Methodics’ highly scalable and industry-proven solutions are ideal for small specialized IP design teams as well as large multinational, multisite SoC design teams.

More Articles by Daniel Nenni…..



Dark Silicon
by Paul McLellan on 05-26-2014 at 5:29 pm

One of the problems with chips today is that of so-called “dark silicon”. We can put massive functionality on an SoC today. A billion transistors, and that is just at 28nm. But power constraints (both leakage and dynamic power) limit how much of the chip can be powered up at any one time. In some cases this is not that big an issue: if your cell-phone is not making a call then don’t power up the transmit/receive logic. But in other cases it is a huge problem. There is absolutely no point in putting a 16-core processor on a chip and then finding that 10 is the maximum number of cores that can be on at any one time. The cores are identical, so it is not like the transmit/receive logic case that I just mentioned.
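The arithmetic behind that 16-core example is worth making explicit. With invented but plausible numbers (mine, not from the article, and chosen so the answer matches its 10-of-16 illustration), a fixed power budget caps the number of simultaneously active cores regardless of how many fit on the die:

```python
# Back-of-envelope dark-silicon math (illustrative numbers only).
chip_budget_w = 4.0    # sustainable SoC power budget, e.g. a fanless tablet
core_active_w = 0.35   # per-core power when running full tilt (assumed)
uncore_w = 0.5         # always-on fabric, memory controller, etc. (assumed)
cores_on_die = 16

max_active = int((chip_budget_w - uncore_w) // core_active_w)
dark = cores_on_die - min(cores_on_die, max_active)
print(f"can power {max_active} of {cores_on_die} cores; {dark} stay dark")
```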

At the Linley Microprocessor Conference a couple of weeks ago, Drew Wingard, the CTO of Sonics, presented on this. The presentation was actually called Power Management Advances for Heterogeneous Mobile Systems, although as a marketing guy I prefer to talk about the problem of dark silicon.

They see it as an opportunity for further power optimization. One of the things a network-on-chip (NoC) buys you is that power management can be much more automated. The NoC “knows” if a block is powered down when a message arrives for it, and can buffer the message, power up the block and then deliver it. This makes it possible to do much more aggressive power management without depending on the embedded software people, who barely understand how the chip works, to do it by flipping register bits.
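That buffer-then-wake behavior can be sketched in a few lines of Python. This is a conceptual illustration only: the class and method names are invented, and Sonics’ actual hardware protocol is certainly richer (and not instantaneous, as modeled here).

```python
# Conceptual sketch of NoC-managed power: messages to a sleeping block are
# held while the network requests power-up, then delivered once it is awake.
# (Invented names; illustrates the behavior described above, nothing more.)
from collections import deque

class NocAgent:
    def __init__(self, name):
        self.name, self.powered, self.pending = name, False, deque()

    def receive(self, msg):
        if not self.powered:
            self.pending.append(msg)      # hold traffic for the sleeping block
            self.request_power_up()       # ask the system power manager
        else:
            self.deliver(msg)

    def request_power_up(self):
        print(f"[noc] power-up request for {self.name}")
        self.powered = True               # modeled as instantaneous here
        while self.pending:
            self.deliver(self.pending.popleft())

    def deliver(self, msg):
        print(f"[noc] {self.name} <- {msg}")

gpu = NocAgent("gpu")
gpu.receive("render frame 0")             # triggers wake, then delivery
gpu.receive("render frame 1")             # already awake, delivered directly
```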

There are also lots of chip-level techniques for reducing power: reduce the clock frequency; turn the clock off when nothing useful is being done on that part of the chip; power down the block completely. But for all of these to be done safely, it needs to be architected into the IP. There are huge savings but they are very hard to get at, for example, with a standard bus-based fabric: the bus must be powered if any of the attached cores are on, and then only the internals of a block can be powered off or left unclocked.

The alternative approach taken by Sonics is to use the network to enable power domains. The network itself can also be partitioned inside power domains. With intelligence in the network it is much easier to shut down power domains safely and automatically wake up components by holding traffic and then sending a request to the system power manager.

SonicsGN is a power-aware on-chip network that can cope with all of this. For example, the network can handle the SoC for a tablet computer, including domain partitioning, clock gating and domain on/off control. This enables much finer-grained power control, is safer, requires less CPU overhead and brings all sorts of other benefits (not least that, because it is in hardware, the CPU might not even need to be on). Potentially these approaches can save half of the total SoC power consumption.