Synopsys Acquires Coverity
by Paul McLellan on 02-19-2014 at 5:27 pm

Synopsys announced this afternoon that they are acquiring Coverity for $375M subject to all the usual reviews.

There are a couple of other big EDA connections. Aki Fujimura, who was CTO of Cadence, is on the board. And Andreas Kuehlmann is the VP of R&D; he used to run Cadence Berkeley Laboratories before moving to the other end of the Bay Bridge. Before I moved to the Mission District in San Francisco, the building I lived in backed onto Berry Street, and Coverity’s offices are just across the street. I interviewed Kuehlmann for DAC.com; he was the president of CEDA despite no longer really being in EDA, but as a software guy I’m interested in software development methodology anyway. I think it must be the shortest distance I have ever had to travel for an interview.


Although I’m sure Coverity sells software to groups developing software to run on large SoCs, their market is not so restricted and they serve the general software development market. The heart of their technology is a static analysis engine for software called SAVE (Static Analysis Verification Engine). Their main products perform a full static analysis of large code bases and find quality and security defects, including full interprocedural analysis rather than just one source file at a time. Another product finds holes in test coverage and prioritizes how to fix them.
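To make “interprocedural” concrete, here is a toy illustration (written in Python purely for readability; Coverity’s engine targets large C/C++ and Java code bases and is far more sophisticated). The defect is invisible if each function is checked on its own, but obvious once the analysis follows the call chain:

```python
# Toy example of a defect only interprocedural analysis can see.
# Function and variable names are invented for illustration.

def find_user(user_id, users):
    """Returns the user record, or None if the id is unknown."""
    return users.get(user_id)  # may return None

def display_name(user_id, users):
    user = find_user(user_id, users)
    # Bug: no None check. Looking at display_name alone, a checker
    # cannot know that find_user may return None; following the call
    # (interprocedural analysis) reveals the possible subscript of
    # None on the next line.
    return user["name"].title()

users = {42: {"name": "ada lovelace"}}
print(display_name(42, users))  # fine
print(display_name(7, users))   # TypeError: 'NoneType' object is not subscriptable
```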

This is an interesting acquisition since it isn’t firmly in the EDA space. Of course, Mentor has had products in the software space for a long time, but focused on embedded and SoC software, so not so far from its mainline business.

Or as the press release puts it: Software complexity and the resulting quality and security issues are dramatically increasing. Today, more than six million professional software developers across the world write more than 60 million lines of code every day, deployed to fulfill mission-critical, safety-critical and security-critical tasks. Many of those deployments are fragile or even failing, resulting in delayed or lost revenue, recalled products, loss of customer trust, and even safety issues. Since spinning out of a Stanford research project 10 years ago, Coverity has been developing revolutionary technology to find and fix defects in software code before it is released, improving software security. Bringing together the Synopsys and Coverity teams opens up opportunities to increase penetration into the semiconductor and systems space where Synopsys excels. The acquisition also enables Synopsys to enter a new, growing market geared toward enterprise IT and independent software providers that Synopsys doesn’t currently address.

The Synopsys press release is here.


More articles by Paul McLellan…


One SPIE session not to miss
by Beth Martin on 02-19-2014 at 4:19 pm

The time is nigh for another meeting of the practitioners of the lithographic arts, dark and otherwise, at the SPIE Advanced Lithography symposium.

I love this conference for the engagement you see, both in the sessions and in the hallways. People actually meet and talk and argue. There’s always interesting gossip, exciting technologies, and spirited debate about the future of lithography. Each year, you also see more DFM topics. SPIE is like the bridge of a great ship from which you can witness the merging of two, once-separate, seas. Just take a peek at the program and you’ll notice the significant presence of DFM, or more broadly (as one of the conferences is eloquently titled), “design-process-technology co-optimization for manufacturability.”

In fact, one-third of the plenary presentations on Monday, February 24, covers the topic of the design-manufacturing-test flow, specifically dealing with patterns throughout the design and manufacturing flow. The presenter is Joseph Sawicki, VP of the Design-to-Silicon division at Mentor Graphics, and the premise is that design-style-based, or systematic, defects have become major challenges to yield ramp. These defects are driven by the difficulty of lithography at advanced nodes. Part of the solution is to be found in EDA software. He will discuss some of these EDA-based yield solutions that span design, manufacturing, and test. He refers to this set of EDA tools as a “pattern-aware” EDA flow and says it will minimize risk and enhance manufacturing.
For example, there are powerful new methods of identifying pattern failures hiding in yield loss. While diagnosis-driven yield analysis has been around for a while, the new generation of this software-based diagnosis of test failures includes integration with DFM tools and new algorithms that remove the noise, or ambiguity, from the statistical analysis. In practice, this means finding the offending defect quickly and with high confidence.

Another EDA technology Sawicki will mention is new OPC methods. Mentor engineers have a number of papers at SPIE on model-based OPC, SEM-contour-based OPC model calibration, resist top-loss modeling, and neighbor-aware fragment feedback with “matrix” OPC, among others. Sawicki will also talk about technologies that will be ready to help find and fix failure mechanisms in emerging process nodes, and tools that give designers visibility into the risks of production.
Sawicki is a dynamic speaker, and the topic is timely. Process and yield ramps are under pressure at the emerging nodes, and I can verify that this trend has kicked EDA innovation into high gear. SPIE is February 23-27 at the San Jose Convention Center. Pre-registration ends February 19, so sign up now online. You can still register in person the day of the event.

More articles by Beth Martin…


Xilinx: Delivering a Generation Ahead
by Paul McLellan on 02-19-2014 at 4:15 pm

Last week was Xilinx’s investor day. Xilinx believes it is now a process generation ahead. The company did over $100M in 28nm designs in FY2013 (Xilinx’s fiscal year ended March 2013) and over $100M in calendar Q4 2013 alone (almost all true production volume, with only about 5% prototypes), with a plan for more than $350M in fiscal year 2014 (which ends in March 2014) and twice that in fiscal year 2015. That’s revenue momentum.
Continue reading “Xilinx: Delivering a Generation Ahead”


Brian Krzanich Does Reddit AMA
by Paul McLellan on 02-19-2014 at 1:03 pm

Do you know what an AMA is on Reddit? It stands for “ask me anything”. A person, often a famous one like Bill Gates (last week) but sometimes just someone who does an interesting job (like astronaut) or was in an interesting situation (like last week’s hijacking of an Ethiopian Airlines flight), answers questions from the Reddit community.

Today, it is someone who is arguably all three: Brian Krzanich, CEO of Intel. Most of the people asking questions are not semiconductor professionals, so they didn’t ask the questions I would have (and it was over by the time I discovered it had taken place).

He is very upfront about Intel missing the mobile transition:

How did Intel miss out on the mobile/tablet market and what is it doing now to compensate?

BrianKrzanich, 366 points, 1 hour ago
we wanted the world of computing to stop at PC’s…and the world.. as it never does… didn’t stop innovating.. The new CEO of Microsoft Satya said it well the other day… our industry does not respect tradition, it respects innovation…. i think he was 100% right.. and it’s why we missed the mobile move.

Did you always want to be a CEO? Did you come from a technical background?

BrianKrzanich, 59 points, 1 hour ago
nope.. thought i wanted to stick with the pure technical path…. then i realized the best way to drive technology was to run the company…

He is also clearly a total geek who used to water-cool his PC and overclock it to 4GHz:

Do you own a badass gaming PC with a crazy processor that isn’t on the open market?

BrianKrzanich, 104 points, 1 hour ago
no.. i used to have time to do that.. i used to build my own PC’s.. and actually had one of the first water cooled overclocked PC’s around. i ran it at over 4Ghz and this was back in 2001… but alas i do not have the time for that fun anymore

and what color socks is he wearing?

BrianKrzanich, 38 points, 1 hour ago
Brown

I knew that would be the first thing SemiWiki readers wanted to know.

Brian’s AMA is here.


More articles by Paul McLellan…


Carbon Design Systems – Secret of Success
by Pawan Fangaria on 02-19-2014 at 11:00 am


Last week, after learning from Carbon’s press release about its sustained growth, with record-breaking revenue and a thumping 46% increase in bookings, I was interested to know more about what drives Carbon to such performance in an EDA market that generally grows by only a few percentage points even when economic circumstances remain favourable. I was fortunate to talk with Bill Neifert of Carbon Design Systems, who provided great insights into the value-added business model driving this kind of growth at Carbon, and also into the ESL (Electronic System Level) segment. Here is the conversation –

Q: Bill, I know Carbon is among the very early movers into the ESL area with its virtual prototyping product, SoC Designer. What are the various offerings you have that are gaining traction?

Carbon has been in the virtual prototyping space for nearly twelve years now, first with our Carbon Model Studio product and then with Carbon SoC Designer. We rolled out Carbon IP Exchange a few years ago to meet the huge demand for model creation through an automated web portal. It’s really our CPAKs (Carbon Performance Analysis Kits), though, that have been driving growth. While our other offerings are models and tools to create your own virtual prototypes, CPAKs are completely pre-built virtual prototypes and software which enable designers to be productive within minutes of download. If needed, designers can then customize them very easily to more closely model the behaviour of their own system. We’ve found that by using CPAKs, we can enable users to be quickly productive across design tasks ranging from architecture analysis to system-level performance optimization.

Q: I notice that you used the term virtual prototyping instead of ESL. What do you see as the difference?

ESL seems to mean something different to everyone. For some people it means high-level synthesis (HLS). For others, it means a virtual prototype representation of the system. Another set of designers may use it to mean an architectural conception of the system as a starting point for successive refinement. Although I see lots of design teams using various parts of this definition, it’s pretty rare to see people using the entirety of the design flows which can be lumped into the ESL term. We like to stick with the term virtual prototype, but even that term has some ambiguity, since a virtual prototype can sit at differing levels of abstraction depending upon the models it contains. Our SoC Designer-based virtual prototypes typically have multiple layers of abstraction depending upon the design need being addressed (ARM Fast Models for the software engineer, 100% accurate ARM models for the architect). Other virtual prototypes tend to stick to a single abstraction level and therefore a single use case.

Q: Lately, I see that Carbon’s focus has been on providing a complete solution for SoC design: creation of accurate virtual models, Carbon IP Exchange with web-based support, and so on. I would like to hear your comments on this.

Carbon IP Exchange is really a software-as-a-service (SaaS) application of Carbon Model Studio targeted at specific IP. For example, our relationship with ARM provides Carbon with access to ARM’s RTL to compile and instrument 100% accurate models based upon their IP.

We rolled out the Carbon IP Exchange website to automate this model creation task. It walks the user through the configuration options for each piece of IP; as each option is chosen, the form updates itself to allow only valid choices for the remaining options. This way, when the form is complete it corresponds to a valid configuration of the RTL, and the resultant model is therefore correct by construction based upon the configuration options supported in the RTL.
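A minimal sketch of the “only valid choices” idea behind such a portal (the option names and dependency rules below are invented for illustration; Carbon’s actual IP Exchange logic is not public):

```python
# Hypothetical sketch of dependent configuration options: each choice
# narrows what remains selectable, so a completed form is valid by
# construction. Option names and rules are made up.

RULES = {
    # cache sizes (KB) offered depend on the core count already chosen
    "cache_size_kb": lambda cfg: [32, 64] if cfg.get("num_cores", 1) <= 2 else [64, 128],
    # ECC is only offered on the larger caches
    "ecc": lambda cfg: [True, False] if cfg.get("cache_size_kb", 0) >= 64 else [False],
}

def valid_choices(option, cfg):
    """Values the form should offer for `option`, given choices so far."""
    return RULES[option](cfg)

cfg = {"num_cores": 4}
cfg["cache_size_kb"] = valid_choices("cache_size_kb", cfg)[0]  # -> 64
cfg["ecc"] = valid_choices("ecc", cfg)[0]                      # -> True
print(cfg)  # a configuration that is valid by construction
```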

Along with the models, we also have a growing collection of Carbon Performance Analysis Kits (CPAKs). These are pre-built virtual prototypes plus system software, user-extensible and shipped with source code to enable simple customization.

Carbon IP Exchange is a vital part of our customers’ design infrastructure; it’s available around the clock. It has already been used by designers to create over 5,000 models, and that doesn’t include the several thousand additional models included in our CPAKs. This makes designers much more productive: they can concentrate their efforts on actual design and debug rather than worrying about model creation or the correctness of their models.

Q: How does the web-based Carbon IP Exchange work? Can any third-party IP be modeled on the fly as per SoC design needs? How about licensing?

Any third-party IP could utilize Carbon IP Exchange. After all, we’re using the Carbon Model Studio compiler to generate the models from the original RTL, and that technology has been successfully used on projects for over a decade. We’ve tended to focus on IP which our customers are using in their own SoC designs, which is why you see most of the top IP providers represented on the portal. We still, of course, provide the Carbon Model Studio tool to design teams as well, to enable them to compile models from their own RTL or from IP providers who aren’t currently on Carbon IP Exchange.

Q: I recollect from the press release that Carbon Performance Analysis Kits (CPAKs) are seeing rapid adoption by SoC design houses. How do they fit into the overall solution?

We rolled out CPAKs about 18 months ago and they are now in use at a majority of our customers. The reason is quite simple: they get designers up and running much more quickly. CPAKs are pre-built virtual prototypes and software targeted at various use cases. IP-level optimization typically requires bare-metal software for configuration and benchmarks; driver development is generally done on a more complete system with a ported version of Linux or Android; and a designer doing OS-level performance optimization generally wants a complete system together with benchmarking applications. We have CPAKs targeted at each of these use cases and package them with source code for both the virtual prototype and the software which runs on top of it. This means they can get you up and running quickly and also be easily modified to represent the actual configuration of your system. Using CPAKs, it’s not unusual for a designer to be productive within minutes of download; it would take much longer to assemble that system yourself.

Q: My observation from our conversation is that this is a huge productivity gain for SoC designers, which is driving them to increasingly use Carbon’s solutions for fast and accurate virtual prototyping that leads to silicon success in a shorter time. Am I right in assuming that?

When you’re designing an SoC you typically have a bunch of questions: how fast will it run? What IP do I need? How much power will it consume? How do all the parts work together? Many more questions come to mind as well. The faster these questions can be answered, the sooner the project can be successful. The evolution of Carbon’s solutions is focused entirely on reducing the time needed to answer these design questions.

Q: O.K., looking at customers, I guess the U.S. is always leading in newer as well as mature businesses. You say there is strong growth in Asia as well; can you elaborate a little on that? Is it because of the many design and IP companies coming up in that region?

Virtual prototyping has always had a slightly different geographical sales mix than other EDA products. Japan was easily the strongest geography for many years, but we’ve been seeing a dramatic shift over the past few years as the US takes on a more system-focused design flow and as companies in Korea, China and Taiwan do more and more designs. We grew so much last year by expanding dramatically in certain strategic accounts, but also by acquiring accounts in additional vertical markets. ARM’s new 64-bit processors are being adopted by companies looking to build servers, storage and network devices. This is opening up a new set of customers to us, as designers in those markets adopt the same tools which have been used in the markets where ARM is already dominant.

Q: Would you like to further elaborate on continuing strategic partnership with Samsung and future roadmap?

Not yet, stay tuned!

Q: It’s interesting to know about such progress happening in this space. One last question: what’s your view on ESL (i.e. the use of virtual platforms, high-level modeling, etc.) entering the mainstream of semiconductor design practice, at least for SoC designs?

I honestly believe that virtual prototyping is finally in the mainstream of design practice for most SoC designs. I used to spend a lot of time talking to designers about why they should design using virtual prototypes; now that time is focused more on discussing the ways in which they can use the virtual prototypes they have. Most design teams seem to be using virtual prototypes in at least part of their design flow. The key now is expanding that use throughout the entire flow to enable even greater value. EDA companies are successful when they remove design bottlenecks, and CPAKs have demonstrated that they get designers productive more quickly than any other solution.

This was an interesting discussion with Bill. Now I can truly visualize how virtual prototyping adds value to SoC design and how Carbon has made the process of virtual prototyping simple enough for it to enter the mainstream of semiconductor design.

More Articles by Pawan Fangaria…..



Verifying DRC Decks and Design Rule Specifications
by Daniel Nenni on 02-19-2014 at 8:00 am

DRVerify is part of the iDRM design rule compiler platform from Sage DA, something that I have been personally involved with for the past three years. DRVerify is mainly used to verify third party design rule check (DRC) decks and ensure that they correctly, completely and accurately represent the design rule specification. In addition, DRVerify can be used to clarify and validate the exact meaning of a complex design rule description, as well as to ensure design rule consistency and prevent potential conflicts between different rules.

Using the iDRM design rule definition pattern and its logical expression, DRVerify systematically generates variations of the design rule and creates a comprehensive set of layout test cases that manifest all meaningful combinations of the design rule parameters that can make it pass or fail.

DRC deck programmers use DRVerify to automatically generate an exhaustive set of pass and fail test cases, then run their DRC code on the generated GDS file to check that it finds all the fail cases and does not flag any of the passing ones. Design rule manual (DRM) teams use DRVerify to visualize their design rule expressions and check their boundary conditions, to ensure the formal expression accurately reflects their intent. DRVerify also indicates any possible conflicts with other existing design rules.

Verifying DRC Decks (DRC Runsets)
Creating a correct and accurate check of a complex design rule is almost impossible: the probability of making errors is high, compounded by the ambiguity of the design rule description, the difficulty of fully covering all possible situations, and the complexity of evaluating each situation and determining its legality with respect to the design rule specification.

The fundamental challenge is that there is no formal way or methodology to verify the DRC code against the design rule definition, or any other golden reference, to ensure its correctness. To minimize the risk and release higher-quality runsets, PDK teams create QA test cases for each design rule. These test cases are layout snippets that manifest both violating (fail) and legal (pass) configurations. The DRC code is then run on these test cases and the coders check that the code flags an error for each of the “fail” test cases and does not flag any of the “pass” cases.

DRVerify offers a formal and automated methodology to generate such test cases. Starting from the source, the design rule specification in iDRM, the DRVerify geometry engine exercises both the topological properties of the design rule pattern and the numerical values of its logical expression, in a systematic and comprehensive manner. The result is an exhaustive set of layout test cases that manifest all meaningful combinations of the design rule parameters that can make it pass or fail. Each test case is tagged and annotated with the specific combination of parameters that makes it pass or fail. The entire collection is written to a GDS file, where all the violating test cases are grouped and placed below a dividing line and all passing cases are placed above it, to clearly distinguish between them.

Clear and formal design rule description in iDRM generates hundreds of DRC test cases

The DRC deck under test is then run on this GDS file. A correct DRC deck must flag as errors all the test cases in the fail group below the line, and must not flag any of the passing test cases above the line. Any other result indicates a discrepancy between the deck and the design rule specification, and needs to be further evaluated and probably corrected. The annotated value combination of each test case pinpoints the specific discrepancy and helps the user debug and correct the DRC code.
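A toy sketch of this generate-and-check flow for a single hypothetical minimum-spacing rule (the real tool works from the full iDRM pattern and logical expression, sweeps many parameters, and writes annotated GDS; none of that is reproduced here):

```python
# Toy generate-and-check flow for one hypothetical rule:
#   "spacing between two metal shapes must be >= 0.10 um".
# Real flows emit annotated GDS layout; here a "test case" is just a
# spacing value tagged with whether the rule says it should fail.

MIN_SPACING = 0.10  # um, the rule being verified

def generate_cases(step=0.02, span=3):
    """Sweep spacings around the rule boundary and tag each case."""
    cases = []
    for k in range(-span, span + 1):
        spacing = round(MIN_SPACING + k * step, 3)
        if spacing <= 0:
            continue
        cases.append({"spacing": spacing,
                      "expect_violation": spacing < MIN_SPACING})
    return cases

def buggy_drc_deck(spacing):
    """Stand-in for the DRC deck under test; the off-by-one comparison
    (<= instead of <) is the kind of bug the test cases expose."""
    return spacing <= MIN_SPACING  # True means "violation flagged"

for case in generate_cases():
    flagged = buggy_drc_deck(case["spacing"])
    if flagged != case["expect_violation"]:
        print(f"DISCREPANCY at spacing={case['spacing']} um: "
              f"deck says {'violation' if flagged else 'clean'}, "
              f"spec says {'violation' if case['expect_violation'] else 'clean'}")
```

Running this reports exactly one discrepancy, at the 0.10um boundary itself, which is precisely the kind of boundary condition the generated test cases are meant to expose.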

Verifying DRC deck using the generated test cases (only error marker layer shown)

Validating Design Rule Descriptions: Design rule descriptions are written today in various forms using drawings, tables and free form text. Advanced technology (20nm and below) design rules have become so complex that it is very hard to verify that such a description completely and accurately represents the design rule intent.

To address this problem, DRVerify uses the formal iDRM design rule description and automatically creates layout snippets showing how the rule can manifest itself given the specified drawing and logical conditions. DRVerify acts like an animator: it acts out the written spec and creates a set of design rule instances that visualize the design rule expression and highlight the possible corner cases and boundary conditions that can flip a case from pass to fail and vice versa.

This visualization shows immediately what the design rule spec actually means, which cases pass, which fail, and where the possible hidden ‘gotcha’ situations are. The user can immediately spot unexpected aberrations, such as violations that should have been legal and vice versa.

Design Rule Consistency: Advanced-technology DRMs hold thousands of design rules, some of them very complex. Most layout configurations are subject to multiple, overlapping design rules, meaning that certain objects or distances in the layout need to satisfy different conditions specified in multiple design rules. All these rules need to be consistent, and one design rule cannot conflict with another. Without a formal automated system, this is almost impossible to verify by hand.

DRVerify addresses this problem: when a design rule is being exercised, DRVerify keeps all other design rules fixed in the background. If a specific layout instance requires setting values that conflict with any of the other rules, DRVerify issues a warning indicating the conflict. The user can then review the conflicting conditions between the rules and determine how to resolve them. Note that conflicting conditions might also exist within a single complex design rule.

DRVerify is part of the iDRM platform. iDRM uses a formal and clear design rule description as a single source for everything: the design rule specification, the checker and the test structures. Using iDRM, a design rule is defined only once, and all its other facets and derivative manifestations are generated automatically, correct by construction, and always in sync.

DRVerify enables the development of higher-quality DRC decks, much faster and with less effort. In addition, it can also be used to develop clear, unambiguous and consistent design rule specifications.

DRVerify is part of the iDRM Platform

Also read:

iDRM for Complex Layout Searches and IP Protection!

Analyze your physical design: avoiding layout related variability issues

iDRM Brings Design Rules to Life!

Sage Design Automation launched, with design rule compiler technology and products

More Articles by Daniel Nenni…..



The Future of Money is Digital – BitCoin Introduction
by Sam Beal on 02-18-2014 at 5:00 pm

By now most people who read or listen to the news know something about Bitcoin (BTC). Most have the perception that it is either the currency of crime or of speculation. Put aside those perceptions and consider this: when was the last time you saw someone write a check in the grocery store, especially someone under 60? Today’s kids will shop with their smart gadgets, not with a credit card, and certainly not with “folding money”, as my grandfather called it.
Continue reading “The Future of Money is Digital – BitCoin Introduction”


Smart cards hard for the US to figure out?
by Don Dingee on 02-18-2014 at 3:30 pm

Every once in a while, I just scratch my head and wonder just what in the wide, wide world of tech is going on. More than ever, it seems the big barriers to adoption aren’t a lack of technology – instead, barriers come from a system that staunchly defends the old way of doing things, even when the participants are battered, broken, and bleeding.

Consider smart cards, for instance. We have had both the international standardization and the microcontroller and RF technology available for some time. Smart cards are routinely used in 130 countries worldwide – but for the most part, not the US, outside of the well-known mobile phone SIM module. Financial transactions in the US still mostly rely on magnetic stripe technology. Why can’t we get on board?

Part of the answer lies in sheer size. By some estimates, changing the US financial system – card issuers, retailers, and almost every consumer – to a smart card transaction infrastructure could cost as much as $35B. Smart cards themselves, each carrying a microcontroller and non-volatile memory supporting encryption of stored information, are about five times more expensive to produce compared to trivial mag stripe versions (but maybe not the versions of cards with holographic logos, bearer photos, and other features).

There is also security to consider. Ironically, the recent Target breach could be the straw that finally breaks the camel’s back and lowers the resistance to smart cards. Mag stripe cards are trivial to counterfeit, but smart cards are a much more difficult nut to crack for forgers. However, there is controversy over whether the two-factor authentication method for smart cards should be chip-and-PIN, or chip-and-signature. As many pundits point out, while these cards provide more security in physical transactions, in an economy increasingly moving to online purchases, smart cards don’t create much of a change.
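To make that counterfeiting argument concrete, here is a toy challenge-response sketch (generic HMAC in Python; real EMV cards use a different, standardized cryptogram scheme, so treat this strictly as an illustration of static versus dynamic authentication data):

```python
# Toy contrast between a mag stripe (static, replayable data) and a
# chip that computes a fresh cryptogram per transaction. This is NOT
# EMV; it only illustrates why the chip is harder to clone.
import hashlib
import hmac
import secrets

CARD_SECRET = secrets.token_bytes(16)  # never leaves the chip's secure storage

def magstripe_read():
    # Every swipe yields the same static track data; a skimmer that
    # copies it once can replay it indefinitely.
    return b"static track data: PAN, expiry, CVV1"

def chip_respond(terminal_challenge: bytes) -> bytes:
    # The chip signs a fresh challenge for each transaction, so a
    # captured response is useless for the next transaction.
    return hmac.new(CARD_SECRET, terminal_challenge, hashlib.sha256).digest()

challenge = secrets.token_bytes(8)       # issued per transaction
print(magstripe_read())                  # identical every time -> clonable
print(chip_respond(challenge).hex())     # different per challenge -> not replayable
```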

I remember when the buzz on NFC was that it was going to take over payments. Following that trail, I wandered around CTIA Mobile in San Diego in the fall of 2011, asking a few vendors what they thought. The response was a bit startling: retailers won’t change their infrastructure. It’s a lesson we learned from RFID in the previous decade, when item-level tagging and seven-cent chips were going to sweep the universe and make everything “smart”, displacing that rotten old barcode technology.

Things didn’t happen that way. Smooth-talking marketers said it was all about the use case, and that these new technologies didn’t create a big enough change to justify investment. Those same marketers also managed to siphon most of the energy out of the term “smart”, nullifying it with the suggestion that it offered little but extra cost for consumers. (See: smart grid.)

The other blowback is being “too far in front of the bus”. That usually comes from salespeople making a comfortable living selling mostly old stuff to their mostly old customer buddies. The problem: embedded life cycles are really long: two, three, sometimes five years. If you miss the bus, your best-case outcome is running like crazy for the next two years to catch the next one. The worst case is it goes by and, as Tom Peters succinctly put it, you wake up dead one day without understanding what happened. Sometimes it’s better to be hit by the bus – it can be a great call to action.

The smart card bus is here, in a system riddled with hacking, fraud, and identity theft, and some big names are getting run over and hurt badly because switching from mag stripes was sold to a lot of people as too expensive. We got what we asked for by avoiding the fairly obvious. Now, how do we get what we really need?

It starts at the building block level, and dealing with the myth that smart cards have to be expensive to produce. An ultra-low power MCU and NVM with 10-year-plus data retention isn’t that costly any more, with 8-bit engines under 25 cents, and 32-bit engines under $1 and dropping thanks to IoT demand. Those costs are offset by reduced card replacement due to mag stripe erasure (like setting your phone on your wallet), and in response to all-too-frequent identity compromise. Some NVM technologies, like Sidense 1T-OTP, are also secure against physical inspection attacks – without visually revealing the state of programmed memory cells, hackers can’t reverse engineer the application code or encryption keys easily.

Next, we have to get over this “absolutely secure” excuse of postponing change while waiting for a perfect solution. Signatures should have gone out with the Declaration of Independence and are totally non-secure. Two-factor authentication schemes using PINs are pretty good. I actually like the two-factor NFC approach using a smartphone, but that’s another discussion. US banks and consumers need to just get behind the EMV smart card standards and chip-and-PIN, and get over the IT changes needed to make it happen. Nothing is bulletproof, but what we have now looks like Swiss cheese in comparison to what we should have – and attacks are only going to increase the longer we wait.

Of course, there are still lawyers to deal with, and they may be the ultimate barrier to progress. I enjoyed this analysis in the NY Times:

… Visa and MasterCard have both set forth timetables that attempt to institute the adoption of embedded-chips technology by the fall of 2015. Although the timetables are not mandatory, they would essentially shift the liability for card losses on to whichever side — the bank or the retailer — has the least secure technology.

That is world-class FUD if I’ve ever seen it, but unfortunately it is exactly the type of misunderstood risk a lawyer would use to stop change in its tracks. I may switch careers to become an expert witness: “Well, their card uses an MCU with known security vulnerabilities ….”

Seriously, the time has come for smarter payment systems – smart cards, NFC-enabled phones, anything but mag stripes – in the US. Technologists need to lead this charge and debunk some of the myths surrounding “smart”, communicating the benefits to consumers more clearly. I hope I’m part of that. What are your thoughts on this melee?


Verification of Power Delivery Networks
by Paul McLellan on 02-18-2014 at 2:43 pm

Power delivery networks (PDNs) are the metal structures on a chip that deliver power. In a high-end desktop SoC this might mean delivering as much as 150W, and with supply voltages around 1V that means over 150 amps of current. Clearly, getting the PDN correct is critical for a correctly functioning chip. One of the challenges in verifying the PDN is that early in the design the precise circuits are not finalized, and no vectors are available to perform the verification.

It is no longer possible to simply over-design the PDN since chips are increasingly routing area-limited. Instead, there is a requirement to efficiently perform PDN verification to ensure that all the currents and terminal voltages remain within specification, and that the line currents remain within limits protecting the grid from reliability issues.

The traditional approach to power grid verification is power grid analysis (PGA), which employs circuit simulation to check the grid voltages and currents in response to a current stimulus that represents the pattern of activity of the underlying circuit. The current stimulus ideally is the result of multiple simulation runs of the underlying transistor circuit, but that technique is prohibitively expensive. The more practical (and typical) approach is to generate the current stimulus based on specific workload scenarios (operational modes) of the underlying circuit.

Because of this need for a current stimulus, power grid analysis suffers from certain limitations. Naturally, to guarantee safety and reliability under all conditions, designers are interested in worst-case behavior on the grid, yet there is no known method for finding the true or realistic worst-case behavior without an exhaustive analysis, which is prohibitively expensive. In the real world, designers are reduced to generating the current stimulus based on typical or representative workloads, neither of which is easy to define and neither of which, in any case, is sufficient to guarantee grid safety under all conditions. In fact, typical-case analysis only provides a lower bound on the true worst-case voltage or current variations on the grid. To make things worse, it is extremely important to perform some type of grid verification during early design planning, but information on circuit workload is often simply unknown early in the design flow.

Existing power grid analysis, then, effectively shifts the burden onto the designer, who is required to provide the workload patterns for which the grid will be verified. A superior approach would obviously not burden designers to this extent. That is the promise of vectorless verification—a verification that does not require any user-input stimulus.


However, this notion of a truly vectorless verification approach is simply impossible to realize. Given any power grid that presumably has been checked and verified by such a hypothetical engine, designers can always envision an underlying circuit that would draw a large enough current to make the grid unsafe. We will never have an ideal vectorless verification approach for the power grid.

Given, then, that power grid verification must require some information about the current stimulus, the burden is on EDA developers to minimize the amount of information required. We describe an approach that aims to achieve this goal. Such an approach is not computationally cheap, but it gives a good upper bound on the true worst case, and it holds the promise of leading to practical approaches for certain scenarios, especially for early design planning.

One simple case is to assume all the currents are DC, also known as static verification. Only the grid resistance is relevant and the voltage drops can be calculated.
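As a minimal numerical sketch of that static case (numpy, with an invented three-node resistive grid; a real PDN has millions of nodes and needs sparse solvers):

```python
# Static (DC) IR-drop sketch: with all currents DC, the grid reduces to
# a resistive network and the node voltage drops follow from
# G * v_drop = i, where G is the conductance matrix and i holds the
# block current draws. The 3-node grid below is invented.
import numpy as np

G = np.array([[ 30.0, -10.0,   0.0],    # siemens; diagonals include the
              [-10.0,  25.0,  -5.0],    # branches to the supply pads
              [  0.0,  -5.0,  15.0]])

i = np.array([1.5, 2.0, 0.8])           # DC current drawn at each node (A)

v_drop = np.linalg.solve(G, i)          # IR drop at each node (V)
vdd = 1.0
print("node voltages (V):", np.round(vdd - v_drop, 3))
print("worst static IR drop: %.3f V" % v_drop.max())
```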

The problem is much harder in the dynamic case, where transient currents are allowed and RLC parasitics all become relevant. Exact solutions become prohibitively expensive, and we see that bounds on the solution must be sought for practical use. The constraints remain DC, but all current and voltage signals are transient over time.
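Staying with the static setting for simplicity, the constraint-based (vectorless) idea can be sketched as a small linear program: the designer supplies only current budgets (per-block maxima plus a total budget), and the tool maximizes the drop at each node over every current pattern those budgets allow, giving an upper bound on the true worst case. The formulation below is a toy simplification, not the method in the white paper:

```python
# Vectorless-style bound in the static setting: the largest IR drop any
# current pattern satisfying the budgets could produce. G is the same
# invented 3-node grid as above.
import numpy as np
from scipy.optimize import linprog

G = np.array([[ 30.0, -10.0,   0.0],
              [-10.0,  25.0,  -5.0],
              [  0.0,  -5.0,  15.0]])
R = np.linalg.inv(G)                  # v_drop = R @ i, and R >= 0 here

i_max = np.array([2.0, 2.5, 1.0])     # per-block current budgets (A)
total_budget = 4.0                    # budget on the sum of all currents (A)

worst = []
for node in range(3):
    # maximize R[node, :] @ i  ==  minimize -R[node, :] @ i
    res = linprog(c=-R[node],
                  A_ub=np.ones((1, 3)), b_ub=[total_budget],
                  bounds=[(0.0, m) for m in i_max])
    worst.append(-res.fun)

print("worst-case drop bound per node (V):", np.round(worst, 3))
```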

Mentor has a new white paper, Vectorless Verification of IC Power Delivery Networks, which covers the topic in a lot more detail. The author is Farid Najm of the University of Toronto. It can be downloaded here.


More articles by Paul McLellan…


Is Smartphone Market Maturing?
by Pawan Fangaria on 02-17-2014 at 12:00 pm

Yes and no, in my view. Yes, to a certain extent, considering that most people in the developed world have more than one phone (perhaps with dual SIM cards); and no, considering the vast untapped market in the developing countries of Asia and Africa. In India, much of the population (those who can afford a phone) has a phone, but not a smartphone, and the same is likely true in many other countries. So what these countries need is an affordable smartphone, and that need has been well recognized by Chinese companies (and by a few companies in India as well, though many of the components are sourced from China).
Continue reading “Is Smartphone Market Maturing?”