
Bringing Sanity to Analog IC Design Verification

Bringing Sanity to Analog IC Design Verification
by Daniel Payne on 05-24-2013 at 1:07 pm

Two weeks ago I blogged about analog verification and it started a discussion with 16 comments, so I’ve found that our readers have an interest in this topic. For decades now the Digital IC design community has used and benefited from regression testing as a way to measure both design quality and progress, ensuring that first silicon will work with a higher degree of confidence.

So, what can be done to make automated Analog Verification a reality for the average IC designer? In the Analog design space, a subset of testing will always remain manual. Often there is no way (or desire) to replace pulling up a SPICE waveform or looking at an eye diagram. But there is a large class of testing that can be scripted and automated.

In the last blog in this series, I looked at how a tool like VersIC from Methodics can help you discover, run and track the history of scripted tests right from within your Cadence environment. If you are willing to absorb the initial cost of setting up automation scripts, the benefits are manifold. Scripted tests can be standardized and run multiple times in identical fashion. Tests can be tracked, ported and shared easily across cells, libraries and designs.

Once some scripted tests are available, an additional benefit can be obtained – automatic regressions. A regression is a collection of tests that are run as a group. This group can then be manipulated, tracked and managed as a single entity.


Regressions as collections of scripted tests in Virtuoso

Grouping tests into regressions makes it very easy to run a consistent set of checks on a design change. Tests that are related in some way – say LVS/DRC, port checks and RTL consistency checks – can all be grouped into a single regression for hand-off criteria. There is then no need to remember whether all the hand-off criteria were met – simply run this regression and it ensures that all the checks were run.
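As a rough illustration only – this is not the VersIC interface, and the script names below are hypothetical – a hand-off regression can be thought of as a named group of scripted checks that is always run as a unit:

```python
import subprocess

# Hypothetical hand-off regression: a named group of scripted checks.
# The script names are placeholders, not real VersIC/Virtuoso commands.
HANDOFF_REGRESSION = {
    "lvs":             ["./run_lvs.sh", "opamp_core"],
    "drc":             ["./run_drc.sh", "opamp_core"],
    "port_check":      ["./check_ports.py", "opamp_core"],
    "rtl_consistency": ["./check_rtl_consistency.py", "opamp_core"],
}

def run_regression(regression):
    """Run every test in the group and collect pass/fail results."""
    results = {}
    for name, cmd in regression.items():
        rc = subprocess.run(cmd).returncode
        results[name] = (rc == 0)
    return results

if __name__ == "__main__":
    results = run_regression(HANDOFF_REGRESSION)
    failed = [name for name, ok in results.items() if not ok]
    print("hand-off regression:", "PASS" if not failed else "FAIL (" + ", ".join(failed) + ")")
```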

Another example is grouping ADEXL functional tests as a regression. Each library or cell can have its own functional tests, run through a common ADEXL automation framework, so that essentially the same regression is run on all the cells of a library.

When handing a design off to integration, a good practice would be to run some critical subset of your tests as a handoff – or ‘sanity’ – regression. This regression, commonly referred to as a ‘smoke’ regression in Digital Verification circles, ensures that minimum quality is met before integration is attempted. This way, the integration team only concentrates on issues at the subsystem or system level, knowing that the individual cells or libraries are consistent.


Regression Progress Graph

Regressions can also be tracked for progress – plotting the number of passing and failing tests in a regression over a period of time is a good indicator of the health of the design.
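As a minimal sketch of that kind of tracking – assuming each nightly regression simply appends its pass/fail counts to a CSV file, with the file name and format invented here for illustration – the history can later be plotted to show progress:

```python
import csv
import datetime

HISTORY_FILE = "regression_history.csv"  # hypothetical location

def record_regression_run(results):
    """Append today's pass/fail counts so regression health can be plotted over time.
    `results` maps test names to True (pass) or False (fail)."""
    passed = sum(1 for ok in results.values() if ok)
    failed = len(results) - passed
    with open(HISTORY_FILE, "a", newline="") as f:
        csv.writer(f).writerow([datetime.date.today().isoformat(), passed, failed])

record_regression_run({"lvs": True, "drc": True, "port_check": False})
```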

Think of regressions as a powerful communication tool. Regression results automatically provide a view into the current state of the design. The tests included in a regression are an indicator of which tests are important. Passing or failing regressions automatically become gating criteria for acceptance of a design.

Summary
Regression testing is a relatively new discipline for AMS design, but it brings real benefits, and Methodics has EDA tools that will help you add automated regression testing to your IC design flow.

If you’re traveling to DAC in Austin, visit the engineers at Methodics in booth #1731 and ask for Simon Butler.



The Internet of Things

The Internet of Things
by Paul McLellan on 05-24-2013 at 1:02 am

There is a lot of talk about the Internet of Things (IoT) and how everything is going to be connected to the internet. For some reason the archetypal example is a refrigerator that knows what you are nearly out of and puts it on your shopping list. Or orders it from the store. Or something. This seems pretty high on the list of things I don’t need. If my self-driving car would go and gas itself up without me, that would be great, but I don’t need a special delivery of milk and bacon. Well, OK, who wouldn’t want a delivery of bacon.

Here’s one of those little incremental steps on the way that seems neat. How about a fridge magnet. Boring. From a pizza company. A little less boring. With a button on it. When you press the button, the magnet (well, the electronics inside) links by Bluetooth to your cell phone and orders a pizza. It is all confirmed by text in case you want to change anything about the order. And the pizza shows up in 30 minutes.

Of course it isn’t available everywhere yet. Silicon Valley? Forget it, it is only available in Dubai, which wouldn’t have been the first place I’d have picked. It has been so popular that they ran out of them and there is a six-week delay while they manufacture more.

See a video showing how it works (1½ minutes).


PinPoint in Practice

PinPoint in Practice
by Paul McLellan on 05-23-2013 at 10:40 pm

I talked with a mystery person earlier this week. I would love to tell you his (or her) name and the company he (or she) works for, but they are the sort of company that doesn’t casually endorse any suppliers, so it all has to remain anonymous. But they have been a customer of Pinpoint, which came from Tuscany Design Automation until Dassault Systemes acquired the company late last year. They have been using it in production designs for about 18 months.

Like most big semiconductor companies, they have multiple homegrown internal dashboards. But Pinpoint is unique since it does not just pull status reports and metrics together in one place; it provides all the information needed to visualize a problem, debug the problem, plan the fix and make decisions from within a single browser window. Without the information Pinpoint provides, it is often unclear what approach should be taken: if a path misses timing, is it an RTL problem, a routing congestion problem, a bad power grid, poor placement, or a bad floorplan? All of these are possible reasons, and with only a timing report and perhaps the layout it is almost impossible to guess which is the root cause and thus whose job it is to try and fix it.


Unlike internal dashboards, Pinpoint makes it possible to look at the actual paths of interest (not making timing, say), see where they run in the layout and overlay them with other relevant information. The company now uses Pinpoint to drive their weekly project meetings when they are in design closure. They also use it as a focus to get RTL designers more involved, since all the information is available through a browser window. The design team goes all the way from RTL to GDSII with very tough timing and power requirements, so RTL designers need to work closely with the physical team; they are not “just” taking pre-designed RTL IP from another group and integrating it into an SoC.

They are also starting to use Pinpoint as a way to communicate to geographically dispersed teams. Most of the team is in one site but there are people at a couple of other sites and PinPoint makes it easy to have a common view for discussion.

They have customized Pinpoint in a number of ways: first to add internal metrics that they sometimes use, and also to overlay information from different sources such as power analysis maps from specialized power analysis tools, congestion maps from routers, or thermal maps. This is something that is hard to do in the individual tools since they are usually designed only to display their own data graphically.


Another big saving is that Pinpoint provides visualization of the physical design, timing paths and so on without pulling down an expensive license of PrimeTime or the place & route tools during analysis and debug. It is not just a financial saving; there is also a big saving in time from avoiding the need to reload the whole design or block.

During the physical design regression (the nightly run) all the data is automatically updated into Pinpoint. When a designer is playing around with multiple experiments, he or she can flag the best run and control the collection of the metrics. Pinpoint can track the historical progression of the various metrics and the status of the design, and that data is always maintained and visualizable even if the original source data (DEF, LEF, PrimeTime reports, etc.) has to be deleted to make disk space available for future iterations.
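Conceptually – and this is only a sketch of the idea, not Pinpoint's actual interface – each flagged run contributes a small snapshot of metrics that survives even after the heavyweight source data is deleted:

```python
import json
import time

def snapshot_run(run_name, metrics, history_path="metrics_history.jsonl"):
    """Append a small, permanent record of a run's metrics (the metric names here
    are illustrative), so trends remain plottable long after the DEF/LEF/timing
    reports themselves have been deleted to free disk space."""
    record = {"run": run_name, "timestamp": time.time(), **metrics}
    with open(history_path, "a") as f:
        f.write(json.dumps(record) + "\n")

snapshot_run("nightly_2013_05_23", {"wns_ps": -120, "tns_ps": -3400, "drc_violations": 17})
```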


Do my tests certify the quality of my products?

Do my tests certify the quality of my products?
by Pawan Fangaria on 05-23-2013 at 9:00 pm

Honestly speaking, there is no firm answer to this question, and when our customers confront us with it we often talk about coverage reports. The truth is that a product with a high coverage rate can still easily fail in the customer’s environment. Of course coverage is important, and to be clear that a failure is not because a particular construct was untested, we heavily stress 100% coverage. When I was managing physical design EDA products, I often had arguments with my test team about flow tests, which go far beyond syntax and semantics and in a true sense have no limits. The toughest problem we had was that no customer was ready to give us its design for testing purposes. Sometimes, if we were lucky, we could get a portion of it under NDA; otherwise we relied on repeated reporting of failures (until they passed) from the customer.

I am happy to see this tool called Crossfire from Fractal Technologies. It enables customers as well as suppliers to work in a collaborative mode and certify the checks required for a design to work. It works at the design level to validate the complete standard cell library, I/Os and IP used in the design, and has more than 100 types of checks applied consistently across different front-end and back-end formats, including databases such as Cadence DFII, Milkyway and OpenAccess. Apart from parsing each format, it has specific checks for cells, terminals, pins, nets and so on, applied to all cells in the library.

What is interesting, and what adds to the quality of the testing, is a special set of checks that probe design quality at the time the design is constructed. Some nice examples of these are –

Layout vs. layout – Identity between polygons is checked by a Boolean mask XOR operation, and abstract polygons are checked to enclose the layout polygons. Typical errors from this check take the form shown below –
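As a rough sketch of the idea – Crossfire itself works directly on the library mask data; this example just uses the shapely geometry package on two toy polygons – an identity check reports any area left over after XOR-ing the two versions of a shape:

```python
from shapely.geometry import Polygon

def layout_xor_mismatch(poly_a, poly_b, tol=0.0):
    """Return the area where two layout versions disagree (Boolean XOR).
    A non-zero area beyond the tolerance means the polygons are not identical."""
    diff = poly_a.symmetric_difference(poly_b)
    return diff.area if diff.area > tol else 0.0

# Toy polygons standing in for the same shape in two views of a cell.
gds_poly = Polygon([(0, 0), (2.0, 0), (2.0, 1.0), (0, 1.0)])
lef_poly = Polygon([(0, 0), (2.0, 0), (2.0, 1.1), (0, 1.1)])  # slightly taller

print("XOR mismatch area:", layout_xor_mismatch(gds_poly, lef_poly))  # ~0.2
```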

LEF cell size – The LEF cell is checked to have the correct size per the LEF technology.

Routability – Checks whether signal pins can be routed to the cell boundary. Typical errors are “pins not on grid” or “wrongly coded offset”.
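A toy version of the “pins not on grid” part of this check might look like the following, with the routing pitch and offset values invented purely for illustration:

```python
def pin_on_grid(x, y, pitch=0.2, offset=0.1, eps=1e-6):
    """Check that a pin centre lies on the routing grid: offset + n * pitch."""
    def on_track(v):
        n = round((v - offset) / pitch)
        return abs((offset + n * pitch) - v) < eps
    return on_track(x) and on_track(y)

print(pin_on_grid(0.5, 0.7))   # True:  both coordinates sit on a routing track
print(pin_on_grid(0.55, 0.7))  # False: x is off-grid ("wrongly coded offset")
```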

Abutment – Cells are checked for self-symmetry and for abutment with a reference cell. Typical abutment errors are represented below –

Functional Equivalence – Functional representations in different formats are checked for equivalence. Schematic netlists such as SPICE or CDL and schematic views must exhibit the same functionality. Similarly, Verilog, VHDL or any other description must describe the same functionality. Typical Functional Equivalence errors are – “mismatch between asynchronous and synchronous descriptions”, “short circuit”, “missing functional description”, “Preset and Set yielding different results”, and so on.
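For a purely combinational cell, the idea can be illustrated with a brute-force truth-table comparison (a real equivalence checker also handles sequential behaviour and don't-cares). The two models below are hypothetical stand-ins for, say, a Verilog description and a SPICE-derived description of the same cell:

```python
from itertools import product

def equivalent(f, g, n_inputs):
    """Brute-force combinational equivalence: compare outputs on all input vectors."""
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=n_inputs))

# Two descriptions of the same AOI21 function (hypothetical cell).
verilog_model = lambda a, b, c: int(not ((a and b) or c))
spice_model   = lambda a, b, c: 1 - min(1, a * b + c)

print(equivalent(verilog_model, spice_model, 3))  # True: the descriptions agree
```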

Timing Characterization: cross format – Checks for consistency of timing arcs across all formats such as Verilog, VHDL, Liberty, TLF and the like.

Timing Characterization: NLDM (Non-linear Delay Model) – Consists of Index, Table Values, Trend, Property and Attribution checks. Typical characterization errors are – delay decreases with increasing output load, obsolete default conditions, non-paired setup and hold times, etc.
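The “trend” part of this check is easy to picture: in a sane NLDM table, delay should not decrease as the output load increases. A minimal sketch on a made-up one-dimensional slice of a delay table:

```python
def delay_trend_ok(loads, delays):
    """Flag the NLDM trend error 'delay decreases with increasing output load'."""
    pairs = sorted(zip(loads, delays))
    return all(d2 >= d1 for (_, d1), (_, d2) in zip(pairs, pairs[1:]))

loads_ff  = [1, 2, 4, 8, 16]          # hypothetical load index
delays_ps = [42, 55, 80, 75, 190]     # 80 -> 75 violates the expected trend
print(delay_trend_ok(loads_ff, delays_ps))  # False: trend check fails
```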

Timing Characterization: CCS (Composite Current Source) – Consists of Index, Table Values, Reference time, Peak position, Threshold passage and Accuracy checks. Typical CCS characterization errors are represented as below –

Timing Characterization: ECSM (Effective Current Source Model) – Consists of Index, Table Values, Threshold passage and Accuracy checks. Typical ECSM characterization errors are represented as below –

Power Characterization: NLPM (Non-linear Power Model) – Consists of Index, Table Values and Trend checks.

A detailed description of all these checks is given in a paper on the Fractal website here.
It also has example test reports of 45nm library and 45nm PDK library validations – Example1 and Example2

An important characteristic of Crossfire is that it allows you to create new tests on demand to suit your design and design flow, leading to true completeness across all types of checks. It can also accommodate proprietary formats and databases. The Fractal team provides expert knowledge on validation flows and on integrating customized APIs with Crossfire, which then provides a true environment for complete quality control and assurance.


It’s all in the details of FPGA requirements management

It’s all in the details of FPGA requirements management
by Don Dingee on 05-23-2013 at 8:30 pm

Word association: if I said “requirements management,” you’d probably say “IBM Rational DOORS,” or maybe Serena or Polarion if you come from the IT world. But what if the requirements you need to manage are for an FPGA or ASIC, with HDL and testbench code and waveform files and more details backing verification, and compliance to safety-critical standards is needed?

Continue reading “It’s all in the details of FPGA requirements management”


A Brief History of NanGate

A Brief History of NanGate
by Daniel Nenni on 05-23-2013 at 8:10 pm

NanGate was started in 2004 by a group of engineers from Vitesse Semiconductor and Intel. The technology and market idea was to address and solve the inherent shortcomings of standard-cell-based design as compared to full custom. Anyone who has tried to push the performance of a standard cell design knows the frustration… if only I had a better library or if I could just have these extra cells made!

Standard cell libraries have been around for a long time and have grown in size and complexity from a few hundred cells to several thousand cells. The library design and the choice of which cells to have in the library are key factors to getting the most performance out of the technology and at the lowest power and area possible. The problem is that making a standard cell library takes a lot of effort and has to be a compromise between the power, performance, and area demands from different applications.

NanGate is about creating and optimizing standard cell libraries. They focus on automating the whole process from library specification to GDSII and Liberty. They support and generate all the views and formats needed to use the libraries with standard synthesis and place-and-route flows. Automation and high productivity are key. As the only library optimization company in the world, they can generate layout in geometries from 14nm to 350nm from Boolean equations, SPICE netlists or stick diagrams and automatically find the optimal implementation in the selected technology and cell template. NanGate even supports GDSII-to-GDSII migration and DFM optimization.

The NanGate tool suite includes library characterization and library validation. Just as formal verification, STA and DRC are must-haves for any contemporary digital design flow, validating the library characterization is just as important, since any mistakes or modeling inaccuracies will have wide-ranging consequences. The same is the case for validating the many different views in the library, such as LEF and Verilog.

NanGate’s tool suite has been developed, improved and proven in combat over the last 8 years. It required more than $25 million in venture capital funding and an engineering team of more than 30 engineers at its peak to pull it off. By 2009, NanGate had 5 EDA products in the market for Library Creation, Characterization and Validation:

  • NanGate Library Creator™
  • NanGate Library Characterizer™
  • NanGate Liberty Analyzer™
  • NanGate Design Audit™
  • NanSPICE™

The products have been adopted by more than 15 customers and several public testimonials have been made by leading foundries and semiconductor companies such as TSMC, Fujitsu and Renesas. NanGate’s library platform has enabled customers to create custom libraries with much less effort and to push the performance higher.

As with the EDA industry in general, the VCs left NanGate in 2012, which enabled a management buyout. Today, NanGate Inc. is fully owned and controlled by management. They have refocused the company on the core customers and competencies of NanGate. And probably most importantly, they have restructured the company to be Silicon Valley-based and profitable. Moving forward, NanGate will focus on library automation and optimization for the most advanced process nodes. Partnerships are key to NanGate’s future strategy; recently they integrated Sagantec’s 2D compaction engine to add a DRC-correct clean-up step at the end of the layout generation flow. This complements the existing layout creation solution and enables the most powerful 14nm solution in the market today.

NanGate, a provider of physical intellectual property (IP) and a leader in Electronic Design Automation (EDA) software, offers tools and services for creation and validation of physical library IP, and analysis and optimization of digital design. NanGate’s suite of solutions includes the Library Creator™ Platform, Design Optimizer™ and design services. NanGate’s solution enables IC designers to improve performance and power by concurrently optimizing design and libraries. The solution, which complements existing design flows, delivers results that previously could only be achieved with resource-intensive custom design techniques.


Bats about DAC!

Bats about DAC!
by SStalnaker on 05-23-2013 at 8:05 pm

DAC 2013 is closing in fast now…and if you haven’t made your plans for what you want to see and do, you’d better get going! Of course, I’m happy to help you out with a few suggestions…starting with that most important objective—conference swag. Stop by the Mentor Graphics booth (#2046, for those of you who actually look at your floor maps) any time Monday through Wednesday to pick up your plush Congress Bridge bat. And if you get the chance, go out one night and watch the real thing (take your camera!).

For those who like a bit of anticipation, we also have daily drawings. Prizes this year include an Apple iPad® Mini, a Nintendo Wii U™, and a GoPro® camera. You get an entry for every Mentor suite session you attend, and drawings will be held every day at our Happy Hour open bar, which starts at 4:00 pm. You don’t have to be present to win, but you must pick up your prize in person before the close of DAC.

If a lively bit of discussion is your thing, Mentor is participating in or hosting a number of panels at DAC. Join us for any or all. No advance registration is required.

Achieving IC Reliability in High Growth Markets
Monday, June 3, 3:00-4:00 (Mentor Booth #2046)

Will Data Explosion Blow Up the Design Flow?
Monday, June 3, 3:15-4:00 (DAC Pavilion Panel, Booth #509)

Advanced Node Reliability: Are We in Trouble?
Tuesday, June 4, 10:30-12:00 (DAC Technical Panel, Room 16AB) requires full conference access

Marrying More than Moore
Tuesday, June 4, 3:00-4:00 (Mentor Booth #2046)

No Fear of FinFET
Wednesday, June 5, 3:00-4:00 (Mentor Booth #2046)

It’s also worth noting that the Mentor Booth panels are followed immediately by the Mentor Happy Hour—great chance to mingle with like minds, while enjoying an adult beverage!

As for those suite sessions—as usual, we’ll be hosting a variety of presentations at the Mentor booth. Below are just a few that might appeal to the Design to Silicon crowd, but you can check out the full list any time. Registration is required to attend a suite session—click on the session title to get to our registration page.

Reliability Checks for Multiple Markets(Presented in Mandarin)
Monday, June 3, 10:00-11:00, Mentor booth #2046
Do you speak Chinese? (你說中文嗎?) Calibre PERC provides a fully automated and comprehensive EDA design platform to check ESD, latch-up, EOS, ERC and other design issues in both design and stream-out databases. In this session, presented in Mandarin, Mentor Graphics and SMIC discuss reliability checking with Calibre PERC. Want to hear it in English? Sign up for one of the Comprehensive Circuit Reliability with Calibre PERC sessions (Monday, 2:00 or Wednesday, 10:00) in the Mentor booth.

Best Practices for 20nm Design
Monday, June 3, 2:00-3:00, Mentor booth #2046
If you’re planning or contemplating a move to 20nm, you need to be in the seats for this session. TSMC and Mentor present best practices learned from their experience helping leading-edge customers with the transition to 20nm. This vital knowledge will help you smoothly tapeout your designs for TSMC’s advanced processes.

Advancing Circuit Reliability at TowerJazz with Calibre PERC Rule Decks
Tuesday, June 4, 2:00-3:00, Mentor booth #2046
Intended for advanced Calibre users, this presentation demonstrates how TowerJazz uses Calibre PERC rule decks and the Calibre PERC product’s unique ability to combine schematic (netlist) and physical layout information to perform circuit reliability verification during signoff. ONE TIME ONLY, LIMITED SEATING.

Preparing for Pervasive Photonics
Tuesday, June 4, 2:00-3:00, Mentor booth #2046
Silicon photonics is coming—are you ready? This session discusses the impact photonics will have on today’s IC design and manufacturing processes, the tool requirements for SP, foundry options, new applications that SP will open up, and new challenges it will present to IC designers.

Of course, we won’t just be hanging out at the Mentor booth the whole time. Calibre experts will be speaking at our partners’ booths as well. We have a full list of partner activities, but here’s one technical presentation you won’t want to miss:

Identifying Critical Design Features from Silicon Results
Tuesday, June 4, 10:00-11:00, GLOBALFOUNDRIES Booth #1314
Ken Amstutz (Senior Application Engineer) will be talking about the collaboration between Mentor Graphics and GLOBALFOUNDRIES to rapidly identify systematic defects and critical design features based on silicon data. Layout-aware diagnosis identifies the location and classification of defects causing manufacturing test failures. Specialized statistical analysis coupled with design profiling data (such as critical feature analysis) then determines the root cause of yield loss and separates design- and process-induced defects. LIMITED SEATING – REGISTRATION REQUIRED

For a full round-up of Mentor activities at DAC, and to register for any of our suite sessions in advance, you can check us out at Mentor@DAC 2013. See you in Austin!!


Network-on-Chip is the backbone of Application Processor and LTE Modem

Network-on-Chip is the backbone of Application Processor and LTE Modem
by Eric Esteve on 05-23-2013 at 9:38 am

I have mentioned the explosion in NoC adoption over the last two years, illustrated by the huge revenue growth of Arteris. This trend is now confirmed in the fastest-moving segments, the Application Processor (AP) and LTE Modem for mobile applications. In fact, Arteris FlexNoC has been integrated in the majority of AP and LTE Modem chips shipped in 2012 and shipping in 2013. What is the common, key feature of these chips?

Each of these chips ships by the dozens of millions, all of them are extremely complex, containing 100 IP blocks or more, and Time To Market (TTM) is dramatically important: if a chip maker misses the release to production by only one month, several dozen if not hundreds of millions of dollars in chip sales just vanish… and will never be caught up, because the OEMs integrating these ICs, like Apple, Samsung or HTC, have to release a new product generation almost every six months. If you take a look at the business won by Arteris since 2008, you will recognize most of the major semiconductor companies (even if you are not supposed to know the name of an “Unannounced Customer”, just think “Major”…).

If you go back to one of my very first posts about Network on Chip, you will remember that one of the most important NoC advantages is avoiding routing congestion on large SoCs, thus accelerating TTM, as the back-end cycles (place and route, post-routing simulation, modification) are extremely time consuming. But we know that for ICs like the AP and LTE Modem, the key sales message will be about performance, measured in terms of main CPU frequency, and low power, as these will be the most visible features for the end user: can I use my smartphone right after opening it? Do I need to plug it in every two hours, or can I use it for days before charging? Arteris’ FlexNoC also addresses these two very important requirements that every chip maker has… or should have, in order to be successful!

It’s very interesting to see that the fastest-growing chip makers and ASIC companies are also adopting the NoC, and to see the reasons why they adopt Arteris’ FlexNoC: high frequency, low gate count, lower power, higher flexibility and Quality of Service (QoS) are the most frequently mentioned advantages.

Fuzhou Rockchip Electronics Co. Ltd.
“Arteris FlexNoC interconnect IP enables us to exceed our design frequency and power requirements while giving us more flexibility than possible using older interconnect technologies, like buses and crossbars,” said Li Shiqin, IC Design Manager at Rockchip.

MegaChips Corporation
“From our experience with Arteris’ NoC technology over the years, we knew that Arteris FlexNoC IP was the fastest interconnect fabric for SoCs with multiple initiator and target IP blocks. However, we were surprised that FlexNoC could continue to run at fast design frequencies with a significantly lower gate count and less power consumption than alternative bus fabrics,” said Gen Sasaki, General Manager of Division No.2, MegaChips Corporation.

Open-Silicon, Inc.
“Arteris’ network-on-chip interconnect IP made timing closure much easier and allowed us to implement the QoS management required for the design’s high-performance I/O and sophisticated hardware acceleration engines. In addition, we were able to close timing in a fraction of the schedule needed previously for designs using older crossbar-based architectures,” said Colin Baldwin, senior director of marketing, Open-Silicon

When you see such a customer list, you understand why the company is claiming that FlexNoC is integrated into about 60% of Application Processor and LTE Modem ICs. When you know the associated shipments in smartphone and media tablet applications, you can guess that Arteris royalty revenues will skyrocket in 2013 and beyond!

To learn a lot more about NoC and Arteris products, just go here.

By Eric Esteve from IPNEST


Do You Need to Worry About Soft Errors?

Do You Need to Worry About Soft Errors?
by Paul McLellan on 05-22-2013 at 6:51 pm

As we get down to smaller and smaller process nodes, the problem of soft errors becomes increasingly important. These soft errors are caused by neutrons from cosmic rays, alpha particles from materials used in manufacture and other sources. For chips that go into systems with high reliability this is not something that can be ignored. Everyone in the design and supply chain has a part to play:

  • foundries and packaging: material choice and characterization
  • library designers: memory and storage elements
  • SoC designers: characterize the reliability of the design and improve it if required
  • system architects: define the reliability needed


Next week, Adrian Evans of IROCtech will present a short webinar on the topic. The agenda is:

  • Soft Error Rate (SER) trends
  • What are the impacts of SER?
  • What can be done about it?
  • Customer case study
  • iROC services and products
  • Q&A

The webinar is on Thursday at 11:30am Pacific Time (although I believe Adrian will be presenting from France). I will be the moderator.

To register go here.

IROC is the standard for soft error analysis and prevention. With the introduction of submicron technologies in the semiconductor industry, chips are becoming more vulnerable to radiation induced upsets. IROC Technologies provides chip designers with soft error analysis software, services and expert advisors to improve a chip’s reliability and quality. Exposure of silicon to radiation will happen throughout the lifetime of any IC or device. This vulnerability will grow as development moves to smaller and smaller geometries. IROC proved that the soft errors that cause expensive recalls, time-to-volume slow-down, and product problems in the field can be significantly reduced. The mission of the company’s soft error prevention software and expert advisors is to allow users to increase reliability and quality while significantly lowering the risk of radiation-induced upsets, throughout the lifetime of products under development.