
Atrenta’s users. Survey says….
by Paul McLellan on 12-09-2011 at 7:32 pm

Atrenta ran an online survey of its users. Of course, Atrenta’s users are not necessarily representative of the whole marketplace, so it is unclear how far the results generalize to the bigger picture; your mileage may vary. About half the respondents were design engineers, a quarter CAD engineers, and the rest split between test engineers, verification and other roles.

Some questions focus on the use of Atrenta’s own tools and aren’t of wide interest, so I’ll concentrate on the things that caught my eye.

Firstly, how is the RTL created: from scratch, or by modifying existing RTL? It is now a 40:60 split, with 40% of designers writing their own RTL and 60% modifying existing RTL.

When it comes to the top-level RTL, designers split between doing it manually (57%), using scripts (57%) and using a third-party EDA tool (12%). Yes, those numbers total more than 100%; some people obviously use more than one technique.

On the main limitations of their current approach, designers had a litany of woes: missing design manipulation features (35%), inconsistent and unreliable support (26%), and ECOs that are hard to handle (34%). But clearly the #1 problem, at 49%, is the difficulty of debugging design issues. Many other things were listed, from missing IP-XACT files and unqualified IP to just plain “error prone”.


The final question asked which aspects of the design flow were most critical to improve. The choices for each feature were: critical, very important, nice to have, not important, and don’t know. So let’s take the critical and very important groups together and see what the top concerns were.

First was reducing verification bugs due to connectivity problems. The next three are all facets of a similar problem: rapidly adapting legacy designs, the effort to integrate third-party IP, and the effort to make updates when third-party IP is in use. Slightly behind those is reducing the time and effort to create test benches.


Challenges in 3D-IC and 2½D Design
by Paul McLellan on 12-09-2011 at 5:18 pm

3D IC design, and what has come to be known as 2½D IC design with active die on a silicon interposer, require new approaches to verification: through-silicon vias (TSVs) and the fact that several different semiconductor processes may be involved create a new set of design challenges.

The power delivery network is a challenge in a modern 2D (i.e. normal) die, but designing it is more challenging still with TSVs passing power from the die at the bottom of the stack (or the interposer) up to the higher die, with the possibility of inter-die noise and other issues. There are two approaches to handling this. The first, which can be used if all the die data is available, is to simulate everything concurrently. The second is to generate a chip power model (CPM) for any die whose data is missing and co-analyze it with the data that is available.
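
To see why delivering power up the stack is harder, here is a back-of-envelope sketch in Python of how IR drop accumulates through the shared delivery path. All resistances and currents are invented for illustration and are not values from the paper.

    # Back-of-envelope IR drop through a 3D stack: power enters at the
    # package balls, crosses the interposer, and reaches the top die
    # through TSVs. All values are illustrative assumptions.

    R_PKG = 0.002   # package/board spreading resistance in ohms (assumed)
    R_INT = 0.003   # interposer grid resistance in ohms (assumed)
    R_TSV = 0.010   # effective parallel TSV resistance in ohms (assumed)

    I_BOTTOM = 5.0  # current drawn by the bottom die in amps (assumed)
    I_TOP = 3.0     # current drawn by the top die in amps (assumed)

    # Shared segments carry the sum of all downstream die currents.
    v_shared = (I_BOTTOM + I_TOP) * (R_PKG + R_INT)
    v_tsv = I_TOP * R_TSV

    print(f"bottom die drop: {1000 * v_shared:.0f} mV")
    print(f"top die drop:    {1000 * (v_shared + v_tsv):.0f} mV")

The top die always sees the bottom die’s supply noise plus its own TSV drop, which is why analyzing the whole stack together, concurrently or via CPMs, matters.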

Another specific power-related problem is thermal and thermally-induced stress failure. IC power is very temperature-dependent, especially leakage power, and in a 3D design the thermal dissipation is more complex. Similar to the CPM, a chip thermal model (CTM) can be generated for each die in the design, including temperature-dependent power and per-layer metal density. The CTM is used for accurate prediction of power and temperature distribution.
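
To see why temperature-dependent leakage calls for a converged co-analysis rather than a single pass, here is a minimal Python sketch of the electrothermal feedback loop. The thermal resistance, power numbers and the leakage-doubles-every-10°C rule of thumb are illustrative assumptions, not values from the paper.

    # Electrothermal fixed-point iteration: power heats the die, heat
    # raises leakage, leakage adds power. All numbers are assumptions.

    P_DYN = 2.0        # dynamic power in watts (assumed)
    P_LEAK_25C = 0.5   # leakage at 25 C in watts (assumed)
    THETA_JA = 2.0     # junction-to-ambient thermal resistance, C/W (assumed)
    T_AMBIENT = 25.0   # ambient temperature in C
    DOUBLING = 10.0    # rule of thumb: leakage doubles every ~10 C

    def leakage(temp_c):
        """Leakage power grows exponentially with temperature."""
        return P_LEAK_25C * 2.0 ** ((temp_c - 25.0) / DOUBLING)

    temp = T_AMBIENT
    for step in range(50):
        power = P_DYN + leakage(temp)
        new_temp = T_AMBIENT + THETA_JA * power
        if abs(new_temp - temp) < 0.01:   # converged
            break
        temp = new_temp

    print(f"{power:.2f} W at {new_temp:.1f} C after {step + 1} iterations")

With less benign numbers (a higher thermal resistance, say), the loop diverges instead of converging, which is exactly the thermal-runaway failure this kind of analysis is there to catch.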

From a signal integrity point of view, a new problem is jitter noise analysis for wide-I/O applications. In an interposer design, which is far less pin-limited than a regular package, a parallel bus might have as many as 8K bits, which, apart from skew considerations, can introduce significant jitter due to simultaneous switching.
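
For a feel for the magnitude, here is the classic first-order simultaneous switching noise estimate, ΔV ≈ N · L · di/dt, in Python. The 8K-bit bus width comes from the article; the per-driver inductance and edge rate are assumed values for illustration only.

    # First-order simultaneous switching noise: delta-V = N * L * di/dt.
    # The bus width is from the article; everything else is assumed.

    N = 8192        # simultaneously switching drivers on a wide-I/O bus
    L_EFF = 1e-12   # effective shared supply inductance per driver, H (assumed)
    DI = 1e-3       # current swing per driver in amps (assumed)
    DT = 50e-12     # switching edge time in seconds (assumed)

    bounce = N * L_EFF * (DI / DT)   # volts of supply/ground bounce
    print(f"worst-case bounce: {bounce * 1e3:.0f} mV")

Because the number of bits actually switching varies with the data pattern, the bounce varies cycle to cycle and modulates driver delay, and that modulation is precisely the jitter the analysis has to capture.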

So it is clear that a new approach is required: comprehensive analysis of power, noise and reliability to ensure successful tape-out of 3D and silicon-interposer-based designs.

This is a summary of a paper by Dr Norman Chang of Apache presented at the IEEE/CPMT society 3D-IC workshop held in Newport Beach on December 9th. The conference website is here.


Low power techniques
by Paul McLellan on 12-08-2011 at 5:49 pm

There was recently a forum discussion about the best low-power techniques. Not surprisingly, we didn’t come up with a new technique nobody had ever thought of, but it was an interesting discussion.

First there are the techniques that by now have become standard, listed below. If anyone wants more details on these, two good resources are the Synopsys Low Power Methodology Manual (LPMM) and the Cadence/Si2 Practical Guide to Low Power Design. The first emphasizes UPF and the second CPF, but there is a wealth of background information in both books that isn’t especially tied to either power format.

  • Clock gating
  • Multiple Vt devices
  • Back/forward biasing of devices
  • Power gating for shutdown
  • State retention
  • Multi-voltage supplies
  • Dynamic voltage scaling
  • Dynamic frequency scaling (combined with voltage scaling as DVFS; see the sketch after this list)
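
As a quick illustration of why those last two items are usually combined as DVFS, here is the standard dynamic power relation P = αCV²f worked through in a short Python sketch. The activity factor, capacitance and voltage/frequency points are assumed values for illustration only.

    # Dynamic power P = alpha * C * V^2 * f: scaling voltage along with
    # frequency saves roughly cubically; frequency alone saves only
    # linearly. All numbers are illustrative assumptions.

    ALPHA = 0.15     # average switching activity factor (assumed)
    C_TOTAL = 1e-9   # total switched capacitance in farads (assumed)

    def dynamic_power(v, f):
        return ALPHA * C_TOTAL * v ** 2 * f

    nominal = dynamic_power(1.0, 1e9)       # 1.0 V at 1 GHz
    freq_only = dynamic_power(1.0, 0.5e9)   # halve frequency, keep voltage
    dvfs = dynamic_power(0.7, 0.5e9)        # halve frequency and drop voltage

    print(f"nominal:  {nominal * 1e3:.1f} mW")
    print(f"f/2 only: {freq_only * 1e3:.1f} mW ({freq_only / nominal:.0%})")
    print(f"DVFS:     {dvfs * 1e3:.1f} mW ({dvfs / nominal:.0%})")

Halving frequency alone halves dynamic power, but dropping the voltage at the same time (possible because the logic no longer needs to be as fast) cuts it to roughly a quarter, which is why DVFS is so much more effective than frequency scaling alone.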

A lot can be done at the system/architectural level, essentially controlling chip level power functionality from the embedded software, such as powering down the transmit/receive logic in a cell-phone when no call is taking place.

Asynchronous logic offers potential for power saving, especially the ability to take silicon variation in the form of lower power rather than binning for higher performance. After all, the clock itself consumes 30% of many SoCs’ power budgets. But there are huge problems with the asynchronous design flow, since synthesis, static timing analysis, timing-driven place & route, scan test and so on are all inherently built on a synchronous model and break down when there is no clock. These are solvable problems if enough people want to use asynchronous approaches, but a lot of tools need to be fixed all at once (to be fair, this was also the case with the introduction of CPF and UPF). But it definitely has a feel of “you just have to boil the ocean.”

With more powerful tools, such as those from Calypto, more clock gating can be done than the simple cases that synthesis handles (replacing muxes that recirculate values with a clock gating cell). If a register doesn’t change on this clock cycle, then the downstream register won’t change on the next clock cycle. Some datapaths have a surprising number of these structures that can be optimized, although the actual power savings are usually data dependent.
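
Here is a minimal Python model of that observation, a toy illustration rather than anything resembling Calypto’s actual analysis: gating the downstream register’s clock whenever the upstream register held its value gives an identical result with far fewer clock edges.

    # Toy model of sequential clock gating: if register A did not change
    # on the previous edge, register B (a pure function of A) already
    # holds the right value, so B's clock can be gated.

    def combinational(x):
        # Any pure function of A stands in for the datapath logic.
        return (x * 3 + 1) & 0xFF

    def simulate(inputs, gate_b):
        a = 0
        b = combinational(a)
        b_clocks = 0
        a_changed = True   # conservatively clock B on the first edge
        for value in inputs:
            if not gate_b or a_changed:
                b = combinational(a)   # B captures f(A) at this edge
                b_clocks += 1
            # else: A held its value, so B already equals f(A); hold it
            a_changed = (value != a)
            a = value
        return b, b_clocks

    # Bursty data with idle stretches is where the savings appear.
    stimulus = [5, 5, 5, 9, 9, 9, 9, 2, 2, 2, 2, 2]
    ref_b, ref_clocks = simulate(stimulus, gate_b=False)
    opt_b, opt_clocks = simulate(stimulus, gate_b=True)
    assert ref_b == opt_b    # gating must never change the result
    print(f"B clock edges: {ref_clocks} ungated vs {opt_clocks} gated")

On this stimulus B is clocked 12 times ungated but only 4 times gated, with identical final state, which is the data-dependent saving the paragraph describes.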

As we have gone down through the process nodes, leakage power has gone from an insignificant part of total power dissipation to over half in some chips. Some of the new FinFET transistor approaches look like they will have a big positive effect on leakage power too, but a lot can be done in any process by using libraries containing both low-leakage and high-performance cells and restricting the high-performance cells to the most critical paths.

The real bottom line is that power requires attention at all levels. The embedded software ‘knows’ a lot about which parts of the chip are needed and when; for example, the iPad supposedly has multiple clock rates for the A5 chip and only goes up to the full 1 GHz when that performance is needed for CPU-intensive operations. Architectural-level techniques such as the choice of IP blocks can have a major impact. Low-power synthesis contributes clock gating and multi-library optimization, circuit design techniques help further, and finally process innovation keeps the power under control as the transistors get smaller and smaller.


Driving in the bus lane
by Paul McLellan on 12-08-2011 at 1:16 pm

Modern microprocessor and memory designs often have hundreds of datapaths that traverse the width of the chip, many of them very wide (over a thousand signals). To meet signal timing and slope targets for these buses, designers must insert repeater cells to improve the speed of the signal. Until now, the operations associated with managing large numbers of large buses have been manual: bus planning, bus routing, bus interleaving, repeater cell insertion and so on. However, the large and growing number of buses, especially in multi-core microprocessor designs, means that a manual approach is both too slow and too inaccurate. So Pulsic has created an automated product to handle this increasingly critical task: Unity Bus Planner.
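
For a sense of the math the tool is automating, here is the classic first-order (Bakoglu-style) repeater insertion calculation in Python. The wire and buffer parameters are generic assumed values, not anything from Pulsic.

    import math

    # Elmore-style repeater insertion: split a long RC wire into k
    # segments driven by size-h repeaters. Parameters are assumptions.

    R_W, C_W = 2000.0, 2e-12   # total wire resistance (ohm) and capacitance (F)
    R_0, C_0 = 1000.0, 1e-15   # unit buffer output resistance and input cap

    def total_delay(k, h):
        """Elmore delay of the wire split into k repeater-driven segments."""
        seg = (0.7 * (R_0 / h) * (C_W / k + h * C_0)
               + (R_W / k) * (0.4 * C_W / k + 0.7 * h * C_0))
        return k * seg

    # Classic closed-form optimum for repeater count and size.
    k_opt = max(1, round(math.sqrt(0.4 * R_W * C_W / (0.7 * R_0 * C_0))))
    h_opt = math.sqrt((R_0 * C_W) / (R_W * C_0))

    print(f"repeaters: {k_opt}, relative size: {h_opt:.0f}x")
    print(f"unrepeated: {total_delay(1, 1) * 1e9:.2f} ns, "
          f"repeated: {total_delay(k_opt, h_opt) * 1e9:.2f} ns")

Unrepeated wire delay grows quadratically with length while repeated delay grows only linearly, which is why every long bus bit needs repeaters; multiply this by a few thousand bits per bus and the scale of the bookkeeping problem is clear.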

Another problem that automation helps address is that the first plan is never the last. As von Moltke said, “No campaign plan survives first contact with the enemy.” In just the same way, no bus plan survives once the detailed placement of the rest of the chip gets done. Repeater cells, since they contain transistors, can’t simply fly over the active areas but have to interact with them, so as the detailed layout of the chip converges, the bus plan has to be constantly amended.

During the initial floorplanning stage, designers do block placement and power-grid planning followed by initial bus and repeater planning. Buses that cross the whole chip need to be carefully planned. At the end of this stage initial parasitic data is available for simulating critical parts of the design.

During bus planning itself, designers fit as many buses as possible through dedicated channels to avoid timing issues. Very fast signals require shielding with DC signals (such as ground) to prevent crosstalk noise. Often architects interleave buses so that they shield each other rather than spending valuable routing resources on dedicated shielding. But planning, interleaving and routing hundreds of very wide buses is slow and error-prone, and internal tools created to do this are often unmaintainable and inadequate for the new generation of still larger chips.

Signals on wide buses need to arrive simultaneously, with similar slopes, so that they can be correctly latched. This means that the paths must match in terms of metal layers, vias, repeaters and so on, a very time-consuming process, especially when changes inevitably need to be made as the rest of the design is completed.

With interactive and automated planning, routing and management, designers can complete bus and repeater-cell planning in minutes or hours rather than days or weeks. Automation also makes the inevitable subsequent modifications faster and more predictable.

The Unity Bus Planner product page is here.



Improving Analog/Mixed Signal Circuit Reliability at Advanced Nodes
by glforte on 12-07-2011 at 3:52 pm

Preventing electrical circuit failure is a growing concern for IC designers today. Certain types of failures, such as electrostatic discharge (ESD) events, have well-established best practices and design rules that circuit designers should be following. Other issues have emerged more recently, such as how to check circuits for correct supply connections when there are different voltage regions on a chip, in addition to other low-power and electrical overstress (EOS) issues. While these topics are not unique to a specific technology node, for analog and mixed-signal designs they become increasingly critical as gate oxides get thinner at the most advanced nodes and circuit designers continue to put more and more voltage domains into their designs.

In addition to ESD protection circuit errors, some of the emerging problems that designers may face at advanced nodes include:

  • Possible re-spins due to layout errors

    • Thin-oxide/low-power transistor gate driven by the wrong (too high) voltage supply causing transistor failures across the chip
    • This might also cause degradation over a period of time, leading to reliability issues and product recalls
  • Chip reliability and performance degradation

    • Un-isolated ground/sink voltage levels between cells or circuits
    • High-voltage transistors operating in non-saturation because of insufficient/low-voltage supply

TSMC and Mentor Graphics (MGC) have worked together to define and develop rule decks for Calibre® PERC™ that enable automated advanced circuit verification addressing many of these issues. For example, in TSMC Reference Flows 11 and 12, in AMS Reference Flows 1 and 2, and in standard TSMC PDK offerings, MGC and TSMC have collaborated to provide checks for ESD, latch-up and multi-power-domain verification at the 28nm and 40nm nodes. The companies estimate that by using a robust and automated solution like Calibre PERC, users can achieve over 90% coverage of advanced electrical rules with no false errors and runtimes measured in minutes. This is a significant improvement over marker layers, which may achieve around 30% coverage and often produce false positives, and it is far better than visual inspection, which typically achieves only about 10% coverage and is extremely labor-intensive.

Calibre PERC introduces a different level of circuit verification capability because it can use both netlist and layout (GDS) information simultaneously to perform checks. In addition, it can employ topological constraints to verify that the correct structures are in place wherever circuit design rules require them. Here is a representative list of the checks that Calibre PERC can be used to perform (a conceptual sketch of one of them, voltage propagation, follows the list):

  • Point to point resistance
  • Current density
  • Hot gate/diffusion identification
  • Layer extension/coverage
  • Device matching
  • DECAP placement
  • Forward biased PN junctions
  • Low power checking

    • Thin-oxide gate considerations, e.g., maximum allowable voltage
    • Voltage propagation checks, e.g., device breakdown and reverse current issues
    • Detect floating gates
    • Verify correct level shifter insertions and correct data retention cell locations
  • Checks against design intent annotated in netlists

    • Matched pairs
    • Balanced nets/devices
    • Signal nets that should not cross
    • MOS device guard rings

Customers are constantly finding new ways to employ Calibre PERC to automate new types of circuit checks. Leave a reply or contact Matthew Hogan if you would like to explore how Calibre PERC might be used to improve the efficiency and robustness of new or existing checks.

by Steven Chen, TSMC and Matthew Hogan, Mentor Graphics

Acknowledgements
The authors would like to recognize Yi-Kan Cheng of the TSMC R&D team and his team members Steven Chen, MJ Huang, Achilles Hsiao, and Christy Lin, as well as the entire Calibre PERC development and QA team, for their support and dedication in making these outstanding capabilities possible.

This article is based on a joint presentation by TSMC and Mentor Graphics at the TSMC Open Innovation Platform Ecosystem Forum. The entire presentation is available online on the TSMC website (click here).



How Fast (and accurate) is Your SPICE Circuit Simulator?
by Daniel Payne on 12-06-2011 at 6:17 pm

In my dad’s generation they tweaked cars into hot rods, while in EDA today we have companies that tweak SPICE circuit simulators to be crowned speed champions. The perennial question, though, is: “How fast and accurate is my SPICE circuit simulator?”


Methodics Can Now Use www.methodics.com After Domain Name Battle in EDA
by Daniel Payne on 12-06-2011 at 5:36 pm

Imagine trying to run your EDA business only to have a competitor squat on your domain name and then make disparaging remarks about you. This sounds like a match made for reality TV; however, it is quite real, and now this chapter in EDA has a happy ending because Methodics can use www.methodics.com as its domain name.


The Bad Guy
We’ve been blogging at SemiWiki about how Shiv Sikand of competitor IC Manage registered www.methodics.com in frustration after a sales employee left for Methodics. The content published at www.methodics.com raised the controversy considerably because it falsely implied that Methodics was out of business.

The Resolution
Fortunately, businesses with domain name disputes have recourse through the World Intellectual Property Organization (WIPO), and on November 28, 2011 Methodics won the rights to www.methodics.com, which it now uses as its site name in place of www.methodics-da.com.

IC Data Management Companies
There are a few EDA companies offering tools that help manage the IC design process.

Conclusion
Don’t cybersquat on your competitor’s company name or product name; just compete on the merits of your product and the skills of your sales force.


EDA mergers: Accelicon acquired by Agilent
by Daniel Payne on 12-06-2011 at 4:51 pm

Agilent acquired EEsof back in 1999; now the EEsof group has acquired Accelicon, on December 1, 2011. The terms of the deal were not disclosed.

SPICE circuit simulators are only as accurate as their models and algorithms. On the model side, Accelicon provides EDA tools to create SPICE models based on silicon measurements:

  • Model Quality Assurance
  • Model Builder Program
  • DFM-aware PDK verification
  • SPICE Model Services


Capacitance and IV curves for MOS devices

Accelicon has partnered with many other EDA companies to fit into standard flows:

  • Synopsys HSPICE
  • Cadence Spectre
  • Mentor Eldo
  • Berkeley DA AFS
  • IPL Alliance
  • MOSFET models: BSIM3v3, BSIM4, BSIMSOI, BJT
  • DRC/LVS tools: Assura, Calibre, Hercules

The Advanced Model Analysis flow:

SPICE model services include a test chip design and measurements from silicon, then running the results through Model Builder Program (MBP) and Model Quality Assurance (MQA).

Competitors
We have several EDA companies competing in this space:

  • Silvaco – UTMOST IV
  • Agilent – ICCAP
  • ProPlus – BSIMPro, BSIMProPlus
  • Synopsys – Aurora
  • Accelicon – MQA, MBP

Summary
I don’t see any disruption in the EDA business with this acquisition because we have so many sources for SPICE models. Accelicon founder Dr. Xisheng Zhang started the company in 2002 and hopefully received a fitting reward for building up the business over the past 10 years.

See the Wiki page of all known EDA mergers and acquisitions.


Mark Milligan joins SpringSoft
by Paul McLellan on 12-06-2011 at 2:01 pm

Mark Milligan recently joined SpringSoft as VP Corporate Marketing. I sat down with him on Monday to get his perspective on things.

He started life, like so many of us, as an engineer: an ASIC designer working on low-level microcode for the Navy Standard Airborne Computer at Control Data. It was actually the first ASIC they had done. Those were the early days of RTL languages, and Mark worked on the simulation environment used to verify the ASICs.

Teradyne had bought several companies and built the first simulation backplane, so Mark switched to marketing and handled technical marketing for that product line.

Then he moved to the West Coast and joined Sunrise Test Systems, which Viewlogic acquired (and which eventually ended up in Synopsys). There he was an early advocate of DFT and scan. Funnily enough, one pushback they used to get on running fault simulation was that designers didn’t want to know how bad coverage was: there wasn’t the time or the tooling to get it up, and it was typically too late to bite the bullet and switch to a full-scan methodology on the fly.

A spell at CoWare and then VirtualLogix gave him a new perspective on embedded software (as did my tours of duty at VaST and Virtutech).

When Mark arrived at SpringSoft, the company had just commissioned a customer satisfaction survey. He was pleased to discover that their customer satisfaction was 25% higher than that of Synopsys, Cadence, Mentor or Magma.

One challenge he feels SpringSoft faces is that products like Laker and Verdi have better name recognition than SpringSoft does itself.

From 2008 to the present, SpringSoft has continued to grow and has stayed profitable. In fact, he believes they are the most profitable public EDA company (as a percentage of revenue, of course; Synopsys certainly makes more total profit). They are around 400 people, about three-quarters of them in Taiwan.

This profile, profitable, growing and medium-sized, allows them to focus on specific pain points and bring innovation to customers. They are big enough to develop and deliver solutions but small enough that they don’t have to try to do everything.

One similarity Mark noticed with a previous life: in the past, people wanted to know how good their test solution was (fault simulation), and now everyone wants to know how good their test benches are, which is a harder problem.

We talked a bit about innovation. Historically, most innovation has come from small startup companies, but these are no longer being funded in EDA. On the other hand, we have a few medium-sized EDA companies: SpringSoft, Atrenta, Apache (now part of Ansys but still run as an EDA company). In the areas they cover, which is certainly not everything, there has been a lot of innovation, broadening out from single products to portfolios that address a whole problem area.