
Beyond one FPGA comfort zone
by Don Dingee on 04-29-2013 at 5:00 pm

Unless you are a small company with one design team, it is unlikely that you have standardized on one FPGA vendor for all your needs, forever and ever. No doubt you have a favorite, whether because of the class of part you use most often or the tool you know best, but I’d bet you use more than one FPGA vendor routinely.



Transient Noise Analysis (TNA)
by Rupinder Mand on 04-29-2013 at 4:21 pm

Tanner EDA Applications Engineers see a broad range of technical challenges that our users are trying to overcome. Here’s one worth sharing – it deals with transient noise analysis (TNA) for a comparator design. The customer is a producer of advanced flow-measurement devices for applications in medicine and research. The designer was trying to simulate, and thereby quantify, the jitter caused by the output noise of a comparator through its transition region. This signal goes into a buffer, and the buffer should also contribute some amount of jitter. Performing a noise simulation to assess the total output noise (onoise) while biasing all devices in the transition region (the comparator being a very high-gain amplifier) resulted in very high total output noise.

The designer was using LTspice to run AC noise simulations to calculate the onoise, then dividing it by the gain of the comparator to get the equivalent input noise (inoise). The calculated inoise was then used to run a transient simulation. SPICE simulators include a small-signal AC noise analysis that calculates the noise contributed by each device as a function of frequency, but this analysis applies to circuits operating at a constant DC operating point – a poor fit for a comparator swinging through its transition region.
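As a back-of-the-envelope sketch of that arithmetic (standard small-signal noise theory, not the designer’s exact script): the total onoise is the output noise spectral density integrated over the bandwidth, the equivalent inoise divides out the gain, and the jitter produced at a threshold crossing scales inversely with the output slew rate:

$$ v_{n,\mathrm{out}} = \sqrt{\int_{f_1}^{f_2} S_{n,\mathrm{out}}(f)\,df}, \qquad v_{n,\mathrm{in}} \approx \frac{v_{n,\mathrm{out}}}{|A_v|}, \qquad \sigma_t \approx \frac{v_{n,\mathrm{out}}}{\left| dV_{\mathrm{out}}/dt \right|} $$

The last relation is why the transition region looks so alarming in an AC noise run: the gain, and hence the integrated onoise, is huge there, but the jitter that noise actually produces depends on the slew rate at the crossing.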

To obtain better insight into the problem, we used Tanner T-AFS Transient Noise Analysis (TNA) to simulate realistic noise with the customer’s data. We set up a test bench in S-Edit to run TNA with T-AFS. Working closely with the designer, we measured the maximum peak-to-peak period jitter and the maximum peak-to-peak absolute jitter. The transient analysis was first run without noise, then with a noise seed of 500 and a noise scale of 1. The results were analyzed and compared in W-Edit to determine the effect of noise. To see the spread of period jitter, a set of simulations was run, each with a different seed, and statistical measurements were performed using histograms in W-Edit. The results informed the final design, supporting a successful tape-out.
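For readers who want to reproduce the measurement idea on exported waveform data, here is a minimal sketch in Python (NumPy only). The waveform, noise level, threshold band and seed count are synthetic stand-ins for illustration, not values from the customer design or from W-Edit:

```python
import numpy as np

def rising_edges(t, v, low=-0.1, high=0.1):
    """Rising-edge crossing times, using hysteresis to reject noise chatter."""
    edges, armed = [], v[0] < low
    for i in range(1, len(v)):
        if armed and v[i] >= high:
            edges.append(t[i])
            armed = False
        elif not armed and v[i] < low:
            armed = True
    return np.array(edges)

def pp_period_jitter(t, v):
    """Peak-to-peak period jitter: spread of successive edge-to-edge periods."""
    periods = np.diff(rising_edges(t, v))
    return periods.max() - periods.min()

t = np.linspace(0.0, 50e-6, 50_000)               # 50 us transient window
jitter = []
for seed in range(20):                            # one run per noise seed
    noise = np.random.default_rng(seed).normal(0.0, 0.05, t.size)
    v = np.sin(2 * np.pi * 1e6 * t) + noise       # stand-in comparator output
    jitter.append(pp_period_jitter(t, v))
print(f"worst-case P2P period jitter across seeds: {max(jitter):.3e} s")
```

A histogram of the per-seed values, as done in W-Edit, then shows the spread of period jitter rather than a single worst case.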

Tanner EDA will exhibit at DAC 2013, June 2-4, in booth 2442 and in the ARM Connected Community® (CC) Pavilion, booth #921. The entire analog and mixed-signal design suite will be demonstrated:

  • Front-end design tools for schematic capture, analog SPICE and FastSPICE simulation, digital simulation, transient noise analysis, waveform analysis,
  • Back-end tools, including analog layout, SDL, routing and layout accelerators as well as static timing and synthesis, and
  • Physical verification, including DRC and LVS.

Visit www.tannereda.com to learn more. DAC demo sign-ups are here.

Tanner EDA provides a complete line of software solutions that drive innovation for the design, layout and verification of analog and mixed-signal (A/MS) integrated circuits (ICs) and MEMS. Customers are creating breakthrough applications in areas such as power management, displays and imaging, automotive, consumer electronics, life sciences, and RF devices. A low learning curve, high interoperability, and a powerful user interface improve design team productivity and enable a low total cost of ownership (TCO). Capability and performance are matched by low support requirements and high support capability as well as an ecosystem of partners that bring advanced capabilities to A/MS designs.

Founded in 1988, Tanner EDA delivers just the right mixture of features, functionality and usability. The company has shipped over 33,000 licenses of its software to more than 5,000 customers in 67 countries.





MOS-AK/GSA Munich Workshop
by Daniel Nenni on 04-29-2013 at 4:06 pm

The MOS-AK/GSA Modeling Working Group, a global compact modeling standardization forum, completed its annual spring compact modeling workshop on April 11-12, 2013 at the Institute for Technical Electronics, TUM, Munich. The event received full sponsorship from leading industrial partners including MunEDA and Tanner EDA. The German Branch of IEEE EDS was the workshop technical program promoter. More than 30 international academic researchers and modeling engineers attended three sessions to hear 12 technical compact modeling presentations.

The workshop’s three sessions focused on common compact modeling themes: how to consolidate and build a consistent simulation hierarchy at all levels of advanced TCAD numerical modeling; compact/SPICE modeling for analog/mixed-signal circuits; and corner modeling and statistical simulations.

The MOS-AK/GSA speakers discussed:

  • Statistical modeling with backward propagation of variance (BPV) and covariance equations (K.-W. Pieper; Infineon)
  • Circuit sizing: corner model challenges and applications (M. Sylvester; MunEDA)
  • Compact modeling activities in the framework of the EU-funded “COMON” project (B. Iñiguez; URV)
  • Effective device modeling and verification tools (I. Nickeleit; Agilent)
  • Modeling effects of dynamic BTI degradation on analog and mixed-signal CMOS circuits (L. Heiss; LTE, TUM)
  • STEEPER: tunnel field effect transistors (TFETs) technology, devices and applications (T. Schulz; Intel)
  • Current and future challenges for TCAD (C. Jungemann; RWTH)
  • Advances in Verilog-A compact semiconductor device modeling with Qucs/QucsStudio (M. Brinson; London Metropolitan University)
  • FDSOI devices benchmarking (B.-Y. Nguyen; SOITEC)
  • COMON: SOI multigate devices modeling (A. Kloes; THM)
  • COMON: FinFET modeling activities (U. Monga; Intel)
  • COMON: HV MOS devices modeling (M. Bucher; TUC)

The event was accompanied by a series of software/hardware demos by MOS-AK/GSA industrial partners: Agilent, MunEDA and Tanner EDA. The technical presentations and demo materials are available for download at: http://www.mos-ak.org/munich_2013/

The MOS-AK/GSA Modeling Working Group is coordinating several upcoming modeling events: a special compact modeling session at the MIXDES Conference in Gdynia (https://www.mixdes.org); an autumn Q3/2013 MOS-AK/GSA workshop in Bucharest; a winter Q4/2013 MOS-AK/GSA meeting in Washington DC; and a spring Q2/2014 MOS-AK/GSA meeting in London (http://www.mos-ak.org).

About MOS-AK/GSA Modeling Working Group:
In January 2009, GSA merged its efforts with MOS-AK, a well-known industry compact modeling volunteer group primarily focused in Europe, to re-activate its Modeling Working Group. Its purpose, initiatives and deliverables coincide with MOS-AK’s purpose, initiatives and deliverables. The Modeling Working Group plays a central role in developing a common language among foundries, CAD vendors, IC designers and model developers by contributing and promoting different elements of compact model standardization and related tools for model development, validation/implementation and distribution.

About MunEDA:
MunEDA provides leading EDA software technology for the analysis, modeling, optimization, and verification of performance and yield of analog, mixed-signal and digital designs. Founded in 2001, MunEDA is headquartered in Munich, Germany, and is represented worldwide by leading EDA distributors. MunEDA solutions are in industrial use by leading semiconductor companies in the areas of communication, computing, memories, automotive, and consumer electronics.

About Tanner EDA:
Tanner EDA provides a complete line of software solutions that drive innovation for the design, layout and verification of analog and mixed-signal (A/MS) integrated circuits (ICs) and MEMS. Customers are creating breakthrough applications in areas such as power management, displays and imaging, automotive, consumer electronics, life sciences, and RF devices. Capability and performance are matched by low support requirements and high support capability as well as an ecosystem of partners that bring advanced capabilities to A/MS designs.



Challenges of 20nm IC Design
by Daniel Payne on 04-29-2013 at 11:38 am

Designing at the 20nm node is harder than at 28nm, mostly because of the lithography and process variability challenges that in turn require changes to EDA tools and mask making. The attraction of 20nm design is realizing SoCs with 20 billion transistors. Saleem Haider from Synopsys spoke with me last week to review how Synopsys has re-tooled their EDA software to enable 20nm design.


Saleem Haider, Synopsys


NVM IP Security Solutions…
by Eric Esteve on 04-29-2013 at 8:51 am

If you need to store securely in your SoC data that is unique by nature, such as an encryption key or a software code update, then you will probably decide to implement a Non-Volatile Memory (NVM) block, delivered as an IP function, instead of using an expensive CMOS technology with embedded Flash capability. For example, Synopsys DesignWare non-volatile memory (NVM) AEON®/multiple-time programmable (MTP) EEPROM IP delivers EEPROM-level performance in standard CMOS processes. The target applications for NVM IP range from multimedia SoCs (for Digital Rights Management) to calibration and trimming of analog chips. The silicon-proven DesignWare NVM IP is delivered as a hard GDSII block and includes all the required control and support circuitry, including the charge pump and high-voltage distribution circuits. As we will see, NVM IP must be available in multiple technology nodes and multiple process flavors.

NVM IP is frequently used in wireless (Bluetooth, digital radio, NFC) and digital home (HDMI port processors) SoCs to store customer configuration and calibration data. In this case, the NVM IP must be available in the most advanced nodes in order to support the technologies used for wireless and digital home applications. Such applications do not require very high endurance; a maximum of 10,000 write cycles is largely enough. Likewise, a temperature range of -40°C to +125°C is well suited to digital home SoCs, as indicated in the table below:

For NVM used in high-voltage technologies, like HV CMOS or Bipolar-CMOS-DMOS, to support power management (battery fuel gauge, digital power) applications such as performance tracking or configuration settings, the requirements are different. The IP needs to be qualified over a wider temperature range, -40°C to +150°C, and designing on more mature technology nodes allows higher endurance, up to 1,000,000 write cycles in 250 nm technologies. Read and programming voltages can be much higher as well, as you can see in the table below:

A third family of NVM IP is specifically dedicated to analog or mixed-signal designs, for applications like encryption or authentication, EEPROM replacement, and customization or calibration settings. The target technologies range from 180 nm down to 90 nm, supporting high endurance of at least 100,000 write cycles, and up to 1,000,000 in some cases.

You can learn much more about all of the DesignWare NVM IP, and start integrating MTP capabilities into your advanced SoCs, by going to the above link and downloading one of these papers:

Comparison of data storage methods in floating gate and antifuse NVM IP technologies

Personally, I would recommend the white paper titled “Protect your Electronic Wallet Against Hackers”. This paper will not only teach you about the different data storage methods, such as “floating gate” and “antifuse” NVM IP technologies; it will also explain what level of protection these different design approaches offer. Even more interesting, the paper precisely describes three common reverse engineering techniques used by hackers to gain access to the supposedly safely stored information – your wallet.

The aim of the paper is to direct you to the NVM IP technology most resistant to reverse engineering, as you can see from this extract:

The Most Resistant NVM IP Technology to Reverse Engineering Schemes
Both antifuse and floating gate NVM IP technologies are relatively immune to reverse engineering, and both require someone with the right equipment and right level of skill to extract the contents. But there are several advantages of floating gate for MTP applications over antifuse for OTP applications that designers should consider when developing SoCs for data storage applications that have higher security requirements:

  • Floating gate technology makes no physical change to the silicon structure and thus is more resistant to techniques such as top-down planar inspection
  • The contents of a floating gate technology can be disturbed or erased by plasma etch techniques during the preparation process. Antifuse technology is not affected by plasma etch and samples can be prepared easily
  • The act of attempting to reverse engineer a floating gate technology using voltage contrast will erase the data contents after one attempt. Antifuse technology allows for multiple attempts without disturbing the data contents.

The paper shows you, step by step, how hackers proceed to reverse engineer NVM IP (I don’t think we are talking about 15-year-old geeks…), for example with this voltage contrast measurement technique:

De-processing required for effective voltage contrast measurements

Just have a look at the beginning of the paper summary here:

The capability to protect personal information from hackers through a secure element is critical to the continued development of the NFC ecosystem. Design engineers and system architects who most effectively implement data security from the start will have a competitive advantage in the marketplace. One of the key aspects of data security in NFC is the NVM in which the data is stored. There are two main technologies in use today for NVM IP in SoC applications: antifuse for OTP and floating gate for MTP. Understanding the basic differences between the two technologies, and their impact on which reverse engineering techniques are effective, is critical to making the right NVM IP technology choice for the end application. After reviewing three common reverse engineering techniques, the conclusion is … (just go to Protect your Electronic Wallet Against Hackers to get the final words…)

Eric Esteve from IPNEST



Proper Hand-Off of Clock Tree Synthesis Specifications
by Randy Smith on 04-28-2013 at 1:00 pm

Given today’s design requirements with respect to low power, there is increasing focus on the contribution of a design’s clock trees to total power. The design decisions made by the front-end team to achieve high performance without wasting power must be conveyed to the back-end team, and this hand-off must be accurate and complete. A key component of that hand-off is the clock tree synthesis (CTS) constraints.

Let’s look at what can go wrong and how to avoid these pitfalls.

The clock trees in chips ten years ago were fairly simple, and most chips had only a handful of clock trees. In today’s technologies this has exploded into a forest of clock trees. Sheer volume alone points to the need for automation. But even more daunting are the complexities of today’s clock trees. Clock gating has been in use for a while now to help reduce power. Included IP blocks have their own clock requirements. There are generated clocks, overlapping clocks, clock dividers, and so on. All of this information needs to be packaged by the front-end team into the SDC file and clock specification (clock constraint) file for use by the back-end team.

ICScape’s ClockExplorer was developed to help both teams understand the entire clock graph being developed. It crosschecks the equivalence of constraints generated by the front-end and back-end teams, and both teams can use it to analyze and sign off the netlist and clock constraints. ClockExplorer checks the clock structure and aids in generating constraints for a CTS tool, including CTS sequencing for complex situations with multiple SDC files and overlapping clock trees.
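To make the crosscheck idea concrete, here is a toy Python sketch that compares clock definitions from two SDC files. This is emphatically not ClockExplorer’s algorithm (which also handles generated clocks, exceptions and CTS sequencing); the regular expression covers only one simplified create_clock form, and the file contents are invented:

```python
import re

# Simplified form only: create_clock -name NAME -period P [get_ports PORT]
CLOCK_RE = re.compile(
    r"create_clock\s+-name\s+(\S+)\s+-period\s+([\d.]+)\s+\[get_ports\s+(\S+)\]"
)

def parse_clocks(sdc_text):
    """Map clock name -> (period_ns, source_port) from create_clock lines."""
    return {m[0]: (float(m[1]), m[2]) for m in CLOCK_RE.findall(sdc_text)}

def diff_clocks(front_end_sdc, back_end_sdc):
    """Report clocks missing from either side or defined inconsistently."""
    fe, be = parse_clocks(front_end_sdc), parse_clocks(back_end_sdc)
    for name in sorted(fe.keys() | be.keys()):
        if name not in be:
            print(f"{name}: missing from back-end SDC")
        elif name not in fe:
            print(f"{name}: missing from front-end SDC")
        elif fe[name] != be[name]:
            print(f"{name}: mismatch {fe[name]} vs {be[name]}")

fe_sdc = """
create_clock -name clk_core -period 1.25 [get_ports clk_core]
create_clock -name clk_usb -period 8.00 [get_ports clk_usb]
"""
be_sdc = "create_clock -name clk_core -period 1.50 [get_ports clk_core]"
diff_clocks(fe_sdc, be_sdc)   # flags the period mismatch and the lost clk_usb
```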

If these tasks are done manually by either team, mistakes are much more likely to occur.

Beyond the important capabilities of simply generating and checking the constraints, ClockExplorer also optimizes the clock topology to reduce latency. As a visual aid, it also generates a clock schematic, greatly assisting reviews and discussions between the teams. For a more detailed look at all of ClockExplorer’s analysis features, including its SDC constraint checking, see the white paper.


By using tools such as ICScape’s ClockExplorer, front-end and back-end design teams should be able to cut design errors caused by improper understanding or generation of clock tree synthesis constraints. They will have a common view of the clock system, with consistent checking and automated generation handling the key aspects of the constraint files. This should make a difficult task much easier and more reliable. Where discrepancies do crop up, the automatically generated clock schematics should make debugging, and communication between the teams, much easier.

You can also see ICScape at DAC. Schedule a meeting by clicking here.


Reduce Errors in Multi-threaded Designs
by Randy Smith on 04-28-2013 at 1:00 pm

Many advanced algorithmic IPs are described in C++; we use this language because of its flexibility. Of course, software algorithms are written to be executed on processors, so they don’t solve all the issues of getting the algorithm implemented in hardware directly. This is not simply a high-level synthesis (HLS) issue. Usually, for implementation in hardware, a software algorithm needs to be transformed to operate on a streaming, sample-by-sample basis. To achieve the required performance, a monolithic software algorithm is implemented as a chain of modules operating in parallel; the single-threaded C++ algorithm won’t meet the system constraints if it is left in single-threaded form. For such a multi-module or multi-threaded implementation you will need more architectural information, such as the macro-level block diagram and the number and type of interfaces between the modules. This type of information is best captured in a language like SystemC. But how do you get there?

Making transformations like these requires familiarity with both the hardware and the algorithms. One part that can be automated is the insertion of the communication channels that properly interface the threads or modules. This is not just the synchronization mechanism, but also storage and buffering, since the streams and modules are working independently. The communication between blocks is a necessary part of the design but may not specifically add unique value.

Forte has created an application inside its Cynthesizer Workbench called Interface Generator which automatically generates the mechanisms to efficiently manage the data transfer between multiple threads and modules. The important decisions requiring algorithmic understanding, such as the types of channel to use to interface two modules, are left to the designer. The designer is given many types of channels to choose from – the data type, data storage capacity, how the data is to be synchronized, etc.

Using this interface generation approach, these custom SystemC channels are added to the design library. The designer can use a set of function calls implemented in the channels to handle the transfer of data between the streams or modules. The standardized function calls created by the Interface Generator give the designer a new layer of abstraction, hiding the details and reducing implementation errors inherent in creating the interface code manually (see this video for an example). Errors are reduced by using these standardized function calls to implement the complex interface behavior. Also, this gives the designer the flexibility to try different types of channels to see which type of channel is best for meeting the target specification without having to write low-level RTL protocols for each attempt. Accessing the communication channel by calling these functions allows the designer to work at a higher level of abstraction, with the details of the storage and synchronization protocols encapsulated inside the channel.
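The generated channels themselves are SystemC, but the underlying idea, a bounded FIFO supplying storage, buffering and blocking synchronization between independently running modules, can be sketched in a few lines of Python. This is a conceptual analogue only, not Forte’s generated code:

```python
import threading
import queue

# The channel: a bounded FIFO. Its depth is a design choice, analogous to
# selecting a channel's data storage capacity in the Interface Generator.
channel = queue.Queue(maxsize=8)

def producer():
    """Upstream module: streams samples into the channel."""
    for sample in range(16):
        channel.put(sample)        # blocks when the FIFO is full (backpressure)
    channel.put(None)              # end-of-stream marker

def consumer():
    """Downstream module: pulls samples as they arrive."""
    while (sample := channel.get()) is not None:   # blocks when FIFO is empty
        print(f"processed sample {sample}")

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

The point of the abstraction is the same in both worlds: the modules only ever call put/get-style functions, so the synchronization and storage details can be swapped without touching the algorithm code.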

It seems to me that this approach would be useful in a huge number of applications. For example, many video vendors have C or C++ implementations of picture-improvement algorithms, and the algorithms are implemented under different specifications each time they are used. To meet the various constraints placed on the picture-improvement module, such as area, performance (e.g., frames per second), and power, designers will explore different ways to parallelize the design. How the threads or modules are connected will have an impact on the correctness of the design as well as its performance. Using the Interface Generator, the designer can easily experiment with multiple channel types to see which types meet the overall design specification.

Another situation where such a problem may come up is when an algorithm is deemed too large and needs to be broken into multiple modules. This could be due to a chip’s floorplan constraints or to allow the design to be broken down for easier verification. It could be a way to save cost by splitting an algorithm into two or more less expensive FPGAs instead of one large FPGA, or it could be to assign the work to a number of designers working at the same time. The value of the Interface Generator is quite clear here as errors are reduced and multiple different interface approaches can be tried in order to meet the design objectives. A video showing usage of the Interface Generator in the design of an edge detection filter can be found here.

Bottom line: Designers who need to take an untimed C++ design and implement it as a multi-threaded or multi-module hardware design can benefit from the automatic creation of communication channels.

For more information see the Forte website here.

Forte is an ‘I LOVE DAC’ sponsor. To get your free DAC badge, or to sign up for a Forte demo at DAC, click here.


Using Virtual Platforms to Make IP Decisions
by Paul McLellan on 04-27-2013 at 10:48 am

Most SoC designs these days consist largely, but not entirely, of purchased IP blocks. But there are lots of tradeoffs involved in selecting IP blocks, and since those tradeoffs change with process node, even decisions that seem “obvious” based on the last generation of the design may not be so clear-cut. Even if you have already decided, due to existing software, to use (say) an ARM processor, there are a number of potential processors that could do the job and hit different performance/power points, not to mention area and license fees.

Caches are a notoriously hard area to get right. Too much cache and you waste area and leakage power; too little and the performance is not what you expect. Not to mention power, since cache misses are far more expensive from a power point of view than hits. Caches are very complex these days, with multiple masters, GPUs, snooping for coherency and so on. The caches also interact very closely with decisions made about the interconnect (buses, NoC, etc.) in non-obvious ways.
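The size/performance/power tradeoff can be framed with textbook cache arithmetic (standard analysis, not specific to any Carbon model). Both average access time and average access energy carry the miss rate $m$ as a multiplier on a large penalty term:

$$ \mathrm{AMAT} = t_{\mathrm{hit}} + m \cdot t_{\mathrm{penalty}}, \qquad E_{\mathrm{access}} = E_{\mathrm{hit}} + m \cdot E_{\mathrm{miss}} $$

With a 1-cycle hit and a 100-cycle miss penalty, cutting the miss rate from 4% to 2% drops AMAT from 5 cycles to 3. The same leverage applies on the energy side, which is why sizing a cache by guesswork is expensive in both directions.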

Another difficult area is software/hardware tradeoffs. In a prior version of a design, it might have been necessary to use a special handcrafted RTL block to achieve the performance necessary. But in a later process node this might be better implemented either in software on the main control processor or perhaps in software on a specialized offload processor.


So how do you make these decisions? Obviously it is too complicated to actually put all the RTL together for the entire chip just to decide whether that is really the RTL you need. Besides, RTL is too slow to run a full load of software (Android, for example), and these days the purpose of many SoCs is to run software as efficiently as possible, so it is not possible to do the analysis by looking closely at the hardware alone without running realistic software scenarios.

The answer is to use virtual platforms, which can quickly be reconfigured to swap IP blocks in and out, vary the size of the cache, switch from an ARM Cortex-A15 to a Cortex-A9, and so on, all while running fast enough that you can boot the operating system, run apps, run standard benchmarks, run test software and generally perform analysis at whatever depth you want.

Then, when you have made all your decisions, you have a virtual platform ready to deliver to the software team so that they can start work in parallel with the SoC design. Since there are typically more software engineers than IC design engineers on a project these days, this is especially important. Without having a virtual platform, it is easy for software engineers to “pretend to program” since it is impossible to be effective without being able to run the code immediately.

Carbon CTO Bill Neifert’s blog on this subject is here. Andy Meier’s blog on CPU selection is here.


GSA European Executive Forum
by Paul McLellan on 04-27-2013 at 9:58 am

The first week of June is DAC in Austin, of course. But over in Europe, on the Wednesday and Thursday of that week, June 5-6, is the GSA European Executive Forum, bringing together C-level executives from all over Europe. It actually runs from 2pm on Wednesday until about 2pm on Thursday, including a VIP dinner on Wednesday evening sponsored by eSilicon. The overall theme is The Path to Global Growth: Optimism, Opportunity and the Role of Europe. The conference is held at the Sofitel Munich Bayerpost.

The first session is about wireless and the Internet of Things (IoT). It opens with a keynote, Reimagine Wireless: The Internet of Things Comes of Age, which will present a vision of the wireless landscape and the key factors spurring its revolution, such as LTE deployment and machine-to-machine communication.

This is followed by a panel moderated by Aart de Geus (who must be missing at least part of DAC to be there) ranging over the whole topic of wireless, IoT, Europe and so on. The panelists are:

  • Stan Boland, CEO, Neul
  • Matthias Bopp, CEO, Micronas
  • Graham Budd, COO, ARM
  • Maria Marced, President, TSMC Europe
  • Henri Seydoux, Founder, Chairman & CEO, Parrot

After that is a fireside chat in which Joep van Beurden, CEO of Cambridge Silicon Radio (although I believe they are officially just CSR these days) interviews Rick Clemmer, the CEO of NXP Semiconductors which is, of course, the spin-out from Philips of the old Philips Semiconductors (both companies always with an ‘s’ on the end, don’t forget).

There is a reception and then the aforementioned dinner.

Thursday morning starts with a keynote on Enhancing Automotive Safety and Efficiency by Mark Basten, Group Chief Engineer, Electrical & Electronic, Tata Motors European Technical Centre. Tata is of course based in India but in Europe is probably most famous for being the current owner of Jaguar and Land Rover.

That is followed by a panel session moderated by Ingo Schroeter, Partner and Managing Director, The Boston Consulting Group on Reengineered and Remodeled: The Connected Car. The panelists are:

  • Hans Adlkofer, VP Automotive System Group, Infineon Technologies
  • Fabio Marchio, Group VP, GM, Automotive Microcontroller and Infotainment Division, STMicroelectronics
  • Lars Reger, VP, Head of Strategy, New Business and R&D, Automotive Business Unit, NXP Semiconductors
  • Hanns Windele, VP, Europe and India, Mentor Graphics

The topic then switches from Automotive to Energy with a keynote from André-Jacques Auberton-Hervé, Chairman & Chief Executive Officer, Soitec on Leading the Sustainable Energy Future.

There is then a panel session, moderated by David Baillie, CEO, CamSemi on Smart Energy Management. The panelists are:

  • Kourosh Boutorabi, Head of Energy Management Group, Atmel
  • Sandro Cerato, VP, Applications and System, Member of the Board, Power Management & Multimarket Division, Infineon Technologies
  • Eugen Mayer, Managing Director, Power Plus Communications
  • Dr. Hans Stork, CTO & Senior Vice President, ON Semiconductor

The conference wraps up with lunch sponsored by GlobalFoundries.

Full details are here.


TSMC ♥ Solido
by Daniel Nenni on 04-27-2013 at 8:00 am

Process variation has been a top trending term since SemiWiki began, as a result of the articles, wikis, and white papers posted on the Solido landing page. Last year Solido and TSMC presented a joint webinar and an article in EETimes, and Solido released a book on the subject. Process variation is a challenge today at 28nm, and it gets worse at 20nm and 16nm, so you had better be ready.

Solido and TSMC recently completed qualification of Solido Variation Designer for 20-nm memory and standard cell designs. Solido’s software provides accurate, scalable and verifiable 6-sigma design coverage on TSMC 20-nm designs in orders-of-magnitude fewer simulations than Monte Carlo analysis.


Memory bitcells and sense amps are the first design blocks to take advantage of each shrink in process technology. Transistors are now so small that atomic-scale variances directly impact design variation. Monte Carlo, the standard for statistical analysis, has not been able to scale to the demands of memory design, and alternate solutions are inaccurate, scale poorly and are difficult to verify.

Consider a 256 Mb SRAM design, which consists of 256M bitcells and 64k sense amps. For the SRAM to yield, the bitcell yield needs to reach 6-sigma and the sense amp yield 4.5-sigma. Verifying to this sigma, however, would need billions of Monte Carlo samples, which is far too slow.
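Those sigma targets follow from simple expected-failure arithmetic (a standard calculation, consistent with the figures above). With $N$ identical instances, each failing with probability $p = 1 - \Phi(\sigma)$, the expected number of failures is

$$ \mathbb{E}[\mathrm{fails}] = N\,\bigl(1 - \Phi(\sigma)\bigr). $$

Since $1-\Phi(6) \approx 9.9\times10^{-10}$, a 6-sigma bitcell gives $2.56\times10^{8} \times 9.9\times10^{-10} \approx 0.25$ expected failing bitcells, and since $1-\Phi(4.5) \approx 3.4\times10^{-6}$, a 4.5-sigma sense amp gives $6.4\times10^{4} \times 3.4\times10^{-6} \approx 0.22$ expected failing sense amps. Verifying a $10^{-9}$ failure probability directly, however, takes on the order of $10^{10}$ Monte Carlo samples just to observe a handful of failures, hence the billions.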

Solido’s High-Sigma Monte Carlo (HSMC) was shown to overcome the key drawbacks of traditional Monte Carlo analysis, providing:

  • Significantly fewer simulations
  • SPICE and Monte Carlo accurate results in the regions of interest
  • Scalable support on all design blocks used in memory design
  • Verification, for high confidence in results

Solido’s System Monte Carlo adds yield analysis capability at the array level:

  • Providing fast 3-sigma analysis across the array
  • Leveraging probability density function (PDF) data from cell-level analysis
  • Reporting tradeoffs between performance and yield
  • Fast enough to enable exploration of different array configurations

Results of running Solido on a TSMC 20-nm memory design:

  • Measured bitcell performance to 6.15 sigma
    • Analyzed 12.8 billion Monte Carlo samples in only 5,355 simulations
  • Measured sense amp performance to +/- 4.5 sigma
    • Analyzed 3.2 million Monte Carlo samples in only 2,727 simulations
  • Extracted probability density function (PDF) of bitcell and sense amp
  • Measured Monte Carlo based yield on a 64Mb array for 6 different read speeds in 1.5 hours
  • Improved memory specs by 11% to 52%

Retargeting standard cell libraries to new technologies is expensive: it takes lots of simulator licenses and design time, layout has become part of the design loop, and increasing variability makes it difficult to size cells optimally for yield and performance. High-sigma analysis is necessary for the latest process technologies, but it needs too many Monte Carlo samples to achieve accuracy, and extrapolation from fewer samples is unreliable and inaccurate.

Cell Optimizer adds automation for sizing standard cells, providing:

  • Full script-based operation
  • Design sizing across multiple corners and testbenches
  • Support for pre- and post-layout netlists
  • Simulator independence

On the initial TSMC 20-nm standard cell design, 3 out of 4 measurements failed to meet specification. After sizing, all measurements met specification.
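As a toy illustration of corner-aware sizing (emphatically not Solido’s Cell Optimizer algorithm; the delay model, corner names and numbers below are invented), a simple loop picks the smallest drive strength whose worst-corner delay still meets spec:

```python
SPEC_DELAY_PS = 42.0
# Hypothetical per-corner delay scale factors (slow/typical/fast).
CORNERS = {"ss_0.81V_125C": 1.30, "tt_0.90V_25C": 1.00, "ff_0.99V_m40C": 0.80}

def delay_ps(drive_x, corner_scale):
    """Invented delay model: larger drive strength -> proportionally faster."""
    return corner_scale * 60.0 / drive_x

def size_cell(drives=(1, 2, 4, 8)):
    """Smallest drive whose worst-case delay across all corners meets spec."""
    for d in sorted(drives):
        worst = max(delay_ps(d, s) for s in CORNERS.values())
        if worst <= SPEC_DELAY_PS:
            return d, worst
    raise ValueError("no drive strength meets spec at all corners")

drive, worst = size_cell()
print(f"chosen drive: X{drive}, worst-corner delay {worst:.1f} ps")
```

A real flow would replace the toy delay model with SPICE runs per corner and testbench, which is exactly the loop Cell Optimizer automates.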

Sign up for a DAC demo here:

http://www.solidodesign.com/

Solido Design Automation Inc. is a leading provider of variation-aware custom integrated circuit design software. Solido Variation Designer and application packages are used by analog/RF, IO, memory and standard cell digital library designers to improve design performance, parametric yield and designer productivity. Solido has pioneered a proprietary and patent-pending set of algorithms forming the core of its technology.
