
Robust Reliability Verification: Beyond Traditional Tools and Techniques
by SStalnaker on 05-31-2013 at 7:10 pm

by Matthew Hogan, Mentor Graphics

At all process nodes, countless hours are diligently expended to ensure that our integrated circuit (IC) designs will function in the way we intended, can be manufactured with satisfactory yields, and are delivered in a timely fashion while meeting the market need. Traditional IC verification relies on a collection of well-known and well-understood tools. Design rule checking (DRC), layout vs. schematic comparison (LVS), electrical rule checking (ERC), parasitic extraction (PEX), design for manufacturing (DFM) and simulation (most often SPICE and timing closure) are all used as part of this cohesive verification flow that provides us the insight required to find and correct any errors or omissions in our design process. Many design errors lead to hard failures in manufacturing, and can be readily identified and fixed, like a metal width that is too small for a process node layer, cells that were incorrectly placed, or shorts across other elements in the design. Finding and fixing these issues is the mainstay of IC verification.

The legacy of simulation
SPICE simulation, and the associated parasitic extraction that it uses, plays a vital role in identifying less obvious errors—those that deal primarily with reliability. Ensuring that you have the correct simulation vectors to provide sufficient coverage while validating the waveforms or analyzing messages from your simulation environment can be time-consuming and CPU-intensive activities, where results often require both expert interpretation and the keen eye of someone who understands the subtleties of each particular design.

Finding scalable alternatives
Whichever Greek philosopher first said that necessity is the mother of invention must have foreseen the challenges that the IC industry would one day face. Time and again, when faced with a new set of requirements not addressed by existing tools, engineers have leveraged their imaginations to create innovative solutions, designs, and process flows.

The same is true for reliability verification. With larger designs, smaller process nodes, and the increased pressure on time-to-market schedules and productivity targets, many design teams are turning to new alternatives that provide critical advantages over existing tools:

  • a simple-to-use environment for the designer and verification engineer,
  • fast runtime (that can scale to the full chip),
  • a cohesive platform that is able to validate a wide range of issues

One tool that has found a strong role in reliability verification is Calibre® PERC™. With its ability to evaluate both the logical intent and physical implementation of the design, Calibre PERC provides a unique and powerful reliability verification platform not previously available. While there are many applications where Calibre PERC technology is successfully leveraged, one of the most common uses is the automated identification and resolution of typical reliability design challenges:

  • Electrostatic discharge (ESD)
  • Electrical overstress (EOS) and power intent
  • Voltage-aware DRC

Many of these topics may be quite familiar to you, or perhaps you already have solutions in place today to help with these issues, but let’s go through them one at a time to provide a broader understanding for all.

Electrostatic Discharge
Designers have always needed to ensure that designs are robust from an ESD perspective. To provide that surety, they need to know what structures the design requires to protect pins from an ESD event, and they need to make sure that the implementation of those structures is correct. They also need to verify that the design complies with the topology rules (that is, the correct combination of protection devices are in place), and that these devices are robust enough to handle the ESD event.

Topology rules exist to help the designer verify that the layout correctly implements the design intent. For example, do you have robust ESD structures in place (primary and secondary) to protect the pins? Are there anti-parallel (back-to-back) diodes in place for multiple power domain designs? Do you have level shifters in place for signals? Are the metal widths sufficient? Are there enough vias?

Point-to-point and current density simulations ensure metal lines and vias are sufficiently robust to handle the expected energy, should an ESD event occur. However, these types of issues are difficult to identify using traditional simulation technologies.
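
As a rough sketch of what a point-to-point resistance check involves (generic Python over a toy resistor network, not Calibre PERC rule syntax; the node names, resistor values, and 1-ohm limit are hypothetical), the extracted parasitic network between a pad and its ESD clamp can be treated as a resistor graph and its effective resistance compared against a limit:

    # Generic point-to-point resistance sketch (illustrative only).
    # The extracted parasitic network is modeled as a resistor graph, and the
    # effective resistance between the pad and the ESD clamp is computed from
    # the pseudo-inverse of the weighted graph Laplacian.
    import numpy as np

    def effective_resistance(nodes, resistors, a, b):
        idx = {n: i for i, n in enumerate(nodes)}
        L = np.zeros((len(nodes), len(nodes)))
        for n1, n2, ohms in resistors:
            g = 1.0 / ohms                      # each resistor contributes a conductance
            i, j = idx[n1], idx[n2]
            L[i, i] += g
            L[j, j] += g
            L[i, j] -= g
            L[j, i] -= g
        Lp = np.linalg.pinv(L)                  # Moore-Penrose pseudo-inverse
        i, j = idx[a], idx[b]
        return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

    # Hypothetical extracted network: PAD reaches the clamp over two parallel routes.
    nodes = ["PAD", "n1", "n2", "CLAMP"]
    resistors = [("PAD", "n1", 0.4), ("n1", "CLAMP", 0.5),
                 ("PAD", "n2", 0.7), ("n2", "CLAMP", 0.6)]
    LIMIT_OHMS = 1.0                            # hypothetical ESD rule limit
    r = effective_resistance(nodes, resistors, "PAD", "CLAMP")
    print(f"PAD->CLAMP = {r:.3f} ohm:", "OK" if r <= LIMIT_OHMS else "VIOLATION")

In a production flow, the resistor network, the pad-to-clamp pairing, and the limit would of course come from the extracted layout and the foundry or company ESD rules, not a hand-written list.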

Generalized ESD cells are often designed for this use, but the designer must still ensure that the cell is placed correctly into the design. Additionally, the chip design may change after the ESD IP is placed, requiring the designer to adjust the ESD IP to ensure it fits within the new design parameters (area, performance, etc.). In these situations, it is essential to validate the philosophy of the ESD intent, not just check that the ESD IP has remained intact and unchanged. ESD specialists are often called upon to provide custom solutions for each design, while keeping an eye out for known issues and previous concerns. The best IP in the world can be compromised by a simple implementation oversight.

While never an ideal solution, designers have long relied on visual inspection and manual methods to evaluate the accuracy of their ESD structure implementations. The large number of pin pairs in today’s devices makes this approach a daunting, if not impossible, task. The designer simply can’t select a “typical” connection and evaluate just a few; rather, every reasonable combination must be evaluated. This challenge was one of the catalysts that led to a rethinking of how ESD structures are evaluated [1].

Calibre PERC can automatically select and analyze all of the required combinations. For schematic checking, the rules are directed more towards verifying the presence of the appropriate protection schemes from a topological perspective. Users can perform checks on circuitry directly connected to pads, as well as checks on the ESD network. For layout checking, the rules focus on verifying the point-to-point parasitic resistance between the pad and the ESD device, checking current density between pad and the ESD device, detecting pmos/nmos devices sharing the same well, detecting pmos/nmos field oxide parasitics, detecting latch-up issues, and more.
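
As a toy illustration of the topological side of such checks (plain Python over a hypothetical device list; this is not how Calibre PERC rules are written), the sketch below verifies that every pad net has both an up diode to the supply and a down diode to ground:

    # Toy topological ESD presence check (illustrative only; not PERC rule syntax).
    # Each device is (name, type, anode_net, cathode_net); all nets are hypothetical.
    devices = [
        ("D1", "diode", "VSS",  "PAD1"),   # down diode: VSS -> PAD1
        ("D2", "diode", "PAD1", "VDD"),    # up diode:   PAD1 -> VDD
        ("D3", "diode", "VSS",  "PAD2"),   # PAD2 has no up diode to VDD
    ]
    pads = ["PAD1", "PAD2"]

    def pad_protection(pad):
        has_up   = any(t == "diode" and a == pad and c == "VDD" for _, t, a, c in devices)
        has_down = any(t == "diode" and a == "VSS" and c == pad for _, t, a, c in devices)
        return has_up, has_down

    for pad in pads:
        up, down = pad_protection(pad)
        missing = [label for ok, label in ((up, "up diode to VDD"),
                                           (down, "down diode to VSS")) if not ok]
        print(pad, "OK" if not missing else "missing " + " and ".join(missing))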

EOS and power intent
Designs that incorporate multiple power domains are particularly susceptible to subtle design errors. Often, these errors don’t result in immediate part failure, but rather in performance degradation over time. Negative bias temperature instability (NBTI) causes the threshold voltage of PMOS transistors to increase over time, reducing the switching speed of logic gates, while hot carrier injection (HCI) alters the threshold voltage of NMOS devices over time. Soft breakdown (SBD) is another time-dependent failure mechanism, contributing to the degradation effects of gate oxide breakdown.
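
For a sense of why these mechanisms matter over a product lifetime, a commonly cited empirical form for NBTI drift is a power law in stress time. The numbers below are placeholders for illustration only; real models are calibrated per process and also depend on temperature, stress voltage, and duty cycle:

    # Illustrative NBTI drift estimate using a commonly cited empirical power law:
    #   delta_Vth(t) = A * t**n
    # A and n are placeholder values, not foundry data.
    A = 2.0e-3    # volts of shift at t = 1 second of stress (hypothetical prefactor)
    n = 0.2       # reported exponents typically fall roughly in the 0.1-0.25 range

    for years in (1, 5, 10):
        t_seconds = years * 365 * 24 * 3600
        delta_vth = A * t_seconds ** n
        print(f"{years:>2} years of stress: delta_Vth ~ {delta_vth * 1000:.0f} mV")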

Transistor-level power intent verification is a critical need, especially in designs that make extensive use of IP. The IP must be hooked up correctly within the design. Thin oxide gates and high power applications require tight controls for voltage and power domains. Many of these issues are difficult to identify in the simulation space or with traditional PV techniques.

Power-aware checking requires the ability to use the design’s netlist to recognize specific circuit topologies, such as level shifters, I/O drivers, and other structures, and then relate those to the corresponding GDS geometries that make up the layout, to verify that those specific elements have been included and have been implemented correctly. Unlike foundry DRC decks, the definitions of these checks do not all come from the foundry, but must be tailored to the specific design styles and practices of the designer’s company, so any tool performing this function must be highly flexible and easily programmable. A transistor-level power-aware checking tool must also be able to statically propagate voltage values from the various supplies to every node in the circuit to facilitate a variety of EOS checks.
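
The voltage propagation piece can be sketched in a few lines of Python (a deliberately simplified illustration with hypothetical nets and device ratings, not the tool’s actual algorithm): supply voltages are pushed through conducting connections until every reachable net carries a worst-case value, and each thin-oxide gate is then compared against its rating:

    # Simplified static voltage propagation sketch (hypothetical netlist).
    # Supply voltages are pushed through conducting connections, keeping the
    # worst-case (maximum) voltage seen on each net; every thin-oxide gate is
    # then checked against its rated voltage.
    from collections import deque

    supplies = {"VDD_IO": 3.3, "VDD_CORE": 1.0, "VSS": 0.0}
    # Conducting connections that can pass a voltage from one net to another.
    connections = [("VDD_IO", "net_a"), ("net_a", "net_b"), ("VDD_CORE", "net_c")]
    # (device, gate_net, max_rated_volts) -- thin-oxide devices to check.
    gates = [("MN1", "net_b", 1.2), ("MN2", "net_c", 1.2)]

    voltage = dict(supplies)
    frontier = deque(supplies)
    while frontier:                      # breadth-first worst-case propagation
        net = frontier.popleft()
        for a, b in connections:
            for src, dst in ((a, b), (b, a)):
                if src == net and voltage[net] > voltage.get(dst, float("-inf")):
                    voltage[dst] = voltage[net]
                    frontier.append(dst)

    for dev, gate_net, rating in gates:
        v = voltage.get(gate_net, 0.0)
        status = "OK" if v <= rating else f"EOS risk ({v} V > {rating} V rating)"
        print(dev, status)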

For example, one common problem for designers trying to debug power violations at the transistor level with simplistic tools is a lack of knowledge of the intention (the functionality) of the circuit where the violation is found. Simply checking for transistors connected to multiple domains results in a large number of false errors at the boundary between domains, where level shifter structures intentionally include transistors exposed to both low and high voltages. A power-aware checking tool like Calibre PERC can prevent such false errors by using an automated circuit recognition technique to identify particular topologies. The circuit recognition functionality within Calibre PERC uses the SPICE syntax as an easy way to define complex circuit structures. Whenever power violations are detected in enable-LS, NAND or NOR structures, false errors can be quickly waived using topological recognition.

Additionally, the unified power format (UPF) provides a way to annotate a design with power intent that is independent of any hardware description language (HDL). It is typically used at all levels of the design flow. A UPF specification at the register transfer logic (RTL) level defines the power architecture of a given design, and drives synthesis and place-and-route to achieve correct implementation. In automated reliability verification, using the same UPF specification for transistor level physical verification ensures the original power intent is preserved with the final implementation.

UPF specifications can be leveraged as an integral part of Calibre PERC’s understanding of power intent. Along with the design layout data and verification rule deck, Calibre PERC examines the UPF definitions of supply networks (consisting of power switches, supply ports, and supply nets) and checks each supply port’s supply states and its connected supply net. Most importantly, it analyzes the power state tables defined in terms of these states to ensure it captures the legal combinations of supply voltages in the entire design. With integrated support for UPF, Calibre PERC can automatically assign voltages based on a design’s power intent, greatly improving verification coverage and robustness.
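
Conceptually, the power state table turns a single check into one check per legal combination of supply voltages. A minimal sketch of that idea follows (hypothetical supply names, states, and ratings; this is neither UPF syntax nor how Calibre PERC consumes UPF internally):

    # Conceptual sketch: re-run an EOS check for every legal supply combination
    # captured by a power state table (all names and numbers are hypothetical).
    power_state_table = [
        {"VDD_CORE": 1.0, "VDD_IO": 3.3},   # normal operation
        {"VDD_CORE": 0.0, "VDD_IO": 3.3},   # core power-gated, I/O still alive
    ]
    # Interface devices, the two supplies their terminals can be exposed to,
    # and a hypothetical maximum voltage each device tolerates.
    interface_devices = [("LS1_input_pair", "VDD_CORE", "VDD_IO", 3.0)]

    for state in power_state_table:
        for dev, supply_a, supply_b, rating in interface_devices:
            stress = abs(state[supply_a] - state[supply_b])
            verdict = "OK" if stress <= rating else f"EOS risk: {stress:.1f} V > {rating:.1f} V"
            print(f"{dev} in state {state}: {verdict}")

The point is that a device can be perfectly safe in the normal operating state and still be overstressed in a power-gated state, which is exactly what analyzing the legal power states is meant to catch.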

Voltage-aware DRC
For smaller process nodes and high reliability designs, the spacing requirements between nets vary as the nets traverse through the design. The required spacings are dependent on the operating voltage ranges, and devices operating at different voltages must be properly protected. For example, many designs have high voltage areas, such as flash memories, that are particularly susceptible. Designers must identify vulnerable nets and devices, and perform the appropriate spacing and guarding checks on the layout. With traditional verification methods, this means creating physical layout markers to perform voltage-aware DRC.

Using its novel circuit topology-aware voltage propagation capability, Calibre PERC can automatically perform voltage analysis and apply the results against the schematic or extracted layout netlist. Target nets and devices for the voltage-aware DRC checks are selected from the layout through the direct integration of netlist-based voltage analysis, using either vectored or vectorless static voltage propagation. The voltage-aware DRC rules are then applied to the selected layout objects. Such analysis and verification is used to identify areas of the design at risk for time dependent dielectric breakdown (TDDB).
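
A highly simplified view of the resulting check is sketched below (hypothetical net voltages, spacing table, and measured spacings; real voltage-dependent spacing rules come from the foundry deck and are applied to actual layout geometry):

    # Simplified voltage-aware spacing check (all numbers are hypothetical).
    net_voltage = {"OUT_HV": 3.3, "SIG_LV": 1.0, "VDD10": 1.0}

    # Minimum same-layer spacing (um) required at or above a given |dV|.
    spacing_rules = [(0.0, 0.10), (1.5, 0.14), (3.0, 0.20)]

    def required_spacing(dv):
        required = 0.0
        for threshold, spacing in spacing_rules:
            if dv >= threshold:
                required = spacing
        return required

    # Measured spacings between net pairs on the same metal layer (um).
    measured = [("OUT_HV", "SIG_LV", 0.12), ("VDD10", "SIG_LV", 0.11)]

    for net_a, net_b, space in measured:
        dv = abs(net_voltage[net_a] - net_voltage[net_b])
        need = required_spacing(dv)
        verdict = "OK" if space >= need else f"VIOLATION (need {need:.2f} um)"
        print(f"{net_a} vs {net_b}: dV = {dv:.1f} V, spacing = {space:.2f} um -> {verdict}")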

Summary
Robust reliability verification is available today as a comprehensive solution. Verification tools such as Calibre PERC include specific technologies to make fast, automated reliability verification practical. With a reliability verification methodology comprising a single tool with a unified rule deck and integrated debug environment, Calibre PERC helps designers find subtle design optimization opportunities without SPICE circuit simulation, while also enabling them to achieve the accurate and comprehensive verification necessary to ensure a repeatable and reliable design.

References


  1. Muhammad, M.; Gauthier, R.; Li, J.; Ginawi, A.; Montstream, J.; Mitra, S.; Chatty, K.; Joshi, A.; Henderson, K.; Palmer, N.; Hulse, B., “An ESD design automation framework and tool flow for nano-scale CMOS technologies,” Electrical Overstress/Electrostatic Discharge Symposium (EOS/ESD), 2010, pp. 1-6, 3-8 Oct. 2010.
     URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5623716

    Going to DAC and interested in learning more about Calibre PERC? Matt will be presenting on reliability verification several times at the Mentor Graphics booth (#2046):

    • Monday, 2:00-3:00 (Comprehensive Circuit Reliability with Calibre PERC)
    • Tuesday, 2:00-3:00 (Advanced Circuit Reliability with TowerJazz; complements lunch seminar)
    • Wednesday, 10:00-11:00 (Comprehensive Circuit Reliability with Calibre PERC)

    Registration is required for booth presentations: sign up here

    Matt will also be participating in a panel on Monday from 3:00-4:00 in the front of the Mentor booth:
    Achieving IC Reliability in High Growth Markets

    No registration is required for panels – simply show up, listen, and learn! The panel will be followed immediately by Mentor’s complimentary Happy Hour, where you’ll have a chance to enjoy an adult beverage, and engage with Matt and other Mentor experts in a relaxed environment.


    Matthew Hogan is a Product Marketing Manager for Calibre Design Solutions at Mentor Graphics, with over 15 years of design and field experience. He is actively working with customers who have an interest in Calibre PERC. Matthew is an active member of the ESD Association—involved with the EDA working group, the Symposium technical program committee, and the IEW management committee. Matthew is also a Senior Member of IEEE, and a member of ACM. He holds a B. Eng. from the Royal Melbourne Institute of Technology, and an MBA from Marylhurst University. Matthew can be reached at matthew_hogan@mentor.com


    Cooley’s Cheesy Must See List for DAC is Out
    by Paul McLellan on 05-31-2013 at 6:23 pm

    One of the other increasingly successful channels (besides Semiwiki of course) for EDA, IP and semiconductor companies to reach potential customers is John Cooley’s DeepChip. Every year he puts a lot of effort into trying to find out who is exhibiting what at DAC and which stuff seems like it is new and maybe important, and he produces a long guide (a couple of dozen pages). He even lists by name the most appropriate contact person at each company for the product that he is talking about. Sure, they are mostly marketing guys, but they are the marketing guys who specialize in that product, not just generic contacts at the company.

    Print it out and read it on the plane to Austin and you’ll have a much better idea of what is worth your time. Of course, it is his opinion; yours may differ. But I’ll bet you’ve not put as much thought into the entire spectrum of what is being shown at DAC.

    This year’s guide is now out, just a couple of days before DAC.

    Gary Smith’s what to see @ DAC 2013 List is here.

    SemiWiki Top Ten Must See @ #50DAC List is here.

    I have no idea if this is accurate, but apparently word is that Wally Rhines, Lip-bu Tan, Kathryn Kranen, JL Grey, Gary Smith, Joe Costello, Suk Lee, Richard Goering, Raul Camposano, and Dean Drako all donated to charity to be at Jim Hogan’s Hot Zone VIP party from 8:00 to 1:00 on DAC Monday at Austin City Limits. I donated too, so go ahead and donate yourself and I’ll see you there.



    The Semiconductor Wiki Project, the premier semiconductor collaboration site, is a growing online community of professionals involved with the semiconductor design and manufacturing ecosystem. Since going online on January 1st, 2011, more than 600,000 unique visitors have been recorded at www.SemiWiki.com, viewing more than 5M pages of blogs, wikis, and forum posts.

    Gary Smith EDA (GSEDA) is the leading provider of market intelligence and advisory services for the global Electronic Design Automation (EDA), Electronic System Level (ESL) design, and related technology markets.

    DeepChip.com is a 20-year-old clearinghouse where semiconductor chip designers contribute data-intensive papers and articles of first-hand evaluations and production benchmarks of commercial EDA tools.


    ARM Partners with Carbon on Cortex-A57
    by Paul McLellan on 05-31-2013 at 3:37 pm

    Just in time for DAC, Carbon have announced that they have expanded their partnership with ARM to create and deliver models for the ARM Cortex-A57 processor and related IP. One piece of related IP is the Cortex-A53 which can be configured in big.LITTLE multi-core setups to achieve the sweet spot of higher performance and lower power. A57 when you need it, A53 when you don’t.

    But ARM’s IP family has grown quite extensive, and the agreement also includes the CoreLink CCN-504 cache-coherent network and the Mali-T628 GPU. Carbon will take all this technology and compile 100% accurate virtual models as well as Carbon Performance Analysis Kits (CPAKs) that can boot Android or Linux in seconds, while still preserving Carbon’s secret sauce: the capability to switch to a 100% accurate representation at any breakpoint and then proceed with almost any level of detail of the design exposed. A bit like big.LITTLE: accuracy when you need it, high speed when you don’t.

    A57 and A53 are ARM’s first 64-bit cores. They can run 32-bit legacy applications too; in fact, 32-bit versions of the cores are also available. Who knows when we will really need 64-bit in our phones, but in the meantime the focus is on producing very low power servers for specialist datacenter applications. The ARM licensees focused on this market all have value propositions something like 10% of the cost, 10% of the power, and 10% of the physical volume of equivalent traditional (Intel) solutions. For internet applications, being able to handle millions of transactions simultaneously at low power and cost is more important than the single-thread performance of any one transaction (weather forecasting or simulating nuclear bombs is the other way around, but the fast-growing part of the market is datacenters for Apple, Facebook, Amazon, eBay, etc.), making for a real opportunity.

    A system including A57, A53, and Mali is pretty complicated, since everything has cache-coherent interoperability. There may be more than one core of A57 or A53, of course (as in the latest Samsung phone, which contains 4 high-performance and 4 low-power cores, although those are the earlier Cortex-A15/A7, not the A57/A53).

    Carbon makes it feasible to run full software loads on the design, even very early when decisions are being made about which cores to use and how many. They can then be used for early (pre-silicon, in fact pre-RTL, pre-everything except basic architectural decisions) software development and high visibility post-silicon debug. Plus, they can drop into fully accurate mode to debug subtle problems with hardware or device drivers or to get accurate timing for critical algorithms.

    As usual, models will be available from Carbon’s IP Exchange web portal which is here. Models of the A57 and Mali-T628 are available today for select early access partners.

    Carbon will be at DAC but they won’t have their own booth. They will be in the…surprise…ARM booth, #921.


    10 years, 100,000 miles, or <1 DPM
    by Don Dingee on 05-30-2013 at 10:00 pm

    Auto makers have historically been accused of things like planned obsolescence – redesigning parts to make repairs painfully or even prohibitively expensive – and the “warranty time-bomb”, where major systems seem to fail about a week after the warranty expires. Optimists would chalk both those up to relentless innovation, prudent engineering, and cost containment.

    Continue reading “10 years, 100,000 miles, or <1 DPM”


    SEMulator3D – A Virtual Fab Platform
    by Pawan Fangaria on 05-30-2013 at 8:30 pm

    Yes, it’s a pleasant surprise; it is a virtual fabrication platform, one of the new innovations of 2013. I was looking around for the kinds of breakthrough technologies that will be announced at DAC this year, and here I came across this new kind of innovative tool, which can produce final virtual fabricated 3D structures by following all the complex steps of the actual fabrication process, based on process parameters and design data. Amazing, isn’t it?

    Coventor has introduced SEMulator3D (currently shipping) at the right time, when we are talking about 3D transistors (Tri-Gate, FinFET, and the like), high-k/metal gate, and sub-22nm processes, which come with their own fabrication challenges. While the complexities of process technology are hitting their limits, keeping Moore’s Law (according to which the number of transistors on an IC doubles roughly every two years) alive requires reducing the number of iterations and shortening the time between design and final physical silicon.

    It’s a commendable job by Coventor, who, amid increasing complexities at sub-22nm process nodes and 3D gates, has come up with this new process modelling paradigm. It handles the complexities of integrated 3D front-end-of-line (FEOL) manufacturing processes quite well. As a virtual automated platform, it reduces the cycle time between fabless design and foundry from months to days or hours, seamlessly increasing collaboration between the two teams and reducing cost dramatically.

    The SEMulator3D engine employs advanced physics-driven predictive modelling techniques, such as voxel and surface evolution, which bring a high order of physical accuracy. Voxel modelling is a fast and robust digital approach, capable of scaling to the requirements of integrated processes and large silicon areas. Surface evolution is a more analog approach, capable of modelling a wide range of physical process behaviours. I would like to write more about these technologies in the future, but for now I can foresee that the concept of SEMulator3D will gain importance and proliferate widely to meet the challenges of today’s semiconductor design and fabrication. SEMulator3D also provides automatic process variation analysis, with parallel modelling and virtual metrology that enable in-line, local measurement of critical dimensions, mimicking real metrology operations.

    SEMulator3D is a must for reducing silicon learning cycles, achieving faster time-to-market (especially for new 20nm-and-below process nodes), and saving the $$ spent in reaching manufacturing readiness.

    For more information –
    Visit Coventor booth # 1326 at DAC 2013

    Attend a technical presentation by Coventor CTO Dr. David Fried on “Virtual Fabrication: Integrated Process Modelling for Advanced Technology” at SEMICON West, San Francisco, CA, July 9-12.

    See Coventor press release here.


    You can tune a piano, but you can’t tune a cache without help
    by Don Dingee on 05-30-2013 at 8:30 pm

    Once upon a time, designing a product with a first generation SoC on board, we were trying to use two different I/O peripherals simultaneously. Seemed simple enough, but things just flat out didn’t work. After days spent on RTFM (re-reading the fine manual), we found ourselves at the absolute last resort: ask our FAE.

    After about a week, he brought back the answer from the corporate team responsible for the chip design: “Oh, you want to do those two things AT THE SAME TIME? That won’t work. It’s not documented, but it won’t work.” Sigh. Functionality verified, but performance under all use conditions obviously not.

    My PTSD-induced flashback was provided courtesy of a recent conversation with Patrick Sheridan, senior staff product marketing manager at Synopsys, when we were discussing why protocol analysis is important in the system architecture and verification process – not just during the design of compliant IP blocks – and what to look for in performance verification of an SoC design.

    The unnamed SoC in my opening happened to be non-ARM-based, but the scenario applies to any shared-bus design, especially advanced multicore designs. Without careful pre-silicon verification, there can be surprises for the unsuspecting system designer just trying to get the thing to do what the documentation says it does. The issues we see today usually aren’t as readily observed as that mutual exclusivity, which was likely an attempt to keep the actual problem from showing up in a much harder-to-detect fashion.

    The types of issues we are talking about aren’t functional violations of AMBA protocol – almost any reputable IP block vendor or design team can clobber those defects before they show up at integration. Things start cropping up when more blocks performing more transactions are combined. I asked Sheridan what kinds of problems they find with the Discovery Protocol Analyzer, and he gave one answer: cache coherency.

    If I had a dollar for every cache-non-coherent conversation I’ve had over the course of my career, I’d be riding a bike somewhere on the side of a mountain instead of looking out my window at my plants wilting in 100-degree weather in Phoenix while I’m writing this. Those familiar with caching know there are two things worse than not using cache. The first is sustaining a stream of rapid-fire cache misses, which kick off a lot of cycles updating the data wherever it has been copied, and the resulting wait for things to catch up. The second and worse scenario is one or more locations blithely running off with the bad data, before the system has a chance to update it, due to being out of sync for some timing reason.
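
    As a toy illustration of that second scenario (a deliberately simplified Python sketch, not a model of AMBA/ACE behavior), consider two private caches where a write updates memory but never invalidates the other core’s copy:

        # Toy illustration of the stale-data hazard (not a model of AMBA/ACE):
        # core 1 writes through to memory, but core 0's cached copy is never
        # invalidated, so core 0 keeps computing with old data.
        memory = {"flag": 0}
        cache0, cache1 = {}, {}

        def read(cache, addr):
            if addr not in cache:          # miss: fill the line from memory
                cache[addr] = memory[addr]
            return cache[addr]             # hit: may silently return stale data

        def write_no_invalidate(cache, addr, value):
            cache[addr] = value
            memory[addr] = value           # memory is updated, other caches are not

        read(cache0, "flag")                      # core 0 caches flag = 0
        write_no_invalidate(cache1, "flag", 1)    # core 1 sets flag = 1
        print("core 0 sees flag =", read(cache0, "flag"))   # still 0: stale

    A coherency protocol like ACE exists precisely to make that write visible (or the stale line invalid) before core 0 reads it, and the verification IP’s job is to stress those interactions at realistic traffic levels.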

    The AXI Coherency Extensions, or ACE, form the protocol which needs to be checked under duress to mitigate caching issues. Combining Discovery Verification IP with the Discovery Protocol Analyzer provides an easy way for a verification team to generate traffic and check performance without a whole lot of additional effort. Alternatively, a team would have to embark on complex simulation scenarios, or worse yet timing budget computations in a spreadsheet, to find possible faults.

    By using a reference platform with pre-configured masters and slaves and built-in checking sequences, achieving the needed coverage is straightforward. With protocol-aware analysis capability, root causes of problems can be found looking at individual transactions, pins, or waveforms. Verification engineers can quickly run scenarios and spot interactions causing cache-coherency problems, and customize the SystemVerilog for their environment.

    For more insight, watch for an upcoming article in the Synopsys Advanced Verification Bulletin, Issue 2, 2013 authored by Neil Mullinger titled “Achieving Performance Verification of ARM-Processor-based SoCs.” Mullinger will also be speaking at @50thDAC on Wednesday, June 5, in the ARM Connected Community Pavilion at 9:40am.



    DAC lunch seminar: Better IP Test with IEEE P1687
    by Beth Martin on 05-30-2013 at 7:28 pm

    What: DAC lunch seminar (register here)
    When: June 5, 2013, 11:30am – 1:30pm
    Where: At DAC in lovely Austin, TX

    Dr. Martin Keim of Mentor Graphics will present this overview of the new IEEE P1687 standard, called IJTAG for ‘internal’ JTAG.

    If you are involved in IC test*, you’ve probably heard about IJTAG. If you haven’t, it’s time to, because IJTAG defines a standard for embedded IP that vastly improves IP integration. It includes simple and portable descriptions, supplied with the IP itself, that create a plug-and-play environment for integration, access, test, and pattern reuse of embedded IP, an environment that doesn’t currently exist.

    This seminar from Mentor Graphics covers the key aspects of IJTAG, including how it simplifies the design setup and test integration task at the die, stacked die, and system level. You will also learn about IP-level pattern reuse and IP access with IJTAG. Are you wondering what you need to do to migrate your existing 1149.1-based approach to P1687? Dr. Keim can tell you that too.

    All the examples used in the seminar are from actual industrial use cases (from NXP and AMD). The presenter, Dr. Martin Keim, has the experience and technical chops to make this a very worthwhile lunchtime seminar for everyone involved.

    Register here.

    If you’d like to study up on IJTAG before the seminar so you can ask the probing questions that make your fellow attendees jealous of your brains (in addition to your good looks), here’s a reasonable place to start — What’s The Difference Between Scan ATPG And IJTAG Pattern Retargeting?

    *DFT managers, DFT engineers, DFT architects, DFT methodologist, IP-, Chip-, System-Design managers and engineers, IP-, Chip-, System-Test integrator, Failure analysis managers and engineers, system test managers, and system test engineers. Whew!


    NanGate Launches Aggressive DAC Campaign: 50 Library Characterization Licenses for USD 50K
    by Daniel Nenni on 05-30-2013 at 12:00 pm

    NanGate today announced a very aggressive “50-50 campaign”. Throughout June and July, in celebration of DAC’s 50th anniversary, NanGate will be offering 50 licenses of its Library Characterizer™ product for USD 50K for the first year. The offer applies to new customers as well as to existing customers that do not yet license the library characterization solution. The package also includes 50 licenses of NanSpice™, the company’s internal SPICE simulator. Interfaces to all major third-party SPICE simulators are also available.

    A Brief History of NanGate

    I talked briefly with Alex Toniolo, VP of Business Development at NanGate, about the impact of such a strategy, which could be both positive and negative. NanGate Library Characterizer™ is a fully capable library characterization tool that offers features similar to those found in competing solutions, and that interfaces to NanGate’s library validation suite for accuracy tuning. NanGate believes that an affordable entry-level option will enable small companies using library IP from 3rd-party vendors to have a state-of-the-art characterization flow in house. These companies would then have the flexibility to run several simulations at many different PVT corners without having to involve other companies in the process, consequently reducing their design implementation time.


    NanGate is also forming partnerships with the industry-leading SPICE simulator vendors to offer very attractive packages. They have even integrated some SPICE engines that other standard cell characterization tools have not, but they didn’t disclose which ones.

    The public announcement of this campaign can be found on NanGate’s website: www.nangate.com
    They will be offering this deal until the end of July. This is also a good opportunity for those who want to evaluate a characterization tool, either to replace their current solution or to use as a second source during the library verification process.

    About NanGate

    NanGate, a provider of physical intellectual property (IP) and a leader in Electronic Design Automation (EDA) software, offers tools and services for the creation and validation of physical library IP, and the analysis and optimization of digital designs. NanGate’s suite of solutions includes the Library Creator™ Platform, Design Optimizer™, and design services. NanGate’s solution enables IC designers to improve performance and power by concurrently optimizing design and libraries. The solution, which complements existing design flows, delivers results that previously could only be achieved with resource-intensive custom design techniques.


    TSMC ♥ Berkeley Design Automation
    by Daniel Nenni on 05-30-2013 at 11:00 am

    As I mentioned in BDA Takes on FinFET Based Memories with AFS Mega:

    Is AFS Mega real? Of course it is, I’m an SRAM guy and I worked with BDA on this product so I know. But don’t take my word for it, stay tuned for endorsements from the top SRAM suppliers around the world.

    Here is the first customer endorsement from the #1 foundry. Expect more endorsements from the top fabless semiconductor companies to follow:

    SANTA CLARA, CA — May 30, 2013 — Berkeley Design Automation, Inc., provider of the world’s fastest nanometer circuit verification, today announced that TSMC is using Analog FastSPICE Mega (AFS Mega™) for memory IP verification. Memory IP circuits implemented in 16-nm and smaller FinFET-based process nodes must meet stringent performance targets while requiring six-sigma bit cell yield to meet cost and power targets.

    Analog FastSPICE Mega is a silicon-accurate circuit simulator that can handle up to 100M-element memories and other mega-scale arrays. Unlike digital fastSPICE tools that sacrifice accuracy via partitioning, event simulation, netlist simplification, table-lookup models, and other shortcuts, AFS Mega meets foundry-required accuracy on 100M-element arrays. AFS Mega features unique capabilities to robustly, accurately, and quickly handle pre-layout and post-layout mega-scale arrays, providing silicon-accurate time, voltage, frequency, and power resolution, and doing so faster than legacy digital fastSPICE tools.

    “We are delighted that TSMC has adopted Analog FastSPICE Mega for FinFET-based memory IP Verification,” said Ravi Subramanian, president and CEO of Berkeley Design Automation. “As the industry leader in advanced process technology and embedded memory IP, TSMC’s choice affirms Berkeley Design Automation’s entry into the memory verification market with AFS Mega.”

    The Analog FastSPICE (AFS) Platform provides the world’s leading circuit verification for nanometer-scale analog, RF, mixed-signal, mega-scale arrays, and custom digital circuits. The AFS Platform delivers nanometer SPICE accuracy and faster runtime performance than other simulators. For circuit characterization, the AFS Platform includes comprehensive silicon-accurate device noise analysis and delivers near-linear performance scaling with the number of cores. For large circuits, it delivers 100M-element capacity, the fastest near-SPICE-accurate simulation, and the fastest, most accurate mixed-signal simulation. Available licenses include AFS circuit simulation, AFS Nano, AFS Mega, AFS Transient Noise Analysis, AFS RF Analysis, AFS Co-Simulation, and AFS AMS.

    “The move to the 16-nm FinFET process with multiple patterning and new transistors requires new approaches for accurate memory IP verification,” said Suk Lee, TSMC Senior Director, Design Infrastructure Marketing Division. “With BDA’s Analog FastSPICE Mega, we can accurately characterize post-layout FinFET-based memory arrays.”

    About Berkeley Design Automation
    Berkeley Design Automation, Inc. is the recognized leader in nanometer circuit verification. The company combines the world’s fastest nanometer circuit verification platform, Analog FastSPICE, with exceptional application expertise to uniquely address nanometer circuit design challenges. More than 100 companies rely on Berkeley Design Automation to verify their nanometer-scale circuits. Berkeley Design Automation was recognized as one of the 500 fastest growing technology companies in North America by revenue in 2011 and again in 2012 by Deloitte. The company is privately held and backed by Woodside Fund, Bessemer Venture Partners, Panasonic Corp., NTT Corp., IT-Farm, and MUFJ Capital. For more information, visit http://www.berkeley-da.com.
