
You can tune a piano, but you can’t tune a cache without help

by Don Dingee on 05-30-2013 at 8:30 pm

Once upon a time, designing a product with a first generation SoC on board, we were trying to use two different I/O peripherals simultaneously. Seemed simple enough, but things just flat out didn’t work. After days spent on RTFM (re-reading the fine manual), we found ourselves at the absolute last resort: ask our FAE.

After about a week, he brought back the answer from the corporate team responsible for the chip design: “Oh, you want to do those two things AT THE SAME TIME? That won’t work. It’s not documented, but it won’t work.” Sigh. Functionality verified, but performance under all use conditions obviously not.

My PTSD-induced flashback was provided courtesy of a recent conversation with Patrick Sheridan, senior staff product marketing manager at Synopsys, when we were discussing why protocol analysis is important in the system architecture and verification process – not just during the design of compliant IP blocks – and what to look for in performance verification of an SoC design.

The unnamed SoC in my opening happened to be non-ARM-based, but the scenario applies to any shared-bus design, especially advanced multicore designs. Without careful pre-silicon verification, there can be surprises for the unsuspecting system designer just trying to get the thing to do what the documentation says it does. The issues we see today usually aren't as readily observed as outright mutual exclusivity, which itself was likely an attempt to keep the actual problem from showing up in a much harder-to-detect fashion.

The types of issues we are talking about aren’t functional violations of AMBA protocol – almost any reputable IP block vendor or design team can clobber those defects before they show up at integration. Things start cropping up when more blocks performing more transactions are combined. I asked Sheridan what kinds of problems they find with the Discovery Protocol Analyzer, and he gave one answer: cache coherency.

If I had a dollar for every cache-non-coherent conversation I’ve had over the course of my career, I’d be riding a bike somewhere on the side of a mountain instead of looking out my window at my plants wilting in 100-degree weather in Phoenix while I’m writing this. Those familiar with caching know there are two things worse than not using cache. The first is sustaining a stream of rapid-fire cache misses, which kick off a lot of cycles updating the data wherever it has been copied, and the resulting wait for things to catch up. The second and worse scenario is one or more locations blithely running off with the bad data, before the system has a chance to update it, due to being out of sync for some timing reason.
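To make that second scenario concrete, here is a toy Python sketch, purely illustrative and not ACE or any vendor's tool: two cores with private write-back caches and no invalidation protocol, where one core happily reads stale data after the other has written.

```python
# Toy model of two cores with private write-back caches and NO
# coherency protocol. Core 1 reads stale data after core 0 writes.
class Core:
    def __init__(self, memory):
        self.memory = memory
        self.cache = {}          # private cache: addr -> value

    def read(self, addr):
        if addr not in self.cache:           # miss: fetch from memory
            self.cache[addr] = self.memory[addr]
        return self.cache[addr]              # hit: value may be stale!

    def write(self, addr, value):
        self.cache[addr] = value             # write-back: memory and the
                                             # other core are not updated

memory = {0x100: 1}
core0, core1 = Core(memory), Core(memory)

core1.read(0x100)        # core 1 caches the old value (1)
core0.write(0x100, 2)    # core 0 updates only its own cache
print(core1.read(0x100)) # prints 1 -- stale data, nothing invalidated it
```

A coherency protocol such as ACE exists precisely to snoop that write and invalidate or update core 1's copy; verifying that it actually does so under heavy concurrent traffic is the hard part.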

The AXI Coherency Extensions, or ACE, form the protocol which needs to be checked under duress to mitigate caching issues. Combining Discovery Verification IP with the Discovery Protocol Analyzer provides an easy way for a verification team to generate traffic and check performance without a whole lot of additional effort. Alternatively, a team would have to embark on complex simulation scenarios, or worse yet timing budget computations in a spreadsheet, to find possible faults.

By using a reference platform with pre-configured masters and slaves and built-in checking sequences, achieving the needed coverage is straightforward. With protocol-aware analysis capability, root causes of problems can be found looking at individual transactions, pins, or waveforms. Verification engineers can quickly run scenarios and spot interactions causing cache-coherency problems, and customize the SystemVerilog for their environment.

For more insight, watch for an upcoming article in the Synopsys Advanced Verification Bulletin, Issue 2, 2013 authored by Neil Mullinger titled “Achieving Performance Verification of ARM-Processor-based SoCs.” Mullinger will also be speaking at @50thDAC on Wednesday, June 5, in the ARM Connected Community Pavilion at 9:40am.



DAC lunch seminar: Better IP Test with IEEE P1687

by Beth Martin on 05-30-2013 at 7:28 pm

What: DAC lunch seminar (register here)
When: June 5, 2013, 11:30am – 1:30pm
Where: At DAC in lovely Austin, TX

Dr. Martin Keim of Mentor Graphics will present this overview of the new IEEE P1687 standard, called IJTAG for ‘internal’ JTAG.

If you are involved in IC test*, you’ve probably heard about IJTAG. If you haven’t, it’s time to, because IJTAG defines a standard for embedded IP that vastly improves IP integration. It includes simple and portable descriptions supplied with the IP itself that create an environment for plug-and-play integration, access, test, and pattern reuse of embedded IP that doesn’t currently exist.

This seminar from Mentor Graphics covers the key aspects of IJTAG, including how it simplifies the design setup and test integration task at the die, stacked die, and system level. You will also learn about IP-level pattern reuse and IP access with IJTAG. Are you wondering what you need to do to migrate your existing 1149.1-based approach to P1687? Dr. Keim can tell you that too.

All the examples used in the seminar are from actual industrial use cases (from NXP and AMD). The presenter, Dr. Martin Keim, has the experience and technical chops to make this a very worthwhile lunchtime seminar for everyone involved.

Register here.

If you’d like to study up on IJTAG before the seminar so you can ask the probing questions that make your fellow attendees jealous of your brains (in addition to your good looks), here’s a reasonable place to start — What’s The Difference Between Scan ATPG And IJTAG Pattern Retargeting?

*DFT managers, DFT engineers, DFT architects, DFT methodologists, IP-, chip-, and system-design managers and engineers, IP-, chip-, and system-test integrators, failure analysis managers and engineers, system test managers, and system test engineers. Whew!


NanGate Launches Aggressive DAC Campaign: 50 Library Characterization Licenses for USD 50K

by Daniel Nenni on 05-30-2013 at 12:00 pm

NanGate today announced a very aggressive “50-50 campaign”. Throughout June and July, in celebration of DAC’s 50th anniversary, NanGate will be offering 50 licenses of its Library Characterizer™ product for USD 50K for the first year. The offer applies to new customers as well as to existing customers that do not yet license the library characterization solution. The package also includes 50 licenses of NanSpice™, the company’s internal SPICE simulator. Interfaces to all major third-party SPICE simulators are also available.

A Brief History of NanGate

I talked briefly with Alex Toniolo, VP of Business Development at NanGate, about the impact of such a strategy – which could be both positive and negative. NanGate Library Characterizer™ is a fully capable library characterization tool that offers features similar to those found in competing solutions and interfaces with NanGate’s library validation suite for accuracy tuning. NanGate believes that an affordable entry-level option will enable small companies using library IP from third-party vendors to have a state-of-the-art characterization flow in house. These companies would then have the flexibility to run many simulations at many different PVT corners without having to involve other companies in the process, consequently reducing their design implementation time.


NanGate is also forming partnerships with the industry-leading SPICE simulator vendors to offer very attractive packages. They have even integrated some SPICE engines that other standard cell characterization tools have not – but they didn’t disclose which ones.

The public announcement of this campaign can be found on NanGate’s website: www.nangate.com
They will be offering this deal until the end of July. This is also a good opportunity for those who want to evaluate a characterization tool, either to replace a current solution or to use as a second source during the library verification process.

About NanGate

NanGate, a provider of physical intellectual property (IP) and a leader in Electronic Design Automation (EDA) software, offers tools and services for the creation and validation of physical library IP, and the analysis and optimization of digital designs. NanGate’s suite of solutions includes the Library Creator™ Platform, Design Optimizer™ and design services. NanGate’s solution enables IC designers to improve performance and power by concurrently optimizing design and libraries. The solution, which complements existing design flows, delivers results that previously could only be achieved with resource-intensive custom design techniques.


TSMC ♥ Berkeley Design Automation

by Daniel Nenni on 05-30-2013 at 11:00 am

As I mentioned in BDA Takes on FinFET Based Memories with AFS Mega:

Is AFS Mega real? Of course it is, I’m an SRAM guy and I worked with BDA on this product so I know. But don’t take my word for it, stay tuned for endorsements from the top SRAM suppliers around the world.

Here is the first customer endorsement from the #1 foundry. Expect more endorsements from the top fabless semiconductor companies to follow:

SANTA CLARA, CA — May 30, 2013 — Berkeley Design Automation, Inc., provider of the world’s fastest nanometer circuit verification, today announced that TSMC is using Analog FastSPICE Mega (AFS Mega™) for memory IP verification. Memory IP circuits implemented in 16-nm and smaller FinFET-based process nodes must meet stringent performance targets while requiring six-sigma bit cell yield to meet cost and power targets.

Analog FastSPICE Mega is the silicon-accurate circuit simulator that can handle up to 100M-element memories and other mega-scale arrays. Unlike digital fastSPICE tools that sacrifice accuracy via partitioning, event simulation, netlist simplification, table-lookup models, and other shortcuts, AFS Mega meets foundry-required accuracy on 100M-element arrays. AFS Mega features unique capabilities to robustly, accurately, and quickly handle pre-layout and post-layout mega-scale arrays, providing silicon-accurate time, voltage, frequency, and power resolution faster than legacy digital fastSPICE tools.

“We are delighted that TSMC has adopted Analog FastSPICE Mega for FinFET-based memory IP Verification,” said Ravi Subramanian, president and CEO of Berkeley Design Automation. “As the industry leader in advanced process technology and embedded memory IP, TSMC’s choice affirms Berkeley Design Automation’s entry into the memory verification market with AFS Mega.”

The Analog FastSPICE (AFS) Platform provides the world’s leading circuit verification for nanometer-scale analog, RF, mixed-signal, mega-scale arrays, and custom digital circuits. The AFS Platform delivers nanometer SPICE accuracy and faster runtime performance than other simulators. For circuit characterization, the AFS Platform includes comprehensive silicon-accurate device noise analysis and delivers near-linear performance scaling with the number of cores. For large circuits, it delivers 100M-element capacity, the fastest near-SPICE-accurate simulation, and the fastest, most accurate mixed-signal simulation. Available licenses include AFS circuit simulation, AFS Nano, AFS Mega, AFS Transient Noise Analysis, AFS RF Analysis, AFS Co-Simulation, and AFS AMS.

“The move to the 16-nm FinFET process with multiple patterning and new transistors requires new approaches for accurate memory IP verification,” said Suk Lee, TSMC Senior Director, Design Infrastructure Marketing Division. “With BDA’s Analog FastSPICE Mega, we can accurately characterize post-layout FinFET-based memory arrays.”

About Berkeley Design Automation
Berkeley Design Automation, Inc. is the recognized leader in nanometer circuit verification. The company combines the world’s fastest nanometer circuit verification platform, Analog FastSPICE, with exceptional application expertise to uniquely address nanometer circuit design challenges. More than 100 companies rely on Berkeley Design Automation to verify their nanometer-scale circuits. Berkeley Design Automation was recognized as one of the 500 fastest growing technology companies in North America by revenue in 2011 and again in 2012 by Deloitte. The company is privately held and backed by Woodside Fund, Bessemer Venture Partners, Panasonic Corp., NTT Corp., IT-Farm, and MUFJ Capital. For more information, visit http://www.berkeley-da.com.



2013 semiconductor market forecast lowered to 6% from 7.5%

by Bill Jewell on 05-30-2013 at 9:00 am

The global semiconductor market was weaker than expected in 1Q 2013, down 4.5% from 4Q 2012 according to WSTS. Much of the softness was attributable to a major falloff in the PC market. According to International Data Corporation (IDC), 1Q 2013 PC shipments were down 15% from 4Q 2012 and down 14% from 1Q 2012. Other key end markets remained strong. IDC estimates 1Q 2013 media tablet shipments were up 142% from a year ago. The combination of PC units and media tablet units showed 15% year-to-year growth in 1Q 2013. Mobile phone shipments in 1Q 2013 increased only 4% from a year ago. However, smart phones – with high semiconductor content – were up 42% versus a year ago, continuing the 43% growth trend for the year 2012.
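The combined PC-plus-tablet figure follows directly from the unit volumes. A quick sketch with hypothetical unit counts (millions, chosen to roughly match the growth rates quoted above, not IDC's actual data) shows how fast-growing tablets can pull the blended number positive even while PCs decline:

```python
# Blended growth rate: tablets growing ~142% can offset a ~14% PC
# decline in the combined PC+tablet figure.
# Unit volumes below are hypothetical (millions), for illustration only.
pc_now, pc_prior = 76.0, 88.4        # ~14% PC decline year over year
tab_now, tab_prior = 49.2, 20.3      # ~142% tablet growth year over year

combined_growth = (pc_now + tab_now) / (pc_prior + tab_prior) - 1
print(f"{combined_growth:+.1%}")     # roughly +15%, matching the article
```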

The strong growth of media tablets and smart phones will continue in 2013. IDC forecast 59% growth for media tablets in 2013, with tablets surpassing PC units by 2015. Smart phones are expected to grow 30% in 2013, exceeding 50% of total mobile phone units. Overall economic growth should pick up slightly in 2013 over 2012. The International Monetary Fund (IMF) April forecast called for global GDP growth of 3.3% in 2013 compared to 3.2% in 2012.

The second quarter of 2013 shows promise for healthy growth over 1Q 2013. Below is the available revenue guidance for major semiconductor companies. Micron did not provide specific guidance, but we at Semiconductor Intelligence estimated revenue growth based on Micron’s expectations of bit growth and price changes for DRAM and flash. The low end of guidance is pessimistic, with 5 of the 6 companies forecasting a decline. The midpoint guidance seems more realistic, with all but Qualcomm showing increases. Qualcomm cited seasonal trends in its business for the weak guidance. High end guidance averages 6% for the 6 companies providing numbers.

In February, we at Semiconductor Intelligence forecast 7.5% growth in the semiconductor market in 2013 and 12% growth in 2014. Although 1Q 2013 was weaker than expected, the general trends driving moderate growth are still in place. We have lowered our forecast for 2013 to 6%. We are holding the 2014 forecast at 12% based on continued improvement in the global economy. The chart below compares recent forecasts for 2013 and 2014.



Atrenta: Mentor/Spyglass Power Signoff…and a Book

by Paul McLellan on 05-30-2013 at 7:00 am

Today Atrenta and Mentor announced that they were collaborating to enable accurate, signoff quality power estimation at the RTL for entire SoCs. The idea is to facilitate RTL power estimation for designs of over 50M gates running actual software loads over hundreds of millions of cycles, resulting in simulation datasets in the 10s of gigabytes.

Under the hood, the implementation is an interface between Mentor’s Veloce2 emulator and the SpyGlass Power RTL power estimation tool. This enables estimation of SoC power and validation of power budgets at the full-chip level. This is important since power is actually a chip-level problem (although there are local power issues concerned with thermal and power supplies). The interface files from the emulator differ from files generated by standard RTL simulation tools since they are optimized for large data sets over millions of cycles. SpyGlass Power can consume the switching activity interface format (SAIF) generated by Veloce2, as well as files in the industry-standard FSDB format.
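SpyGlass Power's internals are proprietary, but any activity-based power estimate ultimately rests on the standard dynamic-power relation P = α·C·V²·f, where the per-net toggle rates α are exactly what SAIF/FSDB files from emulation supply. A minimal sketch with hypothetical net data:

```python
# Activity-based dynamic power: P_dyn = sum over nets of
# alpha * C * Vdd^2 * f, where alpha is the toggle rate per cycle
# (the quantity a SAIF/FSDB file records). All numbers below are
# hypothetical, for illustration only.

def dynamic_power(nets, vdd, freq_hz):
    """Sum per-net dynamic power: alpha * C * Vdd^2 * f."""
    return sum(alpha * cap_f * vdd**2 * freq_hz for alpha, cap_f in nets)

# (toggle rate per cycle, net capacitance in farads)
nets = [(0.10, 5e-15), (0.25, 12e-15), (0.02, 30e-15)]
p_watts = dynamic_power(nets, vdd=0.9, freq_hz=1e9)
print(f"{p_watts * 1e6:.2f} uW")   # total dynamic power in microwatts
```

The reason emulation-scale activity matters is visible in the formula: α measured over a few thousand simulated cycles of a testbench can differ wildly from α over hundreds of millions of cycles of real software.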

The results of this collaboration will be shown in both the Atrenta booth and the Mentor booth during DAC. Atrenta is in booth 1847. Mentor is in booth 2046.

Also, this week, Atrenta announced the publication of a new book on timing constraints. The book Constraining Designs for Synthesis and Timing Analysis: A Practical Guide to Synopsys Design Constraints (SDC) is authored by Sridhar Gangadharan, senior product director at Atrenta and Sanjay Churiwala, director at Xilinx. The book, which features a foreword by Dr. Ajoy Bose, chairman, president and CEO of Atrenta, is being published by Springer Science+Business Media.

The book targets system on chip designers and provides a complete overview of how to create effective timing constraints using SDC, including detailed syntax and semantics, its impact on timing analysis and synthesis and the interaction of timing constraints with the rest of the design flow.

I will review the book on Semiwiki…but after DAC, this week is already too insane.

Springer has a booth at DAC so I’m sure it will be available there (and probably discounted during the show, but I’m just guessing). The book is available on Amazon here. Springer is at booth 1243.

Full details of Atrenta activities at DAC, including links for registration, are here. Atrenta is at booth 1847. Atrenta is also the sponsor of the “hot zone” at the DAC party on Monday night.


Efficient Handling of Timing ECOs

by Daniel Nenni on 05-29-2013 at 8:00 pm

Today, in the design of any type of system on chip (SoC), timing closure is a major problem, and it only gets worse with each new, more advanced process technology. Timing closure is closely interleaved with power and clock design. The complexity of achieving closure rises sharply with increasing design density and advancing process technology. Since ECO handling is mostly a manual process, it is time consuming and error-prone. When most chip design cycles are a year or less, and timing closure takes up to two months, it becomes an expensive process. ECO handling is expected to grow more complex with each new process node, and hence become more expensive.

Designer’s Challenges
Achieving design closure in the world of SoC design is becoming more and more difficult with each new process node. With increasing design densities/complexities, the interaction of process parameters on design and the inability of design tools to efficiently handle a large number of multi-mode, multi-corner timing scenarios exacerbate the issue.

While it is natural to expect ECO scripts generated using a sign-off STA tool to be accurate, this is not true for most SoC designs today. Why? Delays are layout-dependent and STA tools are not physically aware. First and foremost, the ECO scripts generated by such non-physically-aware tools are not accurate. Second, the scripts are actually implemented using a P&R tool. The inherent lack of correlation between the timing engines within the STA and P&R tools leads to inaccurate estimates of the size and location of buffers added to the design. In addition, the common challenges faced by designers are:

  • Handling multi-mode, multi-corner (MCMM) analysis is practically impossible when manually generating ECOs. Without MCMM, non-linear process variation effects lead to several thousand hold violations.
  • STA tools are not physically aware, which leads to inaccurate ECO fixes and often an overuse of buffers, thereby increasing chip power.
  • Designers are forced to run several long, time-consuming iterations through P&R tools since a) P&R tools can handle only a few scenarios at a time, and b) STA-generated ECO scripts are poor predictors of timing convergence.
  • Driven by time-to-market pressures, it is not uncommon to tape out a chip at a lower than intended performance target.

Current ECO Methodologies
In the traditional timing ECO methodology (Figure 1), designers start the ECO process after completing routing. Three common approaches currently used to address timing ECOs, and the challenges they present, are:

Script-based ECO handling is a common approach. Based on a violation report or a partial STA graph, and using easy ways to fix those violations, designers develop and apply an ECO script. Being mostly manual in nature, it is practically impossible to handle multi-mode, multi-corner (MCMM) issues in one shot. In addition, neither the STA tool nor the designers are layout-aware, leading to both timing and layout correlation issues, and hence poor results.

Another method is to use an optimizer on top of the STA tool. Since the STA tool is not physically aware, layout correlation issues lead to poor results.

The third approach is to build an optimizer on top of place and route software. The difference between the built-in timing engine and the sign-off STA engine creates timing correlation issues, leading to overcompensation and too many buffers, thereby increasing routing congestion and power. In addition, P&R tools are inherently limited to handling only a few scenarios at a time. This leads to extra iterations and a longer time to closure.

During the post-route optimization phase, designers typically try to reduce the number of violations to within a few hundred so that they can be addressed manually. Since P&R tools can handle only a few scenarios at a time, the number of iterations and the duration of each typically increase, resulting in a longer time to a) reduce the violation count and b) reach closure.
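The reason a single scenario is not enough can be sketched in a few lines. With hypothetical path delays (not any tool's output), the same netlist can fail setup at the slow corner on one path and hold at the fast corner on another, so both scenarios must be checked together or a fix at one corner silently breaks the other:

```python
# Why MCMM matters: setup is checked against the slow (ss) corner,
# hold against the fast (ff) corner, and different paths fail at
# different corners. Delays below are hypothetical, in nanoseconds.
paths = {
    "pathA": {"ss": 9.8, "ff": 6.1},   # long path: setup risk when slow
    "pathB": {"ss": 1.9, "ff": 0.7},   # short path: hold risk when fast
}
period, hold_req = 8.0, 0.8

for name, d in paths.items():
    setup_slack = period - d["ss"]     # negative => setup violation
    hold_slack = d["ff"] - hold_req    # negative => hold violation
    print(name, round(setup_slack, 2), round(hold_slack, 2))
```

Here pathA violates setup only at the slow corner while pathB violates hold only at the fast corner; a real SoC multiplies this across dozens of mode/corner scenarios, which is exactly what manual scripting cannot keep up with.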

Required Solution

From the above discussion it is clear that, in order to effectively address timing ECOs and closure, the required solution must combine the capabilities of static timing analysis and physical design to efficiently handle ECO optimization. In essence, such a solution must be:

  • Placement-aware and, most importantly, routing-aware
  • Highly correlated with respect to layout and timing
  • Able to handle a large number of MCMM scenarios

Correlation and routing awareness ensure a) accurate buffer estimation, b) legalized buffer placement, and c) efficient handling of transition timing violations. The ability to handle a large number of MCMM scenarios translates into greater accuracy and a significant reduction in the number of ECO iterations.

TimingExplorer™, a unique placement- and routing-aware timing closure solution for all MCMM timing scenarios, provided the capabilities we were looking for.

TimingExplorer provided us the following benefits:

  • Saved 2-4 iterations and 1-2 weeks at the post-route optimization stage, equating to a 50% savings in the post-route optimization phase.
  • Saved 2-3 iterations at 0.5-1 day per iteration (1-2 weeks total) in the ECO phase, a 60% to 70% reduction in the duration of the timing ECO phase.
  • Fixed violations with the greatest effectiveness and efficiency of the approaches we tried.
  • Saved area by using 25-30% fewer buffers than a P&R-tool-based methodology.

After successfully taping out dozens of designs, this tool is now part of our standard design closure flow.

Summary
ECOs are the biggest reason why design closure is increasingly complex and time consuming. Effectively and efficiently addressing ECOs calls for a product that is architected to be placement- and routing-aware, and is capable of handling any number of MCMM sign-off scenarios.

About the Author
Timothy Ying has been working as an ASIC design engineer for 12 years. In his current position as Staff Design Engineer at Marvell’s Storage Group, he is focused on timing ECO closure using static timing analysis. He primarily works on 28nm designs with complex clock structures, multi-voltage domains and hierarchy. Timothy holds a bachelor’s degree in Computer Science from Fudan University in Shanghai, China, and a master’s degree in Electrical Engineering from San Jose State University in California.



Advanced Verification – HW/SW Emulation – SoC/ASIC Prototyping

by Daniel Nenni on 05-29-2013 at 8:00 pm


Aldec, Inc. is an industry-leading Electronic Design Automation (EDA) company delivering innovative design creation, simulation and verification solutions to assist in the development of complex FPGA, ASIC, SoC and embedded system designs. With an active user community of over 35,000, 50+ global partners, offices worldwide and a global sales distribution network in over 43 countries, the company has established itself as a proven leader within the verification design community. Make sure to visit Aldec at #50DAC:

Register for One-on-one Technical Sessions and Demonstrations
Design Automation Conference (DAC) – Aldec Booth #2225
June 3-5, 2013 from 9:00am-6:00pm

Sessions are filling up. Pre-register here to reserve your appointment.

Session 01: Prototyping Over 100 Million ASIC Gates Capacity
Session 02: Hybrid SoC Verification and Validation Platform for Hardware and Software Teams
Session 03: Requirements Traceability for Safety-Critical FPGA/ASIC Designs
Session 04: Comprehensive CDC Analysis for Glitch-free Design
Session 05: UVM/SystemVerilog: Verification and Debugging
Session 06: VHDL 2008 and Beyond: OS-VVM Continues to Grow
Session 07: Accelerate DSP Design Development: Tailored Flows
Session 08: Ask Aldec: Demos, Roadmaps, Partners, Q&A, etc.
Session 09: CyberWorkBench: C-based High Level Synthesis and Verification

Register for a one-on-one Technical Session at http://www.aldec.com/DAC2013.


Aldec market share is estimated at 38% of all mixed-language RTL Simulators sold to FPGA designers worldwide. (Excludes OEM simulators supplied directly from FPGA vendors).

Aldec delivers high quality EDA solutions for government, military, aerospace, telecommunications, automotive and safety critical applications. Large companies including IBM, GE, Qualcomm, Rohde and Schwarz, Bosch, Texas Instruments, Applied Micro, Hewlett Packard, Toshiba, Intel, NEC, Mitsubishi, LG, Hitachi, NASA, Invensys, Westinghouse, Raytheon, Panasonic, Lockheed Martin, Samsung, as well as mid-size and small firms utilize Aldec EDA verification suites to boost product performance, cut design development cycles and reduce cost.

The Design Automation Conference (DAC) is recognized as the premier event for the design of electronic circuits and systems, electronic design automation (EDA) and embedded systems and software (ESS).

Attendees come from a diverse worldwide community of more than 1,000 organizations each year, represented by system designers and architects, logic and circuit designers, validation engineers, CAD managers, senior managers and executives, and researchers and academicians from leading universities.

Close to 300 technical presentations and sessions, selected by a committee of electronic design experts, offer information on recent developments and trends, management practices, and new products, methodologies and technologies.

A highlight of DAC is its exhibition and suite area with approximately 200 of the leading and emerging EDA, silicon, intellectual property (IP), embedded systems and design services providers.

The conference is sponsored by the Association for Computing Machinery (ACM), the Electronic Design Automation Consortium (EDA Consortium), and the Institute of Electrical and Electronics Engineers (IEEE), and is supported by ACM’s Special Interest Group on Design.

Some of the highlights of this year’s DAC include:

  • Keynotes by industry leaders/visionaries
  • Technical Program (panels, special sessions, Designer Track)
  • Forums, tutorials, and workshops
  • Management Day
  • Exhibition Floor
  • Colocated Conferences
  • Awards for professionals and students

And there’s much more!



Sagantec’s nmigrate adopted and deployed for 14nm technology

by Daniel Nenni on 05-29-2013 at 3:00 pm

Major semiconductor company successfully migrated 28nm libraries to 14nm FinFET

Santa Clara, California – May 29, 2013 – Sagantec announced that its nmigrate tool was adopted by a major semiconductor company for the development of standard cell libraries in 14nm and 16nm FinFET technologies.

This customer has already used nmigrate successfully to migrate a library from a 28nm technology implementation to another foundry’s 14nm FinFET process. The migration from planar 28nm to 14nm FinFET is very challenging: it needs to deal with new interconnect layers, satisfy stringent restrictions on front-end layers and FinFET device constraints, and handle new MOL structures and rules as well as double patterning coloring rules. The nmigrate tool deploys an automated two-dimensional, dynamic layout compaction technology which enforces all of the above design rules and constraints and delivers a 100% DRC-clean, optimal result.
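Sagantec's engine is proprietary, but the classic idea behind constraint-driven compaction can be sketched in a few lines: minimum-spacing rules become inequalities of the form x_j >= x_i + s, and the tightest legal positions are longest paths in the resulting constraint graph. A toy one-dimensional version, with hypothetical shapes and spacing values:

```python
# Toy 1-D layout compaction: each spacing rule (i, j, s) requires
# x[j] >= x[i] + s. With constraints listed in left-to-right
# (topological) order, longest-path relaxation yields the minimal
# legal positions. Shapes and spacing rules below are hypothetical.
def compact(n, constraints):
    """Return minimal x positions for n shapes given spacing rules."""
    x = [0] * n
    for i, j, s in constraints:
        x[j] = max(x[j], x[i] + s)   # longest-path relaxation
    return x

# 3 shapes: shape1 >= 40nm right of shape0; shape2 >= 50nm right of
# shape1 and >= 100nm right of shape0 (the 100nm rule dominates).
print(compact(3, [(0, 1, 40), (1, 2, 50), (0, 2, 100)]))  # -> [0, 40, 100]
```

A production migration tool works in two dimensions with thousands of rule types (including coloring constraints for double patterning), but the core mechanism of pushing every shape to the tightest position that satisfies all rules is the same.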

In addition to using nmigrate as a layout migration tool, nmigrate is also used for DRC clean up and design rule updates. In this use model, layout designers draw or modify layout manually, and use nmigrate for final DRC cleanup. This semi-automatic use model provides significant design acceleration and effort savings, and is particularly beneficial in 14nm and 16nm technologies where manual layout design takes much more effort than in previous nodes.

Availability

The nmigrate migration and compaction tool is already available for customers who wish to accelerate the layout design work of libraries in 14nm and 16nm process nodes, or would like to migrate existing planar 28nm or 20nm libraries to 14nm or 16nm FinFET technologies.

This year at DAC

Presentations and demos of nmigrate migration and DRC cleanup at the 50th Design Automation Conference in Austin, TX can be scheduled here.

About Sagantec
Sagantec is the leading EDA provider of process migration solutions for custom IC design. Sagantec’s EDA solutions enable IC designers to leverage their investment in existing physical design IP and accomplish dramatic time and effort savings in the implementation of custom, analog, mixed-signal and memory circuits in advanced process technologies.

These solutions have been used commercially by tier-1 semiconductor companies, and have been proven to reduce layout time and effort by factors of 3x to 20x and enable dramatically faster introduction of IC products in new technology nodes.
Visit Sagantec at www.sagantec.com