
Right-source your electronic designs
by nitindeo on 05-19-2011 at 5:42 pm

Concept2Silicon Systems (C2SiS) is focused on providing complete solutions for complex SoC and system designs, with best-in-class engineering capabilities and a highly cost-efficient business model. Our engineering team has experience delivering 200+ silicon and system design solutions to customers in the most advanced semiconductor process technology nodes, down to 28nm.


We have met extremely tough design criteria for embedded ARM designs with low power and high performance on tight timelines. Our customers include ARM, TI, Cypress, and a large communications company in Southern California. Please visit www.concept2silicon.com for more information.




Cadence Virtuoso 6.1.5 and ClioSoft Hardware Configuration Management – Webinar Review
by Daniel Payne on 05-19-2011 at 5:33 pm

Introduction
Cadence and ClioSoft presented a joint webinar recently, and I'll summarize what I learned from it.

What’s New from Cadence in Virtuoso 6.1.5

  • Back2Basics (28nm rule integration, SKILL improved with object-oriented programming, OASIS support, HTML Publisher, waveform viewer re-written for better analog support, smaller waveform database files, improved layout generation from the schematic source)
  • Connectivity Design (smarter wire-to-via automation, better auto-routing for buses and differential pairs, a single router to simplify setup, low power using the Common Power Format with a visual spreadsheet)
  • Design Constraints (add layout constraints for Encounter in the schematic, simplified constraint checking, constraints are bi-directional between schematic and layout)
  • Selective Automation (integrated reliability analysis, fluid guard rings without tweaking Pcell code)
  • Parasitic Aware Design (how physical layout affects performance: schematic -> constraints -> tests and sims -> pre-layout parasitic estimates -> circuit optimization -> MODGEN creation -> device placement -> net routing -> in-design verification -> extraction -> parasitic comparison)

Do Not

  • Use 6.1 the way you used 5.1; there's no benefit in that

Do

  • Take advantage of new features to get benefits

ClioSoft
Karim Khalfan talked about the SOS tool used by IC designers:

  • Hardware Configuration Management – should be easy to use
  • Design Management is for the entire IC team
  • Manage everything: Spec, RTL, Verification, P&R, Analog, Custom Layout
    • Large teams, multiple sites, data explosion, binary files, complex flows, IP and re-use, design variants
  • Features
    • Version control – text and binary files, folders, tags and labels
    • Release management – take snapshots
    • Issue tracking – connect to your favorite tools
    • Design reuse – reference previous projects
    • Global Collaboration – client/server architecture, cache, synched
    • Authentication – users identified, groups (schematics, layout, etc.)
    • Design Aware Integration – integrated into Virtuoso
  • Hardware Design
    • Easy to check in and check out, using design objects
    • Disk use is optimized, not sending terabytes across network
    • Isolated and shared work spaces, you decide
    • Design hierarchy can be managed
    • Visual differences – compare schematics or layouts, click on each change
    • Integration directly into Virtuoso
  • DDM is built specifically for IC designs; no third-party software needed

Karim then gave a live demo of Virtuoso 6.1.5 with the ClioSoft menus (Design Manager), explaining the commands as he went through the process of checking cells in and out for an IC project. He used both the Library Manager and the Schematic tools in Virtuoso, and showed the Visual Difference between two versions of a schematic or layout, flat or hierarchical.

 

Summary
Cadence and ClioSoft have created a powerful and flexible IC design environment for team-based design. Both schematic designers and layout designers will benefit from hardware configuration management by keeping track of their complex projects entirely within the familiar Virtuoso tools.

Also Read

How Avnera uses Hardware Configuration Management with Virtuoso IC Tools

Hardware Configuration Management and why it’s different than Software Configuration Management

Webinar: Beyond the Basics of IP-based Digital Design Management


A New Hierarchical 3D Field Solver
by Daniel Payne on 05-19-2011 at 2:04 pm

Introduction
3D field solvers produce the most accurate netlists of RC values for your IC layout, which can then be used in SPICE circuit simulators. However, most of these solvers produce a flat netlist, which tends to simulate rather slowly. Thankfully, several years ago the first hierarchical SPICE tools were offered by Nassda (HSIM) and Cadence (UltraSim), but they require a hierarchical netlist to simulate large designs quickly and efficiently.

What’s New?
This week I spoke with Dermott Lynch of Silicon Frontline about their latest Hierarchical 3D Field Solver named H3D.

Q: Why is hierarchical extraction a big deal?
A: Most SOCs have many memory and other repeated structures in their layout, so exploiting layout hierarchy provides a big benefit to users.
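
To make that concrete, here's a toy Python sketch (my illustration, not H3D's actual algorithm; the cell names and instance counts are invented) of why repeated structures help: each unique cell master is solved once and the result is reused for every placed instance, instead of re-solving every copy flat.

```python
import time

def extract_cell(cell_name):
    """Stand-in for an expensive 3D field solve of one unique cell master."""
    time.sleep(0.01)  # pretend this takes a long time
    return {"cell": cell_name, "R_ohm": 1.0, "C_fF": 2.0}  # placeholder parasitics

# 1,000,000 placed instances but only 3 unique masters (say a bitcell,
# a sense amp and a decoder); the counts are made up for illustration.
instances = ["bitcell"] * 999000 + ["sense_amp"] * 900 + ["decoder"] * 100

cache = {}
for name in instances:
    if name not in cache:                 # a flat flow would solve all 1,000,000
        cache[name] = extract_cell(name)  # a hierarchical flow solves only 3
print(f"field-solver runs: {len(cache)} instead of {len(instances)}")
```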

Q: What kind of approach did you use with H3D to create a hierarchical netlist?
A: We have a patented approach based on the Random Walk Algorithm; interested readers can see the patent here.
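
For readers curious about the underlying idea, here is a generic conceptual sketch of the random-walk method for Laplace's equation (not Silicon Frontline's patented implementation; the grid size, boundary voltages and walk count are arbitrary). The potential at a point can be estimated by averaging the boundary voltages reached by many random walks started from that point.

```python
import random

N = 20                      # N x N interior grid points
TOP_V, OTHER_V = 1.0, 0.0   # a conductor at 1 V along the top edge, 0 V elsewhere

def boundary_potential(x, y):
    """Voltage of whichever boundary the walk landed on."""
    return TOP_V if y == N + 1 else OTHER_V

def walk(x, y):
    """Random-walk from (x, y) until a boundary is hit; return that voltage."""
    while 1 <= x <= N and 1 <= y <= N:
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return boundary_potential(x, y)

def potential(x, y, walks=5000):
    """Monte Carlo estimate of the electrostatic potential at an interior point."""
    return sum(walk(x, y) for _ in range(walks)) / walks

print(potential(10, 18))   # point near the 1 V edge: noticeably higher
print(potential(10, 2))    # point near the grounded edges: close to 0
```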

Q: How are the CPU run times with your hierarchical approach to extraction?
A: This new tool shows a sub-linear increase in run time with the number of nets.

Q: Does H3D fit into my existing IC tool flow?
A: Yes, it reads a standard layout and produces a netlist that can be distributed or lumped: C, R, RC or RCCc.

Q: How about using distributed CPUs?
A: We support both distributed and multi-core CPUs to help speed up the results.

Q: Compared to a flat extractor, what kind of speed improvements should I expect with this hierarchical extractor?
A: From 10X to 100X speed improvements, based upon how much hierarchy there is in your layout. Designs with the most hierarchy are: FPGA, Image Sensors and memory rich SOCs.

Q: Should I expect the RC numbers extracted from your flat tool to match the hierarchical tool?
A: Yes, they are statistically equivalent values.

Q: How does H3D help my post-layout simulation run times?
A: We’re seeing speed improvements between 2X and 7X faster using the H3D netlist in a Fast SPICE simulator.

Q: How popular are your field solver tools?
A: Over 350 designs have been verified at over 30 customers, and they’ve been adopted by 12 of the top 30 semiconductor vendors.

Q: What is pricing like for H3D?
A: Pricing starts at $99K annually and there are several configurations to choose from.

Summary
If your IC design requires the highest accuracy for parasitic RC values and the layout has hierarchy, then you should consider the H3D tool flow: hierarchical extraction followed by a Fast SPICE tool like HSIM or UltraSim.



Electro-static Discharge (ESD)
by Paul McLellan on 05-18-2011 at 4:26 pm

Electro-static discharge (ESD) has been a problem since the beginning of IC production. Chips function on power supplies of up to a few volts (depending on the era), whereas ESD voltages are measured in the thousands of volts. When you reach out for your car door handle and a spark jumps across, that is ESD. If you were touching a chip instead of your car (and cars contain plenty of chips these days), the chip has to absorb those thousands of volts and harmlessly dump them without letting voltages or currents on the chip go beyond what it can handle. ESD doesn't just cause a potential problem when it happens; it can physically destroy the chip. Estimates are that as many as 35% of all in-field chip failures are due to ESD.

For example, if ESD is incorrectly handled and very high voltages end up on a gate of the chip, the discharge can destroy the thin oxide underneath the gate and make that transistor, and probably thus the whole chip, inoperable. Since the thin oxide is getting thinner with each process node, it is not surprising that ESD is a problem that is only going to get worse. Metal geometries are also getting smaller, and so have a reduced capacity to handle a current surge.

There are other reasons it is getting worse, not just shrinking geometries. The higher levels of integration on mixed-signal chips mean that there are many isolated, independent power/ground networks, which aggravates the problem. Also, there are more and more hand-held devices (cell phones etc.), which means there is more direct access to the ICs. You are not very likely to have an ESD problem with the engine controller in your car since you don't touch it; the base of your iPhone, on the other hand, has a socket with lots of connectors that go straight to the chips inside the phone.

ESD events can also occur during packaging, assembly and test of the IC. And, in fact, charge buildup inside the chip can also cause ESD failures, especially from in-package capacitors and from on-chip memories.

ESD protection has typically been implemented by placing clamp circuits at appropriate locations on the chip that do two things: first, provide a low-impedance discharge path for the ESD, and second, clamp the signal voltage at a level that avoids dielectric breakdown. Historically these clamp circuits have been placed in the I/O and power/ground pads. Input pads are especially vulnerable since the input pin has to be connected to the gate of the input driver. The clamp circuits are quite large, but since the pads and their drivers are already large this doesn't have a huge impact.
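
To get a feel for the numbers a clamp has to handle, here's a back-of-the-envelope Python sketch using the standard 100 pF / 1.5 kOhm human body model (HBM); the clamp on-resistance, holding voltage and oxide breakdown figures are my own illustrative assumptions, not values from the whitepaper.

```python
# Human body model (HBM): a 100 pF capacitor discharged through 1.5 kOhm.
V_HBM = 2000.0      # 2 kV stress level, volts
R_HBM = 1500.0      # HBM series resistance, ohms
C_HBM = 100e-12     # HBM storage capacitance, farads

I_peak = V_HBM / R_HBM            # ~1.3 A peak discharge current
tau    = R_HBM * C_HBM            # ~150 ns decay time constant
E_tot  = 0.5 * C_HBM * V_HBM**2   # ~0.2 mJ dumped through the chip

# Assumed clamp and oxide numbers, purely for illustration.
R_clamp_on   = 1.0    # ohms, clamp on-resistance
V_clamp_hold = 1.5    # volts, clamp holding voltage
V_ox_max     = 5.0    # volts, assumed thin-oxide breakdown limit

V_pad = V_clamp_hold + I_peak * R_clamp_on   # voltage the protected gate actually sees
print(f"peak current {I_peak:.2f} A, tau {tau*1e9:.0f} ns, energy {E_tot*1e3:.2f} mJ")
print("pad voltage %.2f V -> %s" % (V_pad, "OK" if V_pad < V_ox_max else "oxide at risk"))
```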

However, modern designs often have C4 bumps all across the chip, and with the coming of 3D (and 2.5D) designs this will only increase. This means that clamp circuits, which are large, are needed in the core of the chip too, and thus compete for area with core circuitry such as standard cells.

Historically, analysis of ESD has been done in a fairly ad hoc manner, with design guidelines and manual verification. But as the complexity of the design and the number of power/ground nets increase, this is no longer adequate. A full-chip ESD verification solution for ESD signoff is required. Indeed, as more ESD protection is needed in the core, it is no longer enough to analyze after the design is complete; ESD protection needs to be planned, since it has a potentially large impact on the area available in the core for the actual implementation of the design.

Whitepaper on Pathfinder



Adjusting Custom IP to Process Changes
by Daniel Nenni on 05-16-2011 at 1:57 pm

A High-Definition Multimedia Interface (HDMI) IP core was being implemented in an advanced process technology. This fairly large and complex analog mixed-signal (AMS) IP, comprising over 130K devices, was close to being finalized and shipped to the customer. But many design rules at the foundry were unexpectedly changed from recommended to compulsory, creating hundreds of thousands of violations. It would have taken months to fix all the problems by hand, so an automated migration was really the only possible solution.

The main changes: New PDKs were received from the foundry incorporating modified Pcells and changed rules for DRC (design rule check) and DFM (design for manufacturing). Some of the most significant changes were:

  • Larger poly gate end-cap: simple to fix when there is room for the extra poly, but very hard to fix when there are other nearby transistors, poly, etc.
  • Larger metal enclosure of vias: simple to fix if the via is isolated, but very hard to fix in the typical case where the via is surrounded by minimum-spacing routing.
  • Small notches prohibited.

The main non-changes: Obviously the migration needed to be performed with the minimal possible changes to the design, especially for an AMS block. In particular, the design hierarchy needed to be maintained with no changes, and LVS had to continue to pass. Corrections should have minimal effect on the design, especially in areas where no violations needed fixing. Virtuoso database integrity had to be maintained, in particular structure, connectivity, Pcells, etc.

The schedule: The runtime should be only a few hours to allow rapid iteration. At least 90% of the design rule and DFM violations had to be fixed automatically. The entire project, including any manual fixup and technology file creation, had to be completed in 3 weeks.

How the above challenges were addressed:

Larger metal enclosure of vias in the target process: the metal1 enclosure around each via needs to be enlarged without creating DRC errors due to the extra metal introduced. A similar change is required on all metal levels. The migration engine not only enlarges the metal enclosure, but also moves adjacent wires where needed to maintain correct spacing.
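
Here's a minimal sketch of that two-step fix (grow the enclosure, then push a neighbor to restore spacing) on axis-aligned rectangles; the rule values and coordinates are invented, and this is obviously not Sagantec's migration engine.

```python
# Rectangles are (x1, y1, x2, y2) in microns.
ENCLOSURE = 0.02   # required metal1 extension past the via edge (assumed)
SPACING   = 0.05   # required metal1-to-metal1 spacing (assumed)

def enclose(via, metal):
    """Grow the metal rectangle to enclose the via by ENCLOSURE on every side."""
    return (min(metal[0], via[0] - ENCLOSURE), min(metal[1], via[1] - ENCLOSURE),
            max(metal[2], via[2] + ENCLOSURE), max(metal[3], via[3] + ENCLOSURE))

def push_right_neighbor(metal, wire):
    """If a wire to the right is now too close, slide it right to restore spacing."""
    gap = wire[0] - metal[2]
    if gap < SPACING:
        shift = SPACING - gap
        wire = (wire[0] + shift, wire[1], wire[2] + shift, wire[3])
    return wire

via    = (1.00, 1.00, 1.06, 1.06)
metal1 = (0.99, 0.99, 1.07, 1.07)   # old enclosure, too small for the new rule
wire   = (1.10, 0.90, 1.15, 1.20)   # minimum-spaced neighboring wire

metal1 = enclose(via, metal1)
wire   = push_right_neighbor(metal1, wire)
print("metal1:", metal1, "neighbor wire:", wire)
```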

Replacing some Pcells introduced jogs: replacing some Pcells and via cells with target PDK cells resulted in small metal-jog DRC errors, which needed to be removed automatically.

The poly-to-diffusion spacing was too small: at 90° gate connections the poly spacing to the source/drain diffusion needed to be enlarged automatically to be DRC correct. The migration engine moves adjacent poly wires or devices where needed to maintain correct spacing.

Experience: The Sagantec application engineer took about a week to set up the technology file from scratch and debug it, running a few sub-blocks to pipe-clean the flow and ensure that the results maintained LVS connectivity and Virtuoso database integrity. Midway through the project, a still-newer PDK was received from the foundry with modified devices that required changes to the source layout database. However, because the automated flow was in place, these changes caused only minimal delay compared to using massive layout resources to make the changes by hand.

The Sagantec tool automatically corrected over 95% of the ~250,000 violations created by the rule changes, both the DRC and the DFM problems. The remaining violations were easily handled and cleaned up manually in less than a week of effort.

The final migrated design completely maintained the hierarchy of the original design and was LVS clean.

Results: The corrected output was available before the agreed deadline. Each iteration of the design took only a few hours to run. Over 95% of the violations were fixed automatically, and the remaining ones were easily fixed within a week.

One big advantage of the flow was risk reduction and change management, since the rules, requirements and libraries were all unstable. The final Virtuoso database had exactly the same structure as the original layout, and nothing in the database was lost or changed during the correction.

IP block adaptation: Altering a complex custom physical-design IP to account for design rule changes can be done either by an experienced and highly effective layout team or with an automated flow that handles almost all of the work automatically. But when the volume of modification is high and the schedule is tight, the manual correction route is no longer a viable option. Furthermore, with any subsequent design rule changes, the manual work needs to be repeated, resulting in additional delay and cost. That is where automated solutions, such as Sagantec's migration technology, provide the most significant advantage: once the flow is set up, rule-update iterations can be completed in hours rather than weeks. In this case, layout adjustment work that would have taken 20 weeks was reduced to one week, effectively a 20x effort reduction. In addition to licensing the migration software, Sagantec also offers migration as a service, performed by its experienced application engineers. Such a service offering minimizes turn-around time, ensures optimal use of the software, and delivers high-quality results in the shortest time.

Sagantec Demo Suite Registration




Shakeup at Mentor Graphics
by Daniel Payne on 05-12-2011 at 12:22 pm

Reading the title, you guessed it right: Mentor Graphics has three new board members today from the slate offered by billionaire activist Carl Icahn:

  • José Maria Alapont, chief executive of the auto parts maker Federal-Mogul
  • Gary Meyers, a director of the chip maker Exar
  • David Schechter, an executive at Mr. Icahn’s investment firm

After Wally Rhines made the announcement of the three new board members, the audience broke into applause and new board member David Schechter said, “Thank you again, thank you Mentor employees. I look forward to working with you.”


The Commons at Mentor, Wilsonville

The acrimony in the press over the past month between Mentor and Mr. Icahn has now given way to a new world order within Mentor Graphics as three board members have been replaced by outsiders.

My hope is that Mentor continues to focus on providing innovative EDA tools to meet the challenges for IC, PCB and Systems Designers.

The Meeting Details
Arriving 30 minutes early at Mentor Graphics this morning, I parked and approached the building for the annual shareholders meeting. At the door there was tight security and a check-in process to identify me as press. I sat next to Mike Rogoway, business writer at the Oregonian; we've both been following this hostile drama with Mentor for the past year. We tried in vain to connect to Clear wireless, but it simply wasn't working, so we couldn't tweet out the highlights.

Ry Schwark, PR Director, told me the press rules: no photography, no video, no questions.

The buzz in the room before the meeting is, “Will Carl Icahn show up?” Most people think that he won’t show and instead will send others in his place.

Bryan Derrik, VP of Corporate Marketing is making the rounds and chatting with co-workers and shareholders.

Don Nail, a Mentor shareholder for 15 years, is ready for change and has voted for the Icahn slate.

It's 9:11 AM and Wally Rhines of Mentor just showed up, so we'll be getting started soon. A hush has fallen over the room. All the Mentor executives and board members occupy the front row.

The 30th annual meeting is opened by Wally Rhines. Questions will be allowed, and cards are handed out for attendees to use during Q and A.

Directors are introduced to a deathly quiet audience.

Notable executives are introduced by Wally; still a quiet audience.

Dean Freed takes over from Wally Rhines to announce the rules for the meeting. Last call to change your mind on voting by ballot in person. Reading from a script, the minutes from last year are skipped and the list of board candidates is read.

IBS Associates is the official company to count all votes today.

Dean mentions the board members nominated by Carl Icahn; Mr. Schechter is in attendance today.

Six items for vote are rattled off.

Wally Rhines takes over from Dean and reports a summary of recent Mentor accomplishments. FY2011 results: revenue of $915M, up 14% from FY2010, the fastest growth rate among public EDA companies. Non-GAAP EPS of $0.70, up 49%. Bookings up 30% for the year.

Review of “What is EDA?” Software for IC, PCB and Systems design.

EDA Product Segments show 60 distinct software categories for EDA (courtesy of Gary Smith EDA).

Largest EDA market share by segment (SNPS, CDNS, MENT) shows about a 40% to 66% market share for the #1 supplier across each category.

Mentor Revenue: Integrated System Design PCB is 25%, Scalable Verification (Simulation, Emulation) is 25%, IC Design to Silicon (Calibre, IC, Olympus P&R) is 30%, New and Emerging (transportation, embedded, DFT) is 15%.

80% of Mentor revenue is in products where they are #1 or #2 in market share.

PCB is a #1 market for Mentor, and they have grown their market share to 40%.

Mentor Strategies: Extend, Detect Discontinuity (Calibre vs. Dracula).

Emulation – Functional simulation is too slow, so emulation will enable verification of the largest new designs (Veloce – about 5X faster than competitors). Market share doubled in FY2011.

New Markets – Transportation (auto and aerospace) and embedded software. Products: CHS – wire harness, System Vision – mechatronic, Volcano – network design.

8 out of last 9 quarters Mentor has exceeded analyst expectations.

Fiscal 2012 revenue is estimated at $1B, 9% growth.

Dean Freed – any more ballot votes? At 9:40 AM the polls are closed.

Waiting for a final shareholder to hand-write his ballot, a small murmur of whispering starts in the room. Wally and Dean look on, then Wally cracks a joke: "Lot of suspense, this could be a swing vote."

The preliminary vote says that 5 board members stay and that the 3 new members from Icahn's slate are elected. The room erupts into applause.

Wally pledges cooperation with Icahn’s 3 board members.

Wally opens up for questions and answers.

David Schechter, new board member: "Thank you again, thank you Mentor employees. I look forward to working with you."

The Votes being taken away by armored truck

Shareholder Questions:

Jim Romero – What about acquisitions to grow?

Wally – Yes, though primarily our growth is internal. We also do acquisitions, typically small in size; Flomerics is one example.

No more questions or comments.

David Schechter (Newest Mentor board member)
Q: Was this a surprise?
A: David – No, we expected this result. We look forward to working with the other Mentor board members to return the greatest value to shareholders.

Q: Any other comments?
A: David – I’ll have Mr. Icahn reply to you.

Mentor Founder
After the meeting I chatted with Mentor Founder Tom Bruggere (now Chairman/CEO at 13therapeutics) and Mike Bosworth (former CEO of Context, acquired by Mentor).

Q: What did you think of the vote today?
A: It is what it is.

Mike Bosworth, Tom Bruggere

I first met Mr. Bruggere in the '90s when Mentor Graphics acquired Silicon Compiler Systems.

All of the founders of Mentor have long since left the company, although many are still interested in the outcome of today’s vote for a change in board members.



How Good is Your Verification?
by Paul McLellan on 05-11-2011 at 5:00 am

The traditional way of analyzing the effectiveness of testing, in both the software world and the RTL world, is code coverage: make sure that every line of code is executed. This is a pretty crude measure, since even 100% code coverage doesn't mean that every condition has really been tested, but it is certainly necessary; after all, if a line of code is never executed then there is no way to know whether it is correct.

In the manufacturing test world the criterion is fault coverage. Every signal is considered to be stuck at 0 and at 1, and the percentage of these faults that actually propagate to the outputs is calculated. This isn't perfect, since other types of faults can exist (two signals shorted together, for example), but again it is a good starting point, and there is a good chance that other faults will be detected if the stuck-at fault coverage is good.
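
As a concrete (and deliberately tiny) illustration of the metric, here is a Python calculation of stuck-at coverage for a two-gate circuit with a hand-picked test set; the circuit and tests are my own example, not a production fault simulator.

```python
def simulate(a, b, c, fault=None):
    """Evaluate y = (a AND b) OR c, with an optional (net, stuck_value) fault."""
    nets = {"a": a, "b": b, "c": c}
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]
    nets["n1"] = nets["a"] & nets["b"]       # internal net n1 = a AND b
    if fault and fault[0] == "n1":
        nets["n1"] = fault[1]
    nets["y"] = nets["n1"] | nets["c"]       # output y = n1 OR c
    if fault and fault[0] == "y":
        nets["y"] = fault[1]
    return nets["y"]

tests  = [(1, 1, 0), (0, 1, 0), (1, 0, 0)]   # a hand-picked, incomplete test set
faults = [(net, v) for net in ("a", "b", "c", "n1", "y") for v in (0, 1)]

detected = sum(
    any(simulate(*t, fault=f) != simulate(*t) for t in tests)
    for f in faults
)
print(f"stuck-at coverage: {detected}/{len(faults)}")
# Prints 9/10: no test ever drives c=1, so "c stuck-at-0" is never observed.
```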

A better way to do things is to take the design, add a bug, run the tests, and see whether the added bug is detected. Repeat for lots of bugs and you start to get a good idea of how effective your verification is. This gives you an objective metric of what percentage of the injected bugs are found and, moreover, by looking at which bugs are missed, lets you strengthen the verification tests in the appropriate areas.


Certitude does just this: it automatically injects bugs into the design, runs the simulations, and then sees how many of the injected bugs are flagged. For example, it might change "a=b|c" in your original code into "a=b&c".
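
In software terms the mechanics look like the sketch below (the golden function, mutants and tests are made up for illustration and say nothing about how Certitude itself is implemented): inject a bug, rerun the existing tests, and count how many injected bugs the tests actually catch.

```python
def design(a, b, c):
    """Golden behavior that the existing testbench was written against."""
    return a | (b & c)

# Hand-written mutants standing in for automatically injected bugs.
mutants = {
    "| changed to &":  lambda a, b, c: a & (b & c),
    "& changed to |":  lambda a, b, c: a | (b | c),
    "b and c ignored": lambda a, b, c: a,
}

tests = [(1, 0, 0), (0, 1, 1), (0, 0, 0)]   # the checks we already have

def killed(mutant):
    """A mutant is 'killed' if any existing test tells it apart from the design."""
    return any(mutant(*t) != design(*t) for t in tests)

results = {name: killed(m) for name, m in mutants.items()}
print(results, f"-> {sum(results.values())}/{len(results)} injected bugs detected")
# The "& changed to |" mutant survives because no test sets b != c
# (e.g. a=0, b=1, c=0), which points directly at a stimulus hole to close.
```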

This helps with the closure challenge: is it safe to sign off the RTL? Since it is impossible to ever exhaustively test a design, this is always somewhat of a judgement call. But Certitude gives an objective measure of stimulus and checker completeness to support this signoff decision, along with pointers to specific holes to accelerate the closure process by directing incremental efforts to the areas requiring additional attention.

The latest version has been further enhanced by adding new fault types that better represent typical SoC failures. Fault dropping has been expanded to remove redundancy in the results. Results can now be ranked based on the impact of the fault, directing the user to where to analyze first.

Certitude product page


SOC Realization: How Chips Are Really Designed
by Paul McLellan on 05-09-2011 at 10:00 pm

If you just casually peruse most marketing presentations by EDA companies, you'd come to the conclusion that most SoCs are designed from scratch, wrestling the monster to the ground with bare hands. But the reality is that most SoCs consist of perhaps 90% IP blocks (many of them memories). That still leaves the remaining 10% to design, which might be 100 million gates, a non-trivial task. But increasingly there is a mismatch between the classic EDA flow and the way systems are actually put together out of blocks of predesigned IP, microprocessors, memories, and pre-existing software operating systems and stacks.


SoC Realization is the bridge between the traditional EDA flow for creating a design from scratch and the IP- and software-centric world of designing a system. Since most design is done this way, this is likely to be an area of technology that will play a major role in the years ahead.

A chip being designed will typically contain a lot of the previous generation of the chip, and especially a lot of the previous software of the chip. Software tends to be much longer lasting than hardware, enduring for decades whereas hardware lasts for just one process node before being re-designed.

SoC Realization is taking design up to the next level and thus being able to analyze and verify design concepts much earlier in the design process in order to avoid costly errors or excessive iteration downstream. It ties downwards into the silicon implementation realm and upwards into the software world.

There are four critical areas driving today's IP-based SoC Realization solution:

  • IP sourcing: Quality metrics? Silicon proven in which processes/foundries? Power, performance, area, license fee?
  • SoC creation: Optimality of architecture, software/hardware interface, scalability across product generations?
  • SoC handoff: Is the SoC ready for implementation? Will it meet specs? Can impact of system decisions be measured?
  • Continuous model fidelity: At all levels of abstraction, from the software virtual platform down to the silicon implementation, is the fidelity of the model maintained? Is system intent correctly preserved as the design is reduced to more silicon-specific implementations?

Read the Atrenta white-paper on SoC Realization here



Cadence EDA360 is Paper!
by Daniel Nenni on 05-08-2011 at 4:02 pm

Hard to believe a year has gone by since the big announcement of the Cadence Blueprint to Battle 'Profitability Gap'; Counters Semiconductor Industry's Greatest Threat! Having spent more time on it than I should have, here is my opinion of EDA360 on its first anniversary.

Richard Goering did a very nice anniversary piece, "Ten Key Ideas Behind EDA360 – A Revisit," which is here. Points 1-9 are a good description of what Synopsys and Mentor already do today, but they call it revenue instead of a vision. Point 10 is something I have to take exception to because of its absurdity.

10. No one company or type of company can provide all the capabilities needed for the next era of design. EDA360 requires a collaborative ecosystem including EDA vendors, embedded software providers, IP providers, foundries, and customers. Cadence is committed to building and participating in that ecosystem…

Let's not forget that when Cadence announced EDA360 they productized it immediately:

SAN JOSE, Calif., 27 Apr 2010
Cadence Design Systems, Inc. (NASDAQ: CDNS), the global leader in EDA360, today laid out a new vision for the semiconductor industry, EDA360.

The only companies that are going to "collaborate" on EDA360 are companies that want to be acquired by the "Global Leader in EDA360". Most of the semiconductor professionals I mentioned EDA360 to laughed out loud. EDA can't even spell the word collaboration, nor will they follow the Global Leader in EDA360 down the collaboration bunny trail.

A video of the panel I did on "Enabling the Collaboration Across the Ecosystem to Deliver Maximum Innovation" at the Design Tech Forum can be found in the Wiki here. Executives from Cadence, Synopsys, Mentor, eSilicon, and GlobalFoundries were there, and there was not one mention of EDA360.

I also took issue with them calling it a manifesto in my blog Cadence EDA360 Manifesto, and questioned John Bruggeman's social media savvy since he thinks LinkedIn is only for job searching. His blog (Official doorway into the life and mind of JohnB) and Twitter (@JohnBruggeman) accounts were brand new. Both have since been abandoned, but read the handful of blogs and comments; they are pretty funny. JohnB also does not see the value in a crowd-sourcing social media platform like SemiWiki.com. Luckily, thousands and thousands of semiconductor professionals around the world do not agree with JohnB.

Another problem I have with EDA360 is the fear, uncertainty, and doubt (FUD) it attempts to create. For example, it argues that Moore's Law has hit a wall due to rising development costs. FUD is a standard EDA tool to baffle customers and build a value proposition where none currently exists, so nothing new there.

On the positive side, I think EDA360 is an excellent roadmap for Cadence. The company seems to have focus, and hopefully EDA360 products will continue to be developed and deployed. The one-year anniversary is paper, by the way, thus the title. Just my opinion of course. Share yours in the comment section and I will make sure JohnB gets them.



40nm to 28nm Migration Success Story
by Paul McLellan on 05-08-2011 at 4:00 pm

The problem: To move a dual-port SRAM library and macros from a 40nm process to a 28nm process. In addition to all the changes between two different foundry processes, the 28nm rules are disruptive and incompatible with the previous rules. The memory core cells (foundry-specific) would also need to be completely replaced.

The conventional wisdom was that migrating an IP block to 28nm was impossible and a complete re-design would be required. As usual, the schedule was very aggressive, so a quick migration was really the only feasible option.

The main changes: There were many changes in design rules between the two foundry processes. Some of the most challenging changes were:

  • No poly bends. The original process allowed poly bends and made extensive use of them in the design.
  • Extra dummy rule. Small transistor stacks required an additional dummy poly “gate” off the end of the transistor. This was not electrically functional, but was required for optical lithography reasons.
  • Memory core cells from the original process were replaced with new core cells for the new process. These new core cells are architecturally different, resulting in various mismatches that would require attention.
  • Devices outside the memory core cells needed to have their width and length adjusted according to complex rules.
  • All devices must be vertical. The original design was done in the horizontal direction, meaning it must be rotated when written onto the reticle.

The main non-changes: Many aspects of the design could not be changed. The design was originally done using Virtuoso and Pcells in DFII but the migrated design needed to be in OpenAccess. It was important to maintain the hierarchy and ensure that LVS would pass for the new design. Transistor aspect ratios also needed to be adjusted appropriately.

The schedule: There was only a month to do the migration. No more than two days of manual cleanup was allowed at the end to fix anything that could not be migrated automatically.

How each major challenge was addressed:

No poly bends. When possible, the poly would need to be replaced with metal1 if the routing channel was open. If the routing channel was not open, then the result was left LVS clean but with DRC violations that would need to be fixed manually. A preliminary analysis revealed that there would be very few such cases left.


To avoid poly bend violations, the tool automatically moved poly/metal connections where needed to keep the poly line straight.


Extra dummy rule. The extra dummy rule requires that additional dummy poly "gates" be added to the end of transistors below a certain size. This is complicated by the fact that the transistors appear within Pcells.


Memory core cells. The memory core-cells from the original design were replaced with new foundry-provided core-cells for the new target process. A direct mapping was possible for most of the core cells, including abutments, which could be handled directly.

However, mapping the interconnections between the core-cell array and the decoders and I/Os was another challenge. There were fewer horizontal metal2 lines in the new technology, so the number of lines had to be adjusted during the migration.

Similarly there were a different number of metal3 lines which required adjustment during the migration. Further, the metal3 lines needed to be re-ordered (the order that they emerged from the core was different for the two architectures). The re-ordering of lines had to be done manually after the migration but space was automatically left to make this straightforward.

Device resizing. Transistors needed to be re-sized automatically during the migration according to complex rules. Devices with channel length of 40nm were mapped to 30nm. Longer channel devices had their channel length shortened to a certain specific ratio of the original length. As for transistor channel widths, these were shrunk by different ratios for the nMOS vs. the pMOS transistors. The minimum width was further constrained to a specific size that would force every source/drain to have a minimum of two contacts, for reliability reasons. Clearly, different ratios and limits could have been handled automatically too.
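
A hedged sketch of what such rule-driven resizing might look like is shown below; the length mapping table, shrink ratios and two-contact minimum width are placeholders standing in for the actual (confidential) rules, not the values used in this project.

```python
L_MAP     = {0.040: 0.030}                  # um: minimum-length devices map 40nm -> 30nm
L_RATIO   = 0.75                            # assumed ratio for longer channels
W_RATIO   = {"nmos": 0.80, "pmos": 0.85}    # assumed width shrink per device type
W_MIN_2CT = 0.120                           # um: assumed width that fits two contacts

def resize(device):
    """Apply the length map/ratio and width ratio, then enforce the 2-contact minimum."""
    dtype, w, l = device["type"], device["w"], device["l"]
    new_l = L_MAP.get(l, round(l * L_RATIO, 3))           # table lookup first, else ratio
    new_w = max(round(w * W_RATIO[dtype], 3), W_MIN_2CT)  # never below 2-contact width
    return {**device, "w": new_w, "l": new_l}

devices = [
    {"name": "MN1", "type": "nmos", "w": 0.400, "l": 0.040},
    {"name": "MP7", "type": "pmos", "w": 0.120, "l": 0.100},
]
for d in devices:
    print(d["name"], "->", resize(d))
```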

All devices must be vertical. Since the design would be rotated after migration, all design rules needed to interchange horizontal and vertical between the design rule manual and the Sagantec technology file.

Results: The migration was done in three steps.

Step 1: the design rules were encoded into a Sagantec technology file, taking into account the horizontal/vertical interchange.

Step 2: the core-cell array was migrated, taking into account the shrink and the reduction in the number of metal2 lines available. Abutments were all fully respected. The core-cell to X-decoder metal2 connections were handled completely correctly.


The core-cell array to X-decoder strap was completely handled, including the metal2 lines that could be deleted due to the new abutments.


The core-cell array to X-decoder end-cell was completely handled correctly.


The core-cell array to I/Os on metal3 was placed completely correctly. However, the order of the lines was incorrect and would need to be fixed manually. Additional space was automatically left to make this task straightforward.


The run time for the entire migration was a few hours.

Results: The dual-port SRAM migration was delivered on time. Minor fixing was required after the initial delivery to update some small device size changes and other details (1½ days in total).

The memory core-cell array was replaced 100% clean. Core cell and peripherals cell placement and abutment was fully respected. Everything except for the re-ordering of I/O lines on metal3 was handled completely automatically.

Device re-sizing was fully implemented and target device sizes were respected.

As many poly angles as possible were removed. Where this was not possible (because metal1 was already in use), the remaining few violations were left for manual fixing.

The entire project took one man-month (half of that time was spent setting up the design rules and flow from scratch, half actually performing the migration). A 2K x 32 dual-port SRAM macro takes a few hours of run time. Migrating subsequent SRAM instances comes almost for free: just a few hours to set up and a few hours of run time.

IP block migration: Moving a complex custom IP block from one process to a very different process can be done either by an experienced layout team or with an automated flow that handles almost all of the work automatically, such as Sagantec's migration technology. For migration to older process nodes or between similar processes, it is possible that a shrink followed by manual fix-up of violations would work, but at advanced process nodes, and when the processes have very different rules, the number of violations generated can be overwhelming and make such an approach impractical. The alternative would be a complete redesign, which in this case would have been prohibitively expensive in both schedule and resources. In addition to licensing migration software, Sagantec also has experienced application engineers who can perform migrations as a service, to minimize turn-around time, get the highest quality results, and maximize ROI. In this case study the Sagantec service not only provided a significant return on the investment, it actually enabled getting these 28nm memory macros implemented within such demanding schedule requirements.

Sagantec Demo Suite Registration