
SpringSoft Laker vs Tanner EDA L-Edit
by Daniel Nenni on 08-26-2012 at 7:00 pm

Daniel Payne recently blogged some of the integration challenges facing Synopsys with their impending acquisition of SpringSoft. On my way back from San Diego last week I stopped by Tanner EDA to discuss an alternative tool flow for users who find themselves concerned about the Laker Custom Layout road map.

Design of the analog portion of a mixed-signal SoC is routinely cited as a bottleneck for getting SoC products to market. This is primarily attributed to the iterative and highly artistic nature of analog design, and to analog design tools that have not kept pace with the productivity gains of digital design tools. Fortunately, there is a well-known, time-proven tool for analog and mixed-signal design that offers compelling features and functionality. What's more, with several upcoming enhancements this tool is very well suited to be a top choice for leading SoC designers who don't have time to wait and see how the Synopsys Custom Designer / Laker Custom Layout integration is going to play out.

L-Edit from Tanner EDA has been around since 1987. It was the seminal EDA software tool offered by Tanner Research. John Tanner, a Caltech grad student and Carver Mead advisee, originally marketed L-Edit as "The Chip Kit," a GUI-driven, PC-based layout editor. Several of the core principles Dr. Tanner embraced when he first started the company continue to be cornerstones of Tanner EDA twenty-five years later:

Relentless pursuit of productivity for design enablement
The tool suite from Tanner can be installed and configured in minutes. Users consistently cite their ability to go from installing the tools to having a qualified design for test chips in weeks. And we're not talking about some vintage PLL or ADC designed in a 350-nanometer process: Tanner has L-Edit users actively working at 28nm and 22nm on advanced technologies and IP for high-speed I/O and flash memory.

In addition to improving L-Edit organically, Tanner has embraced opportunities to add functionality and capability with partners. L-Edit added a powerful advanced device generator, HiPer DevGen, back in 2010. It automatically recognizes and generates common structures that are typically tedious and time-consuming to lay out, such as differential pairs, current mirrors, and resistor dividers. The core functionality was built out by an IC design services firm and is now an add-on for L-Edit. More recently, Tanner has announced offerings that couple their tools with offerings from BDA, Aldec and Incentia. This is a great sign of a company that knows how to "stick to their knitting" while also collaborating effectively to continue to meet their users' needs.

Tanner L-Edit v16 (currently in beta and due out by year-end) offers users the ability to work in OpenAccess, reading and writing design elements across workgroups and across tool platforms. Tanner EDA CTO Mass Sivilotti told me, "Our migration to OpenAccess is the biggest single capability the company has taken on since launching L-Edit. This is a really big deal. We've been fortunate to have a strong community of beta testers and early adopters that have helped us to ensure v16 will deliver unprecedented interoperability and capability."

Collaboration with leading foundries for certified PDKs
This has been a key focus area for Tanner, and it shows. With foundry-certified flows for Dongbu HiTek, TowerJazz and X-Fab and a robust roadmap, it's clear that this is a priority. Greg Lebsack commented, "Historically, many of Tanner EDA's users had captive foundries or other means to maintain design kits. Over the past several years, we've seen an increasing interest by both the foundries and our users to offer certified PDKs and reference flows. It just makes sense from a productivity and design enablement standpoint."

Maniacal focus on customer service and support
Tier II and Tier III EDA customers (companies with more modest EDA budgets than a Qualcomm or Samsung) often cite lackluster customer service from the "big three" EDA firms. This is understandable, as much of the time, attention and resources of the big three EDA companies is directed towards acquiring and keeping large customers. Tanner EDA has many users in Tier I accounts, but those users tend to be in smaller research groups or advanced process design teams. Their sweet spot has been Tier II and Tier III customers, and they've done a great job of serving that user base. One of the keys to this, John Tanner says, is having a core of the support and development teams co-located in Monrovia. "It makes a tremendous difference," says Dr. Tanner, "when an FAE can literally walk down the hall and grab a development engineer to join in on a customer call."

Features and functions that are “just what you want” – not “more than you need”

John Zuk, VP of Marketing and Business Strategy explained it to me this way: “Back in 1987, the company touted that L-Edit was built by VLSI designers for VLSI designers. Inherent in that practice has been the embracing of a very practical and disciplined approach to our product development. Thanks to very tight coupling with our user-base, we’ve maintained a keen understanding of what’s really necessary for designers and engineers to continue to drive innovation and productivity. We make sure the tools have essential features and we don’t load them up with capabilities that can be a distraction.”


While Tanner may have had a humble presence over this past quarter century, the quality of their tools and their company is proven by Tanner's impressive customer set. A look at the selected customer stories on their website and the quotes in several of their datasheets reveals some compelling endorsements. From FLIR in image sensors, to Analog Bits in IP, to Knowles for MEMS microphones, to Torex for power management, Tanner maintains a very loyal user base.

The $100M question is: Will Tanner EDA pick up where SpringSoft Laker left off?


IP Wanna Go Fast, Core Wanna Not Rollover
by Don Dingee on 08-23-2012 at 8:15 pm

At a dinner table a couple years ago, someone quietly shared their biggest worry in EDA. Not 2GHz, or quad core. Not 20nm, or 450mm. Not power, or timing closure. Call it The Rollover. It’s turned out to be the right worry.

The best brains spent inordinate hours designing and verifying a big, hairy, heavy-breathing processor core to do its thing at 2GHz and beyond. They spent person-months getting memory interfaces tuned to keep the thing fed at those speeds. They spent more time getting four cores to work together. They lived with a major customer or two making sure the core worked in their configuration. They sweated out the process details with the fab partners, making sure the handoff went right and working parts came out the other end. All this was successful.

So the core has been turned loose on the world at large, for anyone to design with. On some corner of the globe, somebody who has spent a lot of money on this aforementioned processor core grabs an IP block, glues it and other IP functions in, and the whole thing starts to roll over and bounces down the track to a crunching halt.

The EDA tools can’t see what’s going on, although they point in the general direction of the smoke. The IP function supplier says that block works with other designs with that core.

The conclusion is it’s gotta be the core, at least until it’s proven innocent. Now, the hours can really start piling up for the design team and the core vendor, and more often than not it will turn out the problem isn’t in the processor core, but in the complexity of interconnects and a nuance here or there.

Been here? I wanna go fast, and so do you, and the IP suppliers do too, and there are open interfaces like MIPI to help. The harsh reality is that IP blocks, while individually verified functionally, haven't seen everything that can be thrown at them from all possible combinations of other IP blocks at the most inconvenient times. Enter the EDA community and the art of verification IP in preventing The Rollover.

Problematic combinations aren't evident until a team starts assembling the exact IP for a specific design, and the idea of "verified" is a moving target. Synopsys has launched VIP-Central.org to help with this reality. Protocols have gotten complex, speeds have increased, and verification methodologies are just emerging to tackle the classes of problems these very fast cores and IP are presenting.

Janick Bergeron put it really well when he said that Synopsys is looking to share learning from successful engagements. No IP supplier, core or peripheral, wants to be the source of an issue, but some issues are fixed more easily than others. Understanding errata and a workaround can often be more expedient than waiting for a new version with a solution from the IP block supplier, and a community can help discuss and share that information quickly.

Recorded earlier this month, a new Synopsys webinar shows an example of how this can work: verifying a mobile platform with images run from the camera to a display.

The idea of verification standards in EDA wasn't so that functional IP suppliers could say they've checked their block, but rather to give designers a way to check that block within a new design, alongside other blocks of undetermined interoperability. "Verified" IP catalogs are a starting point, but the real issues have only started to surface, and verification IP can help keep your next design from becoming a scene in The Rollover.


A Brief History of ASIC, part II
by Paul McLellan on 08-23-2012 at 8:00 pm

All semiconductor companies were caught up in ASIC in some way or another because of the basic economics. Semiconductor technology allowed medium sized designs to be done, and medium sized designs were pretty much all different. The technology didn’t yet allow whole systems to be put on a single chip. So semiconductor companies couldn’t survive just supplying basic building block chips any more since these were largely being superseded by ASIC chips. But they couldn’t build whole systems like a PC, a TV or a CD player since the semiconductor technology would not allow that level of integration. So most semiconductor companies, especially the Japanese and even Intel, started ASIC business lines and the market became very competitive.

ASIC turned out to be a difficult business to make money in. The system company owned the specialized knowledge of what was in the chip, so the semiconductor company could not price to value. Plus the system company knew the size of the chip and thus roughly what it should have cost with a reasonable markup. The money turned out to be in the largest, most difficult designs. Most ASIC companies could not execute a design like that successfully so that market was a lot less competitive. The specialized ASIC companies that could, primarily VLSI Technology and LSI Logic again, could charge premium pricing based on their track record of bringing in the most challenging designs on schedule. If you are building a sky-scraper you don’t go with a company that has only built houses.

As a result of this, and of getting a better understanding of just how unprofitable low-volume designs were, everyone realized that there were fewer than a hundred designs a year being done that were really worth winning. It became a race for those hundred sockets.

Semiconductor technology continued to get more powerful and it became possible to build whole systems (or large parts of them) on a single integrated circuit. These were known as systems-on-chip or SoCs. The ASIC companies all started to build whole systems such as chipsets for PCs or for cell-phones alongside their ASIC businesses which were more focused on just those hundred designs that were worth winning. So all semiconductor companies started to look the same, with lines of standard products and, often, an ASIC product line too.

One important aspect of the ASIC model was that the tooling, the jargon word for the masks, belonged to the ASIC company. This meant that the only company that could manufacture the design was that ASIC company. Even if another semiconductor company offered them a great deal, they couldn’t just hand over the masks and take it. This would become important in the next phase of what ASIC would morph into.

ASIC companies charged a big premium over the raw cost of the silicon that they shipped to their customers. ASIC required a network of design centers all over the world staffed with some of the best designers available, obviously an expensive proposition. Customers started to resent paying this premium, especially on very high volume designs. They knew they could get cheaper silicon elsewhere but that meant starting the design all over again with the new semiconductor supplier.

Also, by then, two other things had changed. Foundries such as TSMC had come into existence. And knowledge about how to do physical design was much more widespread and, at least partially, encapsulated in software tools available from the EDA industry. This meant that there was a new route to silicon for the system companies: ignore the ASIC companies, do the entire design including the semiconductor-knowledge-heavy physical design, and then get a foundry like TSMC to manufacture it. This was known as customer-owned tooling or COT, since the system company, as opposed to the ASIC company or the foundry, owned the whole design. If one foundry gave poor pricing the system company could transfer the design to a different manufacturer.

However, the COT approach was not without its challenges. Doing physical design of a chip is not straightforward. Many companies found that the premium that they were paying ASIC companies for the expertise in their design centers wasn’t for nothing, and they struggled to complete designs on their own without that support. As a result, companies were created to supply that support, known as design services companies.

Design service companies played the role that the ASIC companies’ design centers did, providing specialized semiconductor design knowledge to complement the system companies’ knowledge. In some cases they would do the entire design, known as turnkey design. More often they would do all or some of the physical design and, often, manage the interface with the foundry to oversee the manufacturing process, another area where system companies lacked experience.

One company in particular, eSilicon, operates with a business model identical to the old ASIC companies except in one respect. It has no fab. It actually builds all of the customers’ products in one of the foundries (primarily TSMC).

Another change has been the growth of field-programmable gate-arrays (FPGAs) which are used for many of the same purposes as ASIC used to be.

So that is the ASIC landscape today. There is very limited ASIC business conducted by a few semiconductor companies. There are design services companies and virtual ASIC companies like eSilicon. There are no pure-play ASIC companies. A lot of what used to be ASIC has migrated to FPGA. A Brief History of ASIC Part I is HERE.

A Brief History of Semiconductors
A Brief History of ASICs
A Brief History of Programmable Devices
A Brief History of the Fabless Semiconductor Industry
A Brief History of TSMC
A Brief History of EDA
A Brief History of Semiconductor IP
A Brief History of SoCs


Book Review: Mixed-Signal Methodology Guide
by Daniel Payne on 08-23-2012 at 4:00 pm

Almost every SoC has multiple analog blocks, so AMS methodology is an important topic for our growing electronics industry. Authored by Jess Chen (Qualcomm), Michael Henrie (ClioSoft), Monte Mar (Boeing) and Mladen Nizic (Cadence), the book is subtitled: Advanced Methodology for AMS IP and SoC Design, Verification and Implementation. Cadence published the book. I've read the first chapter and deemed the tome worthy of review because it covers both the challenges of AMS design and verification and discusses multiple approaches, without favoring one particular EDA vendor's tools.

A review of Chapters 2 through 11 is here.

Mladen Nizic
I spoke by phone with Mladen Nizic on Thursday afternoon to better understand how this book came to be.

Q: Why was this book on AMS design and verification methodology necessary?
A: Technology and business drivers are demanding changes in products and therefore methodology. We need more discussion between Digital and Analog designers, collaborating to adopt new methodologies.

Q: How did you select the authors?
A: I knew the topics that I wanted to cover, then started asking customers and people that I knew. We gathered authors from Boeing, Cadence, Qualcomm and ClioSoft.

Q: What are the obstacles to adopting a new methodology?
A: Organizational and technical barriers exist. Most organizations have separate digital SoC and analog design groups; they just do design differently and are kind of isolated. You start to see engineers with the title of AMS Verification Engineer appearing now. The complexity of AMS designs is increasing with more blocks being added. Advanced nodes bring challenges which require even more analysis.

Q: Will reading the book be sufficient to make a difference?
A: The whole design team needs to read the book, then discuss their methodology, and start to adapt some or all of the recommended techniques. Analog designers need to learn what their Digital counterparts are doing for design and verification.

Q: Why should designers spend the time reading about AMS methodology?
A: To become better rounded in their approach to design and verification. You can also just read the chapters that are of interest to each group.

Q: Where do I buy this book?
A: Right now you can pre-order it at LuLu.com, and soon afterwards on Amazon.com.

Q: Can I buy an e-Book version?
A: There will be an e-Book version coming out after the hard-copy.

Q: Is there a place for designers to discuss what they read in the book?
A: Good idea, we’re still working on launching that, so stay tuned.

Chapter 1: Mixed-Signal Design Trends & Challenges

The continuous-time output of analog IP blocks is contrasted with the binary output of digital IP blocks, and various approaches to mixed-signal verification are introduced:

To gain verification speed, you must consider abstracting the analog IP into behavioral models instead of only simulating at the transistor level with a SPICE or Fast-SPICE circuit simulator.
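As a purely illustrative sketch (my own, not an example from the book), here is where the speed-up comes from: an RC low-pass filter reduced to a single first-order update per time step, where a transistor-level SPICE run would be solving device equations at every step. The component values and step size are assumed for the example.

    # Illustrative behavioral model of an RC low-pass filter (assumed values).
    # A SPICE engine would solve device equations at every time step; this
    # abstraction reduces the block to one arithmetic update per step.
    import math

    def rc_lowpass_step(v_out, v_in, dt, tau):
        # Exact discrete-time solution of dVout/dt = (Vin - Vout)/tau over one step.
        alpha = 1.0 - math.exp(-dt / tau)
        return v_out + alpha * (v_in - v_out)

    # Drive the model with a 1 V input step and watch the output settle.
    v_out, dt, tau = 0.0, 1e-9, 10e-9   # 1 ns steps, 10 ns time constant
    for _ in range(50):
        v_out = rc_lowpass_step(v_out, v_in=1.0, dt=dt, tau=tau)
    print(f"output after 50 ns: {v_out:.3f} V")   # about 0.993 V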

Topics raised: mixed-signal verification, behavioral modeling, low-power verification, DFT, chip planning, AMS IP reuse, full-chip sign-off, substrate noise, IC/package co-design, design collaboration and data management.

Other chapters include topics like:

  • Overview of Mixed-Signal Design Methodologies
  • AMS Behavioral Modeling
  • Mixed-Signal Verification Methodology
  • A Practical Methodology for Verifying RF Designs
  • Event-Driven Time-Domain Behavioral Modeling of Phase-Locked Loops
  • Verifying Digitally-Assisted Analog Designs
  • Mixed-Signal Physical Implementation Methodology
  • Electrically-Aware Design Methodologies for Advanced Process Nodes
  • IC Package Co-Design for Mixed-Signal Systems
  • Data Management for Mixed-Signal Designs.



ARM + Broadcom + Linux = Raspberry Pi
by Daniel Payne on 08-23-2012 at 12:28 am

Broadcom has designed an impressive SoC named the BCM2835 with the following integrated features:

  • ARM CPU at 700MHz
  • GPU – VideoCore IV
  • RAM – 256 MB

The British chaps at Raspberry Pi have created a $35.00 Linux-based computer based on the Broadcom BCM2835 chip that is tiny in size but big in utility.


Debugging Subtle Cache Problems
by Paul McLellan on 08-22-2012 at 5:11 pm

When I worked for virtual platform companies, one of the things that I used to tell prospective customers was that virtual prototypes were not some second-rate approach to software and hardware development to be dropped the moment real silicon was available; in many ways they were better than the real hardware since they offered so much better controllability and observability. An interesting blog entry on Carbon's website really shows this strongly.

A customer was having a problem with cache corruption while booting Linux on an ARM-based platform. They had hardware but they couldn’t freeze the system at the exact moment cache corruption occurred and since they couldn’t really have enough visibility anyway, they had not been able to find the root problem. Instead they had managed to find a software workaround that at least got the system booted.

Carbon worked with the customer to put together a virtual platform of the system. They produced a model of the processor using Carbon IP Exchange (note that this did not require the customer to have detailed knowledge of the processor RTL). They omitted all the peripherals not needed for the boot (and so also didn't need to run any driver initialization code, speeding and simplifying things), since presumably the problem was somewhere in the memory subsystem. There was RTL for the L2 cache, AHB fabric, boot ROM, memory controller, and parts of their internal register/bus structure. This was all compiled with Carbon Model Studio to a 100% accurate model that could be dropped into the platform. And when they booted the virtual platform, the cache problem indeed showed up. When you are having a debugging party it is always nice when the guest of honor deigns to show up; nothing is worse than trying to fix a Heisenbug, where the problem goes away the moment you add instrumentation to isolate it.

By analyzing waveforms on the Advanced High-performance Bus (AHB), the cache memory, the memory controller, and assembly code (all views that Carbon's SoC Designer provides), they isolated the problem. The immediate problem was that a read was taking place from memory before the write to that memory location had been flushed from the cache, thus picking up the wrong value. The real reason, in turn, was that a write-through to the cache (which should write the cache and also write through to main memory) had been wrongly implemented as a write-back (which just updates the cache immediately and only writes to main memory when that line in the cache is flushed). With that knowledge it was easy to fix the problem.
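As a toy illustration of the failure mode (my own sketch, not the customer's RTL or anything from Carbon), here is a single cache line where a write that should be write-through is handled as write-back; anything that reads main memory directly then sees the stale value until the line is evicted.

    # Toy model of the bug: a write that should be write-through is treated
    # as write-back, so main memory stays stale until the line is evicted.

    class CacheLine:
        def __init__(self, memory, addr):
            self.memory, self.addr = memory, addr
            self.data = memory[addr]      # line filled from main memory
            self.dirty = False

        def write(self, value, write_through):
            self.data = value
            if write_through:
                self.memory[self.addr] = value   # correct: memory updated immediately
            else:
                self.dirty = True                # the bug: update deferred to eviction

        def evict(self):
            if self.dirty:
                self.memory[self.addr] = self.data
                self.dirty = False

    memory = {0x1000: 0xAA}
    line = CacheLine(memory, 0x1000)
    line.write(0xBB, write_through=False)    # behaves as write-back
    print(hex(memory[0x1000]))               # 0xaa -- stale, the apparent "corruption"
    line.evict()
    print(hex(memory[0x1000]))               # 0xbb -- only correct after the flush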

They had not been able to fix this problem using real hardware because they couldn't see enough of the hardware/software interaction, bus transactions and so forth. But with a virtual platform, where the system can be run with fine granularity, complete accuracy and full visibility, this type of bug can be tracked down. As Eric Raymond said about open source, "with enough eyeballs, all bugs are shallow." Well, with enough visibility all bugs are shallow too. Read more detail about this story on the Carbon blog here.


Synopsys up to $1.75B
by Paul McLellan on 08-22-2012 at 4:24 pm

Synopsys announced their results today. With Magma rolled in (but not yet SpringSoft, since that acquisition hasn't technically closed) they had revenue of $443M, up 15% from $387M last year. This means that they are all but a $1.75B company and a large part of the entire EDA industry (which I think of as being $5B or so, depending on just what you do about services, IP, ARM in particular, etc.). GAAP earnings per share were $0.50; non-GAAP EPS was $0.55 (non-GAAP, unlike generally accepted accounting principles, excludes various items, especially acquisition-related charges and stock-based compensation).

For their fourth quarter they expect revenue to be flat versus this quarter at $440-448 million. So for fiscal 2012 (which ends in October) they expect revenue of $1.74-1.75B, with positive cash flow for the year of $450M. If Synopsys manages to grow another 15% next year (either through organic growth or through acquisition, and presumably SpringSoft is already in the bag) then they will be over $2B a year, running roughly $500M/quarter. I think that is an amazing achievement given that Synopsys's revenue in 2000 was $784M, an amount it now books in just five months.

And, talking of acquisitions, it still has close to $1B in cash. The one area where Synopsys is weak is emulation so I wouldn’t be the least bit surprised to see them go after Eve.

Full press release including P&L, balance sheet etc is here.


Cadence at 20nm
by Paul McLellan on 08-21-2012 at 8:10 pm

Cadence has a new white paper out about the changes in IC design that are coming at 20nm. One thing is very clear: 20nm is not simply “more of the same”. All design, from basic standard cells up to huge SoCs has several new challenges to go along with all the old ones that we had at 45nm and 28nm.

I should emphasize that the paper is really about the problems of 20nm design and not a sales pitch for Cadence. I could be wrong but I don’t think it mentions a single Cadence tool. You don’t need to be a Cadence customer to profit from reading it.

The biggest change, and the one that everyone has heard the most about, is double patterning. This means that for those layers that are double patterned (the fine-pitch ones) two masks are required: half the polygons on the layer go on one mask and the other half on the other mask. The challenge is that no two polygons on the same mask can be too close together, so during design the tools need to ensure that it is always possible to divide the polygons into two such sets (for example, you can never have three polygons that are all at minimum distance from each other, since there is no way to split them legally between two masks). Since this is algorithmically a graph-coloring problem, it is often referred to as coloring the polygons.
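To make the graph-coloring point concrete, here is a minimal, hypothetical sketch (not any vendor's decomposition engine): treat polygons as nodes, connect any two that are too close to share a mask, and check whether the conflict graph is two-colorable. Three mutually close polygons form an odd cycle, so no legal split exists.

    # Hypothetical sketch of double-patterning decomposition as 2-coloring.
    from collections import deque

    def assign_masks(polygons, conflicts):
        """Return {polygon: mask 0 or 1} if a legal two-mask split exists, else None."""
        mask = {}
        for start in polygons:
            if start in mask:
                continue
            mask[start] = 0
            queue = deque([start])
            while queue:
                node = queue.popleft()
                for neighbour in conflicts.get(node, ()):
                    if neighbour not in mask:
                        mask[neighbour] = 1 - mask[node]
                        queue.append(neighbour)
                    elif mask[neighbour] == mask[node]:
                        return None    # odd cycle: no legal decomposition
        return mask

    # Three polygons all at minimum distance from each other (the case noted
    # above) form a triangle and cannot be split; a simple chain can.
    print(assign_masks("ABC", {"A": "BC", "B": "AC", "C": "AB"}))  # None
    print(assign_masks("ABC", {"A": "B", "B": "AC", "C": "B"}))    # {'A': 0, 'B': 1, 'C': 0}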

Place and route obviously needs to be double pattern aware and not create routing structures that are not manufacturable. Less obvious is that even if standard cells are double pattern legal, when they are placed next to each other they may cause issues between polygons internal to the cells.

Some layers at 20nm will require 3 masks, two to lay down the double patterned grid and then a third “cut mask” to split up some of the patterns in a way that wouldn’t have been possible to manufacture otherwise.

Another issue with double patterning is that most patterning is not self-aligned, meaning that there is variation between polygons on the two masks that is greater than the variation between two polygons on the same mask (which are self-aligned by definition). This means that verification tools need to be aware of the patterning and, in some cases, designers need to be given tools to assign polygons to masks where it is important that they end up on the same mask.

Design rules at 20nm are incredibly complicated. Cadence reckon that of 5,000 design rules only 30-40 are for double patterning. There are layout orientation rules, and even voltage-dependent design rules. The early experience of people I've talked to is that the design rules are now beyond human comprehension and you need to have DRC running essentially continuously while doing layout.

The other big issue at 20nm is layout-dependent effects (LDEs). The performance of a transistor or a gate no longer depends just on its layout in isolation but also on what is near it. Almost every line on the layout, such as the edge of a well, has some non-local effect on the silicon, causing performance changes in nearby active areas. At 20nm the performance can vary by as much as 30% depending on the layout context.

A major cause of LDEs is mechanical stress. Traditionally this was addressed by guardbanding critical paths but at 20nm this will cause too much performance loss and instead all physical design and analysis tools will need to be LDE aware.

Of course, in addition to these two big new issues (double patterning and LDEs), there are all the old issues that just get worse: design complexity, clock tree synthesis and so on.

Based on numbers from Handel Jones’s IBS, 20nm fabs will cost from $4-7B (depending on capacity), process R&D will be $2.1-3B on top of that, and mask costs will range from $5-8M per design. And design costs: $120-500M. You’d better want a lot of die when you get to production.

Download the Cadence white paper here.


A Brief History of ASIC, part I
by Paul McLellan on 08-21-2012 at 7:00 pm

In the early 1980s the ideas and infrastructure for what would eventually be called ASIC started to come together. Semiconductor technology had reached the point that a useful number of transistors could be put onto a chip. But unlike earlier, when a chip only held a few transistors and thus could be used to create basic generic building blocks useful for everyone, each customer wanted something different on their chip. This was something the traditional semiconductor companies were not equipped to provide. Their model was to imagine what the market needed, create it, manufacture it and then sell it on the open market to multiple customers.

The other problem was that semiconductor companies knew lots about semiconductors, obviously, but didn’t have system knowledge. The system companies, on the other hand, knew just what they wanted to build but didn’t have enough semiconductor knowledge to create their own chips and didn’t really have a means to manufacture those chips even if they did. What was required was a new way of doing chip design. The early part of the design, which was system knowledge heavy, would be done by the system companies. And the later part of the design, which was semiconductor knowledge heavy, would be done by the company that was going to manufacture it.

Two companies in particular, VLSI Technology and LSI Logic, pioneered this. They were startup companies with their own fabs (try getting that funded today) with a business model that they would, largely, build other people’s chips. In the early days, since there weren’t really any chips out there to be built, they had to find other revenue streams. VLSI Technology, for example, made almost all of its money building read-only-memories (ROMs) that went into the cartridges for the first generation of video game consoles.

Daisy, Mentor and Valid, who had electronic design systems mostly targeted at printed circuit boards, realized that they could use those systems for the front end of ASIC design too.

ASIC design typically worked like this. A system company, typically someone building an add-on board for the PC market since that was the big driver of electronics in that era, would come up with some idea for a chip. They would negotiate with several ASIC companies to decide which to select, although it was always a slightly odd negotiation since only a vague idea of the size of the design was available at that point. They would then partner with one ASIC company such as LSI Logic who would supply them with a library of basic building blocks called cells.

The system company would use a software tool called a schematic editor to create the design, picking the cells they wanted from the library and deciding how they should be connected up. The output from this process is called a netlist, essentially a list of cells and connections.
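For illustration only (the actual netlist formats of the day, such as EDIF or vendor-specific formats, use different syntax), a netlist boils down to two things, which a few lines of Python can capture: cell instances referencing library cells, and the nets that connect their pins. The cell and net names here are made up.

    # Hypothetical two-gate netlist: cell instances plus connecting nets.
    netlist = {
        "cells": {
            "U1": {"type": "NAND2", "pins": {"A": "in1", "B": "in2", "Y": "n1"}},
            "U2": {"type": "INV",   "pins": {"A": "n1",  "Y": "out"}},
        },
        "nets": ["in1", "in2", "n1", "out"],
    }
    for name, cell in netlist["cells"].items():
        print(name, cell["type"], cell["pins"])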

Just like writing software or writing a book, the first draft of the design would be full of errors. But with semiconductor technology it isn't possible to just build the part and see what the errors are. Even back then it cost tens of thousands of dollars and a couple of months to manufacture the first chip, known as the prototype. And unlike with writing a book, it's not possible to simply proofread the schematic by inspection; too many errors would still slip through.

Instead, a program called a simulator was used. A flight simulator tells a pilot what would happen if he or she moves the controls a certain way and, unlike the real world, doesn't cost a fortune if the plane crashes. In the same way, a simulation of the design checked how it behaved given certain inputs, without requiring the expense of building the chip. Errors that were detected could be fixed and the simulation run again.
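To show the idea in miniature (a deliberately toy sketch, nothing like the commercial simulators of the era), here is the two-gate netlist from above, a NAND feeding an inverter, exercised with every input pattern and checked against the AND function the designer intended.

    # Toy logic simulation: evaluate the NAND2-plus-inverter netlist for every
    # input pattern and flag any mismatch against the intended AND behaviour.
    from itertools import product

    def nand2(a, b):
        return 1 - (a & b)

    def inv(a):
        return 1 - a

    def simulate(in1, in2):
        n1 = nand2(in1, in2)   # instance U1
        return inv(n1)         # instance U2

    for in1, in2 in product((0, 1), repeat=2):
        got, expected = simulate(in1, in2), in1 & in2
        flag = "ok" if got == expected else "ERROR"
        print(f"in1={in1} in2={in2} -> out={got} (expected {expected}) {flag}")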

When the design was finally determined to be correct, the netlist was sent from the system company to the ASIC company for the next step. Using a program known as a place & route tool, the netlist would be converted to a full chip design. In addition to creating the actual layout for manufacture, this process also produced detailed timing: just how long every signal would take to change. This detailed timing was sent back to the system company so that they could do a final check in the simulator, making sure that everything worked with the real timing rather than the estimated timing used earlier.

At that point the system company took a deep breath and gave the go ahead to spend the money to make the prototypes. This was (and is) called tapeout.

The ASIC company then had the masks manufactured that would be needed to run the design through their fab. In fact there were two main ASIC technologies, called gate-array and cell-based. In a gate-array design, only the interconnect masks were required since the transistors themselves were pre-fabricated on a gate-array base. In cell-based design, all the masks were required since every design was completely different. The attraction of the gate-array approach was that, in return for giving up some flexibility, the manufacturing was faster and cheaper. Faster, because only the interconnect layers needed to be manufactured after tapeout. Cheaper, because the gate-array bases themselves were mass-produced in higher volume than any individual design.

A couple of months later the prototypes would have been manufactured and samples shipped back to the system company. These parts typically would then have been incorporated into complete systems and those systems tested. For example, if the chip went into an add-in board for a PC then a few boards would have been manufactured, put into a PC and checked for correct operation.

At the end of that process, the system company took another deep breath and placed an order with the ASIC company for volume manufacturing, ordering thousands or possibly even millions, of chips. They would receive these a few months later, build them into their own products, and ship those products to market. A Brief History of ASIC Part II is HERE.

A Brief History of Semiconductors
A Brief History of ASICs
A Brief History of Programmable Devices
A Brief History of the Fabless Semiconductor Industry
A Brief History of TSMC
A Brief History of EDA
A Brief History of Semiconductor IP
A Brief History of SoCs