
Another Up Year in a Down Economy for Tanner EDA
by Daniel Payne on 09-13-2011 at 11:00 am

Almost every week I read about a slowing world economy, yet in EDA we have some bright spots to talk about, like Tanner EDA finishing its 24th year with an 8% increase in revenue. More details are in the press release from today.

I spoke with Greg Lebsack, President of Tanner EDA, on Monday to ask how they are growing. Greg has been with the company for 2 1/2 years now and came from a software business background. During the past 12 months they’ve been able to serve a broad list of design customers, across all regions, with no single account dominating the growth. Our previous meeting was at DAC three months ago, where I got an update on their tools and process design kits.

Annual Highlights
Highlights for the year ending in May 2011 are:

  • 139 new customers
  • HiPer Silicon suite of analog IC design tools increasingly being used for: sensors, imagers, medical, power management and analog IP
  • Version 16 demonstrated at DAC (read my blog from DAC for more details)
  • New analog PDK added for Dongbu HiTek at 180nm, and TowerJazz at 180nm for power management
  • Integration between HiPer Silicon and Berkeley DA tools (Analog Fast SPICE)

Why the Growth?
I see several factors causing the growth in EDA for Tanner: Standardized IC Database, SPICE Integration, Analog PDKs and a market-driven approach.

Standardized IC Database
While many of their users run exclusively on Tanner EDA’s analog design suite (HiPer Silicon), their tools can co-exist in a Cadence flow, as version 16 (previewed at DAC) uses the OpenAccess database. This is an important point because you want to save time by using a common database instead of importing and exporting, which may lose valuable design intent. Gone are the days of proprietary IC design databases that locked EDA users into a single vendor; instead, the trend is towards standards-based design data where multiple EDA tools can be combined into a flow that works.

SPICE Integration
Analog Fast SPICE is a term coined by Berkeley Design Automation years ago as they created a new category of SPICE circuit simulators that fit between SPICE and Fast SPICE tools. By working together with Tanner EDA we get a flow that uses the HiPer Silicon tools shown above with a fast and accurate SPICE simulator called AFS Circuit Simulator (see the SPICE wiki page for comparisons). Common customers often drive formal integration plans like this one. I see Tanner EDA users opting for the AFS Circuit Simulator on post-layout simulations where they can experience the benefits of higher capacity.

Analog PDKs
Unlike the digital PDKs grabbing the headlines at 28nm, the analog world has PDKs that are economical at 180nm, as shown in the following technology roadmap for Dongbu HiTek, an analog foundry located in South Korea:

Another standardization trend adopted by Tanner EDA is the Interoperable PDK movement, known as iPDK. Instead of using proprietary models in their PDK (like Cadence with Skill) this group has standardized in order to reduce development costs. In January 2011 I blogged about how Tanner EDA and TowerJazz are using an iPDK at the 180nm node.
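To make the iPDK idea concrete, here is a minimal sketch of what a parameterized cell (PCell) generator looks like when written in ordinary Python rather than SKILL. It is illustrative only: the layer names, dimensions and rectangle representation are invented and are not tied to any real iPDK or foundry API.

    # Illustrative only: a toy PCell generator in plain Python, in the spirit of
    # the iPDK approach. Layer names, dimensions and the rectangle representation
    # are hypothetical; no real iPDK or OpenAccess API is used.
    def mos_pcell(width=1.0, length=0.18, contact_enclosure=0.06):
        """Return (layer, (x1, y1, x2, y2)) rectangles for a toy MOS device."""
        shapes = []
        contact_size = 0.22
        # Diffusion sized from the requested W/L plus room for the contacts
        diff_x = length + 2 * (contact_size + 2 * contact_enclosure)
        shapes.append(("diffusion", (0.0, 0.0, diff_x, width)))
        # Poly gate centered over the diffusion, defining the channel length
        gate_x1 = diff_x / 2 - length / 2
        shapes.append(("poly", (gate_x1, -0.18, gate_x1 + length, width + 0.18)))
        # One source and one drain contact, with the enclosure rule applied
        for cx in (contact_enclosure, diff_x - contact_size - contact_enclosure):
            shapes.append(("contact",
                           (cx, width / 2 - contact_size / 2,
                            cx + contact_size, width / 2 + contact_size / 2)))
        return shapes

    # Because the generator is ordinary Python, complex rule handling can use
    # normal control flow instead of a vendor-specific scripting language.
    print(mos_pcell(width=2.0, length=0.18))

That accessibility of a general-purpose language is a big part of why foundries find interoperable PDKs cheaper to develop and maintain.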

Market Driven Approach
I’ve worked at companies driven by engineering, sales and marketing. I’d say that Tanner EDA is now more market-driven, meaning that they are focused on serving a handful of well-defined markets like analog, AMS and MEMS. In their early years I saw Tanner EDA as being primarily engineering-driven, which is a good place to start out.

Summary
Only a handful of companies survive for 24 years in the EDA industry, and Tanner happens to be in that distinguished group. Because they are focused and executing well, we see them in growth mode even in a down economy.


When analog/RF/mixed-signal IC design meets nanometer CMOS geometries!
by Daniel Nenni on 09-13-2011 at 9:22 am

In working with TSMC and GlobalFoundries on AMS design reference flows I have experienced firsthand the increasing verification challenges of nanometer analog, RF, and mixed-signal circuits. Tools in this area have to be both silicon accurate and blindingly fast! Berkeley Design Automation is one of the key vendors in this market and this blog comes from discussions with BDA CEO Ravi Subramanian. I first met Ravi at the EDAC CEO Forecast panel I moderated in January; he is probably the only EDA CEO that spends more time in Taiwan than I do!

When analog/RF/mixed-signal IC design meets nanometer CMOS geometries, the world changes. Analog/RF circuit complexity increases as more transistors are used to realize circuit functions; radically new analog circuit techniques that operate at low voltages are born, creating new analysis headaches; digital techniques are mixed into analog processing chains, creating complex requirements for verifying the performance of such digitally-controlled/calibrated/assisted analog/RF circuits; more and more circuits need to operate where devices are in the nonlinear operating region, creating analysis headaches; layout effects determine whether the full potential of a new circuit design can be achieved; and second and third order physical effects now become first-order effects in the performance of these circuits.

Circuit simulation remains the verification tool of choice, but with little innovation from traditional analog/RF tool suppliers, designers are forced to fit their design to the limitations of tools – breaking down blocks into sizes that lend themselves to easy convergence in transient or periodic analysis, using linear approximations to estimate the performance of nonlinear circuits, ignoring the impact of device noise because of the impracticality of including stochastic effects in circuit analysis, characterizing circuit performance for variation without leveraging distribution theory, and cutting corners in post-layout analysis because of the long cycle times in including layout dependent effects in circuit performance analysis.

In the face of all of this, design flows are being retooled to leverage the best in class technologies that have emerged to solve these new problems with dramatic impact on productivity and silicon success. Retooling does not mean throwing out the old and bringing in the new – rather it is an evolutionary approach to tune the design flow by employing the best technology in each stage of the flow that will remove limitations or enable new analyses.

On September 22nd at TechMart in Santa Clara, key EDA vendors will showcase advanced nanometer circuit verification technologies and techniques. Hosted by Berkeley Design Automation, and including technologists from selected EDA, industry and academic partners, the forum will feature real circuit case studies in which these solutions have been used to verify challenging nanometer circuits, including data converters; clock generation and recovery circuits (PLLs, DLLs); high-speed I/O; image sensors; and RFCMOS ICs.

I hope to see you there! Register today, space is limited.



Memo To New AMD CEO: Time For A Breakout Strategy!
by Ed McKernan on 09-12-2011 at 2:52 pm

“Where’s the Taurus?” In the history of company turnarounds, it was one of the most penetrating and catalyzing opening questions ever offered by a new CEO to a demoralized executive team. The CEO was Alan Mulally, who spent years at Boeing and at one point in the 1980s studied the successful rollout of the original Ford Taurus. To get a full sense of what Mulally faced, you have to follow the progression of the dialogue in a Fast Company article written after the encounter:

Continue reading “Memo To New AMD CEO: Time For A Breakout Strategy!”


Broadcom’s Bet the Company Acquisition of Netlogic
by Ed McKernan on 09-12-2011 at 1:19 pm

I surmised a month ago that Broadcom could be a likely acquirer of TI’s OMAP business in order to compete more effectively in Smart Phones and Tablets. I was not bold enough. Instead, Broadcom has offered $3.7B for Netlogic in order to be an even bigger player in the communications infrastructure market by picking up TCAMs and a family of multi-processor MIPS solutions. The acquisition is not cheap, as they are offering to buy Netlogic at a price equal to 10 times its current sales. In addition, it represents 42% of Broadcom’s current valuation, although one can argue that semiconductors as a whole are undervalued in the market.

I want to highlight the significance of the acquisition relative to the two competing visions in the marketplace as to how best to serve the communications market from a semiconductor solution point of view. On the one side are the off-the-shelf standard ASSP solutions from Broadcom, Marvell, Netlogic, Cavium, Freescale, EZChip etc. On the other are the customizable solutions that were once done entirely with ASICs but which are now more and more being taken over by FPGAs. Altera and Xilinx have made this a focus because they are able to generate a lot of revenue by selling very high ASP parts as prototype and early production units.

The first camp believes that over time multicore MIPS-based processors are the most flexible way to build and update communications equipment. Broadcom’s major weakness was that, with switch chips better suited for cost-sensitive volume switches, it was not able to get a view into new high-performance designs. Netlogic, on the other hand, gets to see nearly every new high-end design because it is the de facto sole provider of TCAM chips. Broadcom, in a sense, is paying a premium to get this inside worldwide view into the customer base.

Switching to the other side, the FPGA camp: Cisco and Juniper built their businesses in the 1990s and 2000s off of custom ASICs fabricated at IBM and TI. The development teams still consist mostly of ASIC designers. They have been asked to rely less on ASICs and more on FPGAs because their products never generate high volume. The design flow for FPGAs is like that for ASICs, which is a big plus. What Xilinx and Altera figured out several years ago is that if they bolted the fastest SerDes onto their latest chips, they would attract more design wins. Altera of late is doing the best at pushing SerDes speeds. Whereas in the 1990s FPGAs were used for simple bus translations, in the 2000s they became standard front ends to many line cards because of the SerDes speeds and flexibility.

Turning to this decade, FPGAs are being used more and more for building data buses, deep packet inspection and even traffic management. If Xilinx and Altera offered unlimited gates at a reasonable price, the likelihood is that they would own most of the line card. There is one area where they are coming up short, and I expect this will be corrected soon. The networking line card needs some amount of high-level processing. Traditionally this has been MIPS, and recently Intel has shown up. FPGAs need to incorporate processors and have a clean interface into the high-speed fabric.

At the 28nm process node, Altera and Xilinx are attempting to take it one step further in their ability to be more economical by offering more gates at the same price. Xilinx is pushing the envelope on gate count by utilizing 3D package technology. This should allow them to effectively double the gate counts at less than 2X the cost.

Altera’s approach is to set aside silicon area for customers to drop in their own IP block, a customized solution that may benefit the Ciscos and Junipers of the world, which don’t like to share their company jewels. There is a metal mask adder, but it is still a time-to-market and lower-cost alternative to a full ASIC. I call this approach a “Programmable ASSP” because it combines the benefits of both.

In the short term there will be no clear-cut winner, as both approaches have benefits and their adherents among design engineers. There is, however, a longer-term financial aspect that can sway the market: the FPGA vendors have a much higher margin structure than Broadcom and the rest of the ASSP vendors. Altera and Xilinx have 14-18% higher gross margins, but more importantly their operating margins are over two times greater. It comes down to R&D. Altera and Xilinx get much more for their R&D dollar than Broadcom or Netlogic. All this seems to lead to the conclusion that Altera and Xilinx have the advantage in their ability to explore ways of crafting new solutions in the communications space.

Broadcom has a lot to prove over the coming months. In the announcement, they forecast that the Netlogic acquisition would be accretive. Netlogic’s TCAM business is sound; however, it hasn’t grown to the level expected with the rollout of IPv6. More significant for Broadcom is the fact that the processor business is expensive. Unlike the PC market, the NPU market never achieved the multibillion-dollar size that many analysts expected a decade ago. It is, according to the Linley Group, roughly $400M in size, which is relatively small for the investment needed at 28nm and below.


Synopsys STAR Webinar, embedded memory test and repair solutions
by Eric Esteve on 09-12-2011 at 8:16 am

The acquisition of Virage Logic by Synopsys in 2010 allowed Synopsys to build a stronger, more diversified IP portfolio, including embedded SRAM, embedded non-volatile memory and embedded test and repair solutions. Looking back in time, I remember the end of the 80’s: at that time the up-to-date solution for embedding SRAM in your ASIC design was to use a compiler provided by the ASIC vendor to implement the SRAM, and to develop the test vectors yourself. By 2000, most ASIC vendors were sourcing the SRAM compiler externally (from Virage Logic…), and ASIC designers were benefiting from faster, denser memories with Built-In Self-Test (BIST) integrated. But that was not enough, as the SRAM, becoming very large, could have a negative impact on the yield of the ASIC. Then, in 2002, Virage Logic introduced repair capability with the STAR product (for Self-Test and Repair).
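To illustrate what “repair” means in practice, here is a toy sketch of the allocation step that follows BIST: failing addresses are mapped onto a small pool of spare rows and columns. It is only a conceptual illustration with a naive greedy heuristic, not the algorithm used by the DesignWare STAR Memory System.

    # Toy illustration of memory repair: cover the failing (row, column) addresses
    # reported by BIST with a limited pool of spare rows and spare columns.
    # A greedy heuristic only; not the DesignWare STAR Memory System algorithm.
    def allocate_repair(failing_bits, spare_rows=2, spare_cols=2):
        remaining = set(failing_bits)
        plan = {"rows": [], "cols": []}
        while remaining:
            rows, cols = {}, {}
            for r, c in remaining:
                rows[r] = rows.get(r, 0) + 1
                cols[c] = cols.get(c, 0) + 1
            best_row = max(rows, key=rows.get)
            best_col = max(cols, key=cols.get)
            # Repair the most heavily failing line first, if a spare is left
            if rows[best_row] >= cols[best_col] and len(plan["rows"]) < spare_rows:
                plan["rows"].append(best_row)
                remaining = {(r, c) for r, c in remaining if r != best_row}
            elif len(plan["cols"]) < spare_cols:
                plan["cols"].append(best_col)
                remaining = {(r, c) for r, c in remaining if c != best_col}
            else:
                return None  # not repairable with the available redundancy
        return plan

    print(allocate_repair([(3, 7), (3, 9), (12, 7)]))
    # -> {'rows': [3, 12], 'cols': []} with the default spare counts

The value of an embedded solution is that this kind of analysis, plus the fuse programming that records the result, happens on-chip rather than on the tester.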

To register for this STAR Webinar, just go here.

Specifically, the DesignWare Self-Test and Repair (STAR) Memory System consists of:

  • Test and repair register transfer level (RTL) IP, such as STAR Processor, wrapper compiler, shared fuse processor and synthesizable TAP controller
  • Design automation tools such as STAR Builder for automated insertion of RTL and STAR Verifier for automated test bench generation
  • Manufacturing automation tools such as STAR Vector Generator for automated generation of WGL/STIL and programmability in patterns, and STAR Silicon Debugger for rapid isolation, localization and classification of faults
  • An open memory model for all memories. In order to generate DesignWare STAR Memory System views, Synopsys provides the MASIS memory description language. In addition a MASIS compiler is available to memory developers to automate generation and verification of the memory behavioral and structural description
  • The DesignWare STAR ECC IP, which offers a highly automated design implementation and test diagnostic flow that enables SoC designers to quickly address multiple transient errors in advanced automotive, aerospace and high-end computing designs

This webinar will be held by Yervant Zorian, Chief Architect for the embedded test & repair product line, and Sandeep Kaushik, Product Marketing Manager for the Embedded Memory Test and Repair product line at Synopsys. They will explain:

  • The technical trends and challenges associated with embedded test, repair and diagnostics in today’s designs.
  • The trade-offs and design impact of various solutions.
  • How Synopsys’ DesignWare® STAR Memory System® can meet your embedded test, repair and diagnostics needs.

They will tell you why STAR can lead to:
Increased Profit Margin

  • The DesignWare STAR Memory System can enable an increase of the native die yield through memory repair, leading to increased profit margins

Predictable High Quality with Substantial Reduction in Manufacturing Test Costs
Shorter Time-to-Volume

  • The DesignWare STAR Memory System has superior diagnostics capabilities to enable quick bring up of working silicon, thereby enabling manufacturing to quickly ramp to volume production. The DesignWare STAR Memory System also has automated test bench capabilities and a proven validation flow to ensure a successful bring up of first silicon on the automatic test equipment

Minimum Impact on Design Characteristics (Performance, Power and Area)

  • Because the test and repair system is transparently integrated within the DesignWare STAR Memory System, it ensures minimal impact on timing and area and allows designers to quickly achieve timing closure. This advanced embedded test automation can reduce insertion time by weeks

To register for this STAR Webinar, just go here.

Eric Esteve


Cadence ClosedAccess
by Paul McLellan on 09-11-2011 at 4:00 pm

There are various rumors around about Cadence starting to close up stuff that has been open for a long time. Way back in the mists of time, as part of the acquisition of CCT, the Federal Trade Commission forced Cadence to open up LEF/DEF and allow interoperability with Cadence tools (actually only place and route), I believe for 10 years. Back then Cadence was the #1 EDA company by a long way, in round figures making about $400M in revenue each quarter and dropping $100M to the bottom line. Cadence opened up LEF/DEF and created the Connections Program as a result.

Recently I’ve heard about a couple of areas where Cadence seems to be throwing its weight around.

Firstly, it seems that they have deliberately made Virtuoso so that some functions only operate with Cadence PDKs (SKILL-based) and not with any of the flavors of open PDKs that are around.

I’ve written before about the various efforts to produce more open PDKs that run on any layout system, as opposed to the Skill-based PDKs that only run in Cadence’s Virtuoso environment.

Apparently what is happening is this. OpenAccess includes a standard interface for PCells which is used by all OA PCells, whether implemented in SKILL, Python, Tcl, C++ or anything else. There is also a provision for an end application to query the “evaluator” used with each PCell. In IC 6.1.5, code has been added to Virtuoso GXL which uses this query, and if the answer is anything other than “cdsSkillPCell” then a warning message “whatever could not complete” is issued and that GXL feature is aborted. Previous versions, 6.1.4 and earlier, all worked correctly. In particular, modgen no longer works.
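Purely to make the described gating behavior concrete, here is a plain-Python restatement of the logic. This is not Cadence source code, and every name other than “cdsSkillPCell” and “modgen” is hypothetical.

    # Plain-Python restatement of the reported IC 6.1.5 behavior, for clarity only.
    # Not Cadence code; names other than "cdsSkillPCell" are hypothetical.
    class PCellRecord:
        def __init__(self, evaluator_name):
            self.evaluator_name = evaluator_name  # what OA reports for this PCell

    def run_gxl_feature(pcell, feature="modgen"):
        # OpenAccess lets an application query which evaluator implements a PCell
        # (SKILL, Python, Tcl, C++, ...). The reported behavior is that anything
        # other than the SKILL evaluator makes the GXL feature abort.
        if pcell.evaluator_name != "cdsSkillPCell":
            print(f"WARNING: {feature} could not complete")
            return False
        return True  # the feature would proceed for SKILL-based PDKs

    run_gxl_feature(PCellRecord("pythonPCellEvaluator"))  # aborts (hypothetical name)
    run_gxl_feature(PCellRecord("cdsSkillPCell"))          # proceeds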

Apparently semiconductor companies are very annoyed about this for a couple of reasons. Firstly, they have to generate multiple PDKs if Cadence won’t accept any of the open standards (and Cadence has enough market share that it cannot really be avoided). Second, the incredibly complex rules that modern process nodes require are simply much easier to code in the more powerful modern languages used in the open PDKs (e.g. Python). At least one major foundry has said publicly that they can deliver modern-language PDKs to their customers weeks earlier than they can with Skill.

Cadence apparently claims that it is purely an accident that certain tools fail with anything other than Skill-based PDKs, as opposed to something that they have deliberately done to block them. But they won’t put any effort into finding what is wrong (well, they know). To be fair to them, they have had major financial problems (remember those Intel guys?) that have meant they have had to cut back drastically on engineering investment and really prioritize what they can do.

One rumor is that this non-interoperability has escalated to CEO level and Cadence’s CEO refused to change this deliberate incompatibility. TSMC must be very frustrated by this since it causes them additional PDK expense. Relations between the two companies seem to be strained over Cadence not supporting the TSMC iPDK format. It would be an interesting battle of the monsters if TSMC took a stand to only support open PDKs, a sort of who’s going to blink first scenario.

The second rumor is that Cadence have changed the definitions of LEF/DEF and now demand that anyone using them license the formats. To be fair, Synopsys does the same with .lib. I don’t know if they are demanding big licensing fees or anything. Of course there may be technical reasons LEF/DEF have to change to accommodate modern process nodes, just as .lib has had to change to accommodate, especially, more powerful timing models.

It is, of course, ironic that Cadence’s consent decree came about due to its dominance in place & route. Whatever else Cadence can be accused of, dominance in place & route is no longer one of them. Synopsys and Magma have large market shares, and Mentor and Atoptech have credible technical solutions too. It is in custom design with Virtuoso where Cadence arguably has large enough market share to fall under laws that restrict what monopolists can do, things like the essential facilities doctrine, which overrides the general principle that a company does not have to deal with its competitors if it chooses not to.

Obviously I’m not a lawyer, but it’s unclear if Cadence is doing anything it is not allowed to. It is certainly doing things that contradict its EDA360 messaging about being open, though, and that would probably have been illegal during the decade of the consent decree, which recently ended. Plus, apparently they have 50 people in legal (no wonder they are tight in engineering), but they seem to have their hands full with class action suits (funny thing: I just tried to use Google to see if there was any color worth adding to this and the #1 hit was a website called Law360!).

And then there is the Cadence “dis-connections” partner program. But that’s a topic for another time.

Related blogs: Layout for AMS Nanometer ICs



2.5D and 3D designs
by Paul McLellan on 09-07-2011 at 1:54 pm

Going up! Power and performance issues, along with manufacturing yield issues, limit how much bigger chips can get in two dimensions. That, and the fact that you can’t manufacture two different processes on the same wafer, mean that we are going up into the third dimension.

The simplest way is what is called package-in-package where, typically, the cache memory is put into the same package as the microprocessor (or the SoC containing it) and bonded using traditional bonding technologies. For example, Apple’s A5 chip contains an SoC (manufactured by Samsung) and memory chips (from Elpida and other suppliers). For chips where both layouts are under the control of the same design team, microbumps can also be used as a bonding technique, flipping the top chip over so that the bumps align with equivalent landing pads on the lower chip, completing all the interconnectivity.

The next technique, already in production at some companies like Xilinx, is to use a silicon interposer. This is (usually) a large silicon “circuit board” with perhaps 4 layers of metal built in a non-leading edge process and also usually containing a lot of decoupling capacitors. The other die are microbumped and flipped over onto the interposer, and the interposer is connected to the package using through silicon vias (TSVs). Note that this approach does not require TSVs on the active die, avoiding a lot of complications.

I think it is several years before we will see true 3D stacks with TSVs through active die and more than two layers of silicon. It requires a lot of changes to the EDA flow, a lot of changes to the assembly flow, and the exclusion areas around TSVs (where no active circuitry can be placed) may be prohibitive, forcing the TSVs to the periphery of the die and thus significantly lowering the number of connections possible between die.

But all of these approaches create new problems in verifying power, signal and reliability integrity. Solving this requires a new verification methodology that provides accurate modeling and simulation across the whole system: all the die, interposers, package and perhaps even the board.

TSVs and interposer design can cause inter-die noise and other reliability issues. As I said above, the interposer usually contains decaps and so the power supply integrity needs to take these into account. In fact it is not possible to analyze the die in isolation since the power distribution is on the interposer.

One approach, if all the die layout data (including the interposer) is available, is to do concurrent simulation. Typically some of the die may be from an IP or memory vendor, and in this case a model-based analysis can be used, with CPMs (chip power models) standing in for the detailed data that is unavailable.

One challenge that going up in the 3rd dimension creates is the issue of thermally induced failures. Obviously heat generated has a harder time getting out from the center than in a traditional two-dimensional chip design. The solution is to create a chip thermal model (CTM) for each die, which must include temperature-dependent power modeling (leakage is very dependent on temperature), metal density and self-heating power. By handing all these models to a chip-package-system thermal/stress simulation tool for power-thermal co-analysis, the power and temperature distribution can be calculated.
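As a rough sketch of why power and temperature have to be solved together, the loop below iterates between a temperature-dependent leakage estimate and a junction-temperature estimate until they agree. All the constants are invented for illustration; this is not any vendor’s CTM flow.

    # Toy power-thermal co-analysis: leakage grows with temperature, and
    # temperature grows with total power through a thermal resistance, so the
    # two are iterated to a fixed point. Constants are invented for illustration.
    def co_analyze(dyn_power_w=2.0, leak_ref_w=0.5, t_ambient_c=25.0,
                   theta_ja_c_per_w=8.0, max_iterations=50):
        t = t_ambient_c
        leak = leak_ref_w
        for _ in range(max_iterations):
            # assume leakage roughly doubles for every 20 C above the 25 C reference
            leak = leak_ref_w * 2.0 ** ((t - 25.0) / 20.0)
            t_new = t_ambient_c + theta_ja_c_per_w * (dyn_power_w + leak)
            if abs(t_new - t) < 0.01:
                break
            t = t_new
        return t_new, leak

    temp, leakage = co_analyze()
    print(f"converged at ~{temp:.1f} C junction with ~{leakage:.2f} W of leakage")

Analyzing either quantity in isolation (fixed leakage, or fixed temperature) would understate both, which is the point of the co-analysis.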

A final problem is signal integrity. The wide I/O (maybe thousands of connections) between the die and the interposer can cause significant jitter due to simultaneous switching. Any SSO (simultaneously switching outputs) solution needs to consider the drivers and receivers on the different die as well as the layout of the buses on the interposer. Despite the interposer being passive (no transistors), its design still requires a comprehensive chip-package-system (CPS) methodology.
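A first-order feel for the SSO problem comes from the classic ground-bounce estimate V ≈ N · L_eff · di/dt. The sketch below plugs in invented numbers for a wide interposer bus; none of the values come from a real design.

    # First-order ground-bounce estimate for simultaneously switching outputs:
    # V_bounce ~= N * L_eff * di/dt, where N is the number of I/Os switching at
    # once, L_eff the effective shared return-path inductance, and di/dt the
    # current slew of one driver. All numbers below are invented for illustration.
    def ground_bounce(n_switching, l_eff_henries, di_dt_amps_per_s):
        return n_switching * l_eff_henries * di_dt_amps_per_s

    # e.g. 512 I/Os switching together, ~2 pH of effective shared inductance
    # (many microbumps in parallel), each driver slewing 5 mA in 100 ps:
    v = ground_bounce(512, 2e-12, 5e-3 / 100e-12)
    print(f"estimated bounce ~{v * 1e3:.0f} mV")

Even a few tens of millivolts of bounce matters when thousands of connections share the same return path, which is why the drivers, receivers and interposer routing have to be analyzed together.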

Going up into the 3rd dimension is an opportunity to get lower power, higher performance and smaller physical size (compared to multiple chips on a board). But it brings with it new verification challenges in power, thermal and signal integrity to ensure that it is all going to perform as expected.

Norman Chang’s full blog entry is here.

Related blog: TSMC Versus Intel: The Race to Semiconductors in 3D!



TSMC and Dr. Morris Chang!
by Daniel Nenni on 09-05-2011 at 6:14 pm

While I was in Taiwan last month battling a Super Typhoon, Morris Chang was in Silicon Valley picking up his IEEE Medal of Honor. Gordon Moore, Andrew Grove, and Robert Noyce all have medals. The other winners, including 10 Nobel prize recipients, are listed HERE. An updated wiki on Dr. Morris Chang is located HERE.

The 12+ hour plane ride home gives a person plenty of time for reflection on why TSMC is so successful. Leadership is certainly important; just take a look at the executive staff on the new TSMC corporate website ( www.tsmc.com ). But in my opinion, TSMC’s success boils down to one thing: they are a dedicated IC foundry that depends on its customers and ecosystem partners, and TSMC has never forgotten that.

Foundry 2010 Revenues:
(1) TSMC $13B
(2) UMC $4B
(3) GFI $3.5B
(4) SMIC $1.5B
(5) Dongbu $512M
(6) Tower/Jazz $509M
(7) Vanguard $508M
(8) IBM $430M
(9) Samsung $420M
(10) MagnaChip $405M

But if you ask how TSMC and Dr. Morris Chang himself got where they are today, it can be summed up in three words: Business Model Innovation. Other business model innovators include eSilicon, ARM, Apple, Dell, Starbucks, eBay, Google, etc. I would argue that without TSMC some of these businesses would not even exist.

Morris Chang’s education started at Harvard but quickly moved to MIT as his interest in technology began to drive his future. From MIT mechanical engineering graduate school, Morris went directly into the semiconductor industry at the process level and quickly moved into management. After completing an electrical engineering PhD program at Stanford, Morris leveraged his process-level semiconductor management success and went to Taiwan to head the Industrial Technology Research Institute (ITRI), which led to the founding of TSMC.

In 1987 TSMC started 2 process nodes behind the incumbent semiconductor manufacturers (IDMs). Morris Chang made the first TSMC sales calls with a single brochure: TSMC Core Values: Integrity, commitment, innovation, partnership. 4-5 years later TSMC was only 1 node behind and the orders started pouring in. In 10 years TSMC caught up with the IDMs (not Intel) and the fabless semiconductor industry blossomed, enabling a whole new era of semiconductor design and manufacturing.

Morris Chang Awards

  • 1998, “Top 25 Managers of the Year” and “Stars of Asia” by Business Week.
  • 1998, “One of The Most Significant Contributors in the 50 years of Semiconductor Industry” by BancAmerica Robertson Stephens.
  • 2000, “IEEE Robert N. Noyce Award” for Exceptional Contributions to the Microelectronics Industry.
  • 2000, “Exemplary Leadership Award” from the Fabless Semiconductor Association (GSA).
  • 2005, “Top 10 Most Influential Leaders of the World” by Electronic Business.
  • 2008, “Semiconductor Industry Association’s Robert N. Noyce Award”
  • 2009, “EE Times Annual Creativity in Electronics Lifetime Achievement Award”
  • 2011, IEEE Medal of Honor

Dr. Morris Chang turned 80 on July 10th, 2011. I have seen him in Fab 12, but we have not met. Morris returned to the CEO job in June of 2009 and is still running TSMC full time as CEO and Chairman. He works from 8:30am to 6:30pm like most TSMC employees and says that a successful company life cycle is: rapid expansion, a period of consolidation, and maturity. The same could be said about Morris himself.

Here is a new 5 minute video from TSMC. I highly recommend watching it:

Pioneer of Dedicated IC Foundry Business Model

Related Blogs: TSMC 28nm / 20nm Update!


Manufacturing Analysis and Scoring (MAS): GLOBALFOUNDRIES and Mentor Graphics
by Daniel Payne on 09-05-2011 at 3:37 pm

Last week GLOBALFOUNDRIES and Mentor Graphics presented at the Tech Design Forum on how they collaborated on a third-generation DFM flow. When I reviewed the slides of the presentation, it really struck me how the old thinking in DRC (Design Rule Checking) of Pass/Fail for layout rules has been replaced with a score represented as a number between 0 and 1.

An example is shown above where an enclosure rule is described as: M1 minimum overlap past CA for at least two opposite sides with the other two sides >= 0.00um (rectangular enclosure).

The top Metal line has a Via with short overlap and its score is about 0.4 (less manufacturable, lower yielding) while the bottom Metal line has a Via with a much longer overlap so its score is about 0.9 (more manufacturable, higher yielding).

When I first started designing full-custom ICs all of our rules were described as Pass/Fail, nothing ambiguous at all. Today however DRC rules are a different story at the 28nm node and lower because of DFM and lithography challenges, so IC designers need a new way to be warned about layout practices that impact yield early in the design process.

One way that GLOBALFOUNDRIES started describing the scoring on a rule was to provide two categories of rules: Low Priority and High Priority.


MCD Scoring Model

The blue line on the top is for Low Priority rules and the X axis represents the Score, while the slope of the line gradually moves downward. In contrast, the High Priority rule line shown in red has a much steeper slope, meaning that its score approaches zero more quickly.

With just these two rule lines we can bound all of the DRC rules in our process by placing each rule into one of two categories: High or Low priority.
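To make the two-category idea concrete, the sketch below maps a rule’s measured margin (how far the drawn value exceeds the minimum) to a score in [0, 1], with a steeper slope for High Priority rules so their score falls toward zero faster. The slopes and saturation point are invented for illustration, not GLOBALFOUNDRIES’ calibrated values.

    # Illustrative two-slope scoring in the spirit of the MCD model: the margin by
    # which a measurement exceeds the minimum rule value maps to a score in [0, 1],
    # and High Priority rules lose score faster. Slopes and the saturation margin
    # are invented, not the foundry's calibrated curves.
    def dfm_score(margin_um, priority="high"):
        slope = {"high": 8.0, "low": 3.0}[priority]  # score lost per micron of missing margin
        full_margin_um = 0.08                        # margin at which the score saturates at 1.0
        score = 1.0 - slope * max(0.0, full_margin_um - margin_um)
        return max(0.0, min(1.0, score))

    # A via with only 0.01 um of extra Metal1 enclosure vs. one with 0.07 um:
    print(dfm_score(0.01, "high"), dfm_score(0.07, "high"))  # ~0.44 vs ~0.92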

An improvement over the MCD scoring model is the MAS scoring model, where each DRC rule has its own slope based on actual silicon measurements:


MAS Scoring Model

The green and grey lines correspond with two DRC rules that have been measured and plotted.

This new MAS Scoring Model is supported by Mentor’s Calibre CFA tool:

In this dialog I can see the MAS score for each DFM rule and then decide if I want to edit my layout to improve the score.

Many process effects are covered by the MAS score: litho, stress, charging, random particle defect, etc.

I can now sort and rank which DFM layout issues to work on first.

Summary
GLOBALFOUNDRIES is using DFM scoring to qualify IP at the 65nm and smaller nodes as a means to improve quality. Mentor Calibre tools use the equation-based MAS and litho friendly design so that a designer can converge more quickly on DFM fixes in their IP.


What changes to expect in Verification IP landscape after Synopsys acquisition of nSys?
by Eric Esteve on 09-05-2011 at 4:53 am

Even if the nSys acquisition by Synopsys will not have a major impact on Synopsys’ balance sheet, it is a kind of earthquake in the verification market landscape. After the Denali acquisition by Cadence in 2010, nSys was most probably the market leader in verification IP among the independent VIP providers (excluding Cadence). The company’s VIP portfolio bears comparison with Cadence’s, as nSys supports the PCI family, communication standards, storage interfaces, USB, DDR3/2 memory controllers, MIPI, AMBA and some miscellaneous interfaces. nSys being privately owned, we don’t know the company’s revenue, nor whether the company is profitable. Was this acquisition an “asset sale”, just an opportunistic deal closed by Synopsys at low cost, with the side effect of competing directly with Cadence in a market where Cadence has invested heavily during the last three years? Or is the goal to consolidate Synopsys’ position in the interface IP market, where the company is the dominant provider, present, and leader, in every segment (DDRn, PCIe, USB, SATA, HDMI, MIPI…), by adding “independent” VIP to the current offering?

Synopsys was offering “bundled” VIP, and this is not the best way to extract value from the product, as the design IP customer expects to get a bundled VIP almost for free. If Synopsys’ acquisition of nSys illustrates a real strategy inflection, another side effect will be that the “Yalta description” (Cadence dominant in VIP and Synopsys in the IP market) no longer holds!

Only the Synopsys and nSys management teams know the answer. Today we can only evaluate the impact of this acquisition on the day-to-day life of SoC design teams when the SoC integrates an interface IP… which happens in most cases.

I found an interesting testimonial on the nSys web site: “We had debated using bundled VIP solutions, which were available with PCIe IP, but after evaluating the nSys Verification IP for PCIe, we dropped the idea. We were impressed by the level of maturity of the PCIe nVS. We also realized that the PCIe nVS provided us with the ability to do independent verification of the IP that could not have been achieved with the bundled models. The nSys solution has helped our engineering team increase productivity too.” From Manoj Agarwal, Manager ASIC, EFI.

The important word in this testimonial is “independent”. We have expressed this concern in the past, when saying:

“Common sense remark about the BFM and IP: when you select a VIP provider to verify an external IP, it is better to make sure that the design teams for the BFM and for the IP are different and independent (high-level specification and architecture made by two different people). This is to avoid ‘common mode failure’, a principle well known in aeronautics, for example.”

An SoC project manager will now have the option to buy an “independent” VIP to run the verification of the interface function… from the same vendor selling the IP. He can still buy it from the main competitor (Cadence) or from one of the remaining VIP providers (Avery, ExpertIO, PerfectVIP, Sibridge Technology, SmartDV Technology), but the one-stop-shop argument (buy the controller, the PHY and the verification IP together) will be reinforced, especially because the VIP now comes from a team with a truly independent origin.

Is Synopsys’ acquisition of nSys an opportunistic asset sale? Honestly, I don’t know, but it is certainly a stone in Cadence’s garden (Cadence bought Denali in 2010, and products from Yogitech SpA, IntelliProp Inc. and HDL Design House in October 2008, to consolidate its VIP portfolio) and a threat to the remaining VIP providers. Is it good news for Synopsys customers? Yes, because the “one-stop shop” will ease the procurement and technical support process. Synopsys customers should just make sure to buy at the right price… market consolidation can make life easier… and prices higher!

Eric Esteve from IPnest